ofxOnnxRuntime
Updated version, working with Windows 11, CUDA 12.8, and ONNXRuntime 1.20.1
It is not the cleanest implementation, but it works!
This implementation adds the ability to run batched inference; this only applies to models that support it (see the sketch below).
A tiny ONNX Runtime wrapper for openFrameworks.
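For context on what "batch processing" means here: the underlying ONNX Runtime C++ API (which this addon wraps) handles batching by making the first (N) dimension of the input tensor larger than 1, so one `Run` call processes several inputs at once. The following is a minimal, hypothetical sketch using the plain `Ort::` C++ API rather than this addon's own classes; the model path, the input/output tensor names (`"input"`, `"output"`), and the shape `{4, 3, 224, 224}` are assumptions you would replace with your model's actual values.

```cpp
#include <onnxruntime_cxx_api.h>
#include <array>
#include <iostream>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ofxOnnxRuntimeExample");
    Ort::SessionOptions opts;

    // Wide-string path on Windows; plain const char* on other platforms.
    // "model.onnx" is a placeholder path.
    Ort::Session session(env, L"model.onnx", opts);

    // Batch of 4 inputs in NCHW layout: shape {4, 3, 224, 224}, i.e. N > 1.
    std::array<int64_t, 4> shape{4, 3, 224, 224};
    std::vector<float> input(4 * 3 * 224 * 224, 0.0f);

    Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
        mem, input.data(), input.size(), shape.data(), shape.size());

    const char* input_names[]  = {"input"};   // assumed input tensor name
    const char* output_names[] = {"output"};  // assumed output tensor name
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, &input_tensor, 1,
                               output_names, 1);

    // The first dimension of the output corresponds to the batch dimension,
    // so each of the 4 inputs gets its own slice of the result.
    float* data = outputs[0].GetTensorMutableData<float>();
    std::cout << "first output value: " << data[0] << std::endl;
    return 0;
}
```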
Installation
- macOS
  - Copy `libonnxruntime.1.10.0.dylib` to `/usr/local/lib`.
  - Generate a project using ProjectGenerator.
- Windows
  - There are two ways to install ONNX Runtime for your project.
  - Install using NuGet
    - I recommend this way in general.
    - Generate a project using ProjectGenerator.
    - Open the `.sln` file.
    - Right-click your project in the `Solution Explorer` pane, then select `Manage NuGet Packages...`.
    - From the `Browse` tab, search for `Microsoft.ML.OnnxRuntime` (CPU) or `Microsoft.ML.OnnxRuntime.Gpu` (GPU) and install it. (An equivalent Package Manager Console command is shown after this list.)
  - DLL direct download
    - You can download prebuilt DLLs from here.
    - Unzip the downloaded `onnxruntime-win-x64-(gpu-)1.20.1.zip` and place the files in `libs\onnxruntime\lib\vs\x64\`.
    - Generate a project using ProjectGenerator; all libs are then linked correctly and all DLLs are copied to `bin`. (A sketch for enabling the GPU execution provider follows this list.)
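If you prefer the NuGet route, the same packages can also be added from the Package Manager Console (Tools > NuGet Package Manager > Package Manager Console). The version shown is an assumption chosen to match the ONNX Runtime release mentioned above; drop `.Gpu` for the CPU-only package:

```
PM> Install-Package Microsoft.ML.OnnxRuntime.Gpu -Version 1.20.1
```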
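Note that installing the GPU package or GPU DLLs is not enough by itself: the session must be created with the CUDA execution provider appended, otherwise inference stays on the CPU. This is a minimal sketch of that step using the standard ONNX Runtime C++ API; it is not specific to this addon's classes.

```cpp
#include <onnxruntime_cxx_api.h>

// Minimal sketch: build SessionOptions that use the CUDA execution provider
// (GPU build/DLLs only). Operators the CUDA provider cannot handle fall back to CPU.
Ort::SessionOptions makeGpuSessionOptions() {
    Ort::SessionOptions opts;
    OrtCUDAProviderOptions cuda_opts{};  // defaults, e.g. device_id 0
    opts.AppendExecutionProvider_CUDA(cuda_opts);
    return opts;
}

// Usage (path is a placeholder):
// Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "example");
// Ort::Session session(env, L"model.onnx", makeGpuSessionOptions());
```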
Tested environment
- oF 0.11.2 + MacBookPro 2018 Intel + macOS Catalina
- oF 0.11.2 + VS2017 + Windows 10 + RTX2080Ti + CUDA 11.4
ToDo
- Check M1 Mac (should work) and Linux CPU & GPU.
Reference Implementation
- I heavily referred to the Lite.AI.ToolKit implementation.