Hello. I am interested in training a model in either TensorFlow or PyTorch, then using ONNX Runtime to run inference in a .NET application. I would like to be able to run inference on a Movidius VPU. I understand how to do this in Python with OpenVINO, but can I achieve this in .NET with ONNX Runtime? Thank you!
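For reference, the C# API does expose the OpenVINO execution provider. Below is a minimal sketch, assuming the Microsoft.ML.OnnxRuntime.OpenVINO NuGet package is installed and that OpenVINO's Myriad plugin can see the VPU; the model path, input name, and input shape are hypothetical placeholders:

```csharp
using System;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class Program
{
    static void Main()
    {
        using var options = new SessionOptions();
        // "MYRIAD_FP16" targets a Movidius VPU in the OpenVINO EP;
        // "CPU_FP32" would fall back to the CPU plugin.
        options.AppendExecutionProvider_OpenVINO("MYRIAD_FP16");

        // "model.onnx" is a placeholder for your exported model.
        using var session = new InferenceSession("model.onnx", options);

        // Build an input tensor matching the model's expected shape
        // (hypothetical 1x3x224x224 image input named "input").
        var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
        var inputs = new[] { NamedOnnxValue.CreateFromTensor("input", input) };

        using var results = session.Run(inputs);
        foreach (var r in results)
            Console.WriteLine(r.Name);
    }
}
```

Note that any node the OpenVINO EP cannot handle falls back to the default CPU execution provider, so the whole graph still runs even if only part of it is offloaded to the VPU.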
Labels: ep:OpenVINO (issues related to OpenVINO execution provider), api:CSharp (issues related to the C# API)