Ask a Question
Question
I have converted a TF model, which contains some unsupported ops, to ONNX with the tool below:
python3 -m tf2onnx.convert --graphdef frozen_model_v2 --output aslfeatv2.onnx
Then, since I need to run it on TensorRT, I first needed to check that the converted ONNX file is correct.
So I implemented those unsupported ops in C++/CUDA as custom ops for onnxruntime, then built and tested them.
But onnx still fails to recognize the custom onnxruntime ops when running onnx-simplifier:
I've read all the tutorials on custom ops and PyTorch custom ops, but nothing has helped so far.
Can anybody give me a hint for getting out of this situation?
Thank you
    onnx.checker.check_model(model)
  File "/home/lee/.local/lib/python3.6/site-packages/onnx/checker.py", line 102, in check_model
    C.check_model(protobuf_string)
onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for ASLFeatPluginX with domain_version of 13
"""
Further information
Relevant Area (e.g. model usage, best practices, shape_inference, version_converter, training, test):
Is this issue related to a specific model?
Model name (e.g. mnist): ASLFeat
Model opset (e.g. 7): 13
All related code is at https://github.com/dedoogong/ASLFeat_TRT and the onnx model is at
https://github.com/dedoogong/ASLFeat_TRT/blob/main/aslfeatv2_op13_custom_const_padding_v3.onnx
Notes
Any additional information, code snippets.