No results on jetson. Is it an environment problem? help #212
Comments
Could you please provide more environment details?
Thank you for your work; you have my star. The Python issue has been solved. In csrc/segment/normal, I exported an ONNX model with YOLO and converted it into an engine file with my own code. After cmake .. && make, running ./yolov8-seg seg.engine test1.jpg gives LLVM ERROR: out of memory.
That LLVM error looks environment-related. Could you share the versions of your driver, CUDA, cuDNN, gcc, g++, cmake, make, ninja, and so on?
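For reference, a small Python sketch for gathering most of those toolchain versions in one go (the tool list is an assumption; anything not installed is simply reported as missing):

```python
# Hypothetical helper for collecting the toolchain versions asked for above.
# Tools not on PATH are reported as "not found".
import shutil
import subprocess

def version_of(cmd, flag="--version"):
    if shutil.which(cmd) is None:
        return "not found"
    out = subprocess.run([cmd, flag], capture_output=True, text=True)
    text = (out.stdout or out.stderr).strip()
    return text.splitlines()[0] if text else "(no version output)"

for tool in ("nvcc", "gcc", "g++", "cmake", "make", "ninja"):
    print(f"{tool}: {version_of(tool)}")
```

The driver and cuDNN versions are not exposed as CLI tools on Jetson, so those would still need to be read from JetPack release notes or `dpkg -l`.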
Hi, I followed the steps in the README, and it works fine on my PC and laptop, but nothing is detected on the Jetson AGX Orin. My environment is: JetPack 5.1.1, TensorRT 8.5.2.2, torch 1.14.0a0+44dac51c.nv23.2, torchvision 0.14.1a0+5e8e2f1, Python 3.8. After getting the ONNX model (obtained through the repository's export-**.py script; I also tried exporting the ONNX on the PC and then uploading it to the Jetson), there were several warnings during the detection model's conversion, as follows:
Model summary (fused): 168 layers, 11131389 parameters, 0 gradients
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
====== Diagnostic Run torch.onnx.export version 1.14.0a0+44dac51c.nv23.02 ======
verbose: False, log level: Level.ERROR
======================== 0 NONE 0 NOTE 4 WARNING 0 ERROR ========================
There is no warning when the segmentation model is converted to ONNX (I also tried both with and without the --sim option). I first built the engine through a script; when saving, the terminal printed the following:
[04/26/2024-16:00:53] [TRT] [W] Check verbose logs for the list of affected weights.
[04/26/2024-16:00:53] [TRT] [W] - 1 weights are affected by this issue: Detected NaN values and converted them to corresponding FP16 NaN.
[04/26/2024-16:00:53] [TRT] [W] - 57 weights are affected by this issue: Detected subnormal FP16 values.
[04/26/2024-16:00:53] [TRT] [W] - 10 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
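Those three warnings describe FP32 weights that do not survive the cast to FP16 cleanly; the subnormal ones are common and often harmless, but the NaN one is suspicious given the all-NaN outputs described below. A small numpy sketch of the same classification (the thresholds are the standard FP16 limits; this is an illustration, not TensorRT's actual code):

```python
# Illustrative check (not TensorRT's code): classify FP32 weights by how
# they map into FP16, mirroring the three builder warnings above.
import numpy as np

FP16_MIN_NORMAL = 2.0 ** -14      # smallest positive normal float16
FP16_MIN_SUBNORMAL = 2.0 ** -24   # smallest positive subnormal float16

def fp16_issues(weights):
    w = np.asarray(weights, dtype=np.float32)
    mag = np.abs(w)
    return {
        "nan": int(np.isnan(w).sum()),
        "subnormal_fp16": int(((mag >= FP16_MIN_SUBNORMAL)
                               & (mag < FP16_MIN_NORMAL)).sum()),
        "below_subnormal": int(((mag > 0)
                                & (mag < FP16_MIN_SUBNORMAL)).sum()),
    }
```

If a weight really is NaN in the FP32 checkpoint, rebuilding the engine without FP16, or re-exporting from a clean checkpoint, would be the usual next step.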
Secondly, I used trtexec to build the engine. When running, I inspected the model's output in a debugger: the output of the infer-det.py model was all zeros, and the output of the infer-seg.py model was all NaN. I also ran the C++ inference in csrc/jetson, again with no results. In addition, I tried some fixes from other issues, such as installing onnxsim and using cpu instead of cuda:0 when converting to ONNX; the final result is still no objects. My model files are yolov8s.pt and yolov8s-seg.pt as provided by ultralytics. I am asking you for help because I do not know what the problem is.
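For what it's worth, the two failure signatures reported here (all-zero detection output vs. all-NaN segmentation output) can be told apart mechanically. A sketch, where `diagnose` and its messages are hypothetical and the arrays stand in for whatever the scripts return:

```python
# Sketch: classify a raw engine output array into the two failure modes
# described above. `diagnose` is a hypothetical helper, not repo code.
import numpy as np

def diagnose(name, arr):
    arr = np.asarray(arr, dtype=np.float32)
    if np.isnan(arr).all():
        return (f"{name}: all NaN "
                "(suggests FP16 overflow / NaN weights; try an FP32 engine)")
    if not arr.any():
        return (f"{name}: all zeros "
                "(suggests wrong preprocessing or an empty NMS output)")
    return f"{name}: looks populated (min={arr.min():.4g}, max={arr.max():.4g})"
```

Under this reading, the det engine producing zeros and the seg engine producing NaN may be two different problems, which is worth keeping in mind when bisecting.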