
No results on Jetson. Is it an environmental problem? help #212

Open

XianYang2547 opened this issue Apr 26, 2024 · 4 comments

@XianYang2547

Hi, I followed the steps in the README, and it works fine on my PC and laptop, but nothing is detected on the Jetson AGX Orin. My environment is as follows: JetPack 5.1.1, TensorRT 8.5.2.2, torch 1.14.0a0+44dac51c.nv23.2, torchvision 0.14.1a0+5e8e2f1, Python 3.8. After getting the ONNX model (via this repo's export-**.py; I also tried exporting the ONNX on the PC and then uploading it to the Jetson), there were several warnings during the detection model's export, as follows:
Model summary (fused): 168 layers , 11131389 parameters, 0 gradients
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
[W shape_type_inference.cpp:1913] Warning: The shape inference of TRT::EfficientNMS_TRT type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (function UpdateReliable)
====== Diagnostic Run torch.onnx.export version 1.14.0a0+44dac51c.nv23.02 ======
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 4 WARNING 0 ERROR ========================

There is no warning when the segmentation model is converted to ONNX (I also tried both with and without the --sim option). I first built the engine through a script; while saving, the terminal printed the following:
[04/26/2024-16:00:53] [TRT] [W] Check verbose logs for the list of affected weights.
[04/26/2024-16:00:53] [TRT] [W] - 1 weights are affected by this issue: Detected NaN values and converted them to corresponding FP16 NaN.
[04/26/2024-16:00:53] [TRT] [W] - 57 weights are affected by this issue: Detected subnormal FP16 values.
[04/26/2024-16:00:53] [TRT] [W] - 10 weights are affected by this issue: Detected values less than smallest positive FP16 subnormal value and converted them to the FP16 minimum subnormalized value.
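These [TRT] warnings mean that some FP32 weights fall outside FP16's representable range when the engine is built in half precision, which is relevant to the NaN outputs described below. As an illustrative sketch (the thresholds come from the IEEE 754 binary16 format itself, not from this repo's code), such weights can be classified with numpy:

```python
import numpy as np

# FP16 (IEEE 754 binary16) limits: these drive the TensorRT warnings above.
FP16_MIN_NORMAL = 2.0 ** -14     # ~6.10e-05, smallest positive normal value
FP16_MIN_SUBNORMAL = 2.0 ** -24  # ~5.96e-08, smallest positive subnormal value

def classify_fp16(w: np.ndarray) -> dict:
    """Count values that become NaN, subnormal, or flush toward zero in FP16."""
    a = np.abs(w)
    return {
        "nan": int(np.isnan(w).sum()),
        "subnormal": int(((a > 0) & (a < FP16_MIN_NORMAL) & (a >= FP16_MIN_SUBNORMAL)).sum()),
        "below_subnormal": int(((a > 0) & (a < FP16_MIN_SUBNORMAL)).sum()),
    }

w = np.array([1.0, 3e-5, 1e-8, float("nan")], dtype=np.float32)
print(classify_fp16(w))       # {'nan': 1, 'subnormal': 1, 'below_subnormal': 1}
print(w.astype(np.float16))   # 1e-8 flushes to 0.0 in FP16
```

This is one reason building the engine in FP32 first is a useful diagnostic step: if FP32 works and FP16 does not, the subnormal/NaN conversions above are the likely culprit.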

Secondly, I built the engine with trtexec. At run time I inspected the model output in a debugger: the output of the infer-det.py model was all 0, and the output of the infer-seg.py model was all NaN. I also used the C++ code in csrc/jetson for inference, again with no results. In addition, I tried suggestions from other issues, such as installing onnxsim and using cpu instead of cuda:0 when exporting to ONNX. The final result is still no detections. My model files are the yolov8s.pt and yolov8s-seg.pt provided by ultralytics. I'm asking you for help because I don't know what the problem is.
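For debugging symptoms like these, a tiny hypothetical helper (the name and shapes are mine, not from this repo) can classify each output tensor into the two failure modes described above:

```python
import numpy as np

def output_sanity(out: np.ndarray) -> str:
    """Classify an inference output: all-NaN, all-zero, or plausible."""
    if np.isnan(out).all():
        return "all-nan"   # the infer-seg.py symptom
    if not np.any(out):
        return "all-zero"  # the infer-det.py symptom
    return "ok"

print(output_sanity(np.zeros((1, 100, 6), dtype=np.float32)))     # all-zero
print(output_sanity(np.full((1, 32), np.nan, dtype=np.float32)))  # all-nan
```

Running such a check on every engine output right after inference quickly shows whether the problem is in the engine itself (all outputs degenerate) or in the postprocessing.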

@triple-Mu
Owner

Could you please provide more environment details?
How about FP32?

@XianYang2547
Author

Thank you for your work; you have my star. The Python problem has been solved. But in csrc/segment/normal, I exported the ONNX with yolo, then converted it into an engine file with my own code. After cmake .. and make, running ./yolov8-seg seg.engine test1.jpg gave:
LLVM ERROR: out of memory
Aborted
How should I fix this? Also, in detect's normal directory, the same method runs inference normally, but the results are completely wrong.

@triple-Mu
Owner

triple-Mu commented Jun 18, 2024


This LLVM error seems to be environment-related. Could you give the versions of the driver, CUDA, cuDNN, gcc, g++, cmake, make, ninja, etc.?

@XianYang2547
Author


Thanks again for your reply! I solved it in a rather inexplicable way: to locate where the error occurred, I ran segment/normal/main.cpp in CLion, loading its corresponding CMakeLists, and main.cpp ran successfully. After that, with cmake .. and make, running ./yolov8-seg seg.engine test1.jpg also produced results, as below:
(screenshot: successful inference result)

The previous error was:
(screenshot 2024-06-18 19-59-23)
Also, in main.cpp, #include "yolov8-seg.hpp" directly does not seem to be found, so I used #include "./include/yolov8-seg.hpp" instead.
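One way to avoid the relative include path is to add the header directory to the target's include search path in CMakeLists.txt. This is a sketch under assumptions: it presumes the executable target is named yolov8-seg and the header sits in an include/ directory next to CMakeLists.txt.

```cmake
# Hypothetical fragment: make #include "yolov8-seg.hpp" resolve
# without the ./include/ prefix, assuming this layout:
#   CMakeLists.txt
#   main.cpp
#   include/yolov8-seg.hpp
target_include_directories(yolov8-seg PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/include)
```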
