Thank you for your work. I'm trying to run inference with the model in C++. The platform is a Jetson Orin NX. Everything works when I use the official yolov8s-pose model, but a `bad_alloc` error appears when I use a pose model fine-tuned on my own dataset. Specifically, the error happens when my face enters the right side of the image. I didn't change the model structure, and the official model's engine file is 26.0 MB while the fine-tuned engine file is 26.1 MB.
For your information, I exported the model following the README: 1. Run the Python call `model.export(format="onnx")`. 2. Run `/usr/src/tensorrt/bin/trtexec`.
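The two export steps above can be sketched as follows (a minimal sketch; the checkpoint/output filenames and the trtexec flags are assumptions, adjust to your setup):

```shell
# Step 1: export the fine-tuned pose checkpoint to ONNX via the Ultralytics API.
python3 - <<'EOF'
from ultralytics import YOLO

model = YOLO("best.pt")       # fine-tuned pose checkpoint (assumed filename)
model.export(format="onnx")   # writes the ONNX model next to the checkpoint
EOF

# Step 2: build a TensorRT engine on the Jetson Orin NX.
/usr/src/tensorrt/bin/trtexec \
    --onnx=best.onnx \
    --saveEngine=best.engine \
    --fp16   # optional; match whatever precision the official engine was built with
```

Since the fine-tuned engine is slightly larger than the official one, it may be worth double-checking that the C++ side does not hardcode output buffer sizes taken from the official model, and instead queries the engine's actual binding dimensions at load time.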