Bad_alloc when inferring by fine-tuned pose model #251

Open
wcycqjy opened this issue Sep 14, 2024 · 0 comments


Thank you for your work. I'm trying to run inference with the model in C++ on a Jetson Orin NX. Everything works when I use the official yolov8s-pose model, but a bad_alloc error appears when I use a pose model fine-tuned on my own dataset. Specifically, the error happens when my face enters the right side of the image. I didn't change the model structure, and the official model's engine file is 26.0 MB while the fine-tuned engine file is 26.1 MB.

For your information, I exported the model following the README: 1. use the Python API `model.export(format="onnx")`; 2. run `/usr/src/tensorrt/bin/trtexec`.
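For reference, the two export steps above can be sketched as follows. This is a minimal sketch assuming an ultralytics install; the checkpoint/output file names and the trtexec flags are assumptions, not taken from the original report:

```shell
# Step 1 (assumed checkpoint name): export the fine-tuned weights to ONNX
# via the ultralytics Python API, as described in the README.
python3 -c "from ultralytics import YOLO; YOLO('best.pt').export(format='onnx')"

# Step 2 (assumed flags): build a TensorRT engine from the ONNX file.
/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --saveEngine=best.engine
```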
