How do I set the batch size when exporting the engine model? #210
Comments
Can the batch value be set when converting the .pt model to ONNX? Also, can you run inference on video or a CSI video stream?
It seems the batch is set during the ONNX conversion. For a Jetson TX1/NX board, what batch size is appropriate? Video inference works: I tried the example and the author's code runs, but only with batch=1.
It depends on your GPU memory; with more memory you can set it higher, though I'm not entirely sure either. Which file did you use for video inference, and did you need to modify the code?
I used deepstream-app for inference; the config file is csrc/deepstream/deepstream_app_config.txt. I was just walking through the pipeline.
Batch>1 hasn't been implemented at all. DeepStream should support dynamic batching; could you explore it and submit a PR to the repository?
In export-det.py, if I change default=[1, 3, 640, 640] to default=[8, 3, 640, 640], does that give static batch-8 inference afterwards, i.e. inferring 8 images in one call? Is that right?
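Yes, that is how a static batch works: the first value of the input shape is the batch dimension, and the exported ONNX/engine then expects exactly that many images per call. A minimal sketch of the relevant argparse option (the option name and help text here are illustrative, not copied from the repo):

```python
import argparse

# Sketch of an export script's input-shape argument. Changing the default
# batch dimension from 1 to 8 yields a *static* batch-8 model: every
# inference call must then supply exactly 8 images.
parser = argparse.ArgumentParser()
parser.add_argument(
    '--input-shape',
    nargs='+',
    type=int,
    default=[8, 3, 640, 640],  # was [1, 3, 640, 640]; first value is the batch
    help='Model input shape: batch, channels, height, width')

args = parser.parse_args([])  # parse defaults only, for demonstration
print(args.input_shape)
```

If you have fewer than 8 frames available at a given moment, you would need to pad the batch, which is one reason dynamic batching is often preferred for multi-stream setups.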
Why not just use Triton and spin up multiple model instances? |
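For reference, running multiple instances in Triton is a matter of model configuration rather than re-exporting the engine. A sketch of the relevant `config.pbtxt` fragment, assuming the engine is served via Triton's TensorRT backend (the model name and count are illustrative):

```
# config.pbtxt (fragment) — run 4 copies of the model on GPU 0 so that
# several streams can be served concurrently even with a batch-1 engine.
platform: "tensorrt_plan"
max_batch_size: 1
instance_group [
  {
    count: 4
    kind: KIND_GPU
    gpus: [ 0 ]
  }
]
```

Note that multiple instances trade GPU memory for concurrency; it does not change the per-call batch size of the engine itself.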
When converting to ONNX and then to an engine, the default batch is 1 — how do I customize it? With batch=1, multi-stream inference is very slow.
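One common route (not specific to this repo) is to export the ONNX model with a dynamic batch axis and then build the engine with trtexec, supplying min/opt/max shapes. A sketch of assembling that trtexec invocation — the file names and the input tensor name "images" are assumptions; adjust them to your model:

```python
import shlex

# Build a trtexec command for a dynamic-batch engine. Assumes the ONNX
# model was exported with a dynamic batch axis (e.g. via torch.onnx.export's
# dynamic_axes) and that the network input is named "images".
min_batch, opt_batch, max_batch = 1, 4, 8

cmd = [
    'trtexec',
    '--onnx=yolov8s.onnx',
    f'--minShapes=images:{min_batch}x3x640x640',
    f'--optShapes=images:{opt_batch}x3x640x640',
    f'--maxShapes=images:{max_batch}x3x640x640',
    '--saveEngine=yolov8s_dynamic.engine',
]
print(shlex.join(cmd))
```

The resulting engine accepts any batch between min and max at runtime; the opt shape is the one TensorRT tunes its kernels for, so set it to your typical number of concurrent streams.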