Motivation
It would be amazing to support the deployment of MMSegmentation Mask2Former models. I'm aware of #2347, but it does not solve the issue for semantic segmentation.
Related resources
No response
Additional context
Even though I realize it is not supported, I tried to follow the steps to deploy a Mask2Former model, and the result feels like it is almost there. Using segmentation_tensorrt_static-512x512.py, I had to add onnx_config.opset_version=12 to be able to perform the einsum operation.
After that, everything compiles (with a lot of warnings) and produces valid output files. However, when I run inference with the SDK I get a mask that is out of place: much bigger than the actual object, with gaps between the pixels that actually make up the mask.
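For reference, the opset override can live directly in the deploy config rather than being passed ad hoc. A sketch assuming the usual mmdeploy config layout (the _base_ paths are an assumption and may differ between mmdeploy versions):

```python
# segmentation_tensorrt_static-512x512.py -- sketch; the _base_ paths
# below are assumptions and may differ in your mmdeploy checkout
_base_ = ['./segmentation_static.py', '../_base_/backends/tensorrt.py']

onnx_config = dict(
    input_shape=[512, 512],
    opset_version=12,  # opset >= 12 is needed for Mask2Former's einsum op
)
```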
Looking forward to getting this amazing feature so this model can run in production!
Thanks in advance
Doing further investigation on my own, I realized that this could be caused by a simpler problem in interpreting mmdeploy::Segmentor::Result: I was building the cv::Mat with the wrong type, assuming 8 bits per pixel when the mask actually uses 32 (one int32 class label per pixel).
Conclusion
To deploy an MMSegmentation Mask2Former model:
Set onnx_config.opset_version=12 (or higher).
Get your mask like this:
mmdeploy::Segmentor::Result result = segmentor->Apply(input);
cv::Mat mask(result->height, result->width, CV_32S, result->mask);
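For anyone hitting the same "stretched mask with gaps" symptom: it can be reproduced without the SDK at all. A minimal sketch (plain C++, no OpenCV or mmdeploy required; label_correct and label_buggy are made-up names for illustration) contrasting the correct 32-bit read with the buggy 8-bit read:

```cpp
#include <cstdint>

// Correct read: the mask buffer holds one int32 class label per pixel,
// which is why the cv::Mat above must use CV_32S.
int32_t label_correct(const int32_t* mask, int width, int row, int col) {
    return mask[row * width + col];
}

// Buggy read: treating the same buffer as 8-bit pixels (CV_8U-style).
// Each int32 label spans 4 bytes, so on a little-endian machine each
// label byte is followed by zero padding bytes -- producing the
// stretched, gappy mask described above.
uint8_t label_buggy(const int32_t* mask, int width, int row, int col) {
    const uint8_t* bytes = reinterpret_cast<const uint8_t*>(mask);
    return bytes[row * width + col];
}
```

Reading the buffer byte-by-byte makes the mask appear roughly four times too wide, with "on" pixels separated by zeros, which matches the symptom in the original report.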