Add mmflow to mmdeploy #1606
Conversation
centroid_lvl = grid.reshape(B * H * W, 1, 1, 2) / 2 ** i
coords_lvl = centroid_lvl + delta_lvl

corr = rewrite_bilinear_sample(corr, coords_lvl, self.mode,
If this rewriter just replaces bilinear_sample with rewrite_bilinear_sample, why not just create a function rewriter for bilinear_sample?
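The suggestion can be sketched with a toy registry. This is only an illustration of the rewriter pattern; the names below (register_rewriter, call) are invented for the sketch and are not mmdeploy's real API.

```python
# Toy sketch of a function-rewriter registry: register an export-friendly
# replacement for bilinear_sample once, instead of calling
# rewrite_bilinear_sample explicitly at every call site.
REWRITERS = {}

def register_rewriter(func_name):
    """Decorator that records a replacement for the named function."""
    def wrap(fn):
        REWRITERS[func_name] = fn
        return fn
    return wrap

def bilinear_sample(feat, grid):
    # stand-in for the original implementation
    return 'original'

@register_rewriter('bilinear_sample')
def bilinear_sample__export(feat, grid):
    # stand-in for the export-friendly rewrite
    return 'rewritten'

def call(func_name, *args):
    # during export, the registered rewrite takes precedence
    fn = REWRITERS.get(func_name, globals()[func_name])
    return fn(*args)
```

During export, call('bilinear_sample', feat, grid) dispatches to the rewrite, while ordinary execution keeps using the original function.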
Thanks, I will give it a try.
    return F.grid_sample(feat, grid, mode, padding_mode, align_corners)

def coords_grid(batch: int, xx: Tensor, yy: Tensor) -> Tensor:
You can just import the function instead of copying it.
        locations from grid
    """
    H, W = feat.shape[-2:]
    # if grid.shape[-1] != 2:
Why do we update bilinear_sample?
flow_result = flow_pred[-1]
# flow maps with the shape [H, W, 2]
flow_result = flow_result.permute(0, 2, 3, 1).squeeze(dim=0).clone().detach()
Does this mean we cannot support multi-batch?
I think we can just output flow_result and leave the postprocessing to mmflow_model.py.
Output flow_result.permute(0, 2, 3, 1). Leave everything else to End2EndModel.forward in mmflow_model.py.
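The suggested split can be sketched with NumPy; shapes and names below are illustrative (the real code uses torch tensors):

```python
import numpy as np

# Exported graph output: batched flow in NCHW layout, (B, 2, H, W).
flow_pred = np.zeros((2, 2, 368, 768), dtype=np.float32)

# Keep only the layout change inside the exported graph: NCHW -> NHWC.
flow_result = np.transpose(flow_pred, (0, 2, 3, 1))  # (B, H, W, 2)

# Per-sample postprocessing (batch split, visualization) then lives in
# End2EndModel.forward in mmflow_model.py, preserving multi-batch support.
per_sample = [flow_result[i] for i in range(flow_result.shape[0])]
```

This avoids the squeeze(dim=0) inside the exported graph, which is what pinned the model to batch size 1.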
Most TracerWarnings can be ignored if the exporter can produce the model you want.
I have removed these unnecessary rewrites and verified the result of the ONNX model. @grimoire @lvhan028 @ouonline @tehkillerbee
@pedroHuang123 Hi, any update on this PR?
Hello, are there any problems with this PR?
Some CI checks failed, could you fix them?
For the lint errors, please install pre-commit on your local host, then run it to check the errors.
Why is the status still pending?
I have fixed these CI checks, please review the code. Thanks.
        show_result (bool): Whether to show result in windows, defaults
            to `False`.
    """
    if len(result.shape) == 4:
Is it possible that result is a list, as mentioned in the docstring?
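One way to handle the ambiguity the reviewer points out; normalize_result is a hypothetical helper for illustration, not code from the PR:

```python
import numpy as np

def normalize_result(result):
    # Hypothetical guard: accept either a list of flow maps or a single
    # batched array, since the docstring suggests both can occur.
    if isinstance(result, (list, tuple)):
        result = result[0]
    if result.ndim == 4:        # batched (B, H, W, 2): take the first sample
        result = result[0]
    return result

# Works for both a list-wrapped batched array and a bare flow map.
flow = normalize_result([np.zeros((1, 4, 4, 2), dtype=np.float32)])
```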
@pedroHuang123 Hi, could you give us the link to the mmflow model config you tested in this PR?
So far I have only completed the PyTorch-to-ONNX export; the SDK is not included. I also find that it takes a long time (about one hour) to export the mmflow RAFT model to ONNX. I hope mmdeploy can help analyze the cause. Thanks.
Please provide a script showing how to convert the model.
import os

import onnx
from mmdeploy.apis import torch2onnx

device = 'cuda'
# 1. convert model to onnx
torch2onnx(imgs, work_dir, save_file, deploy_cfg, model_cfg, model_checkpoint, device)
# 2. check onnx model
onnx_model = onnx.load(os.path.join(work_dir, save_file))
@@ -0,0 +1,5 @@
_base_ = [
The structure of a model config is configs/{codebase}/{deploy_config_filename}. There should be no algorithm name in the directory path; please change it.
I am sorry, what do you mean?
Rename the file from configs/mmflow/raft/opticalflow_onnxruntime_static.py to configs/mmflow/opticalflow_onnxruntime_static.py.
OK, I have renamed these configs.
data_list = []
if isinstance(imgs[0], np.ndarray) and isinstance(imgs[1], np.ndarray):
    # directly add img and valid mask
    data = dict(img1=imgs[0], img2=imgs[1])
Does the model require two images for inference? What if imgs has only one image or np.ndarray?
Yes, two images are required for inference.
else:
    # add information into dict
    data = dict(
        img_info=dict(filename1=imgs[0], filename2=imgs[1]),
The torch2onnx script fails here:
python tools/torch2onnx.py \
configs/mmflow/raft/opticalflow_onnxruntime_static.py \
../mmflow/configs/raft/raft_8x2_100k_mixed_368x768.py \
https://download.openmmlab.com/mmflow/raft/raft_8x2_100k_mixed_368x768.pth \
../mmflow/demo/frame_0001.png --work-dir ./workdir/raft --device cpu
Traceback (most recent call last):
File "tools/torch2onnx.py", line 85, in <module>
main()
File "tools/torch2onnx.py", line 47, in main
torch2onnx(
File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
return self.call_function(func_name_, *args, **kwargs)
File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
return self.call_function_local(func_name, *args, **kwargs)
File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
return pipe_caller(*args, **kwargs)
File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/root/workspace/mmdeploy/mmdeploy/apis/pytorch2onnx.py", line 65, in torch2onnx
data, model_inputs = task_processor.create_input(img, input_shape)
File "/root/workspace/mmdeploy/mmdeploy/codebase/mmflow/deploy/flow.py", line 140, in create_input
img_info=dict(filename1=imgs[0], filename2=imgs[1]),
IndexError: list index out of range
I have modified torch2onnx.py; you can run:
python tools/torch2onnx.py configs/mmflow/opticalflow_onnxruntime_static.py ../mmflow/configs/raft/raft_8x2_100k_mixed_368x768.py https://download.openmmlab.com/mmflow/raft/raft_8x2_100k_mixed_368x768.pth D:\KdWork\OpenmmLab\mmflow\demo\frame_0001.png D:\KdWork\OpenmmLab\mmflow\demo\frame_0002.png --work-dir ./workdir/raft --device cuda
    Returns:
        list: The predictions of model inference.
    """
    pass
This function should be implemented to do inference for both PyTorch models and backend models.
OK.
Is mmflow still not added to mmdeploy?
Motivation

I want to add mmflow to mmdeploy. So far I have implemented the conversion of the mmflow RAFT model to ONNX.

Modification

Add mmflow to mmdeploy/codebase and mmdeploy/config.