
Add mmflow to mmdeploy #1606

Open
pedroHuang123 wants to merge 5 commits into master from hhf/mmflow_contributing_code
Conversation

pedroHuang123

Motivation

I want to add mmflow to mmdeploy. So far I have implemented the conversion of the mmflow RAFT model to ONNX.

Modification

Add mmflow to mmdeploy/codebase and mmdeploy/configs.

@CLAassistant

CLAassistant commented Jan 4, 2023

CLA assistant check
All committers have signed the CLA.

centroid_lvl = grid.reshape(B * H * W, 1, 1, 2) / 2 ** i
coords_lvl = centroid_lvl + delta_lvl

corr = rewrite_bilinear_sample(corr, coords_lvl, self.mode,
Member

If this rewriter just replaces bilinear_sample with rewrite_bilinear_sample, why not just create a function rewriter for bilinear_sample?

Author

Thanks, I will give it a try.
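For reference, a function rewriter in mmdeploy is registered roughly like this. This is a minimal sketch assuming the mmdeploy 0.x rewriter API; the mmflow module path in func_name and the exact bilinear_sample signature are assumptions, not taken from this PR.

import torch.nn.functional as F

from mmdeploy.core import FUNCTION_REWRITER


@FUNCTION_REWRITER.register_rewriter(
    func_name='mmflow.ops.corr_lookup.bilinear_sample')  # assumed module path
def bilinear_sample__default(ctx, feat, grid, mode='bilinear',
                             padding_mode='zeros', align_corners=False):
    """Export-friendly sampling: skip the shape checks that raise
    TracerWarnings and call grid_sample directly."""
    h, w = feat.shape[-2:]
    grid = grid.clone()
    # normalize pixel coordinates to [-1, 1] as grid_sample expects
    grid[..., 0] = 2.0 * grid[..., 0] / max(w - 1, 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / max(h - 1, 1) - 1.0
    return F.grid_sample(feat, grid, mode=mode, padding_mode=padding_mode,
                         align_corners=align_corners)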

return F.grid_sample(feat, grid, mode, padding_mode, align_corners)


def coords_grid(batch: int, xx: Tensor, yy: Tensor) -> Tensor:
Member

You can just import the function instead of copying it.

locations from grid
"""
H, W = feat.shape[-2:]
# if grid.shape[-1] != 2:
Member

Why do we update bilinear_sample?

Author

[screenshot of the warnings]
I was just trying to avoid these warnings.


flow_result = flow_pred[-1]
# flow maps with the shape [H, W, 2]
flow_result = flow_result.permute(0, 2, 3, 1).squeeze(dim=0).clone().detach()
Member

Does this mean we cannot support multi-batch?
I think we can just output flow_result and leave the post-processing to mmflow_model.py.

Author

This warning will cause the conversion to fail:
[screenshot of the warning]

Member

Output flow_result.permute(0, 2, 3, 1) and leave everything else to End2EndModel.forward in mmflow_model.py.
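A minimal sketch of that split (the function names below are illustrative only, not the PR's actual code): the exported graph returns the batched flow map, and per-sample squeeze/detach happens on the host side.

import torch

# Inside the exported graph: return the batched flow map, shape [N, H, W, 2];
# no squeeze/detach here, so multi-batch stays possible.
def export_output(flow_pred: list) -> torch.Tensor:
    return flow_pred[-1].permute(0, 2, 3, 1)

# Host-side post-processing, e.g. in End2EndModel.forward: split per sample.
def postprocess(batch_flow: torch.Tensor) -> list:
    return [flow.detach().cpu().numpy() for flow in batch_flow]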

@grimoire
Member

grimoire commented Jan 6, 2023

Most TracerWarnings can be ignored if the exporter produces the model you want.
Adding a rewriter just to silence a warning might bring trouble: the rewriter could break if mmflow updates its code. DO NOT add a rewriter unless it is necessary.

@pedroHuang123
Author

I have removed the unnecessary rewriters and verified the result of the ONNX model. @grimoire @lvhan028 @ouonline @tehkillerbee

@RunningLeon
Collaborator

@pedroHuang123 Hi, any update on this PR?

@pedroHuang123
Author

> @pedroHuang123 Hi, any update on this PR?

Hello, are there any problems with this PR?

@RunningLeon
Collaborator

RunningLeon commented Feb 1, 2023

> @pedroHuang123 Hi, any update on this PR?
>
> Hello, are there any problems with this PR?

Some CI checks failed, could you fix them?
BTW, please sign the CLA.

@pedroHuang123
Author

> Some CI checks failed, could you fix them? BTW, please sign the CLA.

I have signed the CLA, but I do not know how to fix these CI checks.
[screenshot of the failing CI checks]

@lvhan028
Collaborator

lvhan028 commented Feb 3, 2023

For the lint errors, please install pre-commit on your local host:

pip install -U pre-commit
cd mmdeploy
pre-commit install

Then run the following command to check for errors:

pre-commit run --all-files

@pedroHuang123
Author

> CLA assistant check
> Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
> DESKTOP-UJGOP82\kandao seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you have already a GitHub account, please add the email address used for this commit to your account.
> You have signed the CLA already but the status is still pending? Let us recheck it.

Why is the status still pending?

@pedroHuang123 pedroHuang123 force-pushed the hhf/mmflow_contributing_code branch 3 times, most recently from c2040b2 to b81da4d on February 3, 2023 16:12
@pedroHuang123
Author

> Some CI checks failed, could you fix them? BTW, please sign the CLA.

I have fixed the CI checks, please review the code. Thanks.

show_result (bool): Whether to show result in windows, defaults
to `False`.
"""
if len(result.shape) == 4:
Collaborator

Is it possible that result is a list, as mentioned in the docstring?
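A small sketch of handling both forms, illustrative only (the helper name is hypothetical, not part of this PR):

import numpy as np

def _as_single_flow(result):
    # accept either a list of per-sample flow maps or a single array
    if isinstance(result, (list, tuple)):
        result = result[0]
    result = np.asarray(result)
    if result.ndim == 4:  # batched [N, H, W, 2] -> take the first sample
        result = result[0]
    return result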

@RunningLeon
Collaborator

@pedroHuang123 Hi, could you give us a link to the mmflow model config you tested in this PR?

@pedroHuang123
Author

pedroHuang123 commented Feb 6, 2023

@pedroHuang123
Author

So far I have only completed the PyTorch-to-ONNX export; the SDK is not included. I also find that it takes a long time (about one hour) to export the mmflow RAFT model to ONNX. I hope the mmdeploy team can help analyze the cause. Thanks.

@grimoire
Member

grimoire commented Feb 7, 2023

Please provide a script showing how you convert the model.

@pedroHuang123
Author

pedroHuang123 commented Feb 7, 2023

> Please provide a script showing how you convert the model.

from mmdeploy.apis import torch2onnx
import onnx
import os

imgs = ['mmflow/demo/frame_0001.png', 'mmflow/demo/frame_0002.png']
work_dir = 'work_dir/onnx/raft'
save_file = 'raft_mixed.onnx'
deploy_cfg = 'mmdeploy/configs/mmflow/raft/opticalflow_onnxruntime_static.py'
model_cfg = r'D:\KdWork\OpenmmLab\mmflow\configs\raft\raft_8x2_100k_mixed_368x768.py'
model_checkpoint = 'mmflow/checkpoints/raft_8x2_100k_mixed_368x768.pth'

device = 'cuda'

# 1. convert model to onnx
torch2onnx(imgs, work_dir, save_file, deploy_cfg, model_cfg,
           model_checkpoint, device)

# 2. check onnx model
onnx_model = onnx.load(os.path.join(work_dir, save_file))
try:
    onnx.checker.check_model(onnx_model)
except Exception:
    print("Model Incorrect")
else:
    print("Model correct")
print("over")

@@ -0,0 +1,5 @@
_base_ = [
Collaborator
@RunningLeon Feb 8, 2023

The deploy config path structure is configs/{codebase}/{deploy_config_filename}. There should be no algorithm name in the directory path; please change it.

Author

> The deploy config path structure is configs/{codebase}/{deploy_config_filename}. There should be no algorithm name in the directory path; please change it.

I am sorry, what do you mean?

Collaborator

Rename the file from configs/mmflow/raft/opticalflow_onnxruntime_static.py to configs/mmflow/opticalflow_onnxruntime_static.py.

Author

> Rename the file from configs/mmflow/raft/opticalflow_onnxruntime_static.py to configs/mmflow/opticalflow_onnxruntime_static.py.

OK, I have renamed these configs.
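For reference, such a deploy config is usually only a few lines composed from _base_ files. A hypothetical sketch follows; the codebase_config type and task strings for mmflow are assumptions, not taken from this PR.

# configs/mmflow/opticalflow_onnxruntime_static.py (hypothetical sketch)
_base_ = ['../_base_/onnx_config.py', '../_base_/backends/onnxruntime.py']

onnx_config = dict(input_shape=None)
codebase_config = dict(type='mmflow', task='OpticalFlow')  # assumed names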

data_list = []
if isinstance(imgs[0], np.ndarray) and isinstance(imgs[1], np.ndarray):
    # directly add img and valid mask
    data = dict(img1=imgs[0], img2=imgs[1])
Collaborator

Does the model require two images for inference? What if imgs has only one image or one np.ndarray?

Author
@pedroHuang123 Feb 8, 2023

> Does the model require two images for inference? What if imgs has only one image or one np.ndarray?

Yes, two images are required for inference.
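One way to make the two-image requirement explicit in create_input is a small guard like the following. This is a sketch, not the PR's code, and the helper name is hypothetical.

def _check_flow_inputs(imgs):
    # optical flow estimation consumes an image pair (frame t and frame t+1)
    if not isinstance(imgs, (list, tuple)) or len(imgs) != 2:
        raise ValueError('optical flow models require exactly two input '
                         f'images, but got {imgs!r}')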

else:
    # add information into dict
    data = dict(
        img_info=dict(filename1=imgs[0], filename2=imgs[1]),
Collaborator

The torch2onnx script fails here:

python tools/torch2onnx.py \
configs/mmflow/raft/opticalflow_onnxruntime_static.py \
../mmflow/configs/raft/raft_8x2_100k_mixed_368x768.py \
https://download.openmmlab.com/mmflow/raft/raft_8x2_100k_mixed_368x768.pth  \
../mmflow/demo/frame_0001.png --work-dir ./workdir/raft --device cpu
Traceback (most recent call last):
  File "tools/torch2onnx.py", line 85, in <module>
    main()
  File "tools/torch2onnx.py", line 47, in main
    torch2onnx(
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/pytorch2onnx.py", line 65, in torch2onnx
    data, model_inputs = task_processor.create_input(img, input_shape)
  File "/root/workspace/mmdeploy/mmdeploy/codebase/mmflow/deploy/flow.py", line 140, in create_input
    img_info=dict(filename1=imgs[0], filename2=imgs[1]),
IndexError: list index out of range

Author

> The torch2onnx script fails here:
> ...
> IndexError: list index out of range
I have modified tools/torch2onnx.py; you can run:
python tools/torch2onnx.py configs/mmflow/opticalflow_onnxruntime_static.py ../mmflow/configs/raft/raft_8x2_100k_mixed_368x768.py https://download.openmmlab.com/mmflow/raft/raft_8x2_100k_mixed_368x768.pth D:\KdWork\OpenmmLab\mmflow\demo\frame_0001.png D:\KdWork\OpenmmLab\mmflow\demo\frame_0002.png --work-dir ./workdir/raft --device cuda
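One plausible way to let tools/torch2onnx.py take an image pair is to make the positional img argument accept multiple paths. This is a sketch; whether it matches the PR's actual change is an assumption.

import argparse

parser = argparse.ArgumentParser(description='Convert a model to ONNX.')
parser.add_argument('deploy_cfg', help='deploy config path')
parser.add_argument('model_cfg', help='model config path')
parser.add_argument('checkpoint', help='model checkpoint path or URL')
# accept one or more images so optical flow tasks can pass two frames
parser.add_argument('img', nargs='+', help='image path(s) used for conversion')
args = parser.parse_args()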

Returns:
list: The predictions of model inference.
"""
pass
Collaborator

This function should be implemented to do inference for both PyTorch models and backend models.

Author

> This function should be implemented to do inference for both PyTorch models and backend models.

OK.
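A minimal sketch of such an inference helper, assuming the model callable accepts the unpacked input dict and returns a batched flow tensor (illustrative only, not the PR's code):

import torch

def run_inference(model, model_inputs: dict) -> list:
    # works for both the PyTorch model and a backend wrapper as long as
    # both are callable with the same keyword inputs
    with torch.no_grad():
        flow = model(**model_inputs)
    return list(flow) if isinstance(flow, (list, tuple)) else [flow]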

@Swayzzu

Swayzzu commented Sep 15, 2023

Is mmflow still not added to mmdeploy?
