diff --git a/docs/en/02-how-to-run/prebuilt_package_windows.md b/docs/en/02-how-to-run/prebuilt_package_windows.md index 77057caf05..08c5d14122 100644 --- a/docs/en/02-how-to-run/prebuilt_package_windows.md +++ b/docs/en/02-how-to-run/prebuilt_package_windows.md @@ -21,7 +21,7 @@ ______________________________________________________________________ -This tutorial takes `mmdeploy-1.2.0-windows-amd64.zip` and `mmdeploy-1.2.0-windows-amd64-cuda11.3.zip` as examples to show how to use the prebuilt packages. The former support onnxruntime cpu inference, the latter support onnxruntime-gpu and tensorrt inference. +This tutorial takes `mmdeploy-1.3.0-windows-amd64.zip` and `mmdeploy-1.3.0-windows-amd64-cuda11.8.zip` as examples to show how to use the prebuilt packages. The former supports onnxruntime cpu inference, while the latter supports onnxruntime-gpu and tensorrt inference. The directory structure of the prebuilt package is as follows, where the `dist` folder is about model converter, and the `sdk` folder is related to model inference. @@ -81,8 +81,8 @@ In order to use `ONNX Runtime` backend, you should also do the following steps. 5. Install `mmdeploy` (Model Converter) and `mmdeploy_runtime` (SDK Python API). ```bash - pip install mmdeploy==1.2.0 - pip install mmdeploy-runtime==1.2.0 + pip install mmdeploy==1.3.0 + pip install mmdeploy-runtime==1.3.0 ``` :point_right: If you have installed it before, please uninstall it first. @@ -100,7 +100,7 @@ In order to use `ONNX Runtime` backend, you should also do the following steps. ![sys-path](https://user-images.githubusercontent.com/16019484/181463801-1d7814a8-b256-46e9-86f2-c08de0bc150b.png) :exclamation: Restart powershell to make the environment variables setting take effect. You can check whether the settings are in effect by `echo $env:PATH`. -8. Download SDK C/cpp Library mmdeploy-1.2.0-windows-amd64.zip +8. 
Download SDK C/cpp Library mmdeploy-1.3.0-windows-amd64.zip ### TensorRT @@ -109,17 +109,17 @@ In order to use `TensorRT` backend, you should also do the following steps. 5. Install `mmdeploy` (Model Converter) and `mmdeploy_runtime` (SDK Python API). ```bash - pip install mmdeploy==1.2.0 - pip install mmdeploy-runtime-gpu==1.2.0 + pip install mmdeploy==1.3.0 + pip install mmdeploy-runtime-gpu==1.3.0 ``` :point_right: If you have installed it before, please uninstall it first. 6. Install TensorRT related package and set environment variables - - CUDA Toolkit 11.1 - - TensorRT 8.2.3.0 - - cuDNN 8.2.1.0 + - CUDA Toolkit 11.8 + - TensorRT 8.6.1.6 + - cuDNN 8.6.0 Add the runtime libraries of TensorRT and cuDNN to the `PATH`. You can refer to the path setting of onnxruntime. Don't forget to install python package of TensorRT. @@ -129,7 +129,7 @@ In order to use `TensorRT` backend, you should also do the following steps. 7. Install pycuda by `pip install pycuda` -8. Download SDK C/cpp Library mmdeploy-1.2.0-windows-amd64-cuda11.3.zip +8. Download SDK C/cpp Library mmdeploy-1.3.0-windows-amd64-cuda11.8.zip ## Model Convert @@ -141,7 +141,7 @@ After preparation work, the structure of the current working directory should be ``` .. -|-- mmdeploy-1.2.0-windows-amd64 +|-- mmdeploy-1.3.0-windows-amd64 |-- mmpretrain |-- mmdeploy `-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth @@ -189,7 +189,7 @@ After installation of mmdeploy-tensorrt prebuilt package, the structure of the c ``` .. -|-- mmdeploy-1.2.0-windows-amd64-cuda11.3 +|-- mmdeploy-1.3.0-windows-amd64-cuda11.8 |-- mmpretrain |-- mmdeploy `-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth @@ -252,8 +252,8 @@ The structure of current working directory: ``` . 
-|-- mmdeploy-1.2.0-windows-amd64 -|-- mmdeploy-1.2.0-windows-amd64-cuda11.3 +|-- mmdeploy-1.3.0-windows-amd64 +|-- mmdeploy-1.3.0-windows-amd64-cuda11.8 |-- mmpretrain |-- mmdeploy |-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth @@ -324,7 +324,7 @@ The following describes how to use the SDK's C API for inference It is recommended to use `CMD` here. - Under `mmdeploy-1.2.0-windows-amd64\\example\\cpp\\build\\Release` directory: + Under `mmdeploy-1.3.0-windows-amd64\\example\\cpp\\build\\Release` directory: ``` .\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmpretrain\demo\demo.JPEG @@ -344,7 +344,7 @@ The following describes how to use the SDK's C API for inference It is recommended to use `CMD` here. - Under `mmdeploy-1.2.0-windows-amd64-cuda11.3\\example\\cpp\\build\\Release` directory + Under `mmdeploy-1.3.0-windows-amd64-cuda11.8\\example\\cpp\\build\\Release` directory ``` .\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmpretrain\demo\demo.JPEG diff --git a/docs/en/get_started.md b/docs/en/get_started.md index 7bd86b6d31..fd806ae45a 100644 --- a/docs/en/get_started.md +++ b/docs/en/get_started.md @@ -230,9 +230,9 @@ result = inference_model( You can directly run MMDeploy demo programs in the precompiled package to get inference results. 
```shell -wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.3.0/mmdeploy-1.3.0-linux-x86_64-cuda11.3.tar.gz -tar xf mmdeploy-1.3.0-linux-x86_64-cuda11.3 -cd mmdeploy-1.3.0-linux-x86_64-cuda11.3 +wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.3.0/mmdeploy-1.3.0-linux-x86_64-cuda11.8.tar.gz +tar xf mmdeploy-1.3.0-linux-x86_64-cuda11.8.tar.gz +cd mmdeploy-1.3.0-linux-x86_64-cuda11.8 # run python demo python example/python/object_detection.py cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg # run C/C++ demo diff --git a/docs/zh_cn/02-how-to-run/prebuilt_package_windows.md b/docs/zh_cn/02-how-to-run/prebuilt_package_windows.md index 1abf99d0b6..074a0f468b 100644 --- a/docs/zh_cn/02-how-to-run/prebuilt_package_windows.md +++ b/docs/zh_cn/02-how-to-run/prebuilt_package_windows.md @@ -23,7 +23,7 @@ ______________________________________________________________________ 目前，`MMDeploy`在`Windows`平台下提供`cpu`以及`cuda`两种Device的预编译包，其中`cpu`版支持使用onnxruntime cpu进行推理，`cuda`版支持使用onnxruntime-gpu以及tensorrt进行推理，可以从[Releases](https://github.com/open-mmlab/mmdeploy/releases)获取。。 -本篇教程以`mmdeploy-1.2.0-windows-amd64.zip`和`mmdeploy-1.2.0-windows-amd64-cuda11.3.zip`为例，展示预编译包的使用方法。 +本篇教程以`mmdeploy-1.3.0-windows-amd64.zip`和`mmdeploy-1.3.0-windows-amd64-cuda11.8.zip`为例，展示预编译包的使用方法。 为了方便使用者快速上手，本教程以分类模型(mmpretrain)为例，展示两种预编译包的使用方法。 @@ -89,8 +89,8 @@ ______________________________________________________________________ 5. 安装`mmdeploy`(模型转换)以及`mmdeploy_runtime`(模型推理Python API)的预编译包 ```bash - pip install mmdeploy==1.2.0 - pip install mmdeploy-runtime==1.2.0 + pip install mmdeploy==1.3.0 + pip install mmdeploy-runtime==1.3.0 ``` :point_right: 如果之前安装过，需要先卸载后再安装。 @@ -108,7 +108,7 @@ ______________________________________________________________________ ![sys-path](https://user-images.githubusercontent.com/16019484/181463801-1d7814a8-b256-46e9-86f2-c08de0bc150b.png) :exclamation: 重启powershell让环境变量生效，可以通过 echo $env:PATH 来检查是否设置成功。 -8. 
下载 SDK C/cpp Library mmdeploy-1.2.0-windows-amd64.zip +8. 下载 SDK C/cpp Library mmdeploy-1.3.0-windows-amd64.zip ### TensorRT @@ -117,8 +117,8 @@ ______________________________________________________________________ 5. 安装`mmdeploy`(模型转换)以及`mmdeploy_runtime`(模型推理Python API)的预编译包 ```bash - pip install mmdeploy==1.2.0 - pip install mmdeploy-runtime-gpu==1.2.0 + pip install mmdeploy==1.3.0 + pip install mmdeploy-runtime-gpu==1.3.0 ``` :point_right: 如果之前安装过,需要先卸载后再安装 @@ -137,7 +137,7 @@ ______________________________________________________________________ 7. 安装pycuda `pip install pycuda` -8. 下载 SDK C/cpp Library mmdeploy-1.2.0-windows-amd64-cuda11.3.zip +8. 下载 SDK C/cpp Library mmdeploy-1.3.0-windows-amd64-cuda11.8.zip ## 模型转换 @@ -149,7 +149,7 @@ ______________________________________________________________________ ``` .. -|-- mmdeploy-1.2.0-windows-amd64 +|-- mmdeploy-1.3.0-windows-amd64 |-- mmpretrain |-- mmdeploy `-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth @@ -197,7 +197,7 @@ export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint, device=device) ``` .. -|-- mmdeploy-1.2.0-windows-amd64-cuda11.3 +|-- mmdeploy-1.3.0-windows-amd64-cuda11.8 |-- mmpretrain |-- mmdeploy `-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth @@ -260,8 +260,8 @@ export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint, device=device) ``` . 
-|-- mmdeploy-1.2.0-windows-amd64 -|-- mmdeploy-1.2.0-windows-amd64-cuda11.3 +|-- mmdeploy-1.3.0-windows-amd64 +|-- mmdeploy-1.3.0-windows-amd64-cuda11.8 |-- mmpretrain |-- mmdeploy |-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth @@ -340,7 +340,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet 这里建议使用cmd,这样如果exe运行时如果找不到相关的dll的话会有弹窗 - 在mmdeploy-1.2.0-windows-amd64\\example\\cpp\\build\\Release目录下: + 在mmdeploy-1.3.0-windows-amd64\\example\\cpp\\build\\Release目录下: ``` .\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmpretrain\demo\demo.JPEG @@ -360,7 +360,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet 这里建议使用cmd,这样如果exe运行时如果找不到相关的dll的话会有弹窗 - 在mmdeploy-1.2.0-windows-amd64-cuda11.3\\example\\cpp\\build\\Release目录下: + 在mmdeploy-1.3.0-windows-amd64-cuda11.8\\example\\cpp\\build\\Release目录下: ``` .\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmpretrain\demo\demo.JPEG diff --git a/docs/zh_cn/get_started.md b/docs/zh_cn/get_started.md index 7097d5b434..05ef2b0c27 100644 --- a/docs/zh_cn/get_started.md +++ b/docs/zh_cn/get_started.md @@ -113,14 +113,14 @@ mim install "mmcv>=2.0.0rc2" ```shell # 1. 安装 MMDeploy 模型转换工具(含trt/ort自定义算子) -pip install mmdeploy==1.2.0 +pip install mmdeploy==1.3.0 # 2. 安装 MMDeploy SDK推理工具 # 根据是否需要GPU推理可任选其一进行下载安装 # 2.1 支持 onnxruntime 推理 -pip install mmdeploy-runtime==1.2.0 +pip install mmdeploy-runtime==1.3.0 # 2.2 支持 onnxruntime-gpu tensorrt 推理 -pip install mmdeploy-runtime-gpu==1.2.0 +pip install mmdeploy-runtime-gpu==1.3.0 # 3. 
安装推理引擎 # 3.1 安装推理引擎 TensorRT @@ -223,10 +223,10 @@ result = inference_model( 你可以直接运行预编译包中的 demo 程序，输入 SDK Model 和图像，进行推理，并查看推理结果。 ```shell -wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.2.0/mmdeploy-1.2.0-linux-x86_64-cuda11.3.tar.gz -tar xf mmdeploy-1.2.0-linux-x86_64-cuda11.3 +wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.3.0/mmdeploy-1.3.0-linux-x86_64-cuda11.8.tar.gz +tar xf mmdeploy-1.3.0-linux-x86_64-cuda11.8.tar.gz -cd mmdeploy-1.2.0-linux-x86_64-cuda11.3 +cd mmdeploy-1.3.0-linux-x86_64-cuda11.8 # 运行 python demo python example/python/object_detection.py cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg # 运行 C/C++ demo diff --git a/mmdeploy/version.py b/mmdeploy/version.py index a7c77d5ee5..bdf914fb98 100644 --- a/mmdeploy/version.py +++ b/mmdeploy/version.py @@ -1,7 +1,7 @@ # Copyright (c) OpenMMLab. All rights reserved. from typing import Tuple -__version__ = '1.2.0' +__version__ = '1.3.0' short_version = __version__ diff --git a/tools/package_tools/packaging/mmdeploy_runtime/version.py b/tools/package_tools/packaging/mmdeploy_runtime/version.py index a3c6ea2b01..535755a831 100644 --- a/tools/package_tools/packaging/mmdeploy_runtime/version.py +++ b/tools/package_tools/packaging/mmdeploy_runtime/version.py @@ -1,2 +1,2 @@ # Copyright (c) OpenMMLab. All rights reserved. -__version__ = '1.2.0' +__version__ = '1.3.0'
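The two `version.py` bumps at the end of this patch keep `mmdeploy` and `mmdeploy_runtime` reporting the same version string. OpenMMLab packages typically also derive a comparable tuple from that string for release-ordering checks; a minimal sketch of such parsing (the helper name and the `rc` handling here are illustrative assumptions, not necessarily this repo's exact code):

```python
def parse_version_info(version_str):
    """Split a version string like '1.3.0' or '1.3.0rc1' into a
    comparable tuple, e.g. (1, 3, 0) or (1, 3, 0, 'rc1').

    Illustrative helper: the real repo's implementation may differ.
    """
    version_info = []
    for part in version_str.split('.'):
        if part.isdigit():
            version_info.append(int(part))
        elif 'rc' in part:
            # '0rc1' -> patch 0 plus release-candidate tag 'rc1'
            patch, rc = part.split('rc')
            version_info.append(int(patch))
            version_info.append(f'rc{rc}')
    return tuple(version_info)

# The bump in this patch: 1.2.0 -> 1.3.0 compares as a strict increase.
assert parse_version_info('1.3.0') == (1, 3, 0)
assert parse_version_info('1.3.0') > parse_version_info('1.2.0')
```

Keeping the tuple form means downstream code can gate features with ordinary tuple comparisons such as `>= (1, 3, 0)` instead of string comparisons, which would mis-order versions like '1.10.0' and '1.9.0'.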