bump version
RunningLeon committed Sep 21, 2023
1 parent 3813c85 commit bf0bf0f
Showing 6 changed files with 40 additions and 40 deletions.
32 changes: 16 additions & 16 deletions docs/en/02-how-to-run/prebuilt_package_windows.md
@@ -21,7 +21,7 @@

______________________________________________________________________

-This tutorial takes `mmdeploy-1.2.0-windows-amd64.zip` and `mmdeploy-1.2.0-windows-amd64-cuda11.3.zip` as examples to show how to use the prebuilt packages. The former supports onnxruntime cpu inference; the latter supports onnxruntime-gpu and tensorrt inference.
+This tutorial takes `mmdeploy-1.3.0-windows-amd64.zip` and `mmdeploy-1.3.0-windows-amd64-cuda11.8.zip` as examples to show how to use the prebuilt packages. The former supports onnxruntime cpu inference; the latter supports onnxruntime-gpu and tensorrt inference.

The directory structure of the prebuilt package is as follows: the `dist` folder contains the model converter, and the `sdk` folder is related to model inference.

@@ -81,8 +81,8 @@ In order to use `ONNX Runtime` backend, you should also do the following steps.
5. Install `mmdeploy` (Model Converter) and `mmdeploy_runtime` (SDK Python API).
```bash
-pip install mmdeploy==1.2.0
-pip install mmdeploy-runtime==1.2.0
+pip install mmdeploy==1.3.0
+pip install mmdeploy-runtime==1.3.0
```

:point_right: If you have installed them before, please uninstall them first.
@@ -100,7 +100,7 @@ In order to use `ONNX Runtime` backend, you should also do the following steps.
![sys-path](https://user-images.githubusercontent.com/16019484/181463801-1d7814a8-b256-46e9-86f2-c08de0bc150b.png)
:exclamation: Restart powershell to make the environment variable settings take effect. You can check whether the settings are in effect with `echo $env:PATH`.
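The step above adds the onnxruntime library directory to `PATH`. As a quick sanity check (an editor's sketch, not part of the original docs — the install directory below is an assumption), you can verify from Python that a directory is actually visible on `PATH`:

```python
import os

def dir_on_path(directory: str) -> bool:
    """Return True if `directory` appears on the current process PATH."""
    wanted = os.path.normcase(os.path.normpath(directory))
    entries = os.environ.get("PATH", "").split(os.pathsep)
    return any(os.path.normcase(os.path.normpath(e)) == wanted for e in entries if e)

# Hypothetical install location -- adjust to wherever you unpacked onnxruntime.
print(dir_on_path(r"C:\Software\onnxruntime-win-x64-1.8.1\lib"))
```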

-8. Download SDK C/cpp Library mmdeploy-1.2.0-windows-amd64.zip
+8. Download SDK C/cpp Library mmdeploy-1.3.0-windows-amd64.zip

### TensorRT

@@ -109,17 +109,17 @@ In order to use `TensorRT` backend, you should also do the following steps.
5. Install `mmdeploy` (Model Converter) and `mmdeploy_runtime` (SDK Python API).

```bash
-pip install mmdeploy==1.2.0
-pip install mmdeploy-runtime-gpu==1.2.0
+pip install mmdeploy==1.3.0
+pip install mmdeploy-runtime-gpu==1.3.0
```

:point_right: If you have installed them before, please uninstall them first.

6. Install TensorRT related package and set environment variables

-  - CUDA Toolkit 11.1
-  - TensorRT 8.2.3.0
-  - cuDNN 8.2.1.0
+  - CUDA Toolkit 11.8
+  - TensorRT 8.6.1.6
+  - cuDNN 8.6.0

Add the runtime libraries of TensorRT and cuDNN to the `PATH`. You can refer to the PATH setup for onnxruntime. Don't forget to install the TensorRT python package.
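Instead of editing the system `PATH`, the same effect can be achieved per-process at the top of a conversion script. This is an editor's sketch, not part of the original docs, and the install directories below are assumptions for a typical TensorRT/cuDNN layout:

```python
import os

def prepend_to_path(*dirs: str) -> None:
    """Put `dirs` in front of the process PATH so DLLs there are found first."""
    os.environ["PATH"] = os.pathsep.join(list(dirs) + [os.environ.get("PATH", "")])

# Hypothetical install locations -- adjust to your machine.
prepend_to_path(
    r"C:\Software\TensorRT-8.6.1.6\lib",
    r"C:\Software\cudnn-8.6.0\bin",
)
```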
@@ -129,7 +129,7 @@ In order to use `TensorRT` backend, you should also do the following steps.
7. Install pycuda by `pip install pycuda`
-8. Download SDK C/cpp Library mmdeploy-1.2.0-windows-amd64-cuda11.3.zip
+8. Download SDK C/cpp Library mmdeploy-1.3.0-windows-amd64-cuda11.8.zip
## Model Convert
@@ -141,7 +141,7 @@ After preparation work, the structure of the current working directory should be
```
..
-|-- mmdeploy-1.2.0-windows-amd64
+|-- mmdeploy-1.3.0-windows-amd64
|-- mmpretrain
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -189,7 +189,7 @@ After installation of mmdeploy-tensorrt prebuilt package, the structure of the current working directory
```
..
-|-- mmdeploy-1.2.0-windows-amd64-cuda11.3
+|-- mmdeploy-1.3.0-windows-amd64-cuda11.8
|-- mmpretrain
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -252,8 +252,8 @@ The structure of current working directory:
```
.
-|-- mmdeploy-1.2.0-windows-amd64
-|-- mmdeploy-1.2.0-windows-amd64-cuda11.3
+|-- mmdeploy-1.3.0-windows-amd64
+|-- mmdeploy-1.3.0-windows-amd64-cuda11.8
|-- mmpretrain
|-- mmdeploy
|-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -324,7 +324,7 @@ The following describes how to use the SDK's C API for inference
It is recommended to use `CMD` here.
-Under `mmdeploy-1.2.0-windows-amd64\\example\\cpp\\build\\Release` directory:
+Under `mmdeploy-1.3.0-windows-amd64\\example\\cpp\\build\\Release` directory:
```
.\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmpretrain\demo\demo.JPEG
@@ -344,7 +344,7 @@ The following describes how to use the SDK's C API for inference
It is recommended to use `CMD` here.
-Under `mmdeploy-1.2.0-windows-amd64-cuda11.3\\example\\cpp\\build\\Release` directory
+Under `mmdeploy-1.3.0-windows-amd64-cuda11.8\\example\\cpp\\build\\Release` directory
```
.\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmpretrain\demo\demo.JPEG
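Besides the C API exe shown in this file, the prebuilt wheels expose an SDK Python API. The sketch below assumes `mmdeploy_runtime`'s `Classifier` interface and the converted SDK model directory from the tutorial; treat the paths and the exact call signature as assumptions rather than something verified by this commit:

```python
def top1(results):
    """Pick the (label_id, score) pair with the highest score."""
    return max(results, key=lambda pair: pair[1])

def run_demo():
    # Hypothetical usage -- requires `pip install mmdeploy-runtime opencv-python`
    # plus the SDK model directory converted earlier; all paths are assumptions.
    import cv2
    from mmdeploy_runtime import Classifier

    img = cv2.imread(r"C:\workspace\mmpretrain\demo\demo.JPEG")
    classifier = Classifier(
        model_path=r"C:\workspace\work_dir\onnx\resnet",
        device_name="cpu",
        device_id=0,
    )
    results = classifier(img)  # list of (label_id, score) pairs
    print("top-1:", top1(results))
```

Call `run_demo()` only after the packages and model directory above are in place.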
6 changes: 3 additions & 3 deletions docs/en/get_started.md
@@ -230,9 +230,9 @@ result = inference_model(
You can directly run MMDeploy demo programs in the precompiled package to get inference results.

```shell
-wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.3.0/mmdeploy-1.3.0-linux-x86_64-cuda11.3.tar.gz
-tar xf mmdeploy-1.3.0-linux-x86_64-cuda11.3.tar.gz
-cd mmdeploy-1.3.0-linux-x86_64-cuda11.3
+wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.3.0/mmdeploy-1.3.0-linux-x86_64-cuda11.8.tar.gz
+tar xf mmdeploy-1.3.0-linux-x86_64-cuda11.8.tar.gz
+cd mmdeploy-1.3.0-linux-x86_64-cuda11.8
# run python demo
python example/python/object_detection.py cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg
# run C/C++ demo
26 changes: 13 additions & 13 deletions docs/zh_cn/02-how-to-run/prebuilt_package_windows.md
@@ -23,7 +23,7 @@ ______________________________________________________________________

Currently, `MMDeploy` provides prebuilt packages for two device types on the `Windows` platform, `cpu` and `cuda`. The `cpu` version supports onnxruntime cpu inference, and the `cuda` version supports onnxruntime-gpu and tensorrt inference. They can be obtained from [Releases](https://github.com/open-mmlab/mmdeploy/releases).

-This tutorial takes `mmdeploy-1.2.0-windows-amd64.zip` and `mmdeploy-1.2.0-windows-amd64-cuda11.3.zip` as examples to show how to use the prebuilt packages.
+This tutorial takes `mmdeploy-1.3.0-windows-amd64.zip` and `mmdeploy-1.3.0-windows-amd64-cuda11.8.zip` as examples to show how to use the prebuilt packages.

To help users get started quickly, this tutorial uses a classification model (mmpretrain) as an example to show how to use both kinds of prebuilt packages.

@@ -89,8 +89,8 @@ ______________________________________________________________________
5. Install the prebuilt packages of `mmdeploy` (Model Converter) and `mmdeploy_runtime` (SDK Python API).
```bash
-pip install mmdeploy==1.2.0
-pip install mmdeploy-runtime==1.2.0
+pip install mmdeploy==1.3.0
+pip install mmdeploy-runtime==1.3.0
```

:point_right: If you have installed them before, uninstall them first and then reinstall.
@@ -108,7 +108,7 @@ ______________________________________________________________________
![sys-path](https://user-images.githubusercontent.com/16019484/181463801-1d7814a8-b256-46e9-86f2-c08de0bc150b.png)
:exclamation: Restart powershell so that the environment variables take effect. You can check whether they are set with `echo $env:PATH`.

-8. Download the SDK C/cpp Library mmdeploy-1.2.0-windows-amd64.zip
+8. Download the SDK C/cpp Library mmdeploy-1.3.0-windows-amd64.zip

### TensorRT

@@ -117,8 +117,8 @@ ______________________________________________________________________
5. Install the prebuilt packages of `mmdeploy` (Model Converter) and `mmdeploy_runtime` (SDK Python API).

```bash
-pip install mmdeploy==1.2.0
-pip install mmdeploy-runtime-gpu==1.2.0
+pip install mmdeploy==1.3.0
+pip install mmdeploy-runtime-gpu==1.3.0
```

:point_right: If you have installed them before, uninstall them first and then reinstall.
Expand All @@ -137,7 +137,7 @@ ______________________________________________________________________

7. Install pycuda via `pip install pycuda`

-8. Download the SDK C/cpp Library mmdeploy-1.2.0-windows-amd64-cuda11.3.zip
+8. Download the SDK C/cpp Library mmdeploy-1.3.0-windows-amd64-cuda11.8.zip

## Model Conversion

@@ -149,7 +149,7 @@ ______________________________________________________________________

```
..
-|-- mmdeploy-1.2.0-windows-amd64
+|-- mmdeploy-1.3.0-windows-amd64
|-- mmpretrain
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -197,7 +197,7 @@ export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint, device=device)

```
..
-|-- mmdeploy-1.2.0-windows-amd64-cuda11.3
+|-- mmdeploy-1.3.0-windows-amd64-cuda11.8
|-- mmpretrain
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -260,8 +260,8 @@ export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint, device=device)

```
.
-|-- mmdeploy-1.2.0-windows-amd64
-|-- mmdeploy-1.2.0-windows-amd64-cuda11.3
+|-- mmdeploy-1.3.0-windows-amd64
+|-- mmdeploy-1.3.0-windows-amd64-cuda11.8
|-- mmpretrain
|-- mmdeploy
|-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -340,7 +340,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet

It is recommended to use cmd here; if the exe cannot find the required dlls at runtime, a pop-up window will appear.

-Under the mmdeploy-1.2.0-windows-amd64\\example\\cpp\\build\\Release directory:
+Under the mmdeploy-1.3.0-windows-amd64\\example\\cpp\\build\\Release directory:

```
.\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmpretrain\demo\demo.JPEG
@@ -360,7 +360,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet

It is recommended to use cmd here; if the exe cannot find the required dlls at runtime, a pop-up window will appear.

-Under the mmdeploy-1.2.0-windows-amd64-cuda11.3\\example\\cpp\\build\\Release directory:
+Under the mmdeploy-1.3.0-windows-amd64-cuda11.8\\example\\cpp\\build\\Release directory:

```
.\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmpretrain\demo\demo.JPEG
12 changes: 6 additions & 6 deletions docs/zh_cn/get_started.md
@@ -113,14 +113,14 @@ mim install "mmcv>=2.0.0rc2"

```shell
# 1. Install the MMDeploy model converter (with trt/ort custom ops)
-pip install mmdeploy==1.2.0
+pip install mmdeploy==1.3.0

# 2. Install the MMDeploy SDK inference tools
# Choose either one to download and install, depending on whether you need GPU inference
# 2.1 supports onnxruntime inference
-pip install mmdeploy-runtime==1.2.0
+pip install mmdeploy-runtime==1.3.0
# 2.2 supports onnxruntime-gpu and tensorrt inference
-pip install mmdeploy-runtime-gpu==1.2.0
+pip install mmdeploy-runtime-gpu==1.3.0

# 3. Install the inference engine
# 3.1 Install the inference engine TensorRT
@@ -223,10 +223,10 @@ result = inference_model(
You can directly run the demo program in the prebuilt package: feed it an SDK model and an image to run inference and view the result.

```shell
-wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.2.0/mmdeploy-1.2.0-linux-x86_64-cuda11.3.tar.gz
-tar xf mmdeploy-1.2.0-linux-x86_64-cuda11.3.tar.gz
+wget https://github.com/open-mmlab/mmdeploy/releases/download/v1.3.0/mmdeploy-1.3.0-linux-x86_64-cuda11.8.tar.gz
+tar xf mmdeploy-1.3.0-linux-x86_64-cuda11.8.tar.gz

-cd mmdeploy-1.2.0-linux-x86_64-cuda11.3
+cd mmdeploy-1.3.0-linux-x86_64-cuda11.8
# run the python demo
python example/python/object_detection.py cuda ../mmdeploy_model/faster-rcnn ../mmdetection/demo/demo.jpg
# run the C/C++ demo
2 changes: 1 addition & 1 deletion mmdeploy/version.py
@@ -1,7 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
from typing import Tuple

-__version__ = '1.2.0'
+__version__ = '1.3.0'
short_version = __version__


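The `version.py` hunk above only bumps the version string. For readers unfamiliar with how OpenMMLab projects consume such strings, here is an illustrative sketch of turning one into a comparable tuple (the helper name and exact behavior are the editor's assumption, not part of this diff):

```python
def parse_version_info(version_str: str) -> tuple:
    """Turn '1.3.0' into (1, 3, 0) and '1.3.0rc1' into (1, 3, 0, 'rc1')."""
    info = []
    for part in version_str.split('.'):
        if part.isdigit():
            info.append(int(part))
        elif 'rc' in part:  # pre-release segment, e.g. '0rc1'
            patch, rc = part.split('rc')
            info.append(int(patch))
            info.append(f'rc{rc}')
    return tuple(info)

print(parse_version_info('1.3.0'))  # (1, 3, 0)
```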
2 changes: 1 addition & 1 deletion tools/package_tools/packaging/mmdeploy_runtime/version.py
@@ -1,2 +1,2 @@
# Copyright (c) OpenMMLab. All rights reserved.
-__version__ = '1.2.0'
+__version__ = '1.3.0'
