Releases: open-mmlab/mmdeploy

MMDeploy Release V0.4.0

01 Apr 10:18
9306bce

Features

  • Support MMPose model inference in SDK: HRNet, LiteHRNet and MSPN
  • Support MMDetection3D: PointPillars and CenterPoint (pillar)
  • Support the Android platform to facilitate the development of Android apps
  • Support fcn_unet deployment with dynamic shape (a sample deployment config follows this list)
  • Support TorchScript
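
A deployment config drives the dynamic-shape export mentioned above. The sketch below shows roughly what such a config looks like for a segmentation model on ONNX Runtime; the field layout follows MMDeploy's config style, but the concrete values are illustrative rather than copied from the shipped configs.

```python
# Illustrative deployment config sketch for a dynamic-shape segmentation
# model (e.g. fcn_unet) on ONNX Runtime. Field names follow MMDeploy's
# config conventions; the concrete values here are placeholders.
onnx_config = dict(
    type='onnx',
    opset_version=11,
    input_names=['input'],
    output_names=['output'],
    input_shape=None,  # keep height/width dynamic
    dynamic_axes={
        'input': {0: 'batch', 2: 'height', 3: 'width'},
        'output': {0: 'batch', 2: 'height', 3: 'width'},
    })
codebase_config = dict(type='mmseg', task='Segmentation')
backend_config = dict(type='onnxruntime')
```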

Improvements

  • Optimize TRTMultiLevelRoiAlign plugin
  • Remove RoiAlign plugin for ONNXRuntime
  • Add DCN TensorRT plugin
  • Update pad logic in detection heads
  • Refactor the rewriter module of Model Converter (a usage sketch of the rewriter API follows this list)
  • Suppress CMAKE_CUDA_ARCHITECTURES warnings
  • Update cmake scripts to ensure that the third-party packages are relocatable
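
For context on the rewriter refactor: the Model Converter swaps selected PyTorch functions for export-friendly implementations at conversion time. The following is a minimal sketch of how a function rewriter is registered, based on the FUNCTION_REWRITER decorator and the ctx-first calling convention of the 0.x series; the target function and the adjustment applied here are illustrative examples rather than code from the repository.

```python
# Minimal sketch of a function rewriter (0.x-style API: FUNCTION_REWRITER
# decorator, rewriter receives a ctx object holding the original function).
# The target function and the tweak below are illustrative only.
from mmdeploy.core import FUNCTION_REWRITER


@FUNCTION_REWRITER.register_rewriter(
    func_name='torch.nn.functional.interpolate', backend='tensorrt')
def interpolate__tensorrt(ctx, input, size=None, scale_factor=None,
                          mode='nearest', align_corners=None):
    """Rewrite interpolate to pass an explicit output size to the backend."""
    if size is None and scale_factor is not None:
        # Convert a scalar scale_factor into a concrete output size so the
        # exported graph avoids backend-unfriendly dynamic scales.
        size = [int(dim * scale_factor) for dim in input.shape[2:]]
        scale_factor = None
    return ctx.origin_func(input, size=size, scale_factor=scale_factor,
                           mode=mode, align_corners=align_corners)
```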

Bug fixes

  • Fix the crash on the headless installation
  • Correct the deployment configs for MMSegmentation
  • Optimize the preprocess module and fix a potential use-after-free issue
  • Resolve compatibility with torch 1.11
  • Fix errors when deploying the YOLOX model
  • Fix errors that occurred during docker build

Documents

  • Reorganize the build documents and add more details about how to build MMDeploy on Linux, Windows and Android platforms
  • Publish two chapters on the fundamentals of model deployment
  • Update the supported model list, including MMSegmentation, MMPose and MMDetection3D
  • Translate the tutorial of "How to support new backends" into Chinese
  • Update the FAQ

Contributors

@irexyc @lvhan028 @RunningLeon @hanrui1sensetime @AllentDan @grimoire @lzhangzz @SemyonBevzuk @VVsssssk @SingleZombie @raykindle @yydc-0 @haofanwang @LJoson @PeterH0323

MMDeploy Release V0.3.0

28 Feb 10:30
34879e6

Features

  • Support the Windows platform. (#106)
  • Support the MMPose codebase. (#94)
  • Support the GFL model from MMDetection. (#124)
  • Support exporting hardsigmoid in torch<=1.8. (#169)

Improvements

  • Support MMOCR v0.4+. (#115)
  • Upgrade isort in the pre-commit config. (#141)
  • Optimize delta2bboxes. (#152)

Bug fixes

  • Fix the ONNX Runtime wrapper for GPU inference. (#123)
  • Fix CI. (#144)
  • Fix tests for OpenVINO with Python 3.6. (#125)
  • Add a TensorRT version check. (#133)
  • Fix a type error when computing scale_factor in the interpolate rewriter. (#185)

Documents

  • Add the Chinese documents How_to_support_new_model.md and How_to_write_config.md. (#147, #137)

Contributors

A total of 19 developers contributed to this release.

@grimoire @RunningLeon @AllentDan @lvhan028 @hhaAndroid @SingleZombie @lzhangzz @hanrui1sensetime @VVsssssk @SemyonBevzuk @ypwhs @TheSeriousProgrammer @matrixgame2018 @tehkillerbee @uniyushu @haofanwang @zhouzaida @q3394101

MMDeploy Release V0.2.0

28 Jan 04:43
230596b

Features

  • Support NVIDIA Jetson deployment (Nano, TX2, Xavier).
  • Add a Python interface for SDK inference (see the sketch after this list). (#27)
  • Support YOLOX on ncnn. (#29)
  • Support the segmentation model UNet. (#77)
  • Add Dockerfiles. (#67)
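
As a usage note for the new Python interface, here is a minimal sketch of running detection through the SDK bindings. The module name (mmdeploy_python), the Detector class and its call convention follow the SDK demo scripts of the 0.x series and may differ slightly by version; the model directory below is a hypothetical output of the model converter.

```python
# Minimal sketch of SDK inference through the Python bindings. Class and
# call conventions follow the 0.x demo scripts; exact signatures may vary.
import cv2
from mmdeploy_python import Detector

# 'work_dir/faster-rcnn' is a hypothetical directory produced by the model
# converter, containing the backend model plus the SDK pipeline metadata.
detector = Detector('work_dir/faster-rcnn', 'cuda', 0)

img = cv2.imread('demo.jpg')
bboxes, labels, _ = detector(img)
for (x1, y1, x2, y2, score), label in zip(bboxes, labels):
    if score > 0.3:
        print(f'class {label}: ({x1:.0f}, {y1:.0f}) - ({x2:.0f}, {y2:.0f})')
```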

Improvements

  • Add coverage report and CI to the GitHub repository. (#16, #34, #35)
  • Refactor the config utilities. (#12, #36)
  • Remove redundant copy operation when converting model. (#61)
  • Simplify single batch NMS. (#99)

Documents

  • English and Chinese documentation is now available on Read the Docs
  • Add a benchmark and tutorial for the NVIDIA Jetson Nano. (#71)
  • Fix docstrings and links in documents. (#18, #32, #60, #84)
  • Add more documents for TensorRT and OpenVINO. (#96, #102)

Bug fixes

  • Avoid outputting an empty tensor in NMS for ONNX Runtime. (#42)
  • Fix SSD deployment on TensorRT 7. (#49)
  • Fix dynamic shape support in mmseg. (#57)
  • Fix bugs related to pplnn. (#40, #74)

Contributors

A total of 14 developers contributed to this release.

@grimoire @RunningLeon @AllentDan @SemyonBevzuk @lvhan028 @hhaAndroid @Stephenfang51 @SingleZombie @lzhangzz @hanrui1sensetime @VVsssssk @zhiqwang @tehkillerbee @Echo-minn

MMDeploy Release V0.1.0

27 Dec 06:30
26d40fe

Major Features

  • Fully support OpenMMLab models

    We provide a unified model deployment toolbox for the codebases in OpenMMLab. The supported codebases are listed below, and more will be added in the future.

    • MMClassification (== 0.19.0)
    • MMDetection (== 2.19.0)
    • MMSegmentation (== 0.19.0)
    • MMEditing (== 0.11.0)
    • MMOCR (== 0.3.0)
  • Multiple inference backends are available

    Models can be exported and run in different backends. The following ones are supported, and more will be considered in the future (a conversion sketch follows this section).

    • ONNX Runtime (>= 1.8.0)
    • TensorRT (>= 7.2)
    • PPLNN (== 0.3.0)
    • ncnn (== 20211208)
    • OpenVINO (2021.4 LTS)
  • Efficient and highly scalable SDK framework in C/C++

    All modules in the SDK are extensible, such as Transform for image processing, Net for neural network inference, and Module for postprocessing.
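
To make the export-and-run workflow above concrete, here is a minimal sketch using the Python converter API. It assumes the mmdeploy.apis helpers torch2onnx and inference_model with the argument order used in the 0.x series; the model config, checkpoint and image paths are hypothetical placeholders.

```python
# Minimal sketch of the export-and-run workflow through mmdeploy.apis.
# Helper names and argument order follow the 0.x Python API; all paths
# below are hypothetical placeholders.
from mmdeploy.apis import inference_model, torch2onnx

deploy_cfg = 'configs/mmdet/detection/detection_onnxruntime_dynamic.py'
model_cfg = 'path/to/faster_rcnn_r50_fpn_1x_coco.py'    # hypothetical
checkpoint = 'path/to/faster_rcnn_r50_fpn_1x_coco.pth'  # hypothetical
img = 'demo.jpg'

# Export the PyTorch model to ONNX as described by the deployment config.
torch2onnx(
    img=img,
    work_dir='work_dir',
    save_file='end2end.onnx',
    deploy_cfg=deploy_cfg,
    model_cfg=model_cfg,
    model_checkpoint=checkpoint,
    device='cpu')

# Run the exported model with the backend declared in the deployment config.
results = inference_model(
    model_cfg=model_cfg,
    deploy_cfg=deploy_cfg,
    backend_files=['work_dir/end2end.onnx'],
    img=img,
    device='cpu')
```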

Contributors

A total of 11 developers contributed to this release.

@grimoire @lvhan028 @AllentDan @VVsssssk @SemyonBevzuk @lzhangzz @RunningLeon @SingleZombie @del-zhenwu @zhouzaida @hanrui1sensetime