
Releases: openvinotoolkit/openvino

2023.0.0.dev20230119

25 Jan 14:40
53672e7
Pre-release

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version is subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2023.0.0.dev20230119 pre-release version here:

Release documentation is available here: https://docs.openvino.ai/nightly/

2022.3.0

21 Dec 21:17
9752faf

Major Features and Improvements Summary

This is a Long-Term Support (LTS) release. A new LTS version is published every year and supported for 2 years (1 year of bug fixes and 2 years of security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy v.2 for details.

  • The 2022.3 LTS release provides functional bug fixes and capability changes on top of the previous 2022.2 release. This new release empowers developers with new performance enhancements, more deep learning models, more device portability, and higher inferencing performance with fewer code changes.
  • Broader model and hardware support – Optimize & deploy with ease across an expanded range of deep learning models including NLP, and access AI acceleration across an expanded range of hardware.
    • Full support for 4th Generation Intel® Xeon® Scalable processor family (code name Sapphire Rapids) for deep learning inferencing workloads from edge to cloud.
    • Full support for Intel’s discrete graphics cards, such as Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for DL inferencing workloads in intelligent cloud, edge, and media analytics scenarios.
    • Improved performance when leveraging the throughput hint on the CPU plugin for 12th and 13th Generation Intel® Core™ processors (code named Alder Lake and Raptor Lake).
    • Enhanced “cumulative throughput” mode and selection of compute modes added to AUTO functionality, enabling multiple accelerators (e.g. multiple GPUs) to be used at once to maximize inferencing performance; see the sketch after this list.
  • Expanded model coverage - Optimize & deploy with ease across an expanded range of deep learning models.
    • Broader support for NLP models and use cases like text to speech and voice recognition.
    • Continued performance enhancements for computer vision models, including StyleGAN2, Stable Diffusion, PyTorch RAFT, and YOLOv7.
    • Significant quality and model performance improvements on Intel GPUs compared to the previous OpenVINO toolkit release.
    • New Jupyter notebook tutorials for Stable Diffusion text-to-image generation, YOLOv7 optimization and 3D Point Cloud Segmentation.
  • Improved API and More Integrations – Easier to adopt and maintain code. Requires fewer code changes, aligns better with frameworks, and minimizes conversion.
    • Preview of TensorFlow Front End – Load TensorFlow models directly into OpenVINO Runtime and easily export to OpenVINO IR format without offline conversion. The new “--use_new_frontend” flag enables this preview; see the Model Optimizer section of the release notes for further details.
    • NEW: Hugging Face Optimum Intel – Gain the performance benefits of OpenVINO (including NNCF) when using Hugging Face Transformers. Initial release supports PyTorch models.
    • Intel® oneAPI Deep Neural Network Library (oneDNN) has been updated to 2.7 for further refinements and significant improvements in performance for the latest Intel CPU and GPU processors.
    • Introducing C API 2.0, which supports new features introduced in OpenVINO API 2.0, such as dynamic shapes on CPU, the pre- and post-processing API, and unified property definition and usage. The new C API 2.0 shares the same library files as the 1.0 API but uses a different header file.
  • Note: Intel® Movidius™ VPU-based products are not supported in this release but will be added back in a future OpenVINO 2022.3.1 LTS update. In the meantime, for support on those products, please use OpenVINO 2022.1.
  • Note: Macintosh* computers using the M1* processor can now install OpenVINO and use the OpenVINO ARM* Device Plug-in on OpenVINO 2022.3 LTS and later. This plugin is community-supported; no support is provided by Intel, and it does not fall under the 2-year LTS support policy. Learn more here: https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html
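
As an illustration of the AUTO “cumulative throughput” mode called out above, here is a minimal C++ sketch. It is an assumption-laden example rather than official sample code: the model path is a placeholder and the device list assumes two GPUs exposed as GPU.0 and GPU.1; the hint itself is ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT from the OpenVINO 2022.3 C++ API.

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // "model.xml" is a placeholder for an OpenVINO IR model path.
        auto model = core.read_model("model.xml");

        // Compile on the AUTO device with the cumulative-throughput hint so that
        // all listed accelerators (here two GPUs) are used at once.
        auto compiled = core.compile_model(
            model,
            "AUTO:GPU.0,GPU.1",
            ov::hint::performance_mode(ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT));

        auto request = compiled.create_infer_request();
        // ... fill the input tensors and call request.infer() as usual ...
        return 0;
    }

Passing plain "AUTO" instead of an explicit device list lets the plugin choose among whatever accelerators it discovers.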

You can find OpenVINO™ toolkit 2022.3 release here:

Release documentation is available here: https://docs.openvino.ai/2022.3/

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-2022-3-lts-relnotes.html

2022.3.0.dev20221125

29 Nov 14:46
4f0b846
Pre-release

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version is subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2022.3.0.dev20221125 pre-release version here:

Release documentation is available here: https://docs.openvino.ai/nightly/

2022.3.0.dev20221103

15 Nov 15:06
a5a9698
Pre-release

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version is subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2022.3.0.dev20221103 pre-release version here:

Release documentation is available here: https://docs.openvino.ai/nightly/

* - sha256 sums for archives

82b275a2a72daf41b6cbdfdcf9c853bf9fe6507e623b39713e38394dfca4a8df  l_openvino_toolkit_debian9_arm_2022.3.0.dev20221103_armhf.tgz
398078a0fd7c30515e1fbc7120838448c6d300fcf7b3f6cc976ea08954db8fdf  l_openvino_toolkit_rhel8_2022.3.0.dev20221103_x86_64.tgz
c5a1026cc6d211b48d64c15ad24bbac83d14d74c840e0fbcedb168ec06b1d6ee  l_openvino_toolkit_ubuntu18_2022.3.0.dev20221103_x86_64.tgz
2ac96d451222fd07789df9f8bbdae7d6c0a10b83607c3d780800c04ee7cb4c91  l_openvino_toolkit_ubuntu20_2022.3.0.dev20221103_x86_64.tgz
16bbb025f5d145b3ebd0c84859a04a1f0f67d2bab21347998cd1d23cf3f2fd8e  m_openvino_toolkit_osx_2022.3.0.dev20221103_x86_64.tgz
96bb611a69a89d74848418cdca1afb719b795143e20e9a039f949c8ea147be9b  w_openvino_toolkit_windows_2022.3.0.dev20221103_x86_64.zip

2022.2.0

22 Sep 18:59
af16ea1

Major Features and Improvements Summary

In this standard release, we’ve fine-tuned our largest update (2022.1) in 4 years to include support for Intel’s latest CPUs and discrete GPUs for more AI innovation and opportunity.

Note: This release is intended for developers who prefer the very latest features and leading performance. Standard releases will continue to be made available three to four times a year. Long Term Support (LTS) releases are released every year and supported for 2 years (1 year of bug fixes and 2 years of security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy for details. For the latest LTS release, visit our selector tool.

  • Broader model and hardware support - Optimize & deploy with ease across an expanded range of deep learning models including NLP, and access AI acceleration across an expanded range of hardware.

    • NEW: Support for Intel 13th Gen Core Processor for desktop (code named Raptor Lake).
    • NEW: Preview support for Intel’s discrete graphics cards, Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for DL inferencing workloads in intelligent cloud, edge, and media analytics scenarios. Hundreds of models enabled; see the device-enumeration sketch after this list.
    • NEW: Test your model performance with preview support for Intel 4th Generation Xeon® processors (code named Sapphire Rapids).
    • Broader support for NLP models and use cases like text to speech and voice recognition. Reduced memory consumption when using Dynamic Input Shapes on CPU. Improved efficiency for NLP applications.
  • Frameworks Integrations – More options that provide minimal code changes to align with your existing frameworks.

    • OpenVINO Execution Provider for ONNX Runtime gives ONNX Runtime developers more choice for performance optimizations by making it easy to add OpenVINO with minimal code changes.
    • NEW: Accelerate PyTorch models with ONNX Runtime using OpenVINO™ integration with ONNX Runtime for PyTorch (OpenVINO™ Torch-ORT). Now PyTorch developers can stay within their framework and benefit from OpenVINO performance gains.
    • OpenVINO Integration with TensorFlow now supports more deep learning models with improved inferencing performance.
    • NOTE: The above frameworks integrations are not included in the install packages. Please visit the respective GitHub links for more information. These products are intended for those who have not yet installed native OpenVINO.

  • More portability and performance - See a performance boost straight away with automatic device discovery, load balancing & dynamic inference parallelism across CPU, GPU, and more.

    • NEW: Introducing a new performance hint (“cumulative throughput”) in the AUTO device, enabling multiple accelerators (e.g. multiple GPUs) to be used at once to maximize inferencing performance.
    • NEW: Introducing Intel® FPGA AI Suite support, which enables real-time, low-latency, and low-power deep learning inference in an easy-to-use package.
      • NOTE: The Intel® FPGA AI Suite is not included in our distribution packages; please request information here to learn more.
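
To check which of the devices mentioned above are actually visible on a given machine (for example, whether a Flex Series or Arc card shows up next to the integrated GPU), a short device-enumeration sketch like the one below can help. It is a minimal example that only relies on ov::Core::get_available_devices() and the ov::device::full_name property.

    #include <iostream>
    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        // Print every device OpenVINO can see along with its full name;
        // discrete GPUs are listed as GPU.0, GPU.1, ... alongside the integrated GPU.
        for (const auto& device : core.get_available_devices()) {
            std::cout << device << ": "
                      << core.get_property(device, ov::device::full_name) << std::endl;
        }
        return 0;
    }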

You can find OpenVINO™ toolkit 2022.2 release here:

Release documentation is available here: https://docs.openvino.ai/2022.2/

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-relnotes.html

2022.2.0.dev20220829

31 Aug 17:50
41a404f
Pre-release

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version is subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2022.2.0.dev20220829 pre-release version here:

Release documentation is available here: https://docs.openvino.ai/nightly/

* - sha256 sums for archives

24c30f2a108fae008956c6eed9f288955b161ba882202a119860fd42fa6f8e8d  l_openvino_toolkit_rhel8_2022.2.0.dev20220829.tgz
e4886c6a61d38449fb976e8ba6eb30f1d3b7d7b06e1952a6c74f14f340062b1c  l_openvino_toolkit_ubuntu18_2022.2.0.dev20220829.tgz
42d11fff6eaa0b339cb21fe7acf568b7180cde3a33bdafaf56dbd5c4b51173bf  l_openvino_toolkit_ubuntu20_2022.2.0.dev20220829.tgz
a4cca48b44a98bbeebc9dea3049d322b3e965c3d799c872ce4342a0ba11a7d1c  m_openvino_toolkit_osx_2022.2.0.dev20220829.tgz
ae5a67d35d6b970058dad739c234458e35415073dec94d7b70c13a56dc1e87a6  w_openvino_toolkit_windows_2022.2.0.dev20220829.zip

2022.1.1

11 Aug 10:56
39aba80

Minor updates and bug fixes for specific use cases and scenarios.

This release provides functional bug fixes and capability updates on top of the previous 2022.1 release that enable developers to address specific use cases and scenarios.

Note: This is a standard release intended for developers that prefer the very latest version of OpenVINO. Standard releases will continue to be made available three to four times a year. Long Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for 2 years (1 year of bug fixes, and 2 years for security patches). Visit Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy to get details on the latest LTS releases.

Component updates:

  • OpenVINO runtime:
    • Added a way to unload TBB libraries when the OpenVINO library is unloaded – use the ov::force_tbb_terminate option on ov::Core.
    • Added a way to unload OpenVINO frontend libraries for cases when IR / ONNX / PDPD files are read at runtime. Call ov::shutdown once you have finished working with the OpenVINO library to free all these resources; see the sketch after this list.
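
A minimal C++ sketch of the two resource-management hooks described above; the model path is a placeholder and error handling is omitted.

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;

        // Ask the runtime to terminate TBB worker threads when the OpenVINO
        // library is unloaded.
        core.set_property(ov::force_tbb_terminate(true));

        // Reading an ONNX model loads the corresponding frontend library lazily.
        auto model = core.read_model("model.onnx");  // placeholder path
        auto compiled = core.compile_model(model, "CPU");
        // ... run inference ...

        // Release frontend libraries and other global resources once the
        // application has finished working with OpenVINO.
        ov::shutdown();
        return 0;
    }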

You can find the OpenVINO™ 2022.1.1 release here:

  • Download archives* with OpenVINO™ Runtime for C/C++
  • GitHub Repository

Limitations of this release:

  • Windows OS, Linux, and macOS
  • The Intel® Movidius™ Myriad™ X plugin is not included
  • The Intel® version of OpenCV is not included; please see this guide on how to use the community version
  • For product specifications, please see the OpenVINO toolkit 2022.1 release notes

2022.1

22 Mar 21:57
cdb9bec

Major Features and Improvements Summary

This release is the biggest upgrade in 3.5 years! Read the release notes below for a summary of changes.

The 2022.1 release provides functional bug fixes and capability changes on top of the previous 2021.4.2 LTS release. This new release empowers developers with new performance enhancements, more deep learning models, more device portability, and higher inferencing performance with fewer code changes.

Note: This is a standard release intended for developers who prefer the very latest features and leading performance. Standard releases will continue to be made available three to four times a year. Long Term Support (LTS) releases are also available. A new LTS version is released every year and is supported for 2 years (1 year of bug fixes and 2 years of security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy for details. Latest LTS releases: 2020.x LTS and 2021.x LTS.

  • Updated, cleaner API:

    • The new OpenVINO API 2.0 was introduced. The API aligns OpenVINO inputs/outputs with frameworks: input and output tensors use native framework layouts and element types. The old Inference Engine and nGraph APIs are still available but will be deprecated in a future release.

    • The inference_engine, inference_engine_transformations, inference_engine_lp_transformations, and ngraph libraries were merged into the common openvino library, and other libraries were renamed. Please use the common ov:: namespace inside all OpenVINO components. See how to implement an inference pipeline using OpenVINO API 2.0 for details.

    • Model Optimizer’s API parameters have been reduced to minimize complexity. Performance has been significantly improved for model conversion on ONNX models.

    • It’s highly recommended to migrate to API 2.0 because it already offers additional features, and this list will be extended over time. The following additional features are supported by API 2.0:

      • Working with dynamic shapes. The feature is especially useful for getting the best performance from Natural Language Processing (NLP) models, super-resolution models, and other models that accept dynamic input shapes. Note: Models compiled with dynamic shapes may show reduced performance and consume more memory than models configured with a static shape on the same input tensor size. Setting upper bounds to reshape the model for dynamic shapes, or splitting the input into several parts, is recommended.

      • Preprocessing of the model, to add preprocessing operations to inference models so they fully occupy the accelerator and free up CPU resources.

    • Read the Transition Guide for migrating to the new API 2.0.

  • Portability and Performance:

    • The new AUTO plugin self-discovers available system inferencing capacity based on model requirements, so applications no longer need to know their compute environment in advance.

    • OpenVINO™ performance hints are the new way to configure performance with portability in mind. The hints reverse the direction of configuration: rather than mapping application needs onto low-level performance settings and keeping associated application logic to configure each possible device separately, you express a target scenario with a single config key and let the device configure itself in response. As the hints are supported by every OpenVINO™ device, this is a completely portable and future-proof solution; see the sketch after this list.

    • Automatic batching functionality via code hints automatically scales the batch size based on the XPU and available memory.

  • Broader Model Support:

    • With Dynamic Input Shapes capabilities on CPU, OpenVINO is able to adapt to multiple input dimensions in a single model, providing more complete NLP support. Dynamic shapes support on additional XPUs is expected in a future dot release.
  • New models with a focus on NLP, a new category (anomaly detection), and support for conversion and inference of select PaddlePaddle models:

    • Pre-trained Models: anomaly segmentation with a focus on industrial inspection, speech denoising made trainable, plus updates to speech recognition and speech synthesis

    • Combined Demo: noise reduction + speech recognition + question answering + translation + text to speech

    • Public Models: focus on NLP with ContextNet, Speech-Transformer, HiFi-GAN, Glow-TTS, FastSpeech2, and Wav2Vec

  • Built with 12th Gen Intel® Core™ ‘Alder Lake’ in mind. Supports the hybrid architecture to deliver enhancements for high-performance inferencing on CPU and integrated GPU.
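
The sketch below ties together two of the API 2.0 capabilities listed above: a dynamic input shape with an upper bound (as recommended for NLP models) and a portable performance hint. It is a hypothetical example assuming a single-input IR model at the placeholder path model.xml and a maximum sequence length of 512.

    #include <openvino/openvino.hpp>

    int main() {
        ov::Core core;
        auto model = core.read_model("model.xml");  // placeholder path

        // Make the sequence dimension dynamic with an upper bound of 512;
        // the bound helps the runtime plan memory for dynamic shapes.
        model->reshape(ov::PartialShape{1, ov::Dimension(1, 512)});

        // Express the target scenario with a single hint and let the device
        // configure itself, instead of setting low-level options per device.
        auto compiled = core.compile_model(
            model, "CPU",
            ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));

        auto request = compiled.create_infer_request();
        // ... set an input tensor of any length up to 512 and call infer() ...
        return 0;
    }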

You can find OpenVINO™ toolkit 2022.1 release here:

Release documentation is available here: https://docs.openvino.ai/2022.1/

Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-relnotes.html

2022.1.0.dev20220316

16 Mar 16:11
0b08b9a
Pre-release

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version is subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2022.1.0.dev20220316 pre-release version here:

Release documentation is available here: https://docs.openvino.ai/nightly/

* - sha256 sums for archives

b54c5bcfaa078a54bc9b73f4605706167b57bdde183819e7752bf24f86463759  w_openvino_toolkit_windows_dev_2022.1.0_dev20220316.zip
0db5073c0c0e2d8df1fcd7a13ba142b28f020377b00407f5a6b00b4ad4c919e4  m_openvino_toolkit_osx_dev_2022.1.0_dev20220316.tgz
ea73acdfdcdeb88965fc9163a6da5a2ef287744073850ad9dbc2016116435913  l_openvino_toolkit_ubuntu20_dev_2022.1.0_dev20220316.tgz
e00a2d0359a784caacbc321cea5ab23f15ad6cf583dd359390cb1f4eb4e46515  l_openvino_toolkit_ubuntu18_dev_2022.1.0_dev20220316.tgz
d3ff2c050fa33df093547dad7124e39428772b05ac4d76c0686a7511580e056e  l_openvino_toolkit_rhel8_dev_2022.1.0_dev20220316.tgz

cb5be734f7cf2ea9a48ead91313f19edc513be982a5a90bf967d334ac5700d2b  openvino_opencv_windows.tgz
3c2d9defba4db131c4d023bf2df171a5de5a335147d53db6c4c1805ea5da8466  openvino_opencv_ubuntu20.tgz
b7cb06a207cbf513173125a223d6fd1ade2da52dc6a86fafc9d8c76467447a34  openvino_opencv_ubuntu18.tgz
8cfc8d9e39b2e9b91d4b6faa1bbe561d21a9524b748faf1bd0c9285093fb8363  openvino_opencv_osx.tgz

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

2022.1.0.dev20220302

04 Mar 16:02
7cd3c8e
Pre-release

OpenVINO™ toolkit pre-release definition:

  • It is introduced to get early feedback from the community.
  • The scope and functionality of the pre-release version is subject to change in the future.
  • Using the pre-release in production is strongly discouraged.

You can find OpenVINO™ toolkit 2022.1.0.dev20220302 pre-release version here:

Release documentation is available here: https://docs.openvino.ai/nightly/

* - sha256 sums for archives

78f42300b84b66db551bf650a122f7f793d4c1bdf57fa6ac7e7a2ef1eb19a897  openvino_windows_dev_2022.1.0-6935-7cd3c8e86e9.zip
4c070ea22816d852a9335249a27562b2d85bf54fd8358a674d6f895c022136bc  openvino_ubuntu20_dev_2022.1.0-6935-7cd3c8e86e9.tgz
43ecc39cc3bff027e758ca8951e26efe976d90669e01d22f65f059537793b1ca  openvino_ubuntu18_dev_2022.1.0-6935-7cd3c8e86e9.tgz
7f750e07c7a5e5e6ab7d1fd1aa113b059a39793e094da8aa5a984c29f7060dc7  openvino_rhel8_dev_2022.1.0-6935-7cd3c8e86e9.tgz
f0d16b7c7d7e3c41715903a3515138ad5ed9b15d998bcccc23ce7124d668b6e4  openvino_osx_dev_2022.1.0-6935-7cd3c8e86e9.tgz

ba26dfa5d81eb31c42ec8b8a55ba5e7c6dc225ef05aef8c3dd82a814138fa4bd  openvino_opencv_windows.tgz
29c5ff6e9a3a840642aee76308cff37df48fcafa197fe5acd8bd6cfa11197e7f  openvino_opencv_ubuntu20.tgz
4b0d4aecf9cf0957380bd84a907e033294835d176933b164c47bc95f8862b0b3  openvino_opencv_ubuntu18.tgz
4d9337347f30eb5e865ec5059485c766bfc8bdfcc5329bedd0959fb106efdbe1  openvino_opencv_osx.tgz

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.