2022.2.0
Major Features and Improvements Summary
In this standard release, we’ve fine-tuned our largest update in 4 years (2022.1) to include support for Intel’s latest CPUs and discrete GPUs for more AI innovation and opportunity.
Note: This release is intended for developers who prefer the very latest features and leading performance. Standard releases will continue to be made available three to four times a year. Long Term Support (LTS) releases are released every year and supported for 2 years (1 year of bug fixes and 2 years of security patches). Read the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy for details. For the latest LTS release, visit our selector tool.
Broader model and hardware support - Optimize & deploy with ease across an expanded range of deep learning models, including NLP models, and access AI acceleration across an expanded range of hardware.
- NEW: Support for 13th Gen Intel® Core™ processors for desktop (code named Raptor Lake).
- NEW: Preview support for Intel’s discrete graphics cards, the Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for DL inferencing workloads in intelligent cloud, edge, and media analytics. Hundreds of models enabled.
- NEW: Test your model performance with preview support for 4th Generation Intel® Xeon® processors (code named Sapphire Rapids).
- Broader support for NLP models and use cases like text-to-speech and voice recognition. Reduced memory consumption when using dynamic input shapes on CPU improves efficiency for NLP applications (see the sketch below).
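To make the dynamic-shapes improvement concrete, here is a minimal sketch using the 2022.x Python API; the model path and the choice of which dimension to mark dynamic are illustrative assumptions, not part of the release.

```python
from openvino.runtime import Core, PartialShape

core = Core()
model = core.read_model("model.xml")  # placeholder path to an NLP model

# Mark the sequence-length dimension dynamic (-1) so inputs of varying
# length can be fed to the same compiled model without re-reshaping.
model.reshape({0: PartialShape([1, -1])})

compiled = core.compile_model(model, "CPU")
```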
Frameworks Integrations – More options that require only minimal code changes to align with your existing frameworks
- OpenVINO Execution Provider for ONNX Runtime gives ONNX Runtime developers more choice for performance optimizations by making it easy to add OpenVINO with minimal code changes (see the sketch below).
- NEW: Accelerate PyTorch models with ONNX Runtime using OpenVINO™ integration with ONNX Runtime for PyTorch (OpenVINO™ Torch-ORT). Now PyTorch developers can stay within their framework and benefit from OpenVINO performance gains.
- OpenVINO Integration with TensorFlow now supports more deep learning models with improved inferencing performance.
NOTE: The above framework integrations are not included in the install packages. Please visit the respective GitHub links for more information. These products are intended for those who have not yet installed native OpenVINO.
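As a hedged illustration of how little code changes with the execution provider mentioned above, the sketch below runs an ONNX model through the OpenVINO Execution Provider; the model file and input shape are placeholders, and the onnxruntime-openvino package is assumed to be installed.

```python
import numpy as np
import onnxruntime as ort

# Select the OpenVINO Execution Provider, falling back to the default CPU
# provider if it is unavailable. "model.onnx" is a placeholder.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

# Input name and shape are model-specific; zeros are used for illustration.
name = session.get_inputs()[0].name
outputs = session.run(None, {name: np.zeros((1, 3, 224, 224), np.float32)})
```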
More portability and performance - See a performance boost straight away with automatic device discovery, load balancing & dynamic inference parallelism across CPU, GPU, and more.
- NEW: Introducing a new performance hint ("cumulative throughput") in the AUTO device, enabling multiple accelerators (e.g. multiple GPUs) to be used at once to maximize inferencing performance (see the sketch below).
- NEW: Introducing Intel® FPGA AI Suite support, which enables real-time, low-latency, and low-power deep learning inference in an easy-to-use package.
NOTE: The Intel® FPGA AI Suite is not included in our distribution packages; please request information here to learn more.
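For reference, a minimal sketch of the cumulative-throughput hint on the AUTO device with the 2022.x Python API; the model path is a placeholder.

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder path

# With the CUMULATIVE_THROUGHPUT hint, AUTO distributes inference requests
# across all suitable devices (e.g. several GPUs) instead of picking one.
compiled = core.compile_model(
    model, "AUTO", {"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"}
)
```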
You can find the OpenVINO™ toolkit 2022.2 release here:
- Download archives* with OpenVINO™ Runtime for C/C++
- OpenVINO™ Runtime for Python:
pip install openvino==2022.2.0
- OpenVINO™ Development tools:
pip install openvino-dev==2022.2.0
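After installing, a quick illustrative check (not part of the release notes) confirms the runtime version and lists the devices OpenVINO can see:

```python
from openvino.runtime import Core, get_version

print(get_version())             # should report a 2022.2 build
print(Core().available_devices)  # e.g. ['CPU', 'GPU']
```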
Release documentation is available here: https://docs.openvino.ai/2022.2/
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino-relnotes.html