
2023.1.0

@artanokhov released this 18 Sep 09:20
47b736f

Summary of major features and improvements

  • More Generative AI options with Hugging Face and improved PyTorch model support.
    • NEW: Your PyTorch solutions are now further enhanced with OpenVINO. You have more options and no longer need to convert to ONNX for deployment. Developers can now use the API of their choice, PyTorch or OpenVINO, for added performance benefits. Additionally, users can automatically import and convert PyTorch models for quicker deployment (a conversion sketch follows this list). You can continue to make the most of OpenVINO tools for advanced model compression and deployment advantages, ensuring flexibility and a range of options.
    • torch.compile (preview) – OpenVINO is now available as a backend through PyTorch torch.compile, empowering developers to utilize the OpenVINO toolkit through PyTorch APIs (a torch.compile sketch follows this list). This feature has also been integrated into the Automatic1111 Stable Diffusion Web UI, helping developers achieve accelerated performance for Stable Diffusion 1.5 and 2.1 on Intel CPUs and GPUs on both native Linux and Windows platforms.
    • Optimum Intel – Hugging Face and Intel continue to enhance top generative AI models by optimizing execution, making your models run faster and more efficiently on both CPU and GPU. OpenVINO serves as the runtime for inference execution. New PyTorch auto import and conversion capabilities have been enabled, along with support for weights compression to achieve further performance gains (an Optimum Intel sketch follows this list).
  • Broader LLM support and more model compression techniques
    • Enhanced performance and accessibility for Generative AI: Runtime performance and memory usage have been significantly optimized, especially for large language models (LLMs). Models used for chatbots, instruction following, code generation, and more have been enabled, including prominent models such as BLOOM, Dolly, Llama 2, GPT-J, GPTNeoX, ChatGLM, and Open-Llama.
    • Improved LLMs on GPU – Model coverage for dynamic shapes support has been expanded, further improving the performance of generative AI workloads on both integrated and discrete GPUs. In addition, memory reuse has been improved and weight memory consumption reduced for dynamic shapes.
    • Neural Network Compression Framework (NNCF) now includes an 8-bit weight compression method, making it easier to compress and optimize LLMs (a weight-compression sketch follows this list). The SmoothQuant method has been added for more accurate and efficient post-training quantization of Transformer-based models.
  • More portability and performance to run AI at the edge, in the cloud or locally.
    • NEW: Support for Intel® Core™ Ultra (codename Meteor Lake). This new generation of Intel CPUs is tailored to excel in AI workloads with a built-in inference accelerator.
    • Integration with MediaPipe – Developers now have direct access to this framework for building multipurpose AI pipelines. Easily integrate with OpenVINO Runtime and OpenVINO Model Server for faster AI model execution. You also benefit from seamless model management and version control, as well as custom logic integration with additional calculators and graphs for tailored AI solutions. Lastly, you can scale faster by delegating deployment to remote hosts via gRPC/REST interfaces for distributed processing.
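
A minimal sketch of the direct PyTorch conversion path, assuming torchvision is installed and the resnet50 weights can be downloaded; the model choice and input shape are illustrative only, not part of the release itself.

    import numpy as np
    import torch
    import torchvision
    import openvino as ov

    # Any eager PyTorch module works; resnet50 is just an example.
    model = torchvision.models.resnet50(weights="DEFAULT").eval()

    # Convert the PyTorch module directly; no intermediate ONNX export is needed.
    ov_model = ov.convert_model(model, example_input=torch.rand(1, 3, 224, 224))

    # Compile for CPU and run one inference with the OpenVINO runtime.
    compiled = ov.Core().compile_model(ov_model, "CPU")
    result = compiled(np.random.rand(1, 3, 224, 224).astype(np.float32))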
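
A minimal sketch of the torch.compile preview path, assuming the openvino package is installed alongside PyTorch 2.x; the model is a placeholder.

    import torch
    import torchvision
    import openvino.torch  # registers the "openvino" torch.compile backend

    model = torchvision.models.resnet50(weights="DEFAULT").eval()

    # Stay entirely in PyTorch APIs; OpenVINO acts as the compilation backend.
    compiled_model = torch.compile(model, backend="openvino")

    with torch.no_grad():
        output = compiled_model(torch.rand(1, 3, 224, 224))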
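
A minimal sketch of the Optimum Intel path, assuming the optimum-intel and transformers packages are installed; the gpt2 checkpoint and prompt are placeholders for any supported causal-LM model.

    from optimum.intel import OVModelForCausalLM
    from transformers import AutoTokenizer

    model_id = "gpt2"  # placeholder for any supported Hugging Face checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # export=True pulls the PyTorch weights and converts them to OpenVINO IR on the fly.
    model = OVModelForCausalLM.from_pretrained(model_id, export=True)

    inputs = tokenizer("OpenVINO accelerates", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))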
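
A minimal sketch of NNCF 8-bit weight compression on an OpenVINO model, assuming an IR of an LLM is already on disk; the file names are placeholders.

    import openvino as ov
    import nncf

    core = ov.Core()
    ov_model = core.read_model("llm_model.xml")  # placeholder IR path

    # Compress weights to 8 bits; activations are left in floating point.
    compressed_model = nncf.compress_weights(ov_model)

    ov.save_model(compressed_model, "llm_model_int8.xml")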

Support Change and Deprecation Notices

  • OpenVINO™ Development Tools package (pip install openvino-dev) is being deprecated and will be removed from installation options and distribution channels with 2025.0 (a migration sketch follows this list). For more information, see the documentation for Legacy Features.
  • Tools:
    • Accuracy Checker is deprecated and will be discontinued with 2024.0.
    • Post-Training Optimization Tool (POT) has been deprecated and will be discontinued with 2024.0.
  • Runtime:
    • Intel® Gaussian & Neural Accelerator (Intel® GNA) is being deprecated; the GNA plugin will be discontinued with 2024.0.
    • OpenVINO C++/C/Python 1.0 APIs will be discontinued with 2024.0.
    • Python 3.7 will be discontinued with the 2023.2 LTS release.
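
As a reference for the openvino-dev deprecation above, a minimal sketch of the replacement workflow, assuming a local ONNX model; the file names are placeholders. Install the single runtime package with pip install openvino, then convert models with the openvino.convert_model API (or the ovc command-line tool) instead of the legacy Model Optimizer (mo).

    import openvino as ov

    # convert_model replaces the legacy mo workflow shipped in openvino-dev.
    ov_model = ov.convert_model("model.onnx")  # placeholder model path
    ov.save_model(ov_model, "model.xml")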

You can find the OpenVINO™ toolkit 2023.1 release here:

Release documentation is available here: https://docs.openvino.ai/2023.1
Release Notes are available here: https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-1.html