Intel® Optimizations for TensorFlow 2.6.0

@rsketine released this 26 Aug 15:56 · 338d5f6

This release of Intel® Optimized TensorFlow is based on the TensorFlow v2.6.0 tag and is built with support for oneDNN (oneAPI Deep Neural Network Library). For the features and fixes introduced in TensorFlow 2.6.0, please also see the TensorFlow 2.6.0 release notes. This build was built from v2.6.0.

Major features:

  • Native format is enabled for all data types.
  • A single binary supports oneDNN optimizations for all data types, enabled at runtime via the environment variable TF_ENABLE_ONEDNN_OPTS=1 (see the sketch after this list).
  • Enabled Windows OpenMP support for oneDNN to improve performance on CPUs.
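
A minimal sketch of the runtime switch mentioned above, assuming the stock TensorFlow 2.6.0 pip package (in the Intel build, oneDNN optimizations are already on by default); the variable must be set before TensorFlow is imported:

```python
import os

# Set before importing TensorFlow so the oneDNN code paths are selected
# when the single binary loads its kernels.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

print(tf.__version__)  # same binary, oneDNN-optimized kernels now active
```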

Improvements:

  • Native format support is extended to the following:
      ◦ Int8 data type is enabled with native format.
      ◦ Added support for Conv2DBackpropFilterWithBias fusion in native format.
      ◦ Enabled QuantizedConcatV2 with native format.
      ◦ Enabled the Dequantize op.
      ◦ Enabled quantized pooling ops.
      ◦ Enabled quantized Conv ops.
  • Upgraded oneDNN to v2.3_rc2.
  • FusedMatMul and Sigmoid fusion is enabled for CPU.
  • Updated the oneDNN auto_mixed_precision_lists to allow more ops in bfloat16. This significantly reduces the number of Cast ops in models running bf16 inference with auto_mixed_precision and improves broad model performance (see the sketch after this list).
  • Enhanced the pattern matcher for Grappler graph optimization.
  • Fixed build issues on Mac for CPU optimizations.
  • Removed the static INTEL_MKL macro and replaced it with the runtime check IsMKLEnabled.
  • Added a check for dtype, as MklMatMul supports only bfloat16 and float32, whereas the default type is float64.
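
A minimal sketch of the bfloat16 auto-mixed-precision path referenced above, assuming the TF 2.6-era Grappler option name auto_mixed_precision_mkl (the oneDNN/CPU counterpart of auto_mixed_precision); the toy Keras model is purely illustrative:

```python
import tensorflow as tf

# Assumption: 'auto_mixed_precision_mkl' is the Grappler knob that rewrites
# eligible float32 ops to bfloat16 on oneDNN-enabled CPU builds.
tf.config.optimizer.set_experimental_options({"auto_mixed_precision_mkl": True})

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])

# Grappler (and hence the bf16 rewrite) runs on tf.function graphs, not eager ops.
@tf.function
def infer(x):
    return model(x)

print(infer(tf.random.normal([8, 32])).shape)
```

With the expanded auto_mixed_precision_lists, more of these ops can stay in bfloat16, so fewer Cast nodes are inserted at the boundaries between bf16 and float32 segments.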

Bug fixes:

Versions and components:

Known issues:

  • Open issues: see the open issues for oneDNN optimizations.
  • Bfloat16 is not guaranteed to work on AVX or AVX2.