
Intel® Optimizations for TensorFlow* 1.15 UP3 Maintenance Release

@chuanqi129 released this on 05 Jul at 03:15

This maintenance release of Intel® Optimizations for TensorFlow* 1.15 UP3 is based on the TensorFlow v1.15.0up3 tag (https://github.com/Intel-tensorflow/tensorflow.git), built with support for the oneAPI Deep Neural Network Library (oneDNN) v2.2.4. This revision contains the following features and fixes:

New functionality and usability improvements:

• Added support for oneDNN v2.2.4 and the associated TensorFlow integration work.
• Added an INT8 kernel for the fused Conv2D + BiasAdd + Relu/LeakyRelu + Add pattern.
• Fused Sigmoid + Mul into a single Swish op.
• Added support for quantized s8 (signed INT8) pooling.
• Added support for disabling the MKL (oneDNN) backend at runtime.

Bug fixes:

• Fixed a unit-test failure by removing the libtensorflow_framework.so dependency on oneDNN.
• Fixed shape inference for QuantizedConv2D-like operations.

Additional security and performance patches:

• Removed the aws-crt-cpp and cJSON dependencies.
• Updated the following components to their latest versions:
  • libjpeg-turbo: 2.1.0
  • org_sqlite: 3350500
  • curl: 7.77.0

Known issues:

• AWS support has been removed temporarily to fix security issues caused by aws-crt-cpp and cJSON. The AWS S3 file system is therefore unavailable in v1.15.0up3; please use v1.15.0up2 if you need this feature.
• INT8 convolution with unsigned INT8 (u8) input in oneDNN v2.2.4 may produce wrong results on servers without the VNNI hardware capability. For both functionality and performance, we strongly suggest using this operation only on servers with VNNI (CLX, ICX, and future Xeon processors); a quick capability check is sketched below.
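
A rough aid for the VNNI caveat above (an illustration, not an official TensorFlow or oneDNN API): a Linux-only check of /proc/cpuinfo for the avx512_vnni flag.

    # Hypothetical helper: Linux-only heuristic for VNNI support.
    def has_vnni(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            return "avx512_vnni" in f.read()

    if not has_vnni():
        print("No VNNI detected: INT8 Conv with u8 input may be incorrect.")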

Best known methods:

• Gelu API:
If the model uses a GELU op, we suggest using the new 'tf.nn.gelu' API instead of composing it from small operations in the Python model code; an example is below, and a minimal sketch follows this list.
https://github.com/IntelAI/models/blob/master/models/language_modeling/tensorflow/bert_large/inference/generic_ops.py#L88-L106
• Freeze graph
Freezing the graph is an important step for improving inference performance, but the steps vary from model to model. A freeze-graph script for the BERT base inference classifier is provided as a reference (a generic sketch also follows this list): https://github.com/IntelAI/models/blob/master/models/language_modeling/tensorflow/bert_large/inference/export_classifier.py
• MKL runtime disable
Set the environment variable TF_DISABLE_MKL=1 ("export TF_DISABLE_MKL=1") to switch from the oneDNN backend to the Eigen backend at runtime; a Python usage sketch follows this list. For the complete experience of this feature, rebuild v1.15.0up3 with the extra bazel options shown below:
bazel build --cxxopt=-D_GLIBCXX_USE_CXX11_ABI=0 --config=opt --copt=-O3 --copt=-Wformat --copt=-Wformat-security --copt=-fstack-protector --copt=-fPIC --copt=-fpic --linkopt=-znoexecstack --linkopt=-zrelro --linkopt=-znow --linkopt=-fstack-protector --config=mkl --copt=-march=native --define=tensorflow_mkldnn_contraction_kernel=1 //tensorflow/tools/pip_package:build_pip_package
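
Gelu API sketch: a minimal illustration of the recommendation above. 'tf.nn.gelu' is the API named in this release; the input shape and the hand-written small-op variant shown for contrast are illustrative assumptions.

    import numpy as np
    import tensorflow as tf  # Intel Optimizations for TensorFlow 1.15up3

    def gelu_small_ops(x):
        # tanh-approximated GELU composed of small ops, as often hand-written
        # in BERT model code; each op launches a separate kernel.
        return 0.5 * x * (1.0 + tf.tanh(
            np.sqrt(2.0 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))

    x = tf.placeholder(tf.float32, shape=[None, 768])  # hypothetical input
    y = tf.nn.gelu(x)  # preferred: one fused op instead of the chain above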
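
Freeze-graph sketch: a generic TF 1.x flow under assumed names ("model.ckpt" and the output node "logits" are placeholders for your own checkpoint and outputs); the linked BERT script remains the reference.

    import tensorflow as tf

    with tf.compat.v1.Session() as sess:
        # "model.ckpt" and "logits" are hypothetical placeholders.
        saver = tf.compat.v1.train.import_meta_graph("model.ckpt.meta")
        saver.restore(sess, "model.ckpt")
        # Bake variables into constants so inference needs no checkpoint.
        frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names=["logits"])

    with tf.io.gfile.GFile("frozen_graph.pb", "wb") as f:
        f.write(frozen.SerializeToString())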
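
MKL runtime disable sketch: setting TF_DISABLE_MKL from Python rather than the shell; we set it before importing TensorFlow on the assumption that the backend choice is read during library initialization.

    import os
    os.environ["TF_DISABLE_MKL"] = "1"  # 1 = Eigen backend; unset = oneDNN

    import tensorflow as tf  # import only after the variable is set
    print(tf.__version__)    # the backend switch now applies to this process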