diff --git a/README.md b/README.md
index 8619306a96dc09..0c90404e5ad100 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,5 @@
 # [OpenVINO™ Toolkit](https://01.org/openvinotoolkit) - Deep Learning Deployment Toolkit repository
-[![Stable release](https://img.shields.io/badge/version-2020.1-green.svg)](https://github.com/opencv/dldt/releases/tag/2020.1)
+[![Stable release](https://img.shields.io/badge/version-2020.3-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2020.3.0)
 [![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
 
 This toolkit allows developers to deploy pre-trained deep learning models
@@ -36,7 +36,7 @@ with us doing the following steps:
 * Make sure you can build the product and run all tests and samples with your patch
 * In case of a larger feature, provide relevant unit tests and one or more sample
-* Submit a pull request at https://github.com/opencv/dldt/pulls
+* Submit a pull request at https://github.com/openvinotoolkit/openvino/pulls
 
 We will review your contribution and, if any additional fixes or modifications are
 necessary, may give some feedback to guide you. Your pull request will be
 merged into GitHub* repositories if accepted.
@@ -46,7 +46,7 @@ merged into GitHub* repositories if accepted.
 Please report questions, issues and suggestions using:
 
 * The `openvino` [tag on StackOverflow]\*
-* [GitHub* Issues](https://github.com/opencv/dldt/issues)
+* [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
 * [Forum](https://software.intel.com/en-us/forums/computer-vision)
 
 ---
diff --git a/build-instruction.md b/build-instruction.md
index 3d5cfe136f2f21..12103ce9875004 100644
--- a/build-instruction.md
+++ b/build-instruction.md
@@ -28,7 +28,6 @@
 - [Add Inference Engine to Your Project](#add-inference-engine-to-your-project)
 - [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
   - [For Linux, Raspbian Stretch* OS](#for-linux-raspbian-stretch-os)
-  - [For Windows](#for-windows-1)
 - [Next Steps](#next-steps)
 - [Additional Resources](#additional-resources)
@@ -60,12 +59,12 @@ The software was validated on:
 - [CMake]\* 3.11 or higher
 - GCC\* 4.8 or higher to build the Inference Engine
 - Python 2.7 or higher for Inference Engine Python API wrapper
-- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441].
+- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 20.13.16352].
 
 ### Build Steps
 
 1. Clone submodules:
     ```sh
-    cd dldt
+    cd openvino
     git submodule update --init --recursive
     ```
 2. Install build dependencies using the `install_dependencies.sh` script in the
@@ -78,7 +77,7 @@ The software was validated on:
    ```
 3. By default, the build enables the Inference Engine GPU plugin to infer models
    on your Intel® Processor Graphics. This requires you to
-   [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]
+   [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 20.13.16352]
    before running the build. If you don't want to use the GPU plugin, use the
    `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
   Intel® Graphics Compute Runtime for OpenCL™ Driver.
@@ -172,10 +171,10 @@ Native compilation of the Inference Engine is the most straightforward solution.
    sudo apt-get install -y git cmake libusb-1.0-0-dev
    ```
 
-2. Go to the cloned `dldt` repository:
+2. Go to the cloned `openvino` repository:
 
    ```bash
-   cd dldt
+   cd openvino
    ```
 
 3. Initialize submodules:
@@ -262,15 +261,15 @@ with the following content:
 5. Run Docker\* container with mounted source code folder from host:
 
    ```bash
-   docker run -it -v /absolute/path/to/dldt:/dldt ie_cross_armhf /bin/bash
+   docker run -it -v /absolute/path/to/openvino:/openvino ie_cross_armhf /bin/bash
    ```
 
 6. While in the container:
 
-   1. Go to the cloned `dldt` repository:
+   1. Go to the cloned `openvino` repository:
 
       ```bash
-      cd dldt
+      cd openvino
      ```
 
    2. Create a build folder:
@@ -291,8 +290,8 @@ with the following content:
      ```
 
 7. Press **Ctrl+D** to exit from Docker. You can find the resulting binaries
-   in the `dldt/bin/armv7l/` directory and the OpenCV*
-   installation in the `dldt/inference-engine/temp`.
+   in the `openvino/bin/armv7l/` directory and the OpenCV*
+   installation in the `openvino/inference-engine/temp`.
 
 >**NOTE**: Native applications that link to cross-compiled Inference Engine
 library require an extra compilation flag `-march=armv7-a`.
@@ -381,8 +380,8 @@ cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^
 
 6. Before running the samples, add paths to the TBB and OpenCV binaries used
    for the build to the `%PATH%` environment variable. By default, TBB binaries are
-   downloaded by the CMake-based script to the `/inference-engine/temp/tbb/bin`
-   folder, OpenCV binaries to the `/inference-engine/temp/opencv_4.3.0/opencv/bin`
+   downloaded by the CMake-based script to the `/inference-engine/temp/tbb/bin`
+   folder, OpenCV binaries to the `/inference-engine/temp/opencv_4.3.0/opencv/bin`
    folder.
 
 ### Additional Build Options
@@ -437,7 +436,7 @@ cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^
 call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
 set CXX=icl
 set CC=icl
-:: clean TBBROOT value set by ipsxe-comp-vars.bat, required TBB package will be downloaded by dldt cmake script
+:: clean TBBROOT value set by ipsxe-comp-vars.bat, required TBB package will be downloaded by openvino cmake script
 set TBBROOT=
 cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
 cmake --build . --config Release
@@ -461,7 +460,7 @@ The software was validated on:
 
 1. Clone submodules:
     ```sh
-    cd dldt
+    cd openvino
     git submodule update --init --recursive
     ```
 2. Install build dependencies using the `install_dependencies.sh` script in the
@@ -545,7 +544,7 @@ This section describes how to build Inference Engine for Android x86 (64-bit) op
 
 2. Clone submodules
     ```sh
-    cd dldt
+    cd openvino
     git submodule update --init --recursive
     ```
 
@@ -610,7 +609,7 @@ before running the Inference Engine build:
 For CMake projects, set the `InferenceEngine_DIR` environment variable:
 
 ```sh
-export InferenceEngine_DIR=/path/to/dldt/build/
+export InferenceEngine_DIR=/path/to/openvino/build/
 ```
 
 Then you can find Inference Engine by `find_package`:
@@ -660,20 +659,6 @@ sudo ldconfig
 rm 97-myriad-usbboot.rules
 ```
 
-### For Windows
-
-For Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2,
-install the Movidius™ VSC driver:
-
-1. Go to the `/inference-engine/thirdparty/movidius/MovidiusDriver`
-   directory, where the `DLDT_ROOT_DIR` is the directory to which the DLDT
-   repository was cloned.
-2. Right click on the `Movidius_VSC_Device.inf` file and choose **Install** from
-   the pop-up menu.
-
-You have installed the driver for your Intel® Movidius™ Neural Compute Stick
-or Intel® Neural Compute Stick 2.
-
 
 ## Next Steps
 
 Congratulations, you have built the Inference Engine. To get started with the
@@ -706,7 +691,7 @@ This target collects all dependencies, prepares the nGraph package and copies it
 [Intel® Distribution of OpenVINO™]:https://software.intel.com/en-us/openvino-toolkit
 [CMake]:https://cmake.org/download/
-[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]:https://github.com/intel/compute-runtime/releases/tag/19.41.14441
+[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 20.13.16352]:https://github.com/intel/compute-runtime/releases/tag/20.13.16352
 [MKL-DNN repository]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz
 [MKL-DNN repository for Windows]:(https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip)
 [OpenBLAS]:https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download
diff --git a/get-started-linux.md b/get-started-linux.md
index 3aef12a98e2fa6..c0a6a712d68c35 100644
--- a/get-started-linux.md
+++ b/get-started-linux.md
@@ -1,7 +1,7 @@
 # Get Started with OpenVINO™ Deep Learning Deployment Toolkit (DLDT) on Linux*
 
 This guide provides you with the information that will help you to start using
-the DLDT on Linux\*. With this guide, you will learn how to:
+the OpenVINO™ toolkit on Linux\*. With this guide, you will learn how to:
 
 1. [Configure the Model Optimizer](#configure-the-model-optimizer)
 2. [Prepare a model for sample inference](#prepare-a-model-for-sample-inference)
@@ -10,13 +10,13 @@ the DLDT on Linux\*. With this guide, you will learn how to:
 3. [Run the Image Classification Sample Application with the model](#run-the-image-classification-sample-application)
 
 ## Prerequisites
-1. This guide assumes that you have already cloned the `dldt` repo and
+1. This guide assumes that you have already cloned the `openvino` repo and
    successfully built the Inference Engine and Samples using the
    [build instructions](inference-engine/README.md).
 2. The original structure of the repository directories remains unchanged.
 
-> **NOTE**: Below, the directory to which the `dldt` repository is cloned is
-referred to as ``.
+> **NOTE**: Below, the directory to which the `openvino` repository is cloned is
+referred to as ``.
 
 ## Configure the Model Optimizer
@@ -53,7 +53,7 @@ If you see error messages, check for any missing dependencies.
 1. Go to the Model Optimizer prerequisites directory:
 ```sh
-cd /model_optimizer/install_prerequisites
+cd /model_optimizer/install_prerequisites
 ```
 2. Run the script to configure the Model Optimizer for Caffe,
    TensorFlow, MXNet, Kaldi\*, and ONNX:
@@ -68,7 +68,7 @@ Configure individual frameworks separately **ONLY** if you did not select
 1. Go to the Model Optimizer prerequisites directory:
 ```sh
-cd /model_optimizer/install_prerequisites
+cd /model_optimizer/install_prerequisites
 ```
 2. Run the script for your model framework. You can run more than one script:
@@ -162,12 +162,12 @@ as `` below) with the Model Downloader:
 
    **For CPU (FP32):**
    ```sh
-   python3 /model_optimizer/mo.py --input_model /classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP32 --output_dir
+   python3 /model_optimizer/mo.py --input_model /classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP32 --output_dir
   ```
 
   **For GPU and MYRIAD (FP16):**
   ```sh
-   python3 /model_optimizer/mo.py --input_model /classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir
+   python3 /model_optimizer/mo.py --input_model /classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir
   ```
 After the Model Optimizer script is completed, the produced IR files (`squeezenet1.1.xml`,
 `squeezenet1.1.bin`) are in the specified `` directory.
@@ -184,14 +184,14 @@ Now you are ready to run the Image Classification Sample Application.
 
 The Inference Engine sample applications are automatically compiled when you
 built the Inference Engine using the [build instructions](inference-engine/README.md).
-The binary files are located in the `/inference-engine/bin/intel64/Release`
+The binary files are located in the `/inference-engine/bin/intel64/Release`
 directory.
 
 To run the Image Classification sample application with an input image on the prepared IR:
 
 1. Go to the samples build directory:
    ```sh
-   cd /inference-engine/bin/intel64/Release
+   cd /inference-engine/bin/intel64/Release
 2. Run the sample executable with specifying the `car.png` file from the
    `/scripts/demo/` directory as an input
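
For readers following the updated get-started steps after a from-source build, a minimal sketch of what that final step can look like is shown below. The `OPENVINO_DIR` and `IR_DIR` variables, the demo image path, and the sample binary name `classification_sample_async` are assumptions used for illustration only; adjust them to match your clone location, your Model Optimizer output directory, and the sample executables actually produced by your build.

```sh
# Hypothetical locations -- substitute your own clone and IR output directories.
export OPENVINO_DIR=/home/user/openvino
export IR_DIR=/home/user/ir_output

# The built samples sit next to the Inference Engine binaries.
cd "$OPENVINO_DIR/inference-engine/bin/intel64/Release"

# Classify the demo image on the CPU using the FP32 IR produced earlier.
# -i: input image, -m: model IR (.xml), -d: target device (CPU, GPU, MYRIAD, ...)
./classification_sample_async \
    -i "$OPENVINO_DIR/scripts/demo/car.png" \
    -m "$IR_DIR/squeezenet1.1.xml" \
    -d CPU
```

If the GPU or MYRIAD plugin is the target instead, pass the FP16 IR and change `-d` accordingly; the rest of the invocation stays the same.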