change repo name to openvino in readme files
Alexey Suhov committed Jun 2, 2020
1 parent 9221f41 commit 2fe9b15
Showing 3 changed files with 30 additions and 45 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -1,5 +1,5 @@
# [OpenVINO™ Toolkit](https://01.org/openvinotoolkit) - Deep Learning Deployment Toolkit repository
[![Stable release](https://img.shields.io/badge/version-2020.1-green.svg)](https://github.com/opencv/dldt/releases/tag/2020.1)
[![Stable release](https://img.shields.io/badge/version-2020.3-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2020.3.0)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)

This toolkit allows developers to deploy pre-trained deep learning models
@@ -36,7 +36,7 @@ with us doing the following steps:

* Make sure you can build the product and run all tests and samples with your patch
* In case of a larger feature, provide relevant unit tests and one or more sample
* Submit a pull request at https://github.com/opencv/dldt/pulls
* Submit a pull request at https://github.com/openvinotoolkit/openvino/pulls

We will review your contribution and, if any additional fixes or modifications
are necessary, may give some feedback to guide you. Your pull request will be
@@ -46,7 +46,7 @@ merged into GitHub* repositories if accepted.
Please report questions, issues and suggestions using:

* The `openvino` [tag on StackOverflow]\*
* [GitHub* Issues](https://github.com/opencv/dldt/issues)
* [GitHub* Issues](https://github.com/openvinotoolkit/openvino/issues)
* [Forum](https://software.intel.com/en-us/forums/computer-vision)

---
49 changes: 17 additions & 32 deletions build-instruction.md
@@ -28,7 +28,6 @@
- [Add Inference Engine to Your Project](#add-inference-engine-to-your-project)
- [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
- [For Linux, Raspbian Stretch* OS](#for-linux-raspbian-stretch-os)
- [For Windows](#for-windows-1)
- [Next Steps](#next-steps)
- [Additional Resources](#additional-resources)

@@ -60,12 +59,12 @@ The software was validated on:
- [CMake]\* 3.11 or higher
- GCC\* 4.8 or higher to build the Inference Engine
- Python 2.7 or higher for Inference Engine Python API wrapper
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441].
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 20.13.16352].
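A quick way to confirm the toolchain meets these minimums (a sketch; assumes the tools are already on your `PATH`):

```sh
# Print tool versions to check them against the minimums listed above
cmake --version
gcc --version
python3 --version
```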

### Build Steps
1. Clone submodules:
```sh
cd dldt
cd openvino
git submodule update --init --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the
@@ -78,7 +77,7 @@ The software was validated on:
```
3. By default, the build enables the Inference Engine GPU plugin to infer models
on your Intel® Processor Graphics. This requires you to
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 20.13.16352]
before running the build. If you don't want to use the GPU plugin, use the
`-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the
Intel® Graphics Compute Runtime for OpenCL™ Driver.
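For illustration, a minimal configure-and-build sequence with the GPU plugin disabled might look like the following (a sketch only; the `build` directory name and job count are arbitrary choices, not requirements of the repository):

```sh
# Configure without the clDNN (GPU) plugin and build in Release mode
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_CLDNN=OFF ..
make --jobs=$(nproc --all)
```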
@@ -172,10 +171,10 @@ Native compilation of the Inference Engine is the most straightforward solution.
sudo apt-get install -y git cmake libusb-1.0-0-dev
```

2. Go to the cloned `dldt` repository:
2. Go to the cloned `openvino` repository:

```bash
cd dldt
cd openvino
```

3. Initialize submodules:
@@ -262,15 +261,15 @@ with the following content:
5. Run Docker\* container with mounted source code folder from host:

```bash
docker run -it -v /absolute/path/to/dldt:/dldt ie_cross_armhf /bin/bash
docker run -it -v /absolute/path/to/openvino:/openvino ie_cross_armhf /bin/bash
```

6. While in the container:

1. Go to the cloned `dldt` repository:
1. Go to the cloned `openvino` repository:

```bash
cd dldt
cd openvino
```

2. Create a build folder:
@@ -291,8 +290,8 @@ with the following content:
```

7. Press **Ctrl+D** to exit from Docker. You can find the resulting binaries
in the `dldt/bin/armv7l/` directory and the OpenCV*
installation in the `dldt/inference-engine/temp`.
in the `openvino/bin/armv7l/` directory and the OpenCV*
installation in the `openvino/inference-engine/temp`.

>**NOTE**: Native applications that link to cross-compiled Inference Engine
library require an extra compilation flag `-march=armv7-a`.
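For example, compiling a small host application against the cross-compiled library might look like this (a sketch; the source file name and the include/library paths are placeholders, not paths guaranteed by the build):

```sh
# -march=armv7-a is required when linking to the cross-compiled Inference Engine
g++ -march=armv7-a main.cpp \
    -I/path/to/openvino/inference-engine/include \
    -L/path/to/openvino/bin/armv7l/Release/lib \
    -linference_engine -o sample_app
```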
@@ -381,8 +380,8 @@ cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^

6. Before running the samples, add paths to the TBB and OpenCV binaries used for
the build to the `%PATH%` environment variable. By default, TBB binaries are
downloaded by the CMake-based script to the `<dldt_repo>/inference-engine/temp/tbb/bin`
folder, OpenCV binaries to the `<dldt_repo>/inference-engine/temp/opencv_4.3.0/opencv/bin`
downloaded by the CMake-based script to the `<openvino_repo>/inference-engine/temp/tbb/bin`
folder, OpenCV binaries to the `<openvino_repo>/inference-engine/temp/opencv_4.3.0/opencv/bin`
folder.

### Additional Build Options
@@ -437,7 +436,7 @@ cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
set CXX=icl
set CC=icl
:: clean TBBROOT value set by ipsxe-comp-vars.bat, required TBB package will be downloaded by dldt cmake script
:: clean TBBROOT value set by ipsxe-comp-vars.bat, required TBB package will be downloaded by openvino cmake script
set TBBROOT=
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
@@ -461,7 +460,7 @@ The software was validated on:
1. Clone submodules:
```sh
cd dldt
cd openvino
git submodule update --init --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the
Expand Down Expand Up @@ -545,7 +544,7 @@ This section describes how to build Inference Engine for Android x86 (64-bit) op
2. Clone submodules
```sh
cd dldt
cd openvino
git submodule update --init --recursive
```
Expand Down Expand Up @@ -610,7 +609,7 @@ before running the Inference Engine build:
For CMake projects, set the `InferenceEngine_DIR` environment variable:
```sh
export InferenceEngine_DIR=/path/to/dldt/build/
export InferenceEngine_DIR=/path/to/openvino/build/
```
Then you can find Inference Engine by `find_package`:
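A sketch of how a consuming project might then be configured (paths are placeholders; the project's `CMakeLists.txt` is assumed to call `find_package(InferenceEngine REQUIRED)`):

```sh
# Point CMake at the Inference Engine build tree, then configure the consumer project
export InferenceEngine_DIR=/path/to/openvino/build/
cd /path/to/your/project
mkdir -p build && cd build
cmake ..
```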
@@ -660,20 +659,6 @@ sudo ldconfig
rm 97-myriad-usbboot.rules
```
### For Windows
For Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2,
install the Movidius™ VSC driver:
1. Go to the `<DLDT_ROOT_DIR>/inference-engine/thirdparty/movidius/MovidiusDriver`
directory, where the `DLDT_ROOT_DIR` is the directory to which the DLDT
repository was cloned.
2. Right click on the `Movidius_VSC_Device.inf` file and choose **Install** from
the pop-up menu.
You have installed the driver for your Intel® Movidius™ Neural Compute Stick
or Intel® Neural Compute Stick 2.
## Next Steps
Congratulations, you have built the Inference Engine. To get started with the
@@ -706,7 +691,7 @@ This target collects all dependencies, prepares the nGraph package and copies it
[Intel® Distribution of OpenVINO™]:https://software.intel.com/en-us/openvino-toolkit
[CMake]:https://cmake.org/download/
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.41.14441]:https://github.com/intel/compute-runtime/releases/tag/19.41.14441
[Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 20.13.16352]:https://github.com/intel/compute-runtime/releases/tag/20.13.16352
[MKL-DNN repository]:https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz
[MKL-DNN repository for Windows]:(https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip)
[OpenBLAS]:https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download
20 changes: 10 additions & 10 deletions get-started-linux.md
@@ -1,7 +1,7 @@
# Get Started with OpenVINO™ Deep Learning Deployment Toolkit (DLDT) on Linux*

This guide provides you with the information that will help you to start using
the DLDT on Linux\*. With this guide, you will learn how to:
the OpenVINO on Linux\*. With this guide, you will learn how to:

1. [Configure the Model Optimizer](#configure-the-model-optimizer)
2. [Prepare a model for sample inference](#prepare-a-model-for-sample-inference)
@@ -10,13 +10,13 @@ the DLDT on Linux\*. With this guide, you will learn how to:
3. [Run the Image Classification Sample Application with the model](#run-the-image-classification-sample-application)

## Prerequisites
1. This guide assumes that you have already cloned the `dldt` repo and
1. This guide assumes that you have already cloned the `openvino` repo and
successfully built the Inference Engine and Samples using the
[build instructions](inference-engine/README.md).
2. The original structure of the repository directories remains unchanged.

> **NOTE**: Below, the directory to which the `dldt` repository is cloned is
referred to as `<DLDT_DIR>`.
> **NOTE**: Below, the directory to which the `openvino` repository is cloned is
referred to as `<OPENVINO_DIR>`.
## Configure the Model Optimizer

@@ -53,7 +53,7 @@ If you see error messages, check for any missing dependencies.

1. Go to the Model Optimizer prerequisites directory:
```sh
cd <DLDT_DIR>/model_optimizer/install_prerequisites
cd <OPENVINO_DIR>/model_optimizer/install_prerequisites
```
2. Run the script to configure the Model Optimizer for Caffe,
TensorFlow, MXNet, Kaldi\*, and ONNX:
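The command itself is collapsed in this view; a sketch of the usual invocation (the script name `install_prerequisites.sh` is an assumption based on the released tool):

```sh
# Configures the Model Optimizer for all supported frameworks at once
./install_prerequisites.sh
```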
@@ -68,7 +68,7 @@ Configure individual frameworks separately **ONLY** if you did not select

1. Go to the Model Optimizer prerequisites directory:
```sh
cd <DLDT_DIR>/model_optimizer/install_prerequisites
cd <OPENVINO_DIR>/model_optimizer/install_prerequisites
```
2. Run the script for your model framework. You can run more than one script:
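The per-framework commands are collapsed here; a sketch of what they typically look like (script names are assumptions based on the released tool — run only the ones you need):

```sh
./install_prerequisites_caffe.sh   # Caffe
./install_prerequisites_tf.sh      # TensorFlow
./install_prerequisites_onnx.sh    # ONNX
```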

@@ -162,12 +162,12 @@ as `<models_dir>` below) with the Model Downloader:

**For CPU (FP32):**
```sh
python3 <DLDT_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP32 --output_dir <ir_dir>
python3 <OPENVINO_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP32 --output_dir <ir_dir>
```

**For GPU and MYRIAD (FP16):**
```sh
python3 <DLDT_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir <ir_dir>
python3 <OPENVINO_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir <ir_dir>
```
After the Model Optimizer script is completed, the produced IR files (`squeezenet1.1.xml`, `squeezenet1.1.bin`) are in the specified `<ir_dir>` directory.

@@ -184,14 +184,14 @@ Now you are ready to run the Image Classification Sample Application.

The Inference Engine sample applications are automatically compiled when you
built the Inference Engine using the [build instructions](inference-engine/README.md).
The binary files are located in the `<DLDT_DIR>/inference-engine/bin/intel64/Release`
The binary files are located in the `<OPENVINO_DIR>/inference-engine/bin/intel64/Release`
directory.

To run the Image Classification sample application with an input image on the prepared IR:

1. Go to the samples build directory:
```sh
cd <DLDT_DIR>/inference-engine/bin/intel64/Release
cd <OPENVINO_DIR>/inference-engine/bin/intel64/Release
```
2. Run the sample executable with specifying the `car.png` file from the
`<OPENVINO_DIR>/scripts/demo/` directory as an input
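The full command is collapsed in this view; a sketch of such a run (the sample binary name and the `-d CPU` device flag are assumptions — adjust them to your build and target device):

```sh
./classification_sample_async -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d CPU
```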