From e20234b9b6f95f24583918f5a20907d81c26a29c Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Mon, 24 May 2021 10:37:11 +0300 Subject: [PATCH 01/14] Update for install guides: - Human rights notice - Components descriptions update & console scripts - Extras requirements definition - Change verification step to Model Optimizer call --- docs/install_guides/pypi-openvino-dev.md | 41 +++++++++++++++++------- docs/install_guides/pypi-openvino-rt.md | 3 ++ 2 files changed, 33 insertions(+), 11 deletions(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index f04bdf3a21f8c0..64ff02ddadf074 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -4,16 +4,22 @@ Copyright © 2018-2021 Intel Corporation “Software Package”) is subject to the terms and conditions of the [software license agreements](https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf) for the Software Package, which may also include notices, disclaimers, or license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the “third-party-programs.txt” or other similarly-named text file included with the Software Package for additional details. +>Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the [Intel Global Human Rights Principles](https://www.intel.com/content/www/us/en/policy/policy-human-rights.html). Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right. + + ## Introduction OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs), recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance, AI and deep learning inference deployed from edge to cloud. **The Developer Package Includes the Following Components Installed by Default:** -| Component | Description | -|-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. 
Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. | -| Additional Tools | A set of tools to work with your models including [Accuracy Checker utility](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html), [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html), [Benchmark Tool](../../inference-engine/samples/benchmark_app/README.md) | +| Component | Console Script | Description | +|------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. 
Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. | +| [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | +| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool which allows you collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion dataset to suitable for metric evaluation format used in Accuracy Checker. | +| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to re-train or fine-tune models. | +| [Model Downloader and other OMZ tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used instead of training your own models to speed-up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patch them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for conversion the models that are stored not in the Inference Engine IR format into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization full-precision models in the IR format into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is helper utility for dumping information about the models in a stable machine-readable format.| **The Runtime Package Includes the Following Components Installed by Dependency:** @@ -78,20 +84,32 @@ python -m pip install --upgrade pip ### Step 4. Install the Package -Run the command below:
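In practice, the console scripts from the components table above are meant to be chained: a model is fetched with Model Downloader, converted to Inference Engine IR, and then measured with the Benchmark Tool. The sketch below only illustrates that flow and is not one of the required installation steps; the model name `resnet-50-tf` is just an Open Model Zoo example, and flag names and output paths should be confirmed against each tool's `--help` output for the installed release.

```sh
# Fetch a public model from Open Model Zoo (any supported model name works here).
omz_downloader --name resnet-50-tf --output_dir models

# Convert it to Inference Engine IR; this calls Model Optimizer under the hood
# and needs the matching framework extra from Step 4 installed.
omz_converter --name resnet-50-tf --download_dir models --output_dir ir

# Estimate inference performance of the resulting IR on CPU
# (the exact IR path depends on the converter's output layout).
benchmark_app -m ir/public/resnet-50-tf/FP32/resnet-50-tf.xml -d CPU
```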
+To install and configure components of the development package for working with specific frameworks, use the `pip install openvino-dev[extras]` command, where `extras` is a list of extras from the table below: + +| DL Framework | Extras | +| :------------------------------------------------------------------------------- | :-------------------------------| +| [TensorFlow 1.x*](https://www.tensorflow.org/versions#tensorflow_1) | tensorflow | +| [TensorFlow 2.x*](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 | +| [ONNX*](https://github.com/microsoft/onnxruntime/) | onnx | +| [Caffe*](https://caffe.berkeleyvision.org/) | caffe | +| [Caffe 2*](https://caffe2.ai/) | caffe2 | +| [MXNet*](https://mxnet.apache.org/) | mxnet | +| [PyTorch*](https://pytorch.org/) | pytorch | +| [Kaldi*](https://kaldi-asr.org/) | kaldi | +For example, to install and configure components for working with TensorFlow 2.x, MXNet and Caffe, use the folowing command: ```sh - pip install openvino-dev - ``` + pip install openvino-dev[tensorflow2,mxnet,caffe] + ``` ### Step 5. Verify that the Package is Installed Run the command below (this may take a few seconds): ```sh -pot -h +mo -h ``` -You will see the help message for Post-Training Optimization Tool if installation finished successfully. +You will see the help message for Model Optimizer if installation finished successfully. ## Troubleshooting @@ -108,5 +126,6 @@ sudo apt-get install libpython3.7 ## Additional Resources -- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit) -- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org) +- [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit). +- [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org). +- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks). \ No newline at end of file diff --git a/docs/install_guides/pypi-openvino-rt.md b/docs/install_guides/pypi-openvino-rt.md index 6e22d74157cd28..bfc37abfb980bf 100644 --- a/docs/install_guides/pypi-openvino-rt.md +++ b/docs/install_guides/pypi-openvino-rt.md @@ -4,6 +4,8 @@ Copyright © 2018-2021 Intel Corporation “Software Package”) is subject to the terms and conditions of the [software license agreements](https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf) for the Software Package, which may also include notices, disclaimers, or license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the “third-party-programs.txt” or other similarly-named text file included with the Software Package for additional details. +>Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the [Intel Global Human Rights Principles](https://www.intel.com/content/www/us/en/policy/policy-human-rights.html). Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right. 
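For the runtime guide that follows, a slightly stronger smoke test than a bare import is to ask the Inference Engine which devices it can see. This is only a sketch against the 2021.x `openvino.inference_engine` Python API, and the reported list depends on the local hardware and drivers:

```sh
# Print the inference devices (for example CPU, GPU, MYRIAD) visible to the Inference Engine.
python -c "from openvino.inference_engine import IECore; print(IECore().available_devices)"
```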
+ ## Introduction OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs), recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance, AI and deep learning inference deployed from edge to cloud. @@ -103,4 +105,5 @@ sudo apt-get install libpython3.7 - [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit). - [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org). +- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks). From b5f9c361ac8cc990c3cdf41e58a5a27eb5e09c5f Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Mon, 24 May 2021 13:27:48 +0300 Subject: [PATCH 02/14] Update docs/install_guides/pypi-openvino-dev.md Co-authored-by: Helena Kloosterman --- docs/install_guides/pypi-openvino-dev.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 64ff02ddadf074..18578bdc5a6bae 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -19,7 +19,7 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio | [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | | [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool which allows you collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion dataset to suitable for metric evaluation format used in Accuracy Checker. | | [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to re-train or fine-tune models. | -| [Model Downloader and other OMZ tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used instead of training your own models to speed-up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patch them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for conversion the models that are stored not in the Inference Engine IR format into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization full-precision models in the IR format into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is helper utility for dumping information about the models in a stable machine-readable format.| +| [Model Downloader and other OMZ tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used instead of training your own models to speed-up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting the models that are stored not in the Inference Engine IR format into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization full-precision models in the IR format into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| **The Runtime Package Includes the Following Components Installed by Dependency:** @@ -128,4 +128,4 @@ sudo apt-get install libpython3.7 - [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit). - [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org). -- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks). \ No newline at end of file +- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks). From 1aa226d41a1c4630d259b0199dbbefaac7748721 Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Mon, 24 May 2021 16:58:42 +0300 Subject: [PATCH 03/14] Update docs/install_guides/pypi-openvino-dev.md Co-authored-by: Roman Donchenko --- docs/install_guides/pypi-openvino-dev.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 18578bdc5a6bae..313a8d72acc399 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -92,7 +92,7 @@ To install and configure components of the development package for working with | [TensorFlow 2.x*](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 | | [ONNX*](https://github.com/microsoft/onnxruntime/) | onnx | | [Caffe*](https://caffe.berkeleyvision.org/) | caffe | -| [Caffe 2*](https://caffe2.ai/) | caffe2 | +| [Caffe2*](https://caffe2.ai/) | caffe2 | | [MXNet*](https://mxnet.apache.org/) | mxnet | | [PyTorch*](https://pytorch.org/) | pytorch | | [Kaldi*](https://kaldi-asr.org/) | kaldi | From 4b468f312fd5c84c5f69702c47a5dd09b59b123e Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Mon, 24 May 2021 16:59:05 +0300 Subject: [PATCH 04/14] Update docs/install_guides/pypi-openvino-dev.md Co-authored-by: Roman Donchenko --- docs/install_guides/pypi-openvino-dev.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 313a8d72acc399..513db140e19b39 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -88,8 +88,8 @@ To install and configure components of the development package for working with | DL Framework | Extras | | :------------------------------------------------------------------------------- | :-------------------------------| -| [TensorFlow 1.x*](https://www.tensorflow.org/versions#tensorflow_1) | tensorflow | -| [TensorFlow 2.x*](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 | +| [TensorFlow* 1.x](https://www.tensorflow.org/versions#tensorflow_1) | tensorflow | +| [TensorFlow* 2.x](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 | | [ONNX*](https://github.com/microsoft/onnxruntime/) | onnx | | [Caffe*](https://caffe.berkeleyvision.org/) | caffe | | [Caffe2*](https://caffe2.ai/) | caffe2 | From 4ecbc357a9cb5a93943cfbd0a9e61a37d6fc3fba Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Mon, 24 May 2021 17:20:32 +0300 Subject: [PATCH 05/14] Update docs/install_guides/pypi-openvino-dev.md Co-authored-by: Roman Donchenko --- docs/install_guides/pypi-openvino-dev.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 513db140e19b39..958eaf71acd946 100644 --- 
a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -17,7 +17,7 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio |------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. 
Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. | | [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | -| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool which allows you collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion dataset to suitable for metric evaluation format used in Accuracy Checker. | +| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool which allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion of datasets to the format suitable for metric evaluation that is used in Accuracy Checker. | | [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to re-train or fine-tune models. | | [Model Downloader and other OMZ tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used instead of training your own models to speed-up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting the models that are stored not in the Inference Engine IR format into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization full-precision models in the IR format into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| From 925841ddb497d676b48b6618e2cd65dd2b5f3d1d Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Mon, 24 May 2021 17:24:49 +0300 Subject: [PATCH 06/14] Update docs/install_guides/pypi-openvino-dev.md Co-authored-by: Roman Donchenko --- docs/install_guides/pypi-openvino-dev.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 958eaf71acd946..b5a9ee478187b8 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -86,7 +86,7 @@ python -m pip install --upgrade pip To install and configure components of the development package for working with specific frameworks, use the `pip install openvino-dev[extras]` command, where `extras` is a list of extras from the table below: -| DL Framework | Extras | +| DL Framework | Extra | | :------------------------------------------------------------------------------- | :-------------------------------| | [TensorFlow* 1.x](https://www.tensorflow.org/versions#tensorflow_1) | tensorflow | | [TensorFlow* 2.x](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 | From 1aa9a2a7b7fe640286eb19abd41675d50fe4f58e Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Mon, 24 May 2021 17:27:25 +0300 Subject: [PATCH 07/14] order --- docs/install_guides/pypi-openvino-dev.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 64ff02ddadf074..4f718867fe3905 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -19,7 +19,7 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio | [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | | [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool which allows you collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion dataset to suitable for metric evaluation format used in Accuracy Checker. | | [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to re-train or fine-tune models. | -| [Model Downloader and other OMZ tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used instead of training your own models to speed-up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patch them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for conversion the models that are stored not in the Inference Engine IR format into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization full-precision models in the IR format into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is helper utility for dumping information about the models in a stable machine-readable format.| +| [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used instead of training your own models to speed-up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patch them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for conversion the models that are stored in a format other than the Inference Engine IR format into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of a full-precision models in the IR format into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| **The Runtime Package Includes the Following Components Installed by Dependency:** @@ -88,14 +88,14 @@ To install and configure components of the development package for working with | DL Framework | Extras | | :------------------------------------------------------------------------------- | :-------------------------------| -| [TensorFlow 1.x*](https://www.tensorflow.org/versions#tensorflow_1) | tensorflow | -| [TensorFlow 2.x*](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 | -| [ONNX*](https://github.com/microsoft/onnxruntime/) | onnx | | [Caffe*](https://caffe.berkeleyvision.org/) | caffe | | [Caffe 2*](https://caffe2.ai/) | caffe2 | +| [Kaldi*](https://kaldi-asr.org/) | kaldi | | [MXNet*](https://mxnet.apache.org/) | mxnet | +| [ONNX*](https://github.com/microsoft/onnxruntime/) | onnx | | [PyTorch*](https://pytorch.org/) | pytorch | -| [Kaldi*](https://kaldi-asr.org/) | kaldi | +| [TensorFlow 1.x*](https://www.tensorflow.org/versions#tensorflow_1) | tensorflow | +| [TensorFlow 2.x*](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 | For example, to install and configure components for working with TensorFlow 2.x, MXNet and Caffe, use the folowing command: ```sh From 5a43026863e8b08d5949e94c4865a280b7eba662 Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Mon, 24 May 2021 17:41:38 +0300 Subject: [PATCH 08/14] fix grammar --- docs/install_guides/pypi-openvino-dev.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 12dec2ee633ced..43db164eacc357 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -19,7 +19,7 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio | [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | | [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool which allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion of datasets to the format suitable for metric evaluation that is used in Accuracy Checker. | | [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to re-train or fine-tune models. | -| [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used instead of training your own models to speed-up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patch them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for conversion the models that are stored in a format other than the Inference Engine IR format into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of a full-precision models in the IR format into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| +| [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used instead of training your own models to speed-up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting the models that are stored in a format other than the Inference Engine IR format into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of a full-precision models in the IR format into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| **The Runtime Package Includes the Following Components Installed by Dependency:** From 96db371e0693460a1588daf3e9886606475581bc Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Wed, 26 May 2021 18:56:35 +0300 Subject: [PATCH 09/14] Update according to recommendations from InfoDev --- docs/install_guides/pypi-openvino-dev.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 43db164eacc357..b27248f3bf3260 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -17,9 +17,9 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio |------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. 
Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. | | [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | -| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool which allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion of datasets to the format suitable for metric evaluation that is used in Accuracy Checker. | -| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to re-train or fine-tune models. | -| [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used instead of training your own models to speed-up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting the models that are stored in a format other than the Inference Engine IR format into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of a full-precision models in the IR format into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| +| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion of datasets to the format suitable for metric evaluation used in Accuracy Checker. | +| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. | +| [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. Use these free pre-trained models instead of training your own models to speed up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patches them with Model Optimizer to make them more usable. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting the models stored in a format other than the Intermediate Representation (IR) into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of full-precision IR models into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| **The Runtime Package Includes the Following Components Installed by Dependency:** @@ -84,7 +84,7 @@ python -m pip install --upgrade pip ### Step 4. Install the Package -To install and configure components of the development package for working with specific frameworks, use the `pip install openvino-dev[extras]` command, where `extras` is a list of extras from the table below: +To install and configure the components of the development package for working with specific frameworks, use the `pip install openvino-dev[extras]` command, where `extras` is a list of extras from the table below: | DL Framework | Extra | | :------------------------------------------------------------------------------- | :-------------------------------| @@ -97,12 +97,12 @@ To install and configure components of the development package for working with | [TensorFlow* 1.x](https://www.tensorflow.org/versions#tensorflow_1) | tensorflow | | [TensorFlow* 2.x](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 | -For example, to install and configure components for working with TensorFlow 2.x, MXNet and Caffe, use the folowing command: +For example, to install and configure the components for working with TensorFlow 2.x, MXNet and Caffe, use the following command: ```sh pip install openvino-dev[tensorflow2,mxnet,caffe] - ``` + ``` -### Step 5. Verify that the Package is Installed +### Step 5. Verify that the Package Is Installed Run the command below (this may take a few seconds): ```sh From e1abc584741931d84757b30237936a7b8a9a1a75 Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Wed, 26 May 2021 19:05:55 +0300 Subject: [PATCH 10/14] high-quality --- docs/install_guides/pypi-openvino-dev.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index b27248f3bf3260..9cb20d2c672ab9 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -19,7 +19,7 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio | [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | | [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion of datasets to the format suitable for metric evaluation used in Accuracy Checker. | | [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. | -| [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. Use these free pre-trained models instead of training your own models to speed up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patches them with Model Optimizer to make them more usable. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting the models stored in a format other than the Intermediate Representation (IR) into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of full-precision IR models into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| +| [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. Use these free pre-trained models instead of training your own models to speed up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting the models stored in a format other than the Intermediate Representation (IR) into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of full-precision IR models into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| **The Runtime Package Includes the Following Components Installed by Dependency:** From d172801198289da3b7ece2c9c523c8d04911e9c2 Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Wed, 26 May 2021 21:38:47 +0300 Subject: [PATCH 11/14] Caffe2* --- docs/install_guides/pypi-openvino-dev.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 9cb20d2c672ab9..0c2497c596bbdb 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -89,7 +89,7 @@ To install and configure the components of the development package for working w | DL Framework | Extra | | :------------------------------------------------------------------------------- | :-------------------------------| | [Caffe*](https://caffe.berkeleyvision.org/) | caffe | -| [Caffe* 2](https://caffe2.ai/) | caffe2 | +| [Caffe2*](https://caffe2.ai/) | caffe2 | | [Kaldi*](https://kaldi-asr.org/) | kaldi | | [MXNet*](https://mxnet.apache.org/) | mxnet | | [ONNX*](https://github.com/microsoft/onnxruntime/) | onnx | From 7442fe33cae692de1a4ebf4ed6a3fb8ae3aeae37 Mon Sep 17 00:00:00 2001 From: Sergey Lyubimtsev Date: Thu, 27 May 2021 17:33:28 +0300 Subject: [PATCH 12/14] Update document style Add additional verification step --- .../install_guides/installing-openvino-pip.md | 2 +- docs/install_guides/pypi-openvino-dev.md | 30 +++++++++++-------- docs/install_guides/pypi-openvino-rt.md | 14 ++++----- 3 files changed, 26 insertions(+), 20 deletions(-) diff --git a/docs/install_guides/installing-openvino-pip.md b/docs/install_guides/installing-openvino-pip.md index 7e36a0d77563ce..7a639faff86120 100644 --- a/docs/install_guides/installing-openvino-pip.md +++ b/docs/install_guides/installing-openvino-pip.md @@ -5,7 +5,7 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio Intel® Distribution of OpenVINO™ Toolkit provides the following packages available for installation through the PyPI repository: * Runtime package with the Inference Engine inside: [https://pypi.org/project/openvino/](https://pypi.org/project/openvino/). -* Developers package that includes the runtime package as a dependency, Model Optimizer, Accuracy Checker and Post-Training Optimization Tool: [https://pypi.org/project/openvino-dev](https://pypi.org/project/openvino-dev). +* Developer package that includes the runtime package as a dependency, Model Optimizer and other developer tools: [https://pypi.org/project/openvino-dev](https://pypi.org/project/openvino-dev). ## Additional Resources diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 0c2497c596bbdb..55e57b68c286c7 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -11,7 +11,7 @@ license terms for third party or open source software included in or with the So OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. 
Based on latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs), recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance, AI and deep learning inference deployed from edge to cloud. -**The Developer Package Includes the Following Components Installed by Default:** +The **developer package** includes the following components installed by default: | Component | Console Script | Description | |------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| @@ -21,7 +21,8 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio | [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. | | [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. Use these free pre-trained models instead of training your own models to speed up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting the models stored in a format other than the Intermediate Representation (IR) into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of full-precision IR models into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.| -**The Runtime Package Includes the Following Components Installed by Dependency:** + +**Developer package** also provides the **runtime package** installed as a dependency. The runtime package includes the following components: | Component | Description | |-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| @@ -104,20 +105,25 @@ For example, to install and configure the components for working with TensorFlow ### Step 5. Verify that the Package Is Installed -Run the command below (this may take a few seconds): -```sh -mo -h -``` +- To verify that the **developer package** is properly installed, run the command below (this may take a few seconds): + ```sh + mo -h + ``` + You will see the help message for Model Optimizer if installation finished successfully. -You will see the help message for Model Optimizer if installation finished successfully. +- To verify that Inference Engine from the **runtime package** is available, run the command below: + ```sh + python -c "from openvino.inference_engine import IECore" + ``` + You will not see any error messages if installation finished successfully. ## Troubleshooting -#### Error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio" +### Error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio" On Windows* some dependencies may require compilation from source when installing. To resolve this issue, you need to install [Build Tools for Visual Studio* 2019](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019) and repeat package installation. -#### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory +### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory To resolve missing external dependency on Ubuntu*, execute the following command: ```sh @@ -126,6 +132,6 @@ sudo apt-get install libpython3.7 ## Additional Resources -- [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit). -- [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org). -- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks). 
+- [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit) +- [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org) +- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks) diff --git a/docs/install_guides/pypi-openvino-rt.md b/docs/install_guides/pypi-openvino-rt.md index bfc37abfb980bf..7c20f9167b01ec 100644 --- a/docs/install_guides/pypi-openvino-rt.md +++ b/docs/install_guides/pypi-openvino-rt.md @@ -15,7 +15,7 @@ The Intel® Distribution of OpenVINO™ toolkit\*: - Supports heterogeneous execution across Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs - Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels -**The Runtime Package Includes the Following Components Installed by Default:** +The **runtime package** includes the following components installed by default: | Component | Description | |-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| @@ -79,7 +79,7 @@ Run the command below:
pip install openvino
```
-### Step 5. Verify that the Package is Installed
+### Step 5. Verify that the Package Is Installed
Run the command below:
```sh
@@ -90,11 +90,11 @@ You will not see any error messages if installation finished successfully.
## Troubleshooting
-#### Error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio"
+### Error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio"
On Windows* some dependencies may require compilation from source when installing. To resolve this issue, you need to install [Build Tools for Visual Studio* 2019](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019) and repeat package installation.
-#### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
+### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
To resolve missing external dependency on Ubuntu*, execute the following command:
```sh
@@ -103,7 +103,7 @@ sudo apt-get install libpython3.7
```
## Additional Resources
-- [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit).
-- [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org).
-- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks).
+- [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit)
+- [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org)
+- [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks)

From 56e53ac3fea2990c5ab0a5dfbda31598931826a2 Mon Sep 17 00:00:00 2001
From: Sergey Lyubimtsev
Date: Thu, 27 May 2021 17:45:41 +0300
Subject: [PATCH 13/14] specify Ubuntu version for troubleshooting

---
 docs/install_guides/pypi-openvino-dev.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md
index 55e57b68c286c7..7115bf55372576 100644
--- a/docs/install_guides/pypi-openvino-dev.md
+++ b/docs/install_guides/pypi-openvino-dev.md
@@ -125,7 +125,7 @@ On Windows* some dependencies may require compilation from source when installin
### ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
-To resolve missing external dependency on Ubuntu*, execute the following command:
+To resolve missing external dependency on Ubuntu* 18.04, execute the following command:
```sh
sudo apt-get install libpython3.7
```

From 8fd86b4ece9fafc1fb8c73fa9be62c87c7bc2b1e Mon Sep 17 00:00:00 2001
From: Sergey Lyubimtsev
Date: Mon, 31 May 2021 12:52:14 +0300
Subject: [PATCH 14/14] Add reference to POT API.

---
 docs/install_guides/pypi-openvino-dev.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md
index 7115bf55372576..89bb5f3db614a3 100644
--- a/docs/install_guides/pypi-openvino-dev.md
+++ b/docs/install_guides/pypi-openvino-dev.md
@@ -18,7 +18,7 @@ The **developer package** includes the following components installed by default
| Component | Console Script | Description |
|------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. 
Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
| [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. |
| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and an impressive set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility for offline conversion of datasets to the format suitable for metric evaluation used in Accuracy Checker. |
-| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. |
+| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. Optimizations are also available through the [API](https://docs.openvinotoolkit.org/latest/pot_compression_api_README.html). |
| [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. Use these free pre-trained models instead of training your own models to speed up the development and production deployment process. The principle of the tool is as follows: it downloads model files from online sources and, if necessary, patches them with Model Optimizer to make them more usable. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting the models stored in a format other than the Intermediate Representation (IR) into that format using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of full-precision IR models into low-precision versions using Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models in a stable machine-readable format.|
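
As a quick illustration of how the console scripts listed in the table above fit together, the sketch below assumes an `openvino-dev` installation and uses the Open Model Zoo public model `googlenet-v1` purely as an example; substitute your own model names and paths.

```sh
# Download a public model from Open Model Zoo (the model name is an arbitrary example).
omz_downloader --name googlenet-v1 --output_dir models

# Convert the downloaded model to OpenVINO IR (runs Model Optimizer under the hood).
omz_converter --name googlenet-v1 --download_dir models --output_dir models

# Benchmark the resulting IR on CPU; the exact path depends on the precision produced above.
benchmark_app -m models/public/googlenet-v1/FP16/googlenet-v1.xml -d CPU
```

Accuracy validation (`accuracy_check`) and post-training quantization (`pot`) follow the same command-line pattern and are driven by their own configuration files.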