Updates
ilya-lavrenov committed Mar 21, 2022
1 parent 7051316 commit 4cb00ef
Showing 5 changed files with 131 additions and 134 deletions.

Let's explore the quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model. Use the [Model Downloader](@ref omz_tools_downloader) tool to download the `fp16` model from the [OpenVINO™ Toolkit - Open Model Zoo repository](https://github.com/openvinotoolkit/open_model_zoo):
```sh
omz_downloader --name resnet-50-tf --precisions FP16-INT8
```
After that, quantize the model with the [Model Quantizer](@ref omz_tools_downloader) tool:
```sh
omz_quantizer --model_dir public/resnet-50-tf --dataset_dir <DATASET_DIR> --precisions=FP16-INT8
```
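
If you want to quickly verify the quantized model before integrating it into an application, a minimal check with the OpenVINO `benchmark_app` tool might look like the sketch below, assuming the OpenVINO development tools are installed. The IR path is an assumption based on the default Model Quantizer output layout; adjust it if your layout differs.
```sh
# Quick sanity check of the quantized IR on CPU.
# The path below assumes the default Model Quantizer output layout.
benchmark_app -m public/resnet-50-tf/FP16-INT8/resnet-50-tf.xml -d CPU
```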

### Inference
216 changes: 106 additions & 110 deletions docs/OV_Runtime_UG/deployment/deployment-manager-tool.md
# Deployment Manager {#openvino_docs_install_guides_deployment_manager_tool}

The Deployment Manager is a Python* command-line tool that creates a deployment package by assembling the model, IR files, your application, and associated dependencies into a runtime package for your target device. This tool is delivered within the Intel® Distribution of OpenVINO™ toolkit for Linux*, Windows* and macOS* release packages and is available after installation in the `<INSTALL_DIR>/tools/deployment_manager` directory.

## Prerequisites

* Intel® Distribution of OpenVINO™ toolkit
* To run inference on a target device other than CPU, device drivers must be pre-installed:
  * **For Linux**, see the following sections in the [installation instructions for Linux](../../install_guides/installing-openvino-linux.md):
    * Steps for the [Intel® Processor Graphics (GPU)](../../install_guides/configurations-for-intel-gpu.md) section
    * Steps for the [Intel® Neural Compute Stick 2](../../install_guides/configurations-for-ncs2.md) section
    * Steps for the [Intel® Vision Accelerator Design with Intel® Movidius™ VPUs](../../install_guides/installing-openvino-config-ivad-vpu.md) section
    * Steps for the [Intel® Gaussian & Neural Accelerator (GNA)](../../install_guides/configurations-for-intel-gna.md) section
  * **For Windows**, see the following sections in the [installation instructions for Windows](../../install_guides/installing-openvino-windows.md):
    * Steps for the [Intel® Processor Graphics (GPU)](../../install_guides/configurations-for-intel-gpu.md) section
    * Steps for the [Intel® Vision Accelerator Design with Intel® Movidius™ VPUs](../../install_guides/installing-openvino-config-ivad-vpu.md) section
  * **For macOS**, see the following section in the [installation instructions for macOS](../../install_guides/installing-openvino-macos.md):
    * Steps for the [Intel® Neural Compute Stick 2](../../install_guides/configurations-for-ncs2.md) section

> **IMPORTANT**: The operating system on the target system must be the same as the development system on which you are creating the package. For example, if the target system is Ubuntu 18.04, the deployment package must be created from the OpenVINO™ toolkit installed on Ubuntu 18.04.

> **TIP**: If your application requires additional dependencies, including the Microsoft Visual C++ Redistributable, use the ['--user_data' option](https://docs.openvino.ai/latest/openvino_docs_install_guides_deployment_manager_tool.html#run-standard-cli-mode) to add them to the deployment archive. Install these dependencies on the target host before running inference.

There are two ways to create a deployment package that includes inference-related components: you can run the Deployment Manager tool in either interactive mode or standard CLI mode.

### Run Interactive Mode

@sphinxdirective

.. raw:: html

<div class="collapsible-section" data-title="Click to expand/collapse">

@endsphinxdirective

Interactive mode provides a user-friendly command-line interface that will guide you through the process with text prompts.

1. To launch the Deployment Manager in interactive mode, open a new terminal window, go to the Deployment Manager tool directory and run the tool script without parameters:

@sphinxdirective

.. tab:: Linux

.. code-block:: sh

cd <INSTALL_DIR>/tools/deployment_manager
./deployment_manager.py

.. tab:: Windows

.. code-block:: bat

cd <INSTALL_DIR>\tools\deployment_manager
.\deployment_manager.py

.. tab:: macOS

.. code-block:: sh

cd <INSTALL_DIR>/tools/deployment_manager
./deployment_manager.py

@endsphinxdirective

2. The target device selection dialog is displayed:

![Deployment Manager selection dialog](../../img/selection_dialog.png)

Use the options provided on the screen to complete the selection of target devices, then press **Enter** to proceed to the package generation dialog. If you want to interrupt the generation process and exit the program, type **q** and press **Enter**.

3. Once you accept the selection, the package generation dialog is displayed:

![Deployment Manager configuration dialog](../../img/configuration_dialog.png)

The target devices you selected in the previous step appear on the screen. To go back and change the selection, type **b** and press **Enter**. Use the options provided to configure the generation process, or keep the default settings.

* `o. Change output directory` (optional): Path to the output directory. By default, it's set to your home directory.

* `u. Provide (or change) path to folder with user data` (optional): Path to a directory with user data (IRs, models, datasets, and other files or subdirectories required for inference) that will be added to the deployment archive. By default, it is set to `None`, which means you will copy the user data to the target system separately.

* `t. Change archive name` (optional): Deployment archive name without extension. By default, it is set to `openvino_deployment_package`.

4. Once all the parameters are set, type **g** and press **Enter** to generate the package for the selected target devices. To interrupt the generation process and exit the program, type **q** and press **Enter**.

After the script completes successfully, the deployment package is generated in the specified output directory.


@sphinxdirective

.. raw:: html

</div>

@endsphinxdirective

### Run Standard CLI Mode

@sphinxdirective

.. raw:: html

<div class="collapsible-section" data-title="Click to expand/collapse">

@endsphinxdirective

Alternatively, you can run the Deployment Manager tool in the standard CLI mode. In this mode, you specify the target devices and other parameters as command-line arguments of the Deployment Manager Python script. This mode is convenient for integrating the tool into an automation pipeline.

To launch the Deployment Manager tool in the standard CLI mode, open a new terminal window, go to the Deployment Manager tool directory, and run the tool script with the required command-line arguments:

@sphinxdirective

.. tab:: Linux

.. code-block:: sh

cd <INSTALL_DIR>/tools/deployment_manager
./deployment_manager.py <--targets> [--output_dir] [--archive_name] [--user_data]

.. tab:: Windows

.. code-block:: bat

cd <INSTALL_DIR>\tools\deployment_manager
.\deployment_manager.py <--targets> [--output_dir] [--archive_name] [--user_data]

.. tab:: macOS

.. code-block:: sh

cd <INSTALL_DIR>/tools/deployment_manager
./deployment_manager.py <--targets> [--output_dir] [--archive_name] [--user_data]

@endsphinxdirective

The following options are available:

* `<--targets>` (required): List of target devices to run inference. To specify more than one target, separate them with spaces. For example: `--targets cpu gpu vpu`. You can get a list of currently available targets by running the program with the `-h` option.

* `[--output_dir]` (optional): Path to the output directory. By default, it is set to your home directory.

The script successfully completes, and the deployment package is generated in the specified output directory.
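
For example, a hypothetical invocation on Linux that packages the CPU and GPU components together with a user data folder could look as follows; the output directory, archive name, and data folder below are placeholders:
```sh
cd <INSTALL_DIR>/tools/deployment_manager
# Package CPU and GPU inference components plus user data into ~/deployment_packages.
# The output directory, archive name, and user data path are placeholders.
./deployment_manager.py --targets cpu gpu \
    --output_dir ~/deployment_packages \
    --archive_name cpu_gpu_package \
    --user_data ~/my_app_data
```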

@sphinxdirective

.. raw:: html

</div>

@endsphinxdirective

## Deploy Package on Target Systems

After the Deployment Manager has successfully completed, you can find the generated `.tar.gz` (for Linux or macOS) or `.zip` (for Windows) package in the output directory you specified.

To deploy the OpenVINO Runtime components from the development machine to the target system, perform the following steps:

1. Copy the generated archive to the target system using your preferred method.
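
   For example, one possible way to copy the archive to a Linux target is over SSH; the user name, host name, and destination path below are placeholders:
```sh
# Copy the generated package to the target system (user, target-host, and /tmp are placeholders).
scp ~/openvino_deployment_package.tar.gz user@target-host:/tmp/
```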

2. Unpack the archive into the destination directory on the target system (if your archive name is different from the default shown below, replace `openvino_deployment_package` with the name you used).

@sphinxdirective

.. tab:: Linux

.. code-block:: sh

tar xf openvino_deployment_package.tar.gz -C <destination_dir>

.. tab:: Windows

.. code-block:: bat

Use the archiver of your choice to unzip the file.

.. tab:: macOS

.. code-block:: sh

tar xf openvino_deployment_package.tar.gz -C <destination_dir>

@endsphinxdirective

The package is unpacked to the destination directory and the following files and subdirectories are created:

* `setupvars.sh` — Copy of the `setupvars.sh` script from the OpenVINO installation directory.
* `runtime` — Contains the OpenVINO runtime binary files.
* `install_dependencies` — Snapshot of the `install_dependencies` directory from the OpenVINO installation directory.

3. On a target Linux system, install the required dependencies by running the `install_openvino_dependencies.sh` script included in the package:
```sh
cd <destination_dir>/openvino/install_dependencies
sudo -E ./install_openvino_dependencies.sh
```

4. Set up the environment variables:

@sphinxdirective

.. tab:: Linux

.. code-block:: sh

cd <destination_dir>/openvino/
source ./setupvars.sh

.. tab:: Windows

.. code-block:: bat

cd <destination_dir>\openvino\
.\setupvars.bat

.. tab:: macOS

.. code-block:: sh

cd <destination_dir>/openvino/
source ./setupvars.sh

@endsphinxdirective

You have now finished the deployment of the OpenVINO Runtime components to the target system.
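
As a final, hypothetical sanity check, you can source the environment script and launch the application you packaged; `my_app` below is a placeholder for your own binary, and its location inside the package depends on how you added it (for example, via `--user_data`):
```sh
cd <destination_dir>/openvino/
source ./setupvars.sh
# "my_app" is a placeholder for the application you packaged; adjust the path as needed.
./my_app
```
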
17 changes: 8 additions & 9 deletions docs/OV_Runtime_UG/deployment/deployment_intro.md
# Deploy with OpenVINO {#openvino_deployment_guide}

@sphinxdirective


@endsphinxdirective

Once the [OpenVINO application development](../integrate_with_your_application.md) is finished, application developers usually need to deploy their applications to end users. There are several ways to achieve that:

- _Centralized distribution_, when all applications reuse the same OpenVINO package within a single OpenVINO version:
  - Using Debian / RPM packages, the recommended way for the family of Linux operating systems
  - Using the pip package manager on PyPI, the default approach for Python-based applications (see the example after this list)
- _Local distribution_, when each application uses its own copy of the OpenVINO libraries, independent of the OpenVINO version:
  - Using Docker images
  - Using the [OpenVINO Deployment Manager](deployment-manager-tool.md), which provides a convenient way to create a local distribution
  - Using the advanced [Local distribution](local-distribution.md) approach, which works for all operating systems
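
As a minimal illustration of the centralized options above (the PyPI package name comes from the link below; the APT package name and repository setup are assumptions that depend on the release):
```sh
# Python-based applications: install the OpenVINO runtime from PyPI.
python3 -m pip install openvino

# Debian-based Linux (hypothetical): requires Intel's APT repository to be configured first;
# the exact package name depends on the OpenVINO release.
sudo apt install openvino
```
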
The table below shows which distribution type can be used depending on the target operating system:
| [Local distribution](local-distribution.md) | All operating systems |
| [Build OpenVINO statically and link into the final app](https://github.com/openvinotoolkit/openvino/wiki/StaticLibraries) | All operating systems |

Depending on the distribution type, the granularity of OpenVINO packages may vary. For example, the PyPI distribution of [OpenVINO has a single package 'openvino'](https://pypi.org/project/openvino/) containing all the runtime libraries and plugins, while more configurable ways like [Local distribution](local-distribution.md) provide higher granularity, so it is important to know some details about the set of libraries which are part of the OpenVINO Runtime package:

- The main library `openvino` is the library that C++ applications link against. It provides the whole OpenVINO Runtime public API, covering both the OpenVINO API 2.0 and the Inference Engine and nGraph APIs. For C applications, `openvino_c` is additionally required for distribution.

- The _optional_ plugin libraries like `openvino_intel_cpu_plugin` (matching the `openvino_.+_plugin` pattern) provide inference capabilities on specific devices or additional capabilities like [Hetero execution](../hetero_execution.md) or [Multi-Device execution](../multi_device.md).
- The _optional_ frontend libraries like `openvino_ir_frontend` (matching the `openvino_.+_frontend` pattern) provide capabilities to read models in different file formats such as OpenVINO IR, ONNX, or Paddle.

Here, _optional_ means that if the application does not use the capability enabled by a plugin, the plugin's library or the package containing the plugin is not needed in the final distribution.

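As an illustration of this granularity, a hypothetical local distribution for a CPU-only C++ application that reads OpenVINO IR models could ship only the libraries below; the Linux-style file names are assumptions, and exact names vary per platform and release:
```sh
# Hypothetical minimal library set for a CPU-only app reading OpenVINO IR models.
# File names are Linux-style and illustrative; exact names differ per platform and release.
mkdir -p my_app_package/lib
cp libopenvino.so                  my_app_package/lib/   # main OpenVINO Runtime library
cp libopenvino_intel_cpu_plugin.so my_app_package/lib/   # CPU device plugin
cp libopenvino_ir_frontend.so      my_app_package/lib/   # OpenVINO IR frontend (model reader)
cp my_app                          my_app_package/       # your application binary (placeholder)
```
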
This information covers the granularity aspects of most distribution types; additional details are needed only for the [Local distribution](local-distribution.md) approach and are provided there.
