Feature/azaytsev/mo devguide changes (openvinotoolkit#6405)

* MO devguide edits

* MO devguide edits

* MO devguide edits

* MO devguide edits

* MO devguide edits

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Experimenting with videos

* Additional edits

* Additional edits

* Updated the workflow diagram

* Minor fix

* Experimenting with videos

* Updated the workflow diagram

* Removed  Prepare_Trained_Model, changed the title for Config_Model_Optimizer

* Rolled back

* Revert "Rolled back"

This reverts commit 6a4a3e1.

* Revert "Removed  Prepare_Trained_Model, changed the title for Config_Model_Optimizer"

This reverts commit 0810bd5.

* Fixed ie_docs.xml, Removed  Prepare_Trained_Model, changed the title for Config_Model_Optimizer

* Fixed ie_docs.xml

* Minor fix

* <details> tag issue

* <details> tag issue

* Fix <details> tag issue

* Fix <details> tag issue

* Fix <details> tag issue
# Conflicts:
#	thirdparty/ade
andrew-zaytsev committed Aug 13, 2021
1 parent dcf4dbc commit 02013fc
Showing 8 changed files with 56 additions and 218 deletions.
141 changes: 28 additions & 113 deletions docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -1,135 +1,50 @@
# Model Optimizer Developer Guide {#openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide}

## Introduction

Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.

The Model Optimizer process assumes you have a network model trained with a supported deep learning framework (Caffe*, TensorFlow*, Kaldi*, MXNet*) or converted to the ONNX* format. Model Optimizer produces an Intermediate Representation (IR) of the network, which can be inferred with the [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md).

> **NOTE**: Model Optimizer does not infer models. Model Optimizer is an offline tool that runs before the inference takes place.
The scheme below illustrates the typical workflow for deploying a trained deep learning model:

![](img/workflow_steps.png)
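
In command-line terms, this workflow splits into an offline conversion step and a separate inference step. Below is a minimal sketch, assuming a TensorFlow frozen graph and the `benchmark_app` Python sample shipped with OpenVINO (file names and target device are illustrative):

```sh
# Offline, once: convert the trained model to an IR.
python3 mo.py --input_model model.pb --output_dir ir/

# At deployment time: run inference on the IR with an Inference Engine
# application, for example the benchmark_app sample.
python3 benchmark_app.py -m ir/model.xml -d CPU
```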

The IR is a pair of files describing the model:

* <code>.xml</code> - Describes the network topology

* <code>.bin</code> - Contains the weights and biases binary data

Below is a simple command running Model Optimizer to generate an IR for the input model:

```sh
python3 mo.py --input_model INPUT_MODEL
```
To learn about all Model Optimizer parameters and conversion techniques, see the [Converting a Model to IR](prepare_model/convert_model/Converting_Model.md) page.
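
For example, a fuller invocation might pin the input shape, store FP16 weights, and choose the output directory. A sketch using standard Model Optimizer parameters, with an illustrative model path and shape:

```sh
python3 mo.py --input_model model.pb \
    --input_shape [1,224,224,3] \
    --data_type FP16 \
    --output_dir ir_fp16/
```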

> **TIP**: You can get a quick start with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref openvino_docs_get_started_get_started_dl_workbench) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is the OpenVINO™ toolkit UI that enables you to
> import a model, analyze its performance and accuracy, visualize the outputs, and optimize and prepare the model
> for deployment on various Intel® platforms.
## What's New in the Model Optimizer in this Release?

* Common changes:
    * Implemented several optimization transformations to replace sub-graphs of operations with HSwish, Mish, Swish, and SoftPlus operations.
    * Model Optimizer generates IR keeping shape-calculating sub-graphs **by default**. Previously, this behavior was triggered only if the `--keep_shape_ops` command-line parameter was provided. The parameter is ignored in this release and will be deleted in the next release. To trigger the legacy behavior and generate an IR for a fixed input shape (folding ShapeOf operations and shape-calculating sub-graphs to Constant), use the `--static_shape` command-line parameter (see the example after this list). Note that changing the model input shape using the Inference Engine API at runtime may fail for such an IR.
    * Fixed Model Optimizer conversion issues that resulted in non-reshapeable IRs when using the Inference Engine reshape API.
    * Enabled transformations to fix non-reshapeable patterns in the original networks:
        * Hardcoded Reshape:
            * In the Reshape(2D)->MatMul pattern
            * Reshape->Transpose->Reshape when the pattern can be fused to the ShuffleChannels or DepthToSpace operation
        * Hardcoded Interpolate:
            * In the Interpolate->Concat pattern
    * Added a dedicated requirements file for TensorFlow 2.X as well as dedicated install prerequisites scripts.
    * Replaced the SparseToDense operation with ScatterNDUpdate-4.
* ONNX*:
    * Added the ability to specify the model output **tensor** name using the `--output` command-line parameter.
    * Added support for the following operations:
        * Acosh
        * Asinh
        * Atanh
        * DepthToSpace-11, 13
        * DequantizeLinear-10 (zero_point must be constant)
        * HardSigmoid-1, 6
        * QuantizeLinear-10 (zero_point must be constant)
        * ReduceL1-11, 13
        * ReduceL2-11, 13
        * Resize-11, 13 (except mode="nearest" with 5D+ input, mode="tf_crop_and_resize", and attributes exclude_outside and extrapolation_value with non-zero values)
        * ScatterND-11, 13
        * SpaceToDepth-11, 13
* TensorFlow*:
    * Added support for the following operations:
        * Acosh
        * Asinh
        * Atanh
        * CTCLoss
        * EuclideanNorm
        * ExtractImagePatches
        * FloorDiv
* MXNet*:
    * Added support for the following operations:
        * Acosh
        * Asinh
        * Atanh
* Kaldi*:
    * Fixed a bug with ParallelComponent support. It is now fully supported, with no restrictions.
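
As an illustration of the shape-handling flags described under common changes above (a sketch; the model file name is hypothetical):

```sh
# Default in this release: shape-calculating sub-graphs are kept,
# so the IR remains reshapeable through the Inference Engine API.
python3 mo.py --input_model model.onnx

# Legacy behavior: fold ShapeOf operations and shape-calculating
# sub-graphs to Constant for a fixed input shape. Reshaping such an
# IR at runtime may fail.
python3 mo.py --input_model model.onnx --static_shape
```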

> **NOTE:**
> [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
## Table of Contents

* [Preparing and Optimizing your Trained Model with Model Optimizer](prepare_model/Prepare_Trained_Model.md)
* [Configuring Model Optimizer](prepare_model/Config_Model_Optimizer.md)
* [Converting a Model to Intermediate Representation (IR)](prepare_model/convert_model/Converting_Model.md)
* [Converting a Model Using General Conversion Parameters](prepare_model/convert_model/Converting_Model_General.md)
* [Converting Your Caffe* Model](prepare_model/convert_model/Convert_Model_From_Caffe.md)
* [Converting Your TensorFlow* Model](prepare_model/convert_model/Convert_Model_From_TensorFlow.md)
* [Converting BERT from TensorFlow](prepare_model/convert_model/tf_specific/Convert_BERT_From_Tensorflow.md)
* [Converting GNMT from TensorFlow](prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md)
* [Converting YOLO from DarkNet to TensorFlow and then to IR](prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md)
* [Converting Wide and Deep Models from TensorFlow](prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md)
* [Converting FaceNet from TensorFlow](prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md)
* [Converting DeepSpeech from TensorFlow](prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md)
* [Converting Language Model on One Billion Word Benchmark from TensorFlow](prepare_model/convert_model/tf_specific/Convert_lm_1b_From_Tensorflow.md)
* [Converting Neural Collaborative Filtering Model from TensorFlow*](prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md)
* [Converting TensorFlow* Object Detection API Models](prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md)
* [Converting TensorFlow*-Slim Image Classification Model Library Models](prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md)
* [Converting CRNN Model from TensorFlow*](prepare_model/convert_model/tf_specific/Convert_CRNN_From_Tensorflow.md)
* [Converting Your MXNet* Model](prepare_model/convert_model/Convert_Model_From_MxNet.md)
* [Converting a Style Transfer Model from MXNet](prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md)
* [Converting Your Kaldi* Model](prepare_model/convert_model/Convert_Model_From_Kaldi.md)
* [Converting Your ONNX* Model](prepare_model/convert_model/Convert_Model_From_ONNX.md)
* [Converting Faster-RCNN ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_Faster_RCNN.md)
* [Converting Mask-RCNN ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_Mask_RCNN.md)
* [Converting GPT2 ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_GPT2.md)
* [Converting Your PyTorch* Model](prepare_model/convert_model/Convert_Model_From_PyTorch.md)
* [Converting F3Net PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_F3Net.md)
* [Converting QuartzNet PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md)
* [Converting YOLACT PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md)
* [Model Optimizations Techniques](prepare_model/Model_Optimization_Techniques.md)
* [Cutting parts of the model](prepare_model/convert_model/Cutting_Model.md)
* [Sub-graph Replacement in Model Optimizer](prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md)
* [Supported Framework Layers](prepare_model/Supported_Frameworks_Layers.md)
* [Intermediate Representation and Operation Sets](IR_and_opsets.md)
* [Operations Specification](../ops/opset.md)
* [Intermediate Representation suitable for INT8 inference](prepare_model/convert_model/IR_suitable_for_INT8_inference.md)
* [Model Optimizer Extensibility](prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md)
* [Extending Model Optimizer with New Primitives](prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md)
* [Extending Model Optimizer with Caffe Python Layers](prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md)
* [Extending Model Optimizer with Custom MXNet* Operations](prepare_model/customize_model_optimizer/Extending_MXNet_Model_Optimizer_with_New_Primitives.md)
* [Legacy Mode for Caffe* Custom Layers](prepare_model/customize_model_optimizer/Legacy_Mode_for_Caffe_Custom_Layers.md)
* [Model Optimizer Frequently Asked Questions](prepare_model/Model_Optimizer_FAQ.md)
* [Known Issues](Known_Issues_Limitations.md)

**Typical Next Step:** [Preparing and Optimizing your Trained Model with Model Optimizer](prepare_model/Prepare_Trained_Model.md)

## Videos

\htmlonly
<table>
<tr>
<td><iframe width="220" src="https://www.youtube.com/embed/Kl1ptVb7aI8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></td>
<td><iframe width="220" src="https://www.youtube.com/embed/BBt1rseDcy0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></td>
<td><iframe width="220" src="https://www.youtube.com/embed/RF8ypHyiKrY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></td>
</tr>
<tr>
<td><strong>Model Optimizer Concept</strong>.<br>Duration: 3:56.</td>
<td><strong>Model Optimizer Basic<br>Operation</strong>.<br>Duration: 2:57.</td>
<td><strong>Choosing the Right Precision</strong>.<br>Duration: 4:18.</td>
</tr>
</table>
\endhtmlonly
4 changes: 2 additions & 2 deletions docs/MO_DG/img/workflow_steps.png
Binary file not shown.
18 changes: 12 additions & 6 deletions docs/MO_DG/prepare_model/Config_Model_Optimizer.md
@@ -1,8 +1,6 @@
# Installing Model Optimizer Prerequisites {#openvino_docs_MO_DG_prepare_model_Config_Model_Optimizer}

Before running the Model Optimizer, you must install the Model Optimizer prerequisites for the framework that was used to train the model. This section tells you how to install the prerequisites either through scripts or by using a manual process.
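
For example, on Linux the prerequisite installation scripts are expected under the Model Optimizer installation directory (`<INSTALL_DIR>` below is a placeholder for your OpenVINO installation path):

```sh
cd <INSTALL_DIR>/deployment_tools/model_optimizer/install_prerequisites
# Install prerequisites for all supported frameworks at once:
./install_prerequisites.sh
# ...or for a single framework, for example TensorFlow:
./install_prerequisites_tf.sh
```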

## Using Configuration Scripts

@@ -154,6 +152,10 @@
pip3 install -r requirements_onnx.txt
```

## Using the protobuf Library in the Model Optimizer for Caffe\*
\htmlonly<details>\endhtmlonly
<summary>Click to expand</summary>

These procedures require:

@@ -166,15 +168,15 @@

By default, the library executes the pure Python\* language implementation,
which is slow. These steps show how to use the faster C++ implementation
of the protobuf library on Windows OS or Linux OS.

#### Using the protobuf Library on Linux\* OS

To use the C++ implementation of the protobuf library on Linux, it is enough to
set the following environment variable:
```sh
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
```
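
To confirm which implementation is active, you can query protobuf's internal `api_implementation` module (a quick debugging check, assuming the protobuf Python package is installed):

```sh
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
# Prints "cpp" when the C++ implementation is in use, "python" otherwise.
python3 -c "from google.protobuf.internal import api_implementation; print(api_implementation.Type())"
```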

#### <a name="protobuf-install-windows"></a>Using the protobuf Library on Windows\* OS

On Windows, pre-built protobuf packages for Python versions 3.4, 3.5, 3.6,
and 3.7 are provided with the installation package and can be found in
@@ -262,6 +264,10 @@
python3 -m easy_install dist/protobuf-3.6.1-py3.6-win-amd64.egg
set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
```

\htmlonly
</details>
\endhtmlonly

## See Also

* [Converting a Model to Intermediate Representation (IR)](convert_model/Converting_Model.md)
