Update to latest 2021.1 (#121)
* [DOCS] [41549] Fix broken code block in Install OpenVINO from PyPI Repository (openvinotoolkit#2800)

* Turned list into headings

* fixes

* fix

* add animation (openvinotoolkit#2865)

* Align `time_tests` with master branch from 4021e14 (openvinotoolkit#2881)

* Added info on DockerHub CI Framework (openvinotoolkit#2919)

* Feature/azaytsev/cherry pick pr2541 to 2021 1 (openvinotoolkit#2960)

* added OpenVINO Model Server to docs (openvinotoolkit#2541)

* added OpenVINO Model Server

* updated documentation to include valid links

* minor fixes

* Fixed links and style

* Update README.md

fixed links to model_server

* more corrections

* dropped reference in ie_docs and minor fixes

* Update README.md

Fixed links to Inference Engine pages

Co-authored-by: Alina Alborova <[email protected]>
Co-authored-by: Andrey Zaytsev <[email protected]>

* Added Model Server docs to 2021/1

Co-authored-by: Trawinski, Dariusz <[email protected]>
Co-authored-by: Alina Alborova <[email protected]>

* See Also sections in MO Guide (openvinotoolkit#2770)

* convert to doxygen comments

* layouts and code comments

* separate layout

* Changed layouts

* Removed FPGA from the documentation

* Updated according to CVS-38225

* some changes

* Made changes to benchmarks according to review comments

* Added logo info to the Legal_Information, updated Ubuntu, CentOS supported versions

* Updated supported Intel® Core™ processors list

* Fixed table formatting

* update api layouts

* Added new index page with overview

* Changed CMake and Python versions

* Fixed links

* some layout changes

* some layout changes

* some layout changes

* Converted SVG images to PNG

* layouts

* update layout

* Added a label for nGraph_Python_API.md

* fixed links

* Fixed image

* removed links to ../IE_DG/Introduction.md

* Removed links to tools overview page as removed

* some changes

* Remove link to Integrate_your_kernels_into_IE.md

* remove openvino_docs_IE_DG_Graph_debug_capabilities from layout as it was removed

* update layouts

* Post-release fixes and installation path changes

* Added PIP installation and Build from Source to the layout

* Fixed formatting issue, removed broken link

* Renamed section EXAMPLES to RESOURCES according to review comments

* add mo faq navigation by url param

* Removed DLDT description

* Pt 1

* Update Deep_Learning_Model_Optimizer_DevGuide.md

* Extra file

* Update IR_and_opsets.md

* Update Known_Issues_Limitations.md

* Update Config_Model_Optimizer.md

* Update Convert_Model_From_Kaldi.md

* Update Convert_Model_From_Kaldi.md

* Update Convert_Model_From_MxNet.md

* Update Convert_Model_From_ONNX.md

* Update Convert_Model_From_TensorFlow.md

* Update Converting_Model_General.md

* Update Cutting_Model.md

* Update IR_suitable_for_INT8_inference.md

* Update Aspire_Tdnn_Model.md

* Update Convert_Model_From_Caffe.md

* Update Convert_Model_From_TensorFlow.md

* Update Convert_Model_From_MxNet.md

* Update Convert_Model_From_Kaldi.md

* Added references to other fws from each fw

* Fixed broken links

* Fixed broken links

* fixes

* fixes

* Fixed wrong links

Co-authored-by: Nikolay Tyukaev <[email protected]>
Co-authored-by: Andrey Zaytsev <[email protected]>
Co-authored-by: Tyukaev <[email protected]>

* Fixes (openvinotoolkit#3105)

* Renamed Benchmark App into Benchmark Tool in the menu (openvinotoolkit#3032)

* [DOC] Update Docker install guide (openvinotoolkit#3055) (openvinotoolkit#3200)

* [DOC] Update Docker install guide

* [DOC] Add proxy for Windows Docker install guide

* [DOC] move up prebuilt images section

* Update installing-openvino-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

Formatting fixes

* Update installing-openvino-docker-linux.md

Fixed formatting issues

* Update installing-openvino-docker-windows.md

Minor fixes

* Update installing-openvino-docker-linux.md

Fixed formatting issues

* [DOC] update text with CPU image, remove proxy for win

* Update installing-openvino-docker-windows.md

Minor fixes

* Update installing-openvino-docker-windows.md

Minor fix

* Update installing-openvino-docker-windows.md

Minor fix

* Update installing-openvino-docker-windows.md

Co-authored-by: Andrey Zaytsev <[email protected]>

(cherry picked from commit 4a09888)

* Align time_tests with master (openvinotoolkit#3238)

* Align time_tests with master

* Fix "results" uploading to DB in time_tests

* Add new model to `tgl_test_config.yml`

* Fix onnx tests versions (openvinotoolkit#3240)

* [40929] DL Workbench in Get Started (openvinotoolkit#2740)

* Initial commit

* Added the doc

* More instructions and images

* Added slide

* Borders for screenshots

* fixes

* Fixes

* Added link to Benchmark app

* Replaced the image

* tiny fix

* tiny fix

* Links to DL Workbench Installation Guide (openvinotoolkit#2861)

* Links to WB

* Changed wording

* Changed wording

* Fixes

* Changes the wording

* Minor corrections

* Removed an extra point

* [41545] Add links to DL Workbench from components that are available in the DL WB (openvinotoolkit#2801)

* Added links to MO and Benchmark App

* Changed wording

* Fixes a link

* fixed a link

* Changed the wording

* Feature/azaytsev/change layout (openvinotoolkit#3295)

* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (openvinotoolkit#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <[email protected]>

* Add several new models to `tgl_test_config.yml` in time_tests (openvinotoolkit#3269)

* Fix wrong path for `yolo-v2-tiny-ava-0001` for time_tests

* Add several new models to `tgl_test_config.yml` in time_tests

* Fix a typo in DL Workbench Get Started (openvinotoolkit#3338)

* Fixed a typo

* Update openvino_docs.xml

Co-authored-by: Andrey Zaytsev <[email protected]>

* ops math formula fix (openvinotoolkit#3333)

Co-authored-by: Nikolay Tyukaev <[email protected]>

* Fix paths for `squeezenet1.1` in time_tests config (openvinotoolkit#3416)

* GNA Plugin doc review (openvinotoolkit#2922)

* Doc review

* Addressed comments

* Removed an inexistent link

* Port PlaidML plugin forward to 2021.1 (#32)

Adds PlaidML plugin to this repo on the 2021.1 branch

* Enable testing of BatchNorm (#33)

* Require specific path to shared library (#34)

* enable separate source/so paths

* fix tabs

* Fix multiple outputs and add Split (#42)

Fixes the problem with multiple outputs in the PlaidML plugin described in #36. Adds the Split operation with tests, which utilizes multiple outputs.

* Swish (#47)

* Add Reverse & tests to PlaidML Plugin (#35)

* Make separate PlaidMLProgramBuilder (#92)

* Variadic Split (#91)

* variadic split w/o -1

* -1 implemented

* Fix -1 support in variadic split and test it

* Style consistency

* Cleanup

* Fix exception message

* Add BinaryConvolution (#93)

* Add working tests back (#97)

* Attempt to add tests back

* Change op name for logical_and to match tests

* Add Tests for:
  * Convert
  * Convolution_Backprop_Data
  * Fake_quantize
  * SoftMax
  * Tile
  * Transpose

* Fix comparison and logical tests, use IE ref mode for now

* Remove cumsum and logical tests

* Remove comparison tests and their fixes

* Add bucketize op and tests (#90)

* Add extract image patches op (#96)

* Hswish via ReLU (#95)

* Hswish via ReLU

* ReLU max_value used

* Add reorg_yolo op (#101)

* Remove conv bprop & fake quant tests (#106)

* add EmbeddingBagOffsetsSum op and tests (#100)

* Add LSTMCell (#102)

* Add RNNCell (#109)

Co-authored-by: Ling, Liyang <[email protected]>

* Add space_to_batch op (#104)

* Add tests for MinMax, DepthToSpace (#105)

* Add GELU (#107)

* Add GRUCell (#110)

Co-authored-by: Ling, Liyang <[email protected]>

* Fix support for using OpenVINO as a subproject (#111)

* Build fixes for newer compilers (#113)

* add EmbeddingBagPackedSum op and tests (#114)

Co-authored-by: Tim Zerrell <[email protected]>

* Add shuffle_channels op and test. (#112)

* Tests for squared difference op (#115)

Co-authored-by: Tim Zerrell <[email protected]>

* Add acosh, asinh, atanh into tests (#118)

* Reverse sequence (#116)

* Add PriorBox op and test. (#117)

* Remove obsolete PlaidML code (#120)

Co-authored-by: Alina Alborova <[email protected]>
Co-authored-by: Nikolay Tyukaev <[email protected]>
Co-authored-by: Vitaliy Urusovskij <[email protected]>
Co-authored-by: Andrey Zaytsev <[email protected]>
Co-authored-by: Trawinski, Dariusz <[email protected]>
Co-authored-by: Nikolay Tyukaev <[email protected]>
Co-authored-by: Kate Generalova <[email protected]>
Co-authored-by: Rafal Blaczkowski <[email protected]>
Co-authored-by: Tim Zerrell <[email protected]>
Co-authored-by: Brian Retford <[email protected]>
Co-authored-by: Michael Yi <[email protected]>
Co-authored-by: Liyang Ling <[email protected]>
Co-authored-by: Namrata Choudhury <[email protected]>
Co-authored-by: Xin Wang <[email protected]>
Co-authored-by: xinghong chen <[email protected]>
Co-authored-by: Zhibin Li <[email protected]>
Co-authored-by: Frank Laub <[email protected]>
18 people authored Dec 18, 2020
1 parent e9dd889 commit 053ef76
Showing 80 changed files with 2,633 additions and 789 deletions.
2 changes: 1 addition & 1 deletion docs/IE_DG/Introduction.md
@@ -116,7 +116,7 @@ For Intel® Distribution of OpenVINO™ toolkit, the Inference Engine package co
[sample console applications](Samples_Overview.md) demonstrating how you can use
the Inference Engine in your applications.

The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md">Inference Engine Build Instructions</a>.
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">Inference Engine Build Instructions</a>.
## See Also
- [Inference Engine Samples](Samples_Overview.md)
- [Intel&reg; Deep Learning Deployment Toolkit Web Page](https://software.intel.com/en-us/computer-vision-sdk)
2 changes: 1 addition & 1 deletion docs/IE_DG/Samples_Overview.md
@@ -53,7 +53,7 @@ The officially supported Linux* build environment is the following:
* GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 4.8.5 (for CentOS* 7.6)
* CMake* version 3.10 or higher

> **NOTE**: For building samples from the open-source version of OpenVINO™ toolkit, see the [build instructions on GitHub](https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md).
> **NOTE**: For building samples from the open-source version of OpenVINO™ toolkit, see the [build instructions on GitHub](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
To build the C or C++ sample applications for Linux, go to the `<INSTALL_DIR>/inference_engine/samples/c` or `<INSTALL_DIR>/inference_engine/samples/cpp` directory, respectively, and run the `build_samples.sh` script:
```sh
4 changes: 2 additions & 2 deletions docs/IE_DG/inference_engine_intro.md
@@ -7,11 +7,11 @@ Inference Engine is a set of C++ libraries providing a common API to deliver inf

For Intel® Distribution of OpenVINO™ toolkit, Inference Engine binaries are delivered within release packages.

The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md">Inference Engine Build Instructions</a>.
The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the <a href="https://github.com/openvinotoolkit/openvino/wiki/BuildingCode">Inference Engine Build Instructions</a>.

To learn about how to use the Inference Engine API for your application, see the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.

For complete API Reference, see the [API Reference](usergroup29.html) section.
For complete API Reference, see the [Inference Engine API References](./api_references.html) section.

Inference Engine uses a plugin architecture. Inference Engine plugin is a software component that contains complete implementation for inference on a certain Intel&reg; hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.
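
For illustration (not part of this commit), a minimal C++ sketch of that unified API: the `InferenceEngine::Core` object is the single entry point, and the target plugin is chosen purely by the device-name string passed to `LoadNetwork`. The model path and device name below are placeholder values.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;                    // one entry point to every device plugin
    auto network = core.ReadNetwork("model.xml");  // IR produced by the Model Optimizer (path is illustrative)

    // Switching the device string ("CPU", "GPU", "MYRIAD", "HDDL", ...) selects a
    // different plugin behind the same unified API; the application code stays unchanged.
    auto executable = core.LoadNetwork(network, "CPU");

    auto request = executable.CreateInferRequest();
    request.Infer();                               // run one synchronous inference
    return 0;
}
```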

2 changes: 1 addition & 1 deletion docs/IE_DG/protecting_model_guide.md
@@ -65,7 +65,7 @@ CNNNetwork network = core.ReadNetwork(strModel, make_shared_blob<uint8_t>({Preci
- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
- Model Optimizer Developer Guide: [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- Inference Engine Developer Guide: [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md)
- For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.html)
- For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.md)
- For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
- For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
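
The hunk above shows only the final `ReadNetwork` call. A rough, self-contained sketch of the same in-memory loading flow might look like the following; the `decrypt_buffer()` helper and the `.enc` file names are hypothetical placeholders, not OpenVINO APIs.

```cpp
#include <inference_engine.hpp>
#include <algorithm>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical helper: a real application would decrypt the buffer in place here.
static void decrypt_buffer(std::vector<uint8_t>& /*buffer*/) {}

static std::vector<uint8_t> read_file(const std::string& path) {
    std::ifstream file(path, std::ios::binary);
    return {std::istreambuf_iterator<char>(file), std::istreambuf_iterator<char>()};
}

int main() {
    // Load the encrypted model artifacts and decrypt them in memory only.
    std::vector<uint8_t> xml = read_file("model.xml.enc");
    std::vector<uint8_t> bin = read_file("model.bin.enc");
    decrypt_buffer(xml);
    decrypt_buffer(bin);

    std::string strModel(xml.begin(), xml.end());

    // Wrap the weights in a U8 blob and build the network entirely from memory,
    // so the decrypted model never has to be written back to disk.
    InferenceEngine::TensorDesc desc(InferenceEngine::Precision::U8,
                                     {bin.size()}, InferenceEngine::Layout::C);
    auto weights = InferenceEngine::make_shared_blob<uint8_t>(desc);
    weights->allocate();
    std::copy(bin.begin(), bin.end(), weights->buffer().as<uint8_t*>());

    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork(strModel, weights);
    auto executable = core.LoadNetwork(network, "CPU");
    return 0;
}
```
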
164 changes: 82 additions & 82 deletions docs/IE_DG/supported_plugins/GNA.md


8 changes: 4 additions & 4 deletions docs/IE_DG/supported_plugins/HDDL.md
@@ -2,15 +2,15 @@

## Introducing HDDL Plugin

The Inference Engine HDDL plugin is developed for inference of neural networks on Intel&reg; Vision Accelerator Design with Intel&reg; Movidius&trade; VPUs which is designed for use cases those require large throughput of deep learning inference. It provides dozens amount of throughput as MYRIAD Plugin.
The Inference Engine HDDL plugin is developed for inference of neural networks on the Intel&reg; Vision Accelerator Design with Intel&reg; Movidius&trade; VPUs. It is designed for use cases which require large throughputs of deep learning inference. It provides dozens of times the throughput as the MYRIAD Plugin does.

## Installation on Linux* OS

For installation instructions, refer to the [Installation Guide for Linux\*](VPU.md).
For installation instructions, refer to the [Installation Guide for Linux*](VPU.md).

## Installation on Windows* OS

For installation instructions, refer to the [Installation Guide for Windows\*](Supported_Devices.md).
For installation instructions, refer to the [Installation Guide for Windows*](Supported_Devices.md).

## Supported networks

@@ -30,7 +30,7 @@ In addition to common parameters for Myriad plugin and HDDL plugin, HDDL plugin
| KEY_VPU_HDDL_STREAM_ID | string | empty string | Allows to execute inference on a specified device. |
| KEY_VPU_HDDL_DEVICE_TAG | string | empty string | Allows to allocate/deallocate networks on specified devices. |
| KEY_VPU_HDDL_BIND_DEVICE | YES/NO | NO | Whether the network should bind to a device. Refer to vpu_plugin_config.hpp. |
| KEY_VPU_HDDL_RUNTIME_PRIORITY | singed int | 0 | Specify the runtime priority of a device among all devices that running a same network Refer to vpu_plugin_config.hpp. |
| KEY_VPU_HDDL_RUNTIME_PRIORITY | singed int | 0 | Specify the runtime priority of a device among all devices that are running the same network. Refer to vpu_plugin_config.hpp. |
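
For illustration (not taken from this commit), a short C++ sketch of passing such parameters when loading a network on the HDDL plugin. The literal key strings are assumed to match the `KEY_VPU_HDDL_*` names in the table; the authoritative definitions are in `vpu_plugin_config.hpp`.

```cpp
#include <inference_engine.hpp>
#include <map>
#include <string>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.xml");  // illustrative model path

    // Assumed key strings mirroring the KEY_VPU_HDDL_* entries in the table above.
    std::map<std::string, std::string> config = {
        {"VPU_HDDL_DEVICE_TAG", "tagA"},  // allocate the network on devices tagged "tagA"
        {"VPU_HDDL_BIND_DEVICE", "NO"}    // do not bind the network to one specific device
    };

    auto executable = core.LoadNetwork(network, "HDDL", config);
    return 0;
}
```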

## See Also

7 changes: 7 additions & 0 deletions docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -12,6 +12,13 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi

* <code>.bin</code> - Contains the weights and biases binary data.

> **TIP**: You also can work with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a platform built upon OpenVINO™ and provides a web-based graphical environment that enables you to optimize, fine-tune, analyze, visualize, and compare
> performance of deep learning models on various Intel® architecture
> configurations. In the DL Workbench, you can use most of OpenVINO™ toolkit components.
> <br>
> Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_Install_from_Docker_Hub) to get started.
## What's New in the Model Optimizer in this Release?

* Common changes:
4 changes: 4 additions & 0 deletions docs/MO_DG/IR_and_opsets.md
@@ -242,4 +242,8 @@ To differentiate versions of the same operation type, like `ReLU`, the suffix `-
`N` usually refers to the first `opsetN` where this version of the operation is introduced.
It is not guaranteed that new operations will be named according to that rule, the naming convention might be changed, but not for old operations which are frozen completely.

---
## See Also

* [Cut Off Parts of a Model](prepare_model/convert_model/Cutting_Model.md)

5 changes: 5 additions & 0 deletions docs/MO_DG/Known_Issues_Limitations.md
@@ -45,3 +45,8 @@ Possible workaround is to upgrade default protobuf compiler (libprotoc 2.5.0) to
libprotoc 2.6.1.
[protobuf_issue]: https://github.com/google/protobuf/issues/4272
---
## See Also
* [Known Issues and Limitations in the Inference Engine](../IE_DG/Known_Issues_Limitations.md)
8 changes: 8 additions & 0 deletions docs/MO_DG/prepare_model/Config_Model_Optimizer.md
@@ -260,6 +260,14 @@ python3 -m easy_install dist/protobuf-3.6.1-py3.6-win-amd64.egg
set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
```

---
## See Also
docs\MO_DG\prepare_model\Config_Model_Optimizer.md
docs\install_guides\installing-openvino-raspbian.md

* [Converting a Model to Intermediate Representation (IR)](convert_model/Converting_Model.md)
* [Install OpenVINO™ toolkit for Raspbian* OS](../../install_guides/installing-openvino-raspbian.md)
* [Install Intel® Distribution of OpenVINO™ toolkit for Windows* 10](../../install_guides/installing-openvino-windows.md)
* [Install Intel® Distribution of OpenVINO™ toolkit for Windows* with FPGA Support](../../install_guides/installing-openvino-windows-fpga.md)
* [Install Intel® Distribution of OpenVINO™ toolkit for macOS*](../../install_guides/installing-openvino-macos.md)
* [Configuration Guide for the Intel® Distribution of OpenVINO™ toolkit 2020.4 and the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA SG2 (IEI's Mustang-F100-A10) on Linux* ](../../install_guides/VisionAcceleratorFPGA_Configure.md)
10 changes: 10 additions & 0 deletions docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
@@ -144,3 +144,13 @@ In this document, you learned:
* Basic information about how the Model Optimizer works with Caffe\* models
* Which Caffe\* models are supported
* How to convert a trained Caffe\* model using the Model Optimizer with both framework-agnostic and Caffe-specific command-line options

---
## See Also

* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Converting a Model Using General Conversion Parameters](Converting_Model_General.md)
* [Custom Layers in the Model Optimizer ](../customize_model_optimizer/Customize_Model_Optimizer.md)
docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
@@ -106,3 +106,12 @@ must be copied to `Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r

## Supported Kaldi\* Layers
Refer to [Supported Framework Layers ](../Supported_Frameworks_Layers.md) for the list of supported standard layers.

---
## See Also

* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Custom Layers Guide](../../../HOWTO/Custom_Layers_Guide.md)
docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
@@ -103,3 +103,12 @@ In this document, you learned:
* Basic information about how the Model Optimizer works with MXNet\* models
* Which MXNet\* models are supported
* How to convert a trained MXNet\* model using the Model Optimizer with both framework-agnostic and MXNet-specific command-line options

---
## See Also

* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Custom Layers in the Model Optimizer](../customize_model_optimizer/Customize_Model_Optimizer.md)
docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md
@@ -78,3 +78,12 @@ There are no ONNX\* specific parameters, so only [framework-agnostic parameters]

## Supported ONNX\* Layers
Refer to [Supported Framework Layers](../Supported_Frameworks_Layers.md) for the list of supported standard layers.

---
## See Also

* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Convert TensorFlow* BERT Model to the Intermediate Representation ](tf_specific/Convert_BERT_From_Tensorflow.md)
docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
@@ -375,3 +375,12 @@ In this document, you learned:
* Which TensorFlow models are supported
* How to freeze a TensorFlow model
* How to convert a trained TensorFlow model using the Model Optimizer with both framework-agnostic and TensorFlow-specific command-line options

---
## See Also

* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Converting a Model Using General Conversion Parameters](Converting_Model_General.md)
10 changes: 10 additions & 0 deletions docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md
@@ -233,3 +233,13 @@ Otherwise, it will be casted to data type passed to `--data_type` parameter (by
```sh
python3 mo.py --input_model FaceNet.pb --input "placeholder_layer_name->[0.1 1.2 2.3]"
```
---
## See Also
* [Converting a Cafee* Model](Convert_Model_From_Caffe.md)
* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Using Shape Inference](../../../IE_DG/ShapeInference.md)
9 changes: 8 additions & 1 deletion docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
@@ -389,4 +389,11 @@ In this case, when `--input_shape` is specified and the node contains multiple i
The correct command line is:
```sh
python3 mo.py --input_model=inception_v1.pb --input=0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape=[1,224,224,3]
```
```

---
## See Also

* [Sub-Graph Replacement in the Model Optimizer](../customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md)
* [Extending the Model Optimizer with New Primitives](../customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md)
* [Converting a Model Using General Conversion Parameters](Converting_Model_General.md)
docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md
@@ -34,4 +34,11 @@ Weights compression leaves `FakeQuantize` output arithmetically the same and wei
See the visualization of `Convolution` with the compressed weights:
![](../../img/compressed_int8_Convolution_weights.png)

Both Model Optimizer and Post-Training Optimization tool generate a compressed IR by default. To generate an expanded INT8 IR, use `--disable_weights_compression`.
Both Model Optimizer and Post-Training Optimization tool generate a compressed IR by default. To generate an expanded INT8 IR, use `--disable_weights_compression`.

---
## See Also

* [Quantization](@ref pot_compression_algorithms_quantization_README)
* [Optimization Guide](../../../optimization_guide/dldt_optimization_guide.md)
* [Low Precision Optimization Guide](@ref pot_docs_LowPrecisionOptimizationGuide)
docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md
@@ -110,3 +110,8 @@ speech_sample -i feats.ark,ivector_online_ie.ark -m final.xml -d CPU -o predicti

Results can be decoded as described in "Use of Sample in Kaldi* Speech Recognition Pipeline" chapter
in [the Speech Recognition Sample description](../../../../../inference-engine/samples/speech_sample/README.md).

---
## See Also

* [Converting a Kaldi Model](../Convert_Model_From_Kaldi.md)
1 change: 1 addition & 0 deletions docs/benchmarks/performance_benchmarks.md
@@ -19,6 +19,7 @@ Measuring inference performance involves many variables and is extremely use-cas
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-datalabels"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/chartjs-plugin-annotation/0.5.7/chartjs-plugin-annotation.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/build/Plugin.Barchart.Background.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-deferred@1"></script>
<!-- download this file and place on your server (or include the styles inline) -->
<link rel="stylesheet" href="ovgraphs.css" type="text/css">
\endhtmlonly