Feature/doc fixes 2021 3 (#4971)
* Made changes for CVS-50424

* Changes for CVS-49349

* Minor change for CVS-49349

* Changes for CVS-49343

* Cherry-picked PR #4254

* Replaced /opt/intel/openvino/ with /opt/intel/openvino_2021/ as the default target directory

* (CVS-50786) Added a new section, Reference Implementations, to keep the Speech Library and Speech Recognition demos

* Doc fixes

* Replaced links to inference_engine_intro.md with Deep_Learning_Inference_Engine_DevGuide.md, fixed links

* Fixed link

* Fixes

* Fixes

* Removed Intel® Xeon® processor E family
andrew-zaytsev authored Mar 25, 2021
1 parent 1fdc9e3 commit 22cf9ef
Showing 25 changed files with 182 additions and 367 deletions.
docs/HOWTO/Custom_Layers_Guide.md (2 changes: 1 addition & 1 deletion)
````diff
@@ -337,7 +337,7 @@ operation for the CPU plugin. The code of the library is described in the [Exte
 In order to build the extension run the following:<br>
 ```bash
 mkdir build && cd build
-source /opt/intel/openvino/bin/setupvars.sh
+source /opt/intel/openvino_2021/bin/setupvars.sh
 cmake .. -DCMAKE_BUILD_TYPE=Release
 make --jobs=$(nproc)
 ```
````
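For context, once the library is built it is registered with the CPU plugin before a network is loaded. A minimal Python sketch, assuming a hypothetical library name `libuser_cpu_extension.so` and placeholder IR file names:

```python
from openvino.inference_engine import IECore

ie = IECore()
# Register the freshly built custom-layer library with the CPU plugin.
# "build/libuser_cpu_extension.so" is a placeholder; use the actual
# artifact produced by the cmake/make steps above.
ie.add_extension("build/libuser_cpu_extension.so", "CPU")

net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR
exec_net = ie.load_network(network=net, device_name="CPU")
```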
docs/IE_DG/Deep_Learning_Inference_Engine_DevGuide.md (160 changes: 96 additions & 64 deletions)

Large diffs are not rendered by default.

docs/IE_DG/Samples_Overview.md (4 changes: 2 additions & 2 deletions)
````diff
@@ -205,7 +205,7 @@ vi <user_home_directory>/.bashrc

 2. Add this line to the end of the file:
 ```sh
-source /opt/intel/openvino/bin/setupvars.sh
+source /opt/intel/openvino_2021/bin/setupvars.sh
 ```

 3. Save and close the file: press the **Esc** key, type `:wq` and press the **Enter** key.
````
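Once a new shell picks up the change, a quick sanity check is to import the Python API and list the available devices. A sketch, assuming the Python bindings shipped with the toolkit are on `PYTHONPATH` after `setupvars.sh` runs:

```python
# Environment sanity check: this import succeeds only if setupvars.sh has
# been sourced (assumption: the toolkit's Python bindings are installed
# for the interpreter in use).
from openvino.inference_engine import IECore

print(IECore().available_devices)  # e.g. ['CPU'] on a typical machine
```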
````diff
@@ -242,4 +242,4 @@ sample, read the sample documentation by clicking the sample name in the samples
 list above.

 ## See Also
-* [Introduction to Inference Engine](inference_engine_intro.md)
+* [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md)
````
docs/IE_DG/ShapeInference.md (4 changes: 2 additions & 2 deletions)
````diff
@@ -66,8 +66,8 @@ Shape collision during shape propagation may be a sign that a new shape does not
 Changing the model input shape may result in intermediate operations shape collision.

 Examples of such operations:
-- [`Reshape` operation](../ops/shape/Reshape_1.md) with a hard-coded output shape value
-- [`MatMul` operation](../ops/matrix/MatMul_1.md) with the `Const` second input cannot be resized by spatial dimensions due to operation semantics
+- [Reshape](../ops/shape/Reshape_1.md) operation with a hard-coded output shape value
+- [MatMul](../ops/matrix/MatMul_1.md) operation with the `Const` second input cannot be resized by spatial dimensions due to operation semantics

 Model structure and logic should not change significantly after model reshaping.
 - The Global Pooling operation is commonly used to reduce output feature map of classification models output.
````
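The reshape behavior this hunk documents can be exercised through the 2021 Python API. A sketch, where the IR file names and the target shape are placeholders:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR
input_name = next(iter(net.input_info))

# Re-propagate shapes for a new input resolution. A shape collision in an
# operation such as Reshape or MatMul (see the examples above) surfaces
# here as an exception raised by reshape().
net.reshape({input_name: (1, 3, 448, 448)})
exec_net = ie.load_network(network=net, device_name="CPU")
```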
docs/IE_DG/inference_engine_intro.md (10 changes: 8 additions & 2 deletions)
````diff
@@ -1,5 +1,11 @@
-Introduction to Inference Engine {#openvino_docs_IE_DG_inference_engine_intro}
-================================
+# Introduction to Inference Engine {#openvino_docs_IE_DG_inference_engine_intro}
+
+> **NOTE:** [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019).
 This Guide provides an overview of the Inference Engine describing the typical workflow for performing
 inference of a pre-trained and optimized deep learning model and a set of sample applications.

+> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in run-time using nGraph API. To learn about how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_intel_index).
+After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for a given input data.
````
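The workflow in that note, an IR in and inference results out, looks roughly like this in the 2021 Python API; the model file names and the zero-filled input are placeholders:

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Read an IR produced by the Model Optimizer (placeholder file names).
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
dummy = np.zeros(shape, dtype=np.float32)  # stand-in for real input data

results = exec_net.infer({input_name: dummy})  # dict: output name -> ndarray
```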
docs/IE_DG/supported_plugins/MULTI.md (11 changes: 9 additions & 2 deletions)
````diff
@@ -92,11 +92,18 @@ Notice that until R2 you had to calculate number of requests in your application
 Notice that every OpenVINO sample that supports the "-d" (which stands for "device") command-line option transparently accepts the multi-device.
 The [Benchmark Application](../../../inference-engine/samples/benchmark_app/README.md) is the best reference to the optimal usage of the multi-device. As discussed earlier, you don't need to set up the number of requests, CPU streams, or threads, because the application provides optimal performance out of the box.
 Below is an example command line to evaluate HDDL+GPU performance with that:
-```bash
-$ ./benchmark_app -d MULTI:HDDL,GPU -m <model> -i <input> -niter 1000
+
+```sh
+./benchmark_app -d MULTI:HDDL,GPU -m <model> -i <input> -niter 1000
 ```
 Notice that you can use the FP16 IR to work with the multi-device (the CPU automatically upconverts it to FP32, and the rest of the devices support it natively).
 Also notice that no demos are (yet) fully optimized for the multi-device, by means of supporting the OPTIMAL_NUMBER_OF_INFER_REQUESTS metric, using the GPU streams/throttling, and so on.

+## Video: MULTI Plugin
+[![](https://img.youtube.com/vi/xbORYFEmrqU/0.jpg)](https://www.youtube.com/watch?v=xbORYFEmrqU)
+<iframe width="560" height="315" src="https://www.youtube.com/embed/xbORYFEmrqU" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

 ## See Also
 * [Supported Devices](Supported_Devices.md)
````
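To put the metric mentioned above in code: a minimal Python sketch of loading a network on the multi-device and sizing the request pool from OPTIMAL_NUMBER_OF_INFER_REQUESTS. The device list and IR file names are placeholders:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder IR

# "MULTI:HDDL,GPU" mirrors the benchmark_app example above; substitute the
# devices actually present, e.g. "MULTI:CPU,GPU".
exec_net = ie.load_network(network=net, device_name="MULTI:HDDL,GPU")

# Ask the plugin how many parallel requests it can keep busy, then reload
# with a request pool of that size.
n = exec_net.get_metric("OPTIMAL_NUMBER_OF_INFER_REQUESTS")
exec_net = ie.load_network(network=net, device_name="MULTI:HDDL,GPU",
                           num_requests=n)
```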


docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md (13 changes: 13 additions & 0 deletions)
````diff
@@ -111,3 +111,16 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi
 * [Known Issues](Known_Issues_Limitations.md)

 **Typical Next Step:** [Preparing and Optimizing your Trained Model with Model Optimizer](prepare_model/Prepare_Trained_Model.md)
+
+## Video: Model Optimizer Concept
+
+[![](https://img.youtube.com/vi/Kl1ptVb7aI8/0.jpg)](https://www.youtube.com/watch?v=Kl1ptVb7aI8)
+<iframe width="560" height="315" src="https://www.youtube.com/embed/Kl1ptVb7aI8" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+
+## Video: Model Optimizer Basic Operation
+[![](https://img.youtube.com/vi/BBt1rseDcy0/0.jpg)](https://www.youtube.com/watch?v=BBt1rseDcy0)
+<iframe width="560" height="315" src="https://www.youtube.com/embed/BBt1rseDcy0" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
+
+## Video: Choosing the Right Precision
+[![](https://img.youtube.com/vi/RF8ypHyiKrY/0.jpg)](https://www.youtube.com/watch?v=RF8ypHyiKrY)
+<iframe width="560" height="315" src="https://www.youtube.com/embed/RF8ypHyiKrY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
````
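The IR that this guide's hunk refers to is produced by running the Model Optimizer script. A sketch of driving it from Python, assuming the default 2021 install layout and a placeholder TensorFlow model:

```python
# Invoke the Model Optimizer via subprocess (a sketch under assumptions:
# mo.py lives in the default deployment_tools path of a 2021 install, and
# "frozen_model.pb" is a placeholder input model).
import subprocess

subprocess.run([
    "python3",
    "/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py",
    "--input_model", "frozen_model.pb",
    "--output_dir", "./ir",
], check=True)  # writes model .xml/.bin IR files into ./ir
```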
````diff
@@ -367,6 +367,10 @@ Refer to [Supported Framework Layers ](../Supported_Frameworks_Layers.md) for th

 The Model Optimizer provides explanatory messages if it is unable to run to completion due to issues like typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md). The FAQ has instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.

+## Video: Converting a TensorFlow Model
+[![](https://img.youtube.com/vi/QW6532LtiTc/0.jpg)](https://www.youtube.com/watch?v=QW6532LtiTc)
+<iframe width="560" height="315" src="https://www.youtube.com/embed/QW6532LtiTc" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

 ## Summary
 In this document, you learned:
````