
Update to latest 2021.1 #121

Merged Dec 18, 2020

51 commits
eec2fd8
[DOCS] [41549] Fix broken code block in Install OpenVINO from PyPI Re…
aalborov Oct 26, 2020
2cf8999
add animation (#2865)
ntyukaev Oct 30, 2020
a081dfe
Align `time_tests` with master branch from 4021e144 (#2881)
vurusovs Oct 30, 2020
a7ab76e
Added info on DockerHub CI Framework (#2919)
andrew-zaytsev Nov 3, 2020
313d889
Feature/azaytsev/cherry pick pr2541 to 2021 1 (#2960)
andrew-zaytsev Nov 3, 2020
14aa83f
See Also sections in MO Guide (#2770)
aalborov Nov 6, 2020
d2dc54f
Fixes (#3105)
aalborov Nov 16, 2020
78f8b6a
Renamed Benchmark App into Benchmark Tool in the menu (#3032)
aalborov Nov 16, 2020
bd3ba38
[DOC] Update Docker install guide (#3055) (#3200)
generalova-kate Nov 18, 2020
38892b2
Align time_tests with master (#3238)
vurusovs Nov 20, 2020
43a6e4c
Fix onnx tests versions (#3240)
rblaczkowski Nov 20, 2020
751ef42
[40929] DL Workbench in Get Started (#2740)
aalborov Nov 20, 2020
57eee6a
Links to DL Workbench Installation Guide (#2861)
aalborov Nov 20, 2020
9d5b200
[41545] Add links to DL Workbench from components that are available …
aalborov Nov 20, 2020
20fd0bc
Feature/azaytsev/change layout (#3295)
andrew-zaytsev Nov 23, 2020
6adaad6
Add several new models to `tgl_test_config.yml` in time_tests (#3269)
vurusovs Nov 24, 2020
f2a3d6b
Fix a typo in DL Workbench Get Started (#3338)
aalborov Nov 25, 2020
f5e2fff
ops math formula fix (#3333)
ntyukaev Nov 30, 2020
bff3381
Fix paths for `squeezenet1.1` in time_tests config (#3416)
vurusovs Nov 30, 2020
6260125
GNA Plugin doc review (#2922)
aalborov Dec 7, 2020
6374d44
Port PlaidML plugin forward to 2021.1 (#32)
tzerrell Oct 19, 2020
35651cd
Enable testing of BatchNorm (#33)
tzerrell Oct 19, 2020
b60f71a
Require specific path to shared library (#34)
Oct 19, 2020
5966062
Fix multiple outputs and add Split (#42)
tzerrell Oct 22, 2020
022e254
Swish (#47)
mwyi Oct 24, 2020
56e9add
Add Reverse & tests to PlaidML Plugin (#35)
tzerrell Oct 26, 2020
ed90660
Make separate PlaidMLProgramBuilder (#92)
tzerrell Oct 30, 2020
9b91499
Variadic Split (#91)
mwyi Nov 2, 2020
e15c466
Add BinaryConvolution (#93)
LiyangLingIntel Nov 3, 2020
4ac5e60
Add working tests back (#97)
cnamrata15 Nov 3, 2020
4e81cc1
Add bucketize op and tests (#90)
XinWangIntel Nov 3, 2020
3454f38
Add extract image patches op (#96)
XingHongChenIntel Nov 4, 2020
4320e6b
Hswish via ReLU (#95)
mwyi Nov 4, 2020
714c4a8
Add reorg_yolo op (#101)
XingHongChenIntel Nov 7, 2020
3f4722f
Remove conv bprop & fake quant tests (#106)
tzerrell Nov 10, 2020
bacee8b
add EmbeddingBagOffsetsSum op and tests (#100)
haoyouab Nov 10, 2020
c6457f9
Add LSTMCell (#102)
LiyangLingIntel Nov 10, 2020
2897093
Add RNNCell (#109)
tzerrell Nov 10, 2020
bace3d8
Add space_to_batch op (#104)
XingHongChenIntel Nov 10, 2020
3326f49
Add tests for MinMax, DepthToSpace (#105)
cnamrata15 Nov 10, 2020
d1f0000
Add GELU (#107)
LiyangLingIntel Nov 10, 2020
b796cd5
Add GRUCell (#110)
tzerrell Nov 10, 2020
cbf3f33
Fix support for using OpenVINO as a subproject (#111)
Nov 11, 2020
83f6ce4
Build fixes for newer compilers (#113)
Nov 11, 2020
b053f8c
add EmbeddingBagPackedSum op and tests (#114)
haoyouab Nov 12, 2020
9146822
Add shuffle_channels op and test. (#112)
XingHongChenIntel Nov 13, 2020
f38d9d9
Tests for squared difference op (#115)
cnamrata15 Nov 13, 2020
3edec51
Add acosh, asinh, atanh into tests (#118)
LiyangLingIntel Dec 3, 2020
11aeeb9
Reverse sequence (#116)
XingHongChenIntel Dec 4, 2020
fa8c54b
Add PriorBox op and test. (#117)
XinWangIntel Dec 4, 2020
249e62f
Remove obsolete PlaidML code (#120)
tzerrell Dec 18, 2020
See Also sections in MO Guide (openvinotoolkit#2770)
* convert to doxygen comments

* layouts and code comments

* separate layout

* Changed layouts

* Removed FPGA from the documentation

* Updated according to CVS-38225

* some changes

* Made changes to benchmarks according to review comments

* Added logo info to the Legal_Information, updated Ubuntu, CentOS supported versions

* Updated supported Intel® Core™ processors list

* Fixed table formatting

* update api layouts

* Added new index page with overview

* Changed CMake and Python versions

* Fixed links

* some layout changes

* some layout changes

* some layout changes

* Converted svg images to png

* layouts

* update layout

* Added a label for nGraph_Python_API.md

* fixed links

* Fixed image

* removed links to ../IE_DG/Introduction.md

* Removed links to tools overview page as removed

* some changes

* Remove link to Integrate_your_kernels_into_IE.md

* remove openvino_docs_IE_DG_Graph_debug_capabilities from layout as it was removed

* update layouts

* Post-release fixes and installation path changes

* Added PIP installation and Build from Source to the layout

* Fixed formatting issue, removed broken link

* Renamed section EXAMPLES to RESOURCES according to review comments

* add mo faq navigation by url param

* Removed DLDT description

* Pt 1

* Update Deep_Learning_Model_Optimizer_DevGuide.md

* Extra file

* Update IR_and_opsets.md

* Update Known_Issues_Limitations.md

* Update Config_Model_Optimizer.md

* Update Convert_Model_From_Kaldi.md

* Update Convert_Model_From_Kaldi.md

* Update Convert_Model_From_MxNet.md

* Update Convert_Model_From_ONNX.md

* Update Convert_Model_From_TensorFlow.md

* Update Converting_Model_General.md

* Update Cutting_Model.md

* Update IR_suitable_for_INT8_inference.md

* Update Aspire_Tdnn_Model.md

* Update Convert_Model_From_Caffe.md

* Update Convert_Model_From_TensorFlow.md

* Update Convert_Model_From_MxNet.md

* Update Convert_Model_From_Kaldi.md

* Added references to other fws from each fw

* Fixed broken links

* Fixed broken links

* fixes

* fixes

* Fixed wrong links

Co-authored-by: Nikolay Tyukaev <[email protected]>
Co-authored-by: Andrey Zaytsev <[email protected]>
Co-authored-by: Tyukaev <[email protected]>
4 people authored Nov 6, 2020
commit 14aa83f4d92f4a787490a5bb2e29a9789fcc131d
4 changes: 4 additions & 0 deletions docs/MO_DG/IR_and_opsets.md
@@ -242,4 +242,8 @@ To differentiate versions of the same operation type, like `ReLU`, the suffix `-
`N` usually refers to the first `opsetN` where this version of the operation is introduced.
It is not guaranteed that new operations will be named according to that rule; the naming convention might change, but not for old operations, which are frozen completely.

---
## See Also

* [Cut Off Parts of a Model](prepare_model/convert_model/Cutting_Model.md)

5 changes: 5 additions & 0 deletions docs/MO_DG/Known_Issues_Limitations.md
@@ -45,3 +45,8 @@ A possible workaround is to upgrade the default protobuf compiler (libprotoc 2.5.0) to
libprotoc 2.6.1.

[protobuf_issue]: https://github.com/google/protobuf/issues/4272

---
## See Also

* [Known Issues and Limitations in the Inference Engine](../IE_DG/Known_Issues_Limitations.md)
8 changes: 8 additions & 0 deletions docs/MO_DG/prepare_model/Config_Model_Optimizer.md
@@ -260,6 +260,14 @@ python3 -m easy_install dist/protobuf-3.6.1-py3.6-win-amd64.egg
set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
```
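Setting `PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp` selects protobuf's faster C++ backend; when the variable is unset, protobuf falls back to its pure-Python implementation. A minimal sketch of that selection logic (the helper name is illustrative, not part of the toolkit):

```python
import os

# Illustrative helper, not part of the OpenVINO toolkit: report which protobuf
# backend a given environment requests. protobuf falls back to its pure-Python
# implementation when the variable is unset.
def requested_protobuf_backend(environ=None):
    environ = os.environ if environ is None else environ
    return environ.get("PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION", "python")

print(requested_protobuf_backend({"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION": "cpp"}))  # cpp
print(requested_protobuf_backend({}))  # python
```

Note that the variable must be set before `google.protobuf` is first imported for the choice to take effect.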

---
## See Also

* [Converting a Model to Intermediate Representation (IR)](convert_model/Converting_Model.md)
* [Install OpenVINO™ toolkit for Raspbian* OS](../../install_guides/installing-openvino-raspbian.md)
* [Install Intel® Distribution of OpenVINO™ toolkit for Windows* 10](../../install_guides/installing-openvino-windows.md)
* [Install Intel® Distribution of OpenVINO™ toolkit for Windows* with FPGA Support](../../install_guides/installing-openvino-windows-fpga.md)
* [Install Intel® Distribution of OpenVINO™ toolkit for macOS*](../../install_guides/installing-openvino-macos.md)
* [Configuration Guide for the Intel® Distribution of OpenVINO™ toolkit 2020.4 and the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA SG2 (IEI's Mustang-F100-A10) on Linux*](../../install_guides/VisionAcceleratorFPGA_Configure.md)
10 changes: 10 additions & 0 deletions docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
@@ -144,3 +144,13 @@ In this document, you learned:
* Basic information about how the Model Optimizer works with Caffe\* models
* Which Caffe\* models are supported
* How to convert a trained Caffe\* model using the Model Optimizer with both framework-agnostic and Caffe-specific command-line options

---
## See Also

* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Converting a Model Using General Conversion Parameters](Converting_Model_General.md)
* [Custom Layers in the Model Optimizer](../customize_model_optimizer/Customize_Model_Optimizer.md)
@@ -106,3 +106,12 @@ must be copied to `Parameter_0_for_Offset_fastlstm2.r_trunc__2Offset_fastlstm2.r

## Supported Kaldi\* Layers
Refer to [Supported Framework Layers](../Supported_Frameworks_Layers.md) for the list of supported standard layers.

---
## See Also

* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Custom Layers Guide](../../../HOWTO/Custom_Layers_Guide.md)
@@ -103,3 +103,12 @@ In this document, you learned:
* Basic information about how the Model Optimizer works with MXNet\* models
* Which MXNet\* models are supported
* How to convert a trained MXNet\* model using the Model Optimizer with both framework-agnostic and MXNet-specific command-line options

---
## See Also

* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Custom Layers in the Model Optimizer](../customize_model_optimizer/Customize_Model_Optimizer.md)
@@ -78,3 +78,12 @@ There are no ONNX\* specific parameters, so only [framework-agnostic parameters]

## Supported ONNX\* Layers
Refer to [Supported Framework Layers](../Supported_Frameworks_Layers.md) for the list of supported standard layers.

---
## See Also

* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Convert TensorFlow* BERT Model to the Intermediate Representation](tf_specific/Convert_BERT_From_Tensorflow.md)
@@ -375,3 +375,12 @@ In this document, you learned:
* Which TensorFlow models are supported
* How to freeze a TensorFlow model
* How to convert a trained TensorFlow model using the Model Optimizer with both framework-agnostic and TensorFlow-specific command-line options

---
## See Also

* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Converting a Model Using General Conversion Parameters](Converting_Model_General.md)
10 changes: 10 additions & 0 deletions docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md
@@ -233,3 +233,13 @@ Otherwise, it will be cast to the data type passed to the `--data_type` parameter (by
```sh
python3 mo.py --input_model FaceNet.pb --input "placeholder_layer_name->[0.1 1.2 2.3]"
```
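The `->[...]` part of the `--input` value freezes the named input to the listed constant values. A hypothetical parser sketch (not actual Model Optimizer code) of what that syntax encodes:

```python
# Hypothetical sketch, not actual Model Optimizer code: split the
# --input "name->[v1 v2 ...]" form into a node name plus the constant
# values the input is frozen to. A plain "name" freezes nothing.
def parse_frozen_input(spec):
    name, arrow, values = spec.partition("->")
    if not arrow:
        return name, None  # plain input, nothing frozen
    return name, [float(v) for v in values.strip("[]").split()]

print(parse_frozen_input("placeholder_layer_name->[0.1 1.2 2.3]"))
```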

---
## See Also

* [Converting a Caffe* Model](Convert_Model_From_Caffe.md)
* [Converting a TensorFlow* Model](Convert_Model_From_TensorFlow.md)
* [Converting an MXNet* Model](Convert_Model_From_MxNet.md)
* [Converting an ONNX* Model](Convert_Model_From_ONNX.md)
* [Converting a Kaldi* Model](Convert_Model_From_Kaldi.md)
* [Using Shape Inference](../../../IE_DG/ShapeInference.md)
9 changes: 8 additions & 1 deletion docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
@@ -389,4 +389,11 @@ In this case, when `--input_shape` is specified and the node contains multiple i
The correct command line is:
```sh
python3 mo.py --input_model=inception_v1.pb --input=0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution --input_shape=[1,224,224,3]
```
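In the command above, the `0:` prefix in `--input` selects input port 0 of the named node. An illustrative sketch (not Model Optimizer code) of how that notation splits:

```python
# Illustrative sketch, not Model Optimizer code: split the "port:node" form
# used by --input. "0:Node" means input port 0 of Node; a bare "Node" names
# the node with no explicit port.
def parse_input_port(spec):
    port, sep, node = spec.partition(":")
    if sep and port.isdigit():
        return int(port), node
    return None, spec

print(parse_input_port("0:InceptionV1/InceptionV1/Conv2d_1a_7x7/convolution"))
```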

---
## See Also

* [Sub-Graph Replacement in the Model Optimizer](../customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md)
* [Extending the Model Optimizer with New Primitives](../customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md)
* [Converting a Model Using General Conversion Parameters](Converting_Model_General.md)
@@ -34,4 +34,11 @@ Weights compression leaves `FakeQuantize` output arithmetically the same and wei
See the visualization of `Convolution` with the compressed weights:
![](../../img/compressed_int8_Convolution_weights.png)

Both the Model Optimizer and the Post-Training Optimization Tool generate a compressed IR by default. To generate an expanded INT8 IR, use `--disable_weights_compression`.
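Conceptually, a compressed IR stores weights as small integers plus the range information needed to restore approximate FP32 values at inference time. A toy round-trip sketch of that idea (not the actual Model Optimizer implementation):

```python
# Toy sketch, not the actual Model Optimizer implementation: store weights as
# 256-level integers plus (scale, low bound), then restore approximate FP32.
# Assumes the weights span a non-degenerate range (max > min).
def compress(weights, levels=256):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (levels - 1)
    quantized = [round((w - lo) / scale) for w in weights]  # stored as INT8-sized values
    return quantized, scale, lo

def decompress(quantized, scale, lo):
    return [q * scale + lo for q in quantized]

q, scale, lo = compress([-1.0, 0.0, 0.5, 1.0])
print(decompress(q, scale, lo))  # values close to the originals
```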

---
## See Also

* [Quantization](@ref pot_compression_algorithms_quantization_README)
* [Optimization Guide](../../../optimization_guide/dldt_optimization_guide.md)
* [Low Precision Optimization Guide](@ref pot_docs_LowPrecisionOptimizationGuide)
@@ -110,3 +110,8 @@ speech_sample -i feats.ark,ivector_online_ie.ark -m final.xml -d CPU -o predicti

Results can be decoded as described in the "Use of Sample in Kaldi* Speech Recognition Pipeline" chapter
of [the Speech Recognition Sample description](../../../../../inference-engine/samples/speech_sample/README.md).

---
## See Also

* [Converting a Kaldi Model](../Convert_Model_From_Kaldi.md)