From 26a9f2203a5670c2692ce4f550b21f43a8845f72 Mon Sep 17 00:00:00 2001
From: Maxim Vafin
Date: Tue, 13 Apr 2021 14:34:35 +0300
Subject: [PATCH] Add PyTorch section to the documentation (#4972)

* Add PyTorch section to the documentation
* Apply review feedback
* Remove section about loop
* Apply review feedback
* Apply review feedback
* Apply review feedback
---
 .../Deep_Learning_Model_Optimizer_DevGuide.md |  6 ++-
 .../convert_model/Convert_Model_From_ONNX.md  | 11 ----
 .../Convert_Model_From_PyTorch.md             | 53 +++++++++++++++++++
 .../Convert_F3Net.md                          |  0
 .../Convert_QuartzNet.md                      |  0
 .../Convert_YOLACT.md                         |  0
 docs/doxygen/ie_docs.xml                      | 10 ++--
 7 files changed, 64 insertions(+), 16 deletions(-)
 create mode 100644 docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md
 rename docs/MO_DG/prepare_model/convert_model/{onnx_specific => pytorch_specific}/Convert_F3Net.md (100%)
 rename docs/MO_DG/prepare_model/convert_model/{onnx_specific => pytorch_specific}/Convert_QuartzNet.md (100%)
 rename docs/MO_DG/prepare_model/convert_model/{onnx_specific => pytorch_specific}/Convert_YOLACT.md (100%)

diff --git a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
index cd9245c3e69646..3b657f52a35556 100644
--- a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
+++ b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
@@ -93,7 +93,11 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi
 * [Converting Your ONNX* Model](prepare_model/convert_model/Convert_Model_From_ONNX.md)
   * [Converting Faster-RCNN ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_Faster_RCNN.md)
   * [Converting Mask-RCNN ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_Mask_RCNN.md)
-  * [Converting DLRM ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_DLRM.md)
+  * [Converting GPT2 ONNX* Model](prepare_model/convert_model/onnx_specific/Convert_GPT2.md)
+* [Converting Your PyTorch* Model](prepare_model/convert_model/Convert_Model_From_PyTorch.md)
+  * [Converting F3Net PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_F3Net.md)
+  * [Converting QuartzNet PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md)
+  * [Converting YOLACT PyTorch* Model](prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md)
 * [Model Optimizations Techniques](prepare_model/Model_Optimization_Techniques.md)
 * [Cutting parts of the model](prepare_model/convert_model/Cutting_Model.md)
 * [Sub-graph Replacement in Model Optimizer](prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md)
diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md
index 6b4e91e2818fa1..561a7f84cf85a3 100644
--- a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md
+++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md
@@ -27,17 +27,6 @@ Listed models are built with the operation set version 8 except the GPT-2 model.
 Models that are upgraded to higher operation set versions may not be supported.
 
-## Supported Pytorch* Models via ONNX Conversion
-Starting from the 2019R4 release, the OpenVINO™ toolkit officially supports public Pytorch* models (from `torchvision` 0.2.1 and `pretrainedmodels` 0.7.4 packages) via ONNX conversion.
-The list of supported topologies is presented below:
-
-|Package Name|Supported Models|
-|:----|:----|
-| [Torchvision Models](https://pytorch.org/docs/stable/torchvision/index.html) | alexnet, densenet121, densenet161, densenet169, densenet201, resnet101, resnet152, resnet18, resnet34, resnet50, vgg11, vgg13, vgg16, vgg19 |
-| [Pretrained Models](https://github.com/Cadene/pretrained-models.pytorch) | alexnet, fbresnet152, resnet101, resnet152, resnet18, resnet34, resnet152, resnet18, resnet34, resnet50, resnext101_32x4d, resnext101_64x4d, vgg11 |
-| [ESPNet Models](https://github.com/sacmehta/ESPNet/tree/master/pretrained) | |
-| [MobileNetV3](https://github.com/d-li14/mobilenetv3.pytorch) | |
-
 ## Supported PaddlePaddle* Models via ONNX Conversion
 Starting from the R5 release, the OpenVINO™ toolkit officially supports public PaddlePaddle* models via ONNX conversion.
 The list of supported topologies downloadable from PaddleHub is presented below:
diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md
new file mode 100644
index 00000000000000..a03df559291a06
--- /dev/null
+++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md
@@ -0,0 +1,53 @@
+# Converting a PyTorch* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_PyTorch}
+
+The PyTorch\* framework is supported through export to the ONNX\* format. Below is a summary of the steps for optimizing and deploying a model trained with the PyTorch\* framework:
+
+1. [Export the PyTorch\* model to ONNX\*](#export-to-onnx).
+2. [Configure the Model Optimizer](../Config_Model_Optimizer.md) for ONNX\*.
+3. [Convert an ONNX\* model](Convert_Model_From_ONNX.md) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases values.
+4. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via the provided [sample applications](../../../IE_DG/Samples_Overview.md).
+5. [Integrate](../../../IE_DG/Samples_Overview.md) the Inference Engine into your application to deploy the model in the target environment.
+
+## Supported Topologies
+
+Below is a list of models that were tested and are guaranteed to be supported.
+It is not a complete list of the models that can be converted to ONNX\* and then to IR.
+
+|Package Name|Supported Models|
+|:----|:----|
+| [Torchvision Models](https://pytorch.org/docs/stable/torchvision/index.html) | alexnet, densenet121, densenet161, densenet169, densenet201, resnet101, resnet152, resnet18, resnet34, resnet50, vgg11, vgg13, vgg16, vgg19 |
+| [Pretrained Models](https://github.com/Cadene/pretrained-models.pytorch) | alexnet, fbresnet152, resnet101, resnet152, resnet18, resnet34, resnet50, resnext101_32x4d, resnext101_64x4d, vgg11 |
+
+**Other supported topologies**
+
+* [ESPNet Models](https://github.com/sacmehta/ESPNet/tree/master/pretrained)
+* [MobileNetV3](https://github.com/d-li14/mobilenetv3.pytorch)
+* The F3Net topology can be converted using the [Convert PyTorch\* F3Net to the IR](pytorch_specific/Convert_F3Net.md) instruction.
+* QuartzNet topologies from the [NeMo project](https://github.com/NVIDIA/NeMo) can be converted using the [Convert PyTorch\* QuartzNet to the IR](pytorch_specific/Convert_QuartzNet.md) instruction.
+* The YOLACT topology can be converted using the [Convert PyTorch\* YOLACT to the IR](pytorch_specific/Convert_YOLACT.md) instruction.
+
+## Export PyTorch\* Model to ONNX\* Format
+
+PyTorch\* models are defined in Python\* code. To export such a model, use the `torch.onnx.export()` method.
+Only the basics are covered here; the export to ONNX\* is a crucial step, but it is handled by the PyTorch\* framework itself.
+For more information, please refer to the [PyTorch\* documentation](https://pytorch.org/docs/stable/onnx.html).
+
+To export a PyTorch\* model, you need to obtain the model as an instance of the `torch.nn.Module` class and call the `export` function.
+```python
+import torch
+
+# Instantiate your model. This is just a regular PyTorch model that will be exported in the following steps.
+model = SomeModel()
+# Switch the model from training mode to inference mode.
+model.eval()
+# Create a dummy input for the model. It will be used to run the model inside the export function.
+dummy_input = torch.randn(1, 3, 224, 224)
+# Call the export function.
+torch.onnx.export(model, (dummy_input,), 'model.onnx')
+```
+
+## Known Issues
+
+* As of version 1.8.1, not all PyTorch\* operations can be exported to ONNX\* opset 9, which is used by default.
+It is recommended to export models to opset 11 or higher when export to the default opset 9 does not work.
+In that case, use the `opset_version` option of `torch.onnx.export`. For more information about ONNX\* opsets, refer to the [Operator Schemas](https://github.com/onnx/onnx/blob/master/docs/Operators.md).
diff --git a/docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_F3Net.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_F3Net.md
similarity index 100%
rename from docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_F3Net.md
rename to docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_F3Net.md
diff --git a/docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_QuartzNet.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md
similarity index 100%
rename from docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_QuartzNet.md
rename to docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_QuartzNet.md
diff --git a/docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_YOLACT.md b/docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md
similarity index 100%
rename from docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_YOLACT.md
rename to docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md
diff --git a/docs/doxygen/ie_docs.xml b/docs/doxygen/ie_docs.xml
index a6f5dd3250c818..010d7cac724bc2 100644
--- a/docs/doxygen/ie_docs.xml
+++ b/docs/doxygen/ie_docs.xml
@@ -52,10 +52,12 @@ limitations under the License.
-
-
-
-
+
+
+
+
+
+