This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[v1.x] Update ONNX tutorial and doc #20253

Merged 3 commits on May 12, 2021
Changes from all commits
80 changes: 52 additions & 28 deletions docs/python_docs/python/tutorials/deploy/export/onnx.md
@@ -17,27 +17,22 @@

# Exporting to ONNX format

[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. MXNet-ONNX export coverage and features have been updated as of MXNet 1.9.0. Visit the [ONNX operator coverage](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#operator-support-matrix) page for the latest information.

In this tutorial, we will learn how to use the MXNet-to-ONNX exporter on pre-trained models.

## Prerequisites

To run the tutorial you will need to have installed the following Python modules:
- [MXNet >= 1.6.0](/get_started)
- [onnx >= 1.7.0](https://github.com/onnx/onnx#installation)

*Note:* The MXNet-ONNX exporter works with ONNX opset versions 12 and later, which are available starting from ONNX v1.7.0.
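
To double-check that your environment satisfies these version requirements, a quick check like the following can be used (a minimal sketch; the printed versions depend on your installation):

```python
import mxnet as mx
import onnx

# The tutorial assumes MXNet >= 1.6.0 and onnx >= 1.7.0
print(mx.__version__)
print(onnx.__version__)
```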


```python
import mxnet as mx
import numpy as np
import logging
logging.basicConfig(level=logging.INFO)
```
@@ -62,46 +57,64 @@

Now we have downloaded the ResNet-18 symbol, params, and synset files to disk.

Let us describe MXNet's `export_model` API.

```python
help(mx.onnx.export_model)
```

Output:

```text
Help on function export_model in module mxnet.onnx.mx2onnx._export_model:

export_model(sym, params, in_shapes=None, in_types=<class 'numpy.float32'>, onnx_file_path='model.onnx', verbose=False, dynamic=False, dynamic_input_shapes=None, run_shape_inference=False, input_type=None, input_shape=None)
Exports the MXNet model file, passed as a parameter, into ONNX model.
Accepts both symbol,parameter objects as well as json and params filepaths as input.
Operator support and coverage -
https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#operator-support-matrix

Parameters
----------
sym : str or symbol object
Path to the json file or Symbol object
params : str or dict or list of dict
str - Path to the params file
dict - params dictionary (Including both arg_params and aux_params)
list - list of length 2 that contains arg_params and aux_params
in_shapes : List of tuple
Input shape of the model e.g [(1,3,224,224)]
in_types : data type or list of data types
Input data type e.g. np.float32, or [np.float32, np.int32]
onnx_file_path : str
Path where to save the generated onnx file
verbose : Boolean
If True will print logs of the model conversion
dynamic: Boolean
If True will allow for dynamic input shapes to the model
dynamic_input_shapes: list of tuple
Specifies the dynamic input_shapes. If None then all dimensions are set to None
run_shape_inference : Boolean
If True will run shape inference on the model
input_type : data type or list of data types
This is the old name of in_types. We keep this parameter name for backward compatibility
input_shape : List of tuple
This is the old name of in_shapes. We keep this parameter name for backward compatibility

Returns
-------
onnx_file_path : str
Onnx file path

Notes
-----
This method is available when you ``import mxnet.onnx``
```

The `export_model` API can accept the MXNet model in one of the following ways.

1. MXNet's exported json and params files:
    * This is useful if we have pre-trained models and we want to convert them to ONNX format.
2. MXNet sym, params objects:
    * This is useful if we are training a model. At the end of training, we just need to invoke the `export_model` function and provide the sym and params objects as inputs, along with the other attributes, to save the model in ONNX format. The params can be either a single object that contains both argument and auxiliary parameters, or a list that contains the arg_params and aux_params objects (see the sketch after this list).
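
As a sketch of the second option, the snippet below rebuilds the sym and params objects from the downloaded ResNet-18 checkpoint and passes them to `export_model` directly; in a real training script you would instead use the Symbol and parameter dictionaries produced by your own training loop (the output file name here is illustrative):

```python
import mxnet as mx
import numpy as np

# Rebuild sym/params objects from the downloaded ResNet-18 checkpoint files
sym_obj, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)

# params can be passed as a list of [arg_params, aux_params]
mx.onnx.export_model(sym_obj, [arg_params, aux_params],
                     [(1, 3, 224, 224)], [np.float32],
                     'resnet18_from_objects.onnx')
```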


Since we have downloaded pre-trained model files, we will use the `export_model` API by passing the paths to the symbol and params files.

@@ -115,20 +128,31 @@

```python
sym = './resnet-18-symbol.json'
params = './resnet-18-0000.params'

# Standard Imagenet input - 3 channels, 224*224
input_shape = [(1,3,224,224)]
input_dtypes = [np.float32]

# Path of the output file
onnx_file = './mxnet_exported_resnet18.onnx'
```

We have defined the input parameters required for the `export_model` API. Now we are ready to convert the MXNet model into ONNX format.

```python
# Invoke export model API. It returns path of the converted onnx model
converted_model_path = mx.onnx.export_model(sym, params, input_shape, input_dtypes, onnx_file)
```

This API returns the path of the converted model, which you can later use to import the model into other frameworks. Please refer to [mx2onnx](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#apis) for more details about the API.

### Dynamic Shape Input
MXNet-to-ONNX export also supports dynamic input shapes. By setting optional flags in `export_model`, users can control partially or fully dynamic shape export. For example, setting the batch dimension to dynamic enables dynamic-batch inference; setting the width and height dimensions to dynamic allows inference on images of different shapes. Below is a code example for a dynamic batch dimension. The flag `dynamic` switches on dynamic shape export, and `dynamic_input_shapes` specifies which dimensions are dynamic. `None` or any string variable can be used to represent a dynamic shape dimension.

```python
# The first (batch) input dimension will be dynamic in this case
dynamic_input_shapes = [(None, 3, 224, 224)]
mx.onnx.export_model(sym, params, input_shape, input_dtypes, onnx_file,
                     dynamic=True, dynamic_input_shapes=dynamic_input_shapes)
```

## Check validity of ONNX model

@@ -147,4 +171,4 @@ checker.check_graph(model_proto.graph)

If the converted protobuf format doesn't comply with the ONNX proto specifications, the checker will throw errors, but in this case it passes successfully.
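
For reference, the full check (partially collapsed in the diff above) can be sketched as follows, assuming `converted_model_path` holds the path returned by `export_model` earlier:

```python
import onnx
from onnx import checker

# Load the exported ONNX protobuf
model_proto = onnx.load_model(converted_model_path)

# Raises an exception if the graph does not conform to the ONNX specification
checker.check_graph(model_proto.graph)
```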

This method confirms that the exported model protobuf is valid. Now the model is ready to be imported into other frameworks for inference! Users may consider further optimizing the ONNX model file with tools such as [onnx-simplifier](https://github.com/daquexian/onnx-simplifier).
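
As one example of importing the exported model into another framework, a minimal inference sketch with ONNX Runtime might look like the following (`onnxruntime` is assumed to be installed separately; it is not a requirement of this tutorial):

```python
import numpy as np
import onnxruntime as ort  # assumed to be installed separately

# Create an inference session from the exported ONNX file
session = ort.InferenceSession('./mxnet_exported_resnet18.onnx')

# Run a dummy batch that matches the exported input shape (1, 3, 224, 224)
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```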
52 changes: 52 additions & 0 deletions python/mxnet/onnx/README.md
@@ -29,6 +29,58 @@ From the 1.9 release onward, the ONNX export module has become an official, built-in…
If you are a user of an earlier MXNet version and do not want to upgrade MXNet, you can still enjoy the latest ONNX support by pulling the MXNet source code and building the wheel for only the mx2onnx module. Just do `cd python/mxnet/onnx` and then build the wheel with `python3 -m build`. You should be able to find the wheel under `python/mxnet/onnx/dist/mx2onnx-0.0.0-py3-none-any.whl` and install it with `pip install mx2onnx-0.0.0-py3-none-any.whl`. You can then access the module with `import mx2onnx`.

### APIs
```python
mxnet.onnx.export_model(sym, params, in_shapes=None, in_types=np.float32,
onnx_file_path='model.onnx', verbose=False, dynamic=False,
dynamic_input_shapes=None, run_shape_inference=False, input_type=None,
input_shape=None)
```
Exports the MXNet model file into an ONNX model.

Parameters:

sym : str or symbol object
Path to the MXNet json file or Symbol object
params : str or dict or list of dict
str - Path to the MXNet params file
dict - MXNet params dictionary (Including both arg_params and aux_params)
list - list of length 2 that contains MXNet arg_params and aux_params
in_shapes : List of tuple
Input shape of the model e.g [(1,3,224,224)]
in_types : data type or list of data types
Input data type e.g. np.float32, or [np.float32, np.int32]
onnx_file_path : str
Path where to save the generated onnx file
verbose : Boolean
If True will print logs of the model conversion
dynamic: Boolean
If True will allow for dynamic input shapes to the model
dynamic_input_shapes: list of tuple
Specifies the dynamic input_shapes. If None then all dimensions are set to None
run_shape_inference : Boolean
If True will run shape inference on the model
input_type : data type or list of data types
This is the old name of in_types. We keep this parameter name for backward compatibility
input_shape : List of tuple
This is the old name of in_shapes. We keep this parameter name for backward compatibility

Returns:

onnx_file_path : str
Onnx file path
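
A minimal usage sketch of this API (the file names and shapes below are illustrative placeholders, not files shipped with the package):

```python
import numpy as np
import mxnet as mx

# Illustrative file names; substitute your own MXNet symbol/params files
onnx_path = mx.onnx.export_model('model-symbol.json', 'model-0000.params',
                                 in_shapes=[(1, 3, 224, 224)],
                                 in_types=[np.float32],
                                 onnx_file_path='model.onnx')
print(onnx_path)  # path of the generated ONNX file
```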

#### Model with Multiple Inputs
When the model has multiple inputs, all the input shapes and dtypes should be provided with `in_shapes` and `in_types`. Note that the shapes/dtypes in `in_shapes`/`in_types` must follow the same order as in the MXNet model symbol file. If `in_types` is provided as a single data type, the type will be applied to all input nodes. A sketch is shown below.
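
For illustration, a hypothetical two-input model could be exported as follows (file names, shapes, and dtypes are placeholders):

```python
import numpy as np
import mxnet as mx

# Shapes and dtypes are parallel lists, in the order the inputs
# appear in the MXNet symbol file
in_shapes = [(1, 3, 224, 224), (1, 10)]
in_types = [np.float32, np.int32]

mx.onnx.export_model('two_input-symbol.json', 'two_input-0000.params',
                     in_shapes, in_types, 'two_input.onnx')
```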

#### Dynamic Shape Input
By setting optional flags in the `export_model` API, users can control partially or fully dynamic shape input export. The flag `dynamic` switches on dynamic shape input export, and `dynamic_input_shapes` specifies which dimensions are dynamic. `None` or any string variable can be used to represent a dynamic shape dimension.

```python
# The first input dimension will be dynamic in this case
dynamic_input_shapes = [(None, 3, 224, 224)]
mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file,
dynamic=True, dynamic_input_shapes=dynamic_input_shapes)
```

### Operator Support Matrix
|MXNet Op|ONNX Version|