`InferenceEngine::Core::LoadNetwork()` in order to take effect.
When specifying key values as raw strings (that is, when using Python API), omit the `KEY_` prefix.
| Parameter Name  | Parameter Values     | Default | Description                                           |
|-----------------|----------------------|---------|-------------------------------------------------------|
| `KEY_CACHE_DIR` | `"<directory path>"` | `""`    | Specifies the directory where the model cache is stored |

The Model Optimizer produces an Intermediate Representation (IR) of the network, which is a pair of files:

* `.xml` - Describes the network topology
* `.bin` - Contains the weights and biases binary data
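The `KEY_` prefix convention can be illustrated with a small sketch. Here, `to_raw_key` is a hypothetical helper (not part of the OpenVINO API) that shows how a documented parameter name maps to the raw string used at runtime:

```python
def to_raw_key(documented_name: str) -> str:
    # Documented parameter names carry a KEY_ prefix; when passing key
    # values as raw strings (e.g., from the Python API), the prefix is dropped.
    prefix = "KEY_"
    if documented_name.startswith(prefix):
        return documented_name[len(prefix):]
    return documented_name

# KEY_CACHE_DIR in the table above corresponds to the raw string "CACHE_DIR".
config = {to_raw_key("KEY_CACHE_DIR"): "./model_cache"}
```

So a configuration dictionary built from raw strings would use `"CACHE_DIR"`, not `"KEY_CACHE_DIR"`.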
> **TIP**: You can also work with the Model Optimizer inside the OpenVINO™ [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) (DL Workbench).
> [DL Workbench](@ref workbench_docs_Workbench_DG_Introduction) is a platform built upon OpenVINO™ that provides a web-based graphical environment enabling you to optimize, fine-tune, analyze, visualize, and compare
> the performance of deep learning models on various Intel® architecture
> configurations. In the DL Workbench, you can use most OpenVINO™ toolkit components.
Short video tutorials are available:

* Model Optimizer Concept. Duration: 3:56
* Model Optimizer Basic Operation. Duration: 2:57
* Choosing the Right Precision. Duration: 4:18
Use the `mo.py` script to run the Model Optimizer, passing a path to the input model file and an output directory where you have write permissions. The `mo.py` script is the universal entry point that can deduce the framework that produced the input model from the standard extension of the model file:

* `.caffemodel` - Caffe\* models
* `.pb` - TensorFlow\* models
* `.params` - MXNet\* models
* `.onnx` - ONNX\* models
* `.nnet` - Kaldi\* models

If the model files do not have standard extensions, you can use the `--framework {tf,caffe,kaldi,onnx,mxnet,paddle}` option to specify the framework type explicitly.

For example, the following commands are equivalent:
```sh
python3 mo.py --input_model /user/models/model.pb
```
```sh
python3 mo.py --framework tf --input_model /user/models/model.pb
```
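The extension-based deduction described above can be sketched as follows. This is a simplified, hypothetical illustration of the idea, not the actual Model Optimizer code:

```python
from pathlib import Path

# Standard extensions mapped to framework identifiers (simplified sketch;
# the real Model Optimizer performs more elaborate detection).
EXT_TO_FRAMEWORK = {
    ".caffemodel": "caffe",
    ".pb": "tf",
    ".params": "mxnet",
    ".onnx": "onnx",
    ".nnet": "kaldi",
}

def deduce_framework(model_path: str) -> str:
    # Look up the framework by file extension; a non-standard extension
    # means the user must pass --framework explicitly.
    ext = Path(model_path).suffix
    if ext not in EXT_TO_FRAMEWORK:
        raise ValueError(
            f"Cannot deduce framework from '{ext}'; specify --framework explicitly"
        )
    return EXT_TO_FRAMEWORK[ext]
```

With this sketch, `deduce_framework("/user/models/model.pb")` yields `"tf"`, mirroring why the two commands above are equivalent.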
> **NOTE**: Some models require using additional arguments to specify conversion parameters, such as `--input_shape`, `--scale`, `--scale_values`, `--mean_values`, `--mean_file`. To learn when you need to use these parameters, refer to [Converting a Model Using General Conversion Parameters](Converting_Model_General.md).
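As a minimal sketch of the assumed semantics of `--mean_values` and `--scale`: the converted model subtracts the mean from each input value and then divides by the scale. This is a simplified, per-value illustration, not the Model Optimizer implementation:

```python
def preprocess(value: float, mean: float, scale: float) -> float:
    # Assumed embedded preprocessing: subtract mean, then divide by scale.
    return (value - mean) / scale

# A pixel value of 255 with mean 127.5 and scale 127.5 maps to 1.0.
result = preprocess(255.0, 127.5, 127.5)
```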
To adjust the conversion process, you can use the general parameters defined in [Converting a Model Using General Conversion Parameters](Converting_Model_General.md) and framework-specific parameters for:
* [Caffe](Convert_Model_From_Caffe.md)
* [TensorFlow](Convert_Model_From_TensorFlow.md)
* [MXNet](Convert_Model_From_MxNet.md)
* [ONNX](Convert_Model_From_ONNX.md)
* [Kaldi](Convert_Model_From_Kaldi.md)
* [Paddle](Convert_Model_From_Paddle.md)
## See Also
Launch the Model Optimizer for the Caffe bvlc_alexnet model with reversed input channels:
```sh
python3 mo.py --input_model bvlc_alexnet.caffemodel --reverse_input_channels --mean_values [255,255,255] --data_type FP16 --output_dir
```