From 97977110103e2b87c4d15cde63b7ebc69bf9b5bb Mon Sep 17 00:00:00 2001
From: Ilya Lavrenov
Date: Fri, 18 Mar 2022 18:40:02 +0300
Subject: [PATCH] Updated documentation for compile_tool

---
 .../supported_plugins/Device_Plugins.md | 26 +++++++++++-------------
 tools/compile_tool/README.md            | 44 +++++++++++++++++++-------
 2 files changed, 47 insertions(+), 23 deletions(-)

diff --git a/docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md b/docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md
index d9515d6d4abd4c..a2486d0630ded8 100644
--- a/docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md
+++ b/docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md
@@ -40,17 +40,17 @@ Devices similar to the ones we have used for benchmarking can be accessed using
 ## Features support matrix
 The table below demonstrates support of key features by OpenVINO device plugins.
 
-| Capability | [CPU](CPU.md) | [GPU](GPU.md) | [GNA](GNA.md) | [VPU](VPU.md) | [Arm® CPU](ARM_CPU.md) |
-| ---------- | --- | --- | --- | --- | --- |
-| [Heterogeneous execution](../hetero_execution.md)| Yes | Yes | No | ? | Yes |
-| [Multi-device execution](../multi_device.md) | Yes | Yes | Partial | ? | Yes |
-| [Automatic batching](../automatic_batching.md) | No | Yes | No | ? | No |
-| [Multi-stream execution](@ref openvino_docs_optimization_guide_dldt_optimization_guide) | Yes | Yes | No | ? | Yes |
-| [Models caching](../Model_caching_overview.md) | Yes | Partial | Yes | ? | No |
-| [Dynamic shapes](../ov_dynamic_shapes.md) | Yes | Partial | No | ? | No |
-| Import/Export | Yes | No | Yes | ? | No |
-| [Preprocessing acceleration](../preprocessing_overview.md) | Yes | Yes | No | ? | Partial |
-| [Stateful models](../network_state_intro.md) | Yes | No | Yes | ? | No |
-| [Extensibility](@ref openvino_docs_Extensibility_UG_Intro) | Yes | Yes | No | ? | No |
+| Capability | [CPU](CPU.md) | [GPU](GPU.md) | [GNA](GNA.md) | [Arm® CPU](ARM_CPU.md) |
+| ---------- | --- | --- | --- | --- |
+| [Heterogeneous execution](../hetero_execution.md) | Yes | Yes | No | Yes |
+| [Multi-device execution](../multi_device.md) | Yes | Yes | Partial | Yes |
+| [Automatic batching](../automatic_batching.md) | No | Yes | No | No |
+| [Multi-stream execution](../../optimization_guide/dldt_deployment_optimization_tput.md) | Yes | Yes | No | Yes |
+| [Models caching](../Model_caching_overview.md) | Yes | Partial | Yes | No |
+| [Dynamic shapes](../ov_dynamic_shapes.md) | Yes | Partial | No | No |
+| [Import/Export](../../../tools/compile_tool/README.md) | Yes | No | Yes | No |
+| [Preprocessing acceleration](../preprocessing_overview.md) | Yes | Yes | No | Partial |
+| [Stateful models](../network_state_intro.md) | Yes | No | Yes | No |
+| [Extensibility](@ref openvino_docs_Extensibility_UG_Intro) | Yes | Yes | No | No |
 
-For more details on plugin specific feature limitation, see corresponding plugin pages.
+For more details on plugin-specific feature limitations, see the corresponding plugin pages.
diff --git a/tools/compile_tool/README.md b/tools/compile_tool/README.md
index 787a25a56481a9..fe8351a898c7ae 100644
--- a/tools/compile_tool/README.md
+++ b/tools/compile_tool/README.md
@@ -1,19 +1,23 @@
 # Compile Tool {#openvino_inference_engine_tools_compile_tool_README}
 
-Compile tool is a C++ application that enables you to compile a network for inference on a specific device and export it to a binary file.
-With the Compile Tool, you can compile a network using supported Inference Engine plugins on a machine that doesn't have the physical device connected and then transfer a generated file to any machine with the target inference device available.
+Compile tool is a C++ application that enables you to compile a model for inference on a specific device and export the compiled representation to a binary file.
+With the Compile Tool, you can compile a model using supported OpenVINO Runtime devices on a machine that doesn't have the physical device connected, and then transfer the generated file to any machine with the target inference device available.
 
-See the [Features support matrix](../../docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md) to understand which device support import / export functionality.
-The tool compiles networks for the following target devices using corresponding Inference Engine plugins:
+See the [Features support matrix](../../docs/OV_Runtime_UG/supported_plugins/Device_Plugins.md) to understand which devices support the import/export functionality.
+The tool compiles models for the following target devices using the corresponding OpenVINO Runtime plugins:
 * Intel® Neural Compute Stick 2 (MYRIAD plugin)
-
 The tool is delivered as an executable file that can be run on both Linux* and Windows*.
 The tool is located in the `/tools/compile_tool` directory.
 
-The workflow of the Compile tool is as follows:
+## Workflow of the Compile Tool
+
+1. The application reads command-line parameters and loads a model to the OpenVINO Runtime device.
+2. The application exports a blob with the compiled model and writes it to the output file.
 
-1. First, the application reads command-line parameters and loads a network to the Inference Engine device.
-2. The application exports a blob with the compiled network and writes it to the output file.
+The compile_tool also supports the following capabilities:
+- Embedding [layout](../../docs/OV_Runtime_UG/layout_overview.md) and precision conversions (see [Optimize Preprocessing](../../docs/OV_Runtime_UG/preprocessing_overview.md)). To compile a model with advanced preprocessing capabilities, refer to [Use Case - Integrate and Save Preprocessing Steps Into IR](../../docs/OV_Runtime_UG/preprocessing_usecase_save.md), which shows how to embed all the preprocessing steps in the compiled blob.
+- Compiling blobs for the OpenVINO Runtime API 2.0 (the default) or for the Inference Engine API with the explicit `-ov_api_1_0` option.
+- Accepting device-specific options that customize the compilation process.
 
 ## Run the Compile Tool
 
@@ -85,5 +89,25 @@ To import a blob with the network from a generated file into your application, u
 ```cpp
 ov::Core ie;
-std::ifstream file{"model_name.blob"};
-ov::CompiledModel compiled_model = ie.import_model(file, "MYRIAD", {});
+std::ifstream file{"model_name.blob", std::ios_base::binary};
+ov::CompiledModel compiled_model = ie.import_model(file, "MYRIAD");
 ```
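+
+For reference, such a blob can also be produced programmatically, which is roughly what the
+Compile tool does under the hood. The following is a minimal sketch based on the OpenVINO Runtime
+API 2.0; the model path, blob name, and device name are placeholders, and the output stream is
+opened in binary mode because the exported blob is binary data:
+
+```cpp
+#include <fstream>
+
+#include <openvino/openvino.hpp>
+
+int main() {
+    ov::Core core;
+    // Compile the model (placeholder path) for the target device (placeholder name).
+    ov::CompiledModel compiled_model = core.compile_model("model.xml", "MYRIAD");
+    // Export the compiled model to a blob that import_model() can read back.
+    std::ofstream blob{"model_name.blob", std::ios_base::binary};
+    compiled_model.export_model(blob);
+}
+```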