From 89f8d4228c611225d0bce0b77047ea138e02a445 Mon Sep 17 00:00:00 2001
From: Evan
Date: Tue, 3 May 2022 10:37:28 -0600
Subject: [PATCH 1/2] Add links to specific examples

This edit adds links to more example applications, making it easier for users to discover how to build an OpenVINO application around their specific model.
---
 .../integrate_with_your_application.md | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/docs/OV_Runtime_UG/integrate_with_your_application.md b/docs/OV_Runtime_UG/integrate_with_your_application.md
index 7a62c6dc70dacb..ede9383f066ca5 100644
--- a/docs/OV_Runtime_UG/integrate_with_your_application.md
+++ b/docs/OV_Runtime_UG/integrate_with_your_application.md
@@ -12,7 +12,7 @@

@endsphinxdirective

-> **NOTE**: Before start using OpenVINO™ Runtime, make sure you set all environment variables during the installation. If you did not, follow the instructions from the _Set the Environment Variables_ section in the installation guides:
+> **NOTE**: Before you start using OpenVINO™ Runtime, make sure you set all environment variables during the installation. To do so, follow the instructions from the _Set the Environment Variables_ section in the installation guides:
> * [For Windows* 10](../install_guides/installing-openvino-windows.md)
> * [For Linux*](../install_guides/installing-openvino-linux.md)
> * [For macOS*](../install_guides/installing-openvino-macos.md)
@@ -20,7 +20,7 @@

## Use OpenVINO™ Runtime API to Implement Inference Pipeline

-This section provides step-by-step instructions to implement a typical inference pipeline with the OpenVINO™ Runtime C++ API:
+This section provides step-by-step instructions to implement a typical inference pipeline with the OpenVINO™ Runtime C++ or Python API:

![ie_api_use_cpp]

@@ -64,7 +64,7 @@ Use the following code to create OpenVINO™ Core to manage available devices an
### Step 2. Compile the Model

-`ov::CompiledModel` class represents a device specific compiled model. `ov::CompiledModel` allows you to get information inputs or output ports by a tensor name or index, this approach is aligned with the majority of frameworks.
+The `ov::CompiledModel` class represents a device-specific compiled model. `ov::CompiledModel` allows you to get information about input or output ports by a tensor name or index. This approach is aligned with the majority of frameworks.

Compile the model for a specific device using `ov::Core::compile_model()`:

@@ -185,7 +185,7 @@ You can use external memory to create `ov::Tensor` and use the `ov::InferRequest
### Step 5. Start Inference

-OpenVINO™ Runtime supports inference in either synchronous or asynchronous mode. Using the Async API can improve application's overall frame-rate, because rather than wait for inference to complete, the app can keep working on the host, while the accelerator is busy. You can use `ov::InferRequest::start_async` to start model inference in the asynchronous mode and call `ov::InferRequest::wait` to wait for the inference results:
+OpenVINO™ Runtime supports inference in either synchronous or asynchronous mode. Using the Async API can improve the application's overall frame rate: rather than waiting for inference to complete, the app can keep working on the host while the accelerator is busy. You can use `ov::InferRequest::start_async` to start model inference in the asynchronous mode and call `ov::InferRequest::wait` to wait for the inference results:

@sphinxtabset

@sphinxtab{C++}

@snippet docs/snippets/src/main.cpp part5

@endsphinxtab

@sphinxtab{Python}

@snippet docs/snippets/src/main.py part5

@endsphinxtab

@endsphinxtabset

-This section demonstrates a simple pipeline, to get more information about other ways to perform inference, read the dedicated ["Run inference" section](./ov_infer_request.md).
+This section demonstrates a simple pipeline. To get more information about other ways to perform inference, read the dedicated ["Run inference" section](./ov_infer_request.md).

### Step 6. Process the Inference Results

@@ -253,16 +253,20 @@ cd build/
cmake ../project
cmake --build .
```
-It's allowed to specify additional build options (e.g. to build CMake project on Windows with a specific build tools). Please refer to the [CMake page](https://cmake.org/cmake/help/latest/manual/cmake.1.html#manual:cmake(1)) for details.
+You can also specify additional build options (e.g., to build a CMake project on Windows with specific build tools). Refer to the [CMake page](https://cmake.org/cmake/help/latest/manual/cmake.1.html#manual:cmake(1)) for details.

## Run Your Application

Congratulations, you have made your first application with OpenVINO™ toolkit, now you may run it.

+This page showed how to implement a typical inference pipeline with OpenVINO. See the [OpenVINO Samples](Samples_Overview.md) page or the [Open Model Zoo Demos](https://docs.openvino.ai/latest/omz_demos.html) page for specific examples of how OpenVINO pipelines are implemented for applications such as image classification, text prediction, and many others.

## See also
 - [OpenVINO™ Runtime Preprocessing](./preprocessing_overview.md)
 - [Using Encrypted Models with OpenVINO™](./protecting_model_guide.md)
+ - [OpenVINO Samples](Samples_Overview.md)
+ - [Open Model Zoo Demos](https://docs.openvino.ai/latest/omz_demos.html)

[ie_api_flow_cpp]: img/BASIC_IE_API_workflow_Cpp.svg
[ie_api_use_cpp]: img/IMPLEMENT_PIPELINE_with_API_C.svg

From 7fcac01024981aebc297a659b7943add83dd0e22 Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Fri, 6 May 2022 13:31:01 +0200
Subject: [PATCH 2/2] Update docs/OV_Runtime_UG/integrate_with_your_application.md

---
 docs/OV_Runtime_UG/integrate_with_your_application.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/OV_Runtime_UG/integrate_with_your_application.md b/docs/OV_Runtime_UG/integrate_with_your_application.md
index ede9383f066ca5..0dd3e3c5fa092b 100644
--- a/docs/OV_Runtime_UG/integrate_with_your_application.md
+++ b/docs/OV_Runtime_UG/integrate_with_your_application.md
@@ -185,7 +185,7 @@ You can use external memory to create `ov::Tensor` and use the `ov::InferRequest
### Step 5. Start Inference

-OpenVINO™ Runtime supports inference in either synchronous or asynchronous mode. Using the Async API can improve the application's overall frame rate: rather than waiting for inference to complete, the app can keep working on the host while the accelerator is busy. You can use `ov::InferRequest::start_async` to start model inference in the asynchronous mode and call `ov::InferRequest::wait` to wait for the inference results:
+OpenVINO™ Runtime supports inference in either synchronous or asynchronous mode. Using the Async API can improve the application's overall frame rate: instead of waiting for inference to complete, the app can keep working on the host while the accelerator is busy. You can use `ov::InferRequest::start_async` to start model inference in the asynchronous mode and call `ov::InferRequest::wait` to wait for the inference results:

@sphinxtabset
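
For reference, the Step 4 context quoted in the hunk header above (creating an `ov::Tensor` over external memory) can be sketched in the C++ API roughly as follows; the element type, shape, and buffer are illustrative assumptions and must match the actual model input:

```cpp
#include <openvino/openvino.hpp>

// Minimal sketch: wrap an existing host buffer in an ov::Tensor without copying it.
// The element type and shape below are placeholders for illustration.
void infer_on_external_memory(ov::InferRequest& infer_request, float* external_data) {
    ov::Tensor input_tensor(ov::element::f32, ov::Shape{1, 3, 224, 224}, external_data);

    // The tensor references external_data directly, so the buffer must remain
    // valid until inference has finished.
    infer_request.set_input_tensor(input_tensor);
    infer_request.infer();
}
```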
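The asynchronous flow described in the Step 5 paragraph that both commits edit can be sketched end to end roughly like this; the model file name, device name, and single-input assumption are placeholders for illustration, and error handling is omitted:

```cpp
#include <openvino/openvino.hpp>

int main() {
    // Create the core object that manages available devices and reads models.
    ov::Core core;

    // Read the model and compile it for a device ("model.xml" and "CPU" are placeholders).
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");
    ov::CompiledModel compiled_model = core.compile_model(model, "CPU");

    // Create an inference request and fill its input tensor.
    ov::InferRequest infer_request = compiled_model.create_infer_request();
    ov::Tensor input_tensor = infer_request.get_input_tensor();
    // ... fill input_tensor.data<float>() with input data ...

    // Start inference asynchronously and wait for the results,
    // leaving the host free to do other work in between.
    infer_request.start_async();
    infer_request.wait();

    // Read and process the output tensor.
    ov::Tensor output_tensor = infer_request.get_output_tensor();
    // ... process output_tensor.data<float>() ...

    return 0;
}
```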