From 2a3ff17061ea53ba50827e94acb0b10a73ab59dc Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 07:28:47 -0600 Subject: [PATCH 01/32] Docs: Add links to Samples source code on GitHub --- docs/OV_Runtime_UG/Samples_Overview.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/OV_Runtime_UG/Samples_Overview.md b/docs/OV_Runtime_UG/Samples_Overview.md index 528ebdba28ada8..815d6e4585693d 100644 --- a/docs/OV_Runtime_UG/Samples_Overview.md +++ b/docs/OV_Runtime_UG/Samples_Overview.md @@ -35,6 +35,8 @@ If you install OpenVINO™ Runtime, sample applications for С, C++, and Python * `/samples/cpp` * `/samples/python` +The source code for the samples are also available in the [OpenVINO™ samples repository on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples). If you installed OpenVINO™ Runtime using PyPI, samples are not installed locally and must be accessed through GitHub. + The applications include: - **Speech Sample** - Acoustic model inference based on Kaldi neural networks and speech feature vectors. From 21d6b7809a97df6e4369b049eb483411b18fe51e Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 07:45:09 -0600 Subject: [PATCH 02/32] Add link to source code on GitHub --- samples/python/hello_reshape_ssd/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/python/hello_reshape_ssd/README.md b/samples/python/hello_reshape_ssd/README.md index 6fbad8b1cf6c0f..f914737123ece5 100644 --- a/samples/python/hello_reshape_ssd/README.md +++ b/samples/python/hello_reshape_ssd/README.md @@ -1,7 +1,7 @@ # Hello Reshape SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README} This sample demonstrates how to do synchronous inference of object detection models using [Shape Inference feature](../../../docs/OV_Runtime_UG/ShapeInference.md). -Models with only 1 input and output are supported. +Models with only 1 input and output are supported. 
The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_reshape_ssd/hello_reshape_ssd.py). The following Python API is used in the application: From 648946ba4a9e4e2f501acf5c1b6e824d8aa0231c Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 07:46:43 -0600 Subject: [PATCH 03/32] Add link to source code on GitHub --- samples/python/hello_reshape_ssd/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/python/hello_reshape_ssd/README.md b/samples/python/hello_reshape_ssd/README.md index f914737123ece5..159c9a3e319818 100644 --- a/samples/python/hello_reshape_ssd/README.md +++ b/samples/python/hello_reshape_ssd/README.md @@ -1,7 +1,7 @@ # Hello Reshape SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README} This sample demonstrates how to do synchronous inference of object detection models using [Shape Inference feature](../../../docs/OV_Runtime_UG/ShapeInference.md). -Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_reshape_ssd/hello_reshape_ssd.py). +Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_reshape_ssd). 
The following Python API is used in the application: From 25fe3079d6a08c60d25f97abdf72f3d3872fa4d8 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 07:48:01 -0600 Subject: [PATCH 04/32] Add link to source code on GitHub --- samples/python/classification_sample_async/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/python/classification_sample_async/README.md b/samples/python/classification_sample_async/README.md index 02be3098f3f84b..35c7f2700c790c 100644 --- a/samples/python/classification_sample_async/README.md +++ b/samples/python/classification_sample_async/README.md @@ -1,7 +1,7 @@ # Image Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README} This sample demonstrates how to do inference of image classification models using Asynchronous Inference Request API. -Models with only 1 input and output are supported. +Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/classification_sample_async). The following Python API is used in the application: From 386536d4b0893ecb535f2331994563d4d251b761 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 07:49:16 -0600 Subject: [PATCH 05/32] Add link to source code on GitHub --- samples/python/hello_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/python/hello_classification/README.md b/samples/python/hello_classification/README.md index 671a14ed9eaa42..2d6ee2f53ea639 100644 --- a/samples/python/hello_classification/README.md +++ b/samples/python/hello_classification/README.md @@ -1,7 +1,7 @@ # Hello Classification Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_classification_README} This sample demonstrates how to do inference of image classification models using Synchronous Inference Request API. 
-Models with only 1 input and output are supported. +Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_classification). The following Python API is used in the application: From 4c841092d4448b159198204a6469f2d7884264ee Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 08:27:11 -0600 Subject: [PATCH 06/32] Add link to source code on GitHub --- samples/python/hello_query_device/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/python/hello_query_device/README.md b/samples/python/hello_query_device/README.md index aa934529df0aec..0f3c1aa174c65a 100644 --- a/samples/python/hello_query_device/README.md +++ b/samples/python/hello_query_device/README.md @@ -1,6 +1,6 @@ # Hello Query Device Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README} -This sample demonstrates how to show OpenVINO™ Runtime devices and prints their metrics and default configuration values using [Query Device API feature](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). +This sample demonstrates how to show OpenVINO™ Runtime devices and prints their metrics and default configuration values using [Query Device API feature](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). The source code for this application is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_query_device). 
The following Python API is used in the application: From 366cbacecab83ca0fe7762f3efc507b66de58878 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 08:29:18 -0600 Subject: [PATCH 07/32] Add link to source code on GitHub --- samples/python/model_creation_sample/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/python/model_creation_sample/README.md b/samples/python/model_creation_sample/README.md index 068cb25894c1b7..269bc76b55bebb 100644 --- a/samples/python/model_creation_sample/README.md +++ b/samples/python/model_creation_sample/README.md @@ -1,6 +1,6 @@ # Model Creation Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_model_creation_sample_README} -This sample demonstrates how to run inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file, the model is created from the source code on the fly. +This sample demonstrates how to run inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file; the model is created from the source code on the fly. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/model_creation_sample).
The following OpenVINO Python API is used in the application: From 1fa19022d5957d493bcd05440a4b9d267503e0d1 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 12:54:39 -0600 Subject: [PATCH 08/32] Add link to source code on GitHub --- samples/python/speech_sample/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/python/speech_sample/README.md b/samples/python/speech_sample/README.md index 48752c6a575698..2cbd6d1ad30457 100644 --- a/samples/python/speech_sample/README.md +++ b/samples/python/speech_sample/README.md @@ -1,6 +1,6 @@ # Automatic Speech Recognition Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_speech_sample_README} -This sample demonstrates how to do a Synchronous Inference of acoustic model based on Kaldi\* neural models and speech feature vectors. +This sample demonstrates how to do a Synchronous Inference of an acoustic model based on Kaldi\* neural models and speech feature vectors. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/speech_sample). The sample works with Kaldi ARK or Numpy* uncompressed NPZ files, so it does not cover an end-to-end speech recognition scenario (speech to text), requiring additional preprocessing (feature extraction) to get a feature vector from a speech signal, as well as postprocessing (decoding) to produce text from scores.
From 700b4af0c56762ffa311ee5e1a81a5a6e994b396 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 21:20:04 -0600 Subject: [PATCH 09/32] Add link to source code on GitHub --- samples/c/hello_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/c/hello_classification/README.md b/samples/c/hello_classification/README.md index f4456353671d4c..4388a1456b19f8 100644 --- a/samples/c/hello_classification/README.md +++ b/samples/c/hello_classification/README.md @@ -1,6 +1,6 @@ # Hello Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_classification_README} -This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and input auto-resize feature. +This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and input auto-resize feature. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_classification). 
Hello Classification C sample application demonstrates how to use the following Inference Engine C API in applications: From a826c92ed1841e121b5d1cb29c66fe9f8e290d6e Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 21:20:54 -0600 Subject: [PATCH 10/32] Add link to source code on GitHub --- samples/c/hello_nv12_input_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/c/hello_nv12_input_classification/README.md b/samples/c/hello_nv12_input_classification/README.md index af0898330cec11..38ab52ca9cf072 100644 --- a/samples/c/hello_nv12_input_classification/README.md +++ b/samples/c/hello_nv12_input_classification/README.md @@ -1,6 +1,6 @@ # Hello NV12 Input Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README} -This sample demonstrates how to execute an inference of image classification networks like AlexNet with images in NV12 color format using Synchronous Inference Request API. +This sample demonstrates how to execute an inference of image classification networks like AlexNet with images in NV12 color format using Synchronous Inference Request API. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_nv12_input_classification). 
Hello NV12 Input Classification C Sample demonstrates how to use the NV12 automatic input pre-processing API of the Inference Engine in your applications: From de04cc17b3c91b94661e5ad7d6dce338541e0c14 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 21:23:38 -0600 Subject: [PATCH 11/32] Add link to source code on GitHub --- samples/cpp/classification_sample_async/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/classification_sample_async/README.md b/samples/cpp/classification_sample_async/README.md index a126f1401bb1c6..9f118e03687d6d 100644 --- a/samples/cpp/classification_sample_async/README.md +++ b/samples/cpp/classification_sample_async/README.md @@ -1,7 +1,7 @@ # Image Classification Async C++ Sample {#openvino_inference_engine_samples_classification_sample_async_README} This sample demonstrates how to do inference of image classification models using Asynchronous Inference Request API. -Models with only one input and output are supported. +Models with only one input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/classification_sample_async). In addition to regular images, the sample also supports single-channel `ubyte` images as an input for LeNet model. 
From c21cbf9ab2f5c5c30f7e35c2f82ca9d66e6f3aac Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 21:24:22 -0600 Subject: [PATCH 12/32] Add link to source code on GitHub --- samples/cpp/hello_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/hello_classification/README.md b/samples/cpp/hello_classification/README.md index 61106b20807484..83a645dfe3f999 100644 --- a/samples/cpp/hello_classification/README.md +++ b/samples/cpp/hello_classification/README.md @@ -1,7 +1,7 @@ # Hello Classification C++ Sample {#openvino_inference_engine_samples_hello_classification_README} This sample demonstrates how to do inference of image classification models using Synchronous Inference Request API. -Models with only one input and output are supported. +Models with only one input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_classification). The following C++ API is used in the application: From 6a6e7dc1295701811e104b2bbe42147c8ceae0d0 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 21:25:09 -0600 Subject: [PATCH 13/32] Add link to source code on GitHub --- samples/cpp/hello_nv12_input_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/hello_nv12_input_classification/README.md b/samples/cpp/hello_nv12_input_classification/README.md index dffe7dcca466db..0d032d3c2c4b92 100644 --- a/samples/cpp/hello_nv12_input_classification/README.md +++ b/samples/cpp/hello_nv12_input_classification/README.md @@ -1,6 +1,6 @@ # Hello NV12 Input Classification C++ Sample {#openvino_inference_engine_samples_hello_nv12_input_classification_README} -This sample demonstrates how to execute an inference of image classification models with images in NV12 color format using Synchronous Inference Request API. 
+This sample demonstrates how to execute an inference of image classification models with images in NV12 color format using Synchronous Inference Request API. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_nv12_input_classification). The following C++ API is used in the application: From 6986ed9b362cd27352ddec85d9aec377d895fc91 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 21:25:47 -0600 Subject: [PATCH 14/32] Add link to source code on GitHub --- samples/cpp/hello_query_device/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/hello_query_device/README.md b/samples/cpp/hello_query_device/README.md index 202f733542becc..ed44a41ccfd1ff 100644 --- a/samples/cpp/hello_query_device/README.md +++ b/samples/cpp/hello_query_device/README.md @@ -1,6 +1,6 @@ # Hello Query Device C++ Sample {#openvino_inference_engine_samples_hello_query_device_README} -This sample demonstrates how to execute an query OpenVINO™ Runtime devices, prints their metrics and default configuration values, using [Properties API](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). +This sample demonstrates how to query OpenVINO™ Runtime devices and print their metrics and default configuration values, using the [Properties API](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_query_device).
The following C++ API is used in the application: From 5c729be3929dd08a5ee4f87dd72766af544076a5 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 21:26:52 -0600 Subject: [PATCH 15/32] Add link to source code on GitHub --- samples/cpp/hello_reshape_ssd/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/hello_reshape_ssd/README.md b/samples/cpp/hello_reshape_ssd/README.md index 583d8608d44f8e..b0d9d3f926124a 100644 --- a/samples/cpp/hello_reshape_ssd/README.md +++ b/samples/cpp/hello_reshape_ssd/README.md @@ -1,7 +1,7 @@ # Hello Reshape SSD C++ Sample {#openvino_inference_engine_samples_hello_reshape_ssd_README} This sample demonstrates how to do synchronous inference of object detection models using [input reshape feature](../../../docs/OV_Runtime_UG/ShapeInference.md). -Models with only one input and output are supported. +Models with only one input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_reshape_ssd). 
The following C++ API is used in the application: From f85b553396b2e94aa663361291e7f64385697f5a Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 21:27:53 -0600 Subject: [PATCH 16/32] Add link to source code on GitHub --- samples/cpp/model_creation_sample/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/model_creation_sample/README.md b/samples/cpp/model_creation_sample/README.md index 542d6d82ec0077..542355f755a60a 100644 --- a/samples/cpp/model_creation_sample/README.md +++ b/samples/cpp/model_creation_sample/README.md @@ -1,6 +1,6 @@ # Model Creation C++ Sample {#openvino_inference_engine_samples_model_creation_sample_README} -This sample demonstrates how to execute an synchronous inference using [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly which uses weights from LeNet classification model, which is known to work well on digit classification tasks. +This sample demonstrates how to execute a synchronous inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/model_creation_sample). You do not need an XML file to create a model. The API of ov::Model allows creating a model on the fly from the source code.
From fe63a29d9986ad0b1eda8b72a97cbf873fcfa9c6 Mon Sep 17 00:00:00 2001 From: Evan Date: Mon, 6 Jun 2022 21:28:40 -0600 Subject: [PATCH 17/32] Add link to source code on GitHub --- samples/cpp/speech_sample/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/speech_sample/README.md b/samples/cpp/speech_sample/README.md index 0f440ba767d344..78d40e4a90b282 100644 --- a/samples/cpp/speech_sample/README.md +++ b/samples/cpp/speech_sample/README.md @@ -1,6 +1,6 @@ # Automatic Speech Recognition C++ Sample {#openvino_inference_engine_samples_speech_sample_README} -This sample demonstrates how to execute an Asynchronous Inference of acoustic model based on Kaldi\* neural networks and speech feature vectors. +This sample demonstrates how to execute an Asynchronous Inference of an acoustic model based on Kaldi\* neural networks and speech feature vectors. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/speech_sample). The sample works with Kaldi ARK or Numpy* uncompressed NPZ files, so it does not cover an end-to-end speech recognition scenario (speech to text), requiring additional preprocessing (feature extraction) to get a feature vector from a speech signal, as well as postprocessing (decoding) to produce text from scores.
From 9881196e46b006ec3421d64baff07f51d73c92d9 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Wed, 8 Jun 2022 13:09:06 +0200 Subject: [PATCH 18/32] Update docs/OV_Runtime_UG/Samples_Overview.md --- docs/OV_Runtime_UG/Samples_Overview.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/OV_Runtime_UG/Samples_Overview.md b/docs/OV_Runtime_UG/Samples_Overview.md index 815d6e4585693d..cccae027f6f282 100644 --- a/docs/OV_Runtime_UG/Samples_Overview.md +++ b/docs/OV_Runtime_UG/Samples_Overview.md @@ -35,7 +35,7 @@ If you install OpenVINO™ Runtime, sample applications for С, C++, and Python * `/samples/cpp` * `/samples/python` -The source code for the samples are also available in the [OpenVINO™ samples repository on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples). If you installed OpenVINO™ Runtime using PyPI, samples are not installed locally and must be accessed through GitHub. +Source code for the samples is also available in the [OpenVINO™ samples repository on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples). If you installed OpenVINO™ Runtime using PyPI, samples are not installed locally and must be accessed through GitHub. 
The applications include: From 27fe869a8781ebfdbbdc324353adc3aa2bdbd440 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Wed, 8 Jun 2022 13:28:06 +0200 Subject: [PATCH 19/32] Update samples/c/hello_classification/README.md --- samples/c/hello_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/c/hello_classification/README.md b/samples/c/hello_classification/README.md index 4388a1456b19f8..acc975382798f4 100644 --- a/samples/c/hello_classification/README.md +++ b/samples/c/hello_classification/README.md @@ -1,6 +1,6 @@ # Hello Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_classification_README} -This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and input auto-resize feature. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_classification). +This sample demonstrates how to execute inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and the input auto-resize feature. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_classification). 
Hello Classification C sample application demonstrates how to use the following Inference Engine C API in applications: From 65f3deb8158761c2329095893b42011ad492b919 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Wed, 8 Jun 2022 13:28:12 +0200 Subject: [PATCH 20/32] Update samples/c/hello_nv12_input_classification/README.md --- samples/c/hello_nv12_input_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/c/hello_nv12_input_classification/README.md b/samples/c/hello_nv12_input_classification/README.md index 38ab52ca9cf072..33284679aba09b 100644 --- a/samples/c/hello_nv12_input_classification/README.md +++ b/samples/c/hello_nv12_input_classification/README.md @@ -1,6 +1,6 @@ # Hello NV12 Input Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README} -This sample demonstrates how to execute an inference of image classification networks like AlexNet with images in NV12 color format using Synchronous Inference Request API. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_nv12_input_classification). +This sample demonstrates how to execute an inference of image classification networks like AlexNet with images in NV12 color format using Synchronous Inference Request API. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_nv12_input_classification). 
Hello NV12 Input Classification C Sample demonstrates how to use the NV12 automatic input pre-processing API of the Inference Engine in your applications: From c31c665e229c25f40800f80e832b8ab4accb48d4 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Wed, 8 Jun 2022 13:28:17 +0200 Subject: [PATCH 21/32] Update samples/cpp/classification_sample_async/README.md --- samples/cpp/classification_sample_async/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/classification_sample_async/README.md b/samples/cpp/classification_sample_async/README.md index 9f118e03687d6d..99e307dcbf216f 100644 --- a/samples/cpp/classification_sample_async/README.md +++ b/samples/cpp/classification_sample_async/README.md @@ -1,7 +1,7 @@ # Image Classification Async C++ Sample {#openvino_inference_engine_samples_classification_sample_async_README} This sample demonstrates how to do inference of image classification models using Asynchronous Inference Request API. -Models with only one input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/classification_sample_async). +Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/classification_sample_async). In addition to regular images, the sample also supports single-channel `ubyte` images as an input for LeNet model. 
From 0ad1508e7619d1d7312bef5c7cb56156c6a5b327 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Wed, 8 Jun 2022 13:28:21 +0200 Subject: [PATCH 22/32] Update samples/cpp/hello_classification/README.md --- samples/cpp/hello_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/hello_classification/README.md b/samples/cpp/hello_classification/README.md index 83a645dfe3f999..6effd5b0e71b18 100644 --- a/samples/cpp/hello_classification/README.md +++ b/samples/cpp/hello_classification/README.md @@ -1,7 +1,7 @@ # Hello Classification C++ Sample {#openvino_inference_engine_samples_hello_classification_README} This sample demonstrates how to do inference of image classification models using Synchronous Inference Request API. -Models with only one input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_classification). +Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_classification). 
The following C++ API is used in the application: From ddeb8a49e529311544ce8c5486e0759b2587b2e3 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Wed, 8 Jun 2022 13:28:26 +0200 Subject: [PATCH 23/32] Update samples/cpp/hello_nv12_input_classification/README.md --- samples/cpp/hello_nv12_input_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/cpp/hello_nv12_input_classification/README.md b/samples/cpp/hello_nv12_input_classification/README.md index 0d032d3c2c4b92..4ed39d78aac3c2 100644 --- a/samples/cpp/hello_nv12_input_classification/README.md +++ b/samples/cpp/hello_nv12_input_classification/README.md @@ -1,6 +1,6 @@ # Hello NV12 Input Classification C++ Sample {#openvino_inference_engine_samples_hello_nv12_input_classification_README} -This sample demonstrates how to execute an inference of image classification models with images in NV12 color format using Synchronous Inference Request API. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_nv12_input_classification). +This sample demonstrates how to execute inference of image classification models with images in NV12 color format using Synchronous Inference Request API. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_nv12_input_classification). 
The following C++ API is used in the application: From aeede6ad297b686851e14538cccdf8c911301fb9 Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Wed, 8 Jun 2022 13:28:30 +0200 Subject: [PATCH 24/32] Update samples/python/classification_sample_async/README.md --- samples/python/classification_sample_async/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/python/classification_sample_async/README.md b/samples/python/classification_sample_async/README.md index 35c7f2700c790c..3eb6d2ea423bad 100644 --- a/samples/python/classification_sample_async/README.md +++ b/samples/python/classification_sample_async/README.md @@ -1,7 +1,7 @@ # Image Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README} This sample demonstrates how to do inference of image classification models using Asynchronous Inference Request API. -Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/classification_sample_async). +Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/classification_sample_async). 
The following Python API is used in the application: From d5c6a23fec3b2e8f88631d39b953997ca270661b Mon Sep 17 00:00:00 2001 From: Karol Blaszczak Date: Wed, 8 Jun 2022 13:28:35 +0200 Subject: [PATCH 25/32] Update samples/python/hello_classification/README.md --- samples/python/hello_classification/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/samples/python/hello_classification/README.md b/samples/python/hello_classification/README.md index 2d6ee2f53ea639..59f74e93e2813a 100644 --- a/samples/python/hello_classification/README.md +++ b/samples/python/hello_classification/README.md @@ -1,7 +1,7 @@ # Hello Classification Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_classification_README} This sample demonstrates how to do inference of image classification models using Synchronous Inference Request API. -Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_classification). +Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_classification). 
 The following Python API is used in the application:

From c3f8c34df9754c6a4348691e14bec2cecb19a102 Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Wed, 8 Jun 2022 13:28:42 +0200
Subject: [PATCH 26/32] Update samples/python/hello_query_device/README.md

---
 samples/python/hello_query_device/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/samples/python/hello_query_device/README.md b/samples/python/hello_query_device/README.md
index 0f3c1aa174c65a..0863f311010b04 100644
--- a/samples/python/hello_query_device/README.md
+++ b/samples/python/hello_query_device/README.md
@@ -1,6 +1,6 @@
 # Hello Query Device Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README}

-This sample demonstrates how to show OpenVINO™ Runtime devices and prints their metrics and default configuration values using [Query Device API feature](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). The source code for this application is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_query_device).
+This sample demonstrates how to show OpenVINO™ Runtime devices and print their metrics and default configuration values, using the [Query Device API feature](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). Source code for this application is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_query_device).
 The following Python API is used in the application:

From c08c562d973d77bc77378c5c6698e475f69f84e1 Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Wed, 8 Jun 2022 13:28:48 +0200
Subject: [PATCH 27/32] Update samples/python/hello_reshape_ssd/README.md

---
 samples/python/hello_reshape_ssd/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/samples/python/hello_reshape_ssd/README.md b/samples/python/hello_reshape_ssd/README.md
index 159c9a3e319818..756b88016422f8 100644
--- a/samples/python/hello_reshape_ssd/README.md
+++ b/samples/python/hello_reshape_ssd/README.md
@@ -1,7 +1,7 @@
 # Hello Reshape SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README}

 This sample demonstrates how to do synchronous inference of object detection models using [Shape Inference feature](../../../docs/OV_Runtime_UG/ShapeInference.md).
-Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_reshape_ssd).
+Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_reshape_ssd).
 The following Python API is used in the application:

From ca8ca1da9668a3d2afc8181ee0f5a326cc55be7b Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Wed, 8 Jun 2022 13:28:53 +0200
Subject: [PATCH 28/32] Update samples/python/speech_sample/README.md

---
 samples/python/speech_sample/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/samples/python/speech_sample/README.md b/samples/python/speech_sample/README.md
index 2cbd6d1ad30457..ffc647c44a1293 100644
--- a/samples/python/speech_sample/README.md
+++ b/samples/python/speech_sample/README.md
@@ -1,6 +1,6 @@
 # Automatic Speech Recognition Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_speech_sample_README}

-This sample demonstrates how to do a Synchronous Inference of acoustic model based on Kaldi\* neural models and speech feature vectors. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/speech_sample).
+This sample demonstrates how to execute synchronous inference of an acoustic model based on Kaldi neural models and speech feature vectors. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/speech_sample).
 The sample works with Kaldi ARK or Numpy* uncompressed NPZ files, so it does not cover an end-to-end speech recognition scenario (speech to text), requiring additional preprocessing (feature extraction) to get a feature vector from a speech signal, as well as postprocessing (decoding) to produce text from scores.
From 55a74e868e7491d01fbe5374dbe9d69854d4b721 Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Wed, 8 Jun 2022 13:29:09 +0200
Subject: [PATCH 29/32] Update samples/cpp/hello_query_device/README.md

---
 samples/cpp/hello_query_device/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/samples/cpp/hello_query_device/README.md b/samples/cpp/hello_query_device/README.md
index ed44a41ccfd1ff..96620981bbc6e7 100644
--- a/samples/cpp/hello_query_device/README.md
+++ b/samples/cpp/hello_query_device/README.md
@@ -1,6 +1,6 @@
 # Hello Query Device C++ Sample {#openvino_inference_engine_samples_hello_query_device_README}

-This sample demonstrates how to execute an query OpenVINO™ Runtime devices, prints their metrics and default configuration values, using [Properties API](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_query_device).
+This sample demonstrates how to query OpenVINO™ Runtime devices and print their metrics and default configuration values, using the [Properties API](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_query_device).
 The following C++ API is used in the application:

From 463dd04d61bfb5f3f2d4b16b0a3f265ea7951438 Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Wed, 8 Jun 2022 13:29:30 +0200
Subject: [PATCH 30/32] Update samples/cpp/speech_sample/README.md

---
 samples/cpp/speech_sample/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/samples/cpp/speech_sample/README.md b/samples/cpp/speech_sample/README.md
index 78d40e4a90b282..ca596b28ad1484 100644
--- a/samples/cpp/speech_sample/README.md
+++ b/samples/cpp/speech_sample/README.md
@@ -1,6 +1,6 @@
 # Automatic Speech Recognition C++ Sample {#openvino_inference_engine_samples_speech_sample_README}

-This sample demonstrates how to execute an Asynchronous Inference of acoustic model based on Kaldi\* neural networks and speech feature vectors. The source code for this example is also availbe [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/speech_sample).
+This sample demonstrates how to execute asynchronous inference of an acoustic model based on Kaldi neural networks and speech feature vectors. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/speech_sample).
 The sample works with Kaldi ARK or Numpy* uncompressed NPZ files, so it does not cover an end-to-end speech recognition scenario (speech to text), requiring additional preprocessing (feature extraction) to get a feature vector from a speech signal, as well as postprocessing (decoding) to produce text from scores.
From e01841b05cf70dbd7356b1eba64ba001fadf478c Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Wed, 8 Jun 2022 13:29:42 +0200
Subject: [PATCH 31/32] Update samples/cpp/hello_reshape_ssd/README.md

---
 samples/cpp/hello_reshape_ssd/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/samples/cpp/hello_reshape_ssd/README.md b/samples/cpp/hello_reshape_ssd/README.md
index b0d9d3f926124a..4df02ffbe70c6c 100644
--- a/samples/cpp/hello_reshape_ssd/README.md
+++ b/samples/cpp/hello_reshape_ssd/README.md
@@ -1,7 +1,7 @@
 # Hello Reshape SSD C++ Sample {#openvino_inference_engine_samples_hello_reshape_ssd_README}

 This sample demonstrates how to do synchronous inference of object detection models using [input reshape feature](../../../docs/OV_Runtime_UG/ShapeInference.md).
-Models with only one input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_reshape_ssd).
+Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_reshape_ssd).
 The following C++ API is used in the application:

From 64ad9ddace1af06aec81a5b6b7beb75e9bc49ffa Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Wed, 8 Jun 2022 13:29:52 +0200
Subject: [PATCH 32/32] Update samples/cpp/model_creation_sample/README.md

---
 samples/cpp/model_creation_sample/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/samples/cpp/model_creation_sample/README.md b/samples/cpp/model_creation_sample/README.md
index 542355f755a60a..824083945b8149 100644
--- a/samples/cpp/model_creation_sample/README.md
+++ b/samples/cpp/model_creation_sample/README.md
@@ -1,6 +1,6 @@
 # Model Creation C++ Sample {#openvino_inference_engine_samples_model_creation_sample_README}

-This sample demonstrates how to execute an synchronous inference using [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly which uses weights from LeNet classification model, which is known to work well on digit classification tasks. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/model_creation_sample).
+This sample demonstrates how to execute synchronous inference using [a model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/model_creation_sample).
 You do not need an XML file to create a model. The API of ov::Model allows creating a model on the fly from the source code.