Docs: Add source code links to OpenVINO Samples (#11803)
* Docs: Add links to Samples source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Add link to source code on GitHub

* Update docs/OV_Runtime_UG/Samples_Overview.md

* Update samples/c/hello_classification/README.md

* Update samples/c/hello_nv12_input_classification/README.md

* Update samples/cpp/classification_sample_async/README.md

* Update samples/cpp/hello_classification/README.md

* Update samples/cpp/hello_nv12_input_classification/README.md

* Update samples/python/classification_sample_async/README.md

* Update samples/python/hello_classification/README.md

* Update samples/python/hello_query_device/README.md

* Update samples/python/hello_reshape_ssd/README.md

* Update samples/python/speech_sample/README.md

* Update samples/cpp/hello_query_device/README.md

* Update samples/cpp/speech_sample/README.md

* Update samples/cpp/hello_reshape_ssd/README.md

* Update samples/cpp/model_creation_sample/README.md

Co-authored-by: Karol Blaszczak <[email protected]>
EdjeElectronics and kblaszczak-intel authored Jun 8, 2022
1 parent f5d9e1d commit 1f229bc
Showing 16 changed files with 17 additions and 15 deletions.
2 changes: 2 additions & 0 deletions docs/OV_Runtime_UG/Samples_Overview.md
@@ -35,6 +35,8 @@ If you install OpenVINO™ Runtime, sample applications for C, C++, and Python
* `<INSTALL_DIR>/samples/cpp`
* `<INSTALL_DIR>/samples/python`

Source code for the samples is also available in the [OpenVINO™ samples repository on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples). If you installed OpenVINO™ Runtime using PyPI, samples are not installed locally and must be accessed through GitHub.

The applications include:

- **Speech Sample** - Acoustic model inference based on Kaldi neural networks and speech feature vectors.
2 changes: 1 addition & 1 deletion samples/c/hello_classification/README.md
@@ -1,6 +1,6 @@
# Hello Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_classification_README}

This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and input auto-resize feature.
This sample demonstrates how to execute inference of image classification networks like AlexNet and GoogLeNet using the Synchronous Inference Request API and the input auto-resize feature. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_classification).

Hello Classification C sample application demonstrates how to use the following Inference Engine C API in applications:

2 changes: 1 addition & 1 deletion samples/c/hello_nv12_input_classification/README.md
@@ -1,6 +1,6 @@
# Hello NV12 Input Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README}

This sample demonstrates how to execute an inference of image classification networks like AlexNet with images in NV12 color format using Synchronous Inference Request API.
This sample demonstrates how to execute inference of image classification networks like AlexNet with images in NV12 color format using the Synchronous Inference Request API. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_nv12_input_classification).

Hello NV12 Input Classification C Sample demonstrates how to use the NV12 automatic input pre-processing API of the Inference Engine in your applications:

2 changes: 1 addition & 1 deletion samples/cpp/classification_sample_async/README.md
@@ -1,7 +1,7 @@
# Image Classification Async C++ Sample {#openvino_inference_engine_samples_classification_sample_async_README}

This sample demonstrates how to do inference of image classification models using Asynchronous Inference Request API.
Models with only one input and output are supported.
Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/classification_sample_async).

In addition to regular images, the sample also supports single-channel `ubyte` images as an input for LeNet model.

2 changes: 1 addition & 1 deletion samples/cpp/hello_classification/README.md
@@ -1,7 +1,7 @@
# Hello Classification C++ Sample {#openvino_inference_engine_samples_hello_classification_README}

This sample demonstrates how to do inference of image classification models using Synchronous Inference Request API.
Models with only one input and output are supported.
Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_classification).

The following C++ API is used in the application:

2 changes: 1 addition & 1 deletion samples/cpp/hello_nv12_input_classification/README.md
@@ -1,6 +1,6 @@
# Hello NV12 Input Classification C++ Sample {#openvino_inference_engine_samples_hello_nv12_input_classification_README}

This sample demonstrates how to execute an inference of image classification models with images in NV12 color format using Synchronous Inference Request API.
This sample demonstrates how to execute inference of image classification models with images in NV12 color format using Synchronous Inference Request API. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_nv12_input_classification).

The following C++ API is used in the application:

2 changes: 1 addition & 1 deletion samples/cpp/hello_query_device/README.md
@@ -1,6 +1,6 @@
# Hello Query Device C++ Sample {#openvino_inference_engine_samples_hello_query_device_README}

This sample demonstrates how to execute an query OpenVINO™ Runtime devices, prints their metrics and default configuration values, using [Properties API](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md).
This sample demonstrates how to query OpenVINO™ Runtime devices and print their metrics and default configuration values, using the [Properties API](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_query_device).

The following C++ API is used in the application:

2 changes: 1 addition & 1 deletion samples/cpp/hello_reshape_ssd/README.md
@@ -1,7 +1,7 @@
# Hello Reshape SSD C++ Sample {#openvino_inference_engine_samples_hello_reshape_ssd_README}

This sample demonstrates how to do synchronous inference of object detection models using [input reshape feature](../../../docs/OV_Runtime_UG/ShapeInference.md).
Models with only one input and output are supported.
Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_reshape_ssd).

The following C++ API is used in the application:

2 changes: 1 addition & 1 deletion samples/cpp/model_creation_sample/README.md
@@ -1,6 +1,6 @@
# Model Creation C++ Sample {#openvino_inference_engine_samples_model_creation_sample_README}

This sample demonstrates how to execute an synchronous inference using [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly which uses weights from LeNet classification model, which is known to work well on digit classification tasks.
This sample demonstrates how to execute synchronous inference using [a model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/model_creation_sample).

You do not need an XML file to create a model. The API of ov::Model allows creating a model on the fly from the source code.

2 changes: 1 addition & 1 deletion samples/cpp/speech_sample/README.md
@@ -1,6 +1,6 @@
# Automatic Speech Recognition C++ Sample {#openvino_inference_engine_samples_speech_sample_README}

This sample demonstrates how to execute an Asynchronous Inference of acoustic model based on Kaldi\* neural networks and speech feature vectors.
This sample demonstrates how to execute asynchronous inference of an acoustic model based on Kaldi neural networks and speech feature vectors. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/speech_sample).

The sample works with Kaldi ARK or Numpy* uncompressed NPZ files, so it does not cover an end-to-end speech recognition scenario (speech to text), requiring additional preprocessing (feature extraction) to get a feature vector from a speech signal, as well as postprocessing (decoding) to produce text from scores.
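The NPZ input format mentioned above can be illustrated with a short NumPy sketch. This is a hypothetical feature file written and read back for illustration only; the utterance name, file name, and feature dimensions are invented, not taken from the sample:

```python
import numpy as np

# Build a toy utterance: 100 frames of 40-dimensional speech features.
# The utterance name "utt1" and the file name are hypothetical.
feats = np.random.rand(100, 40).astype(np.float32)
np.savez("utterance.npz", utt1=feats)  # uncompressed NPZ

# An NPZ file maps utterance names to feature matrices.
data = np.load("utterance.npz")
print(sorted(data.files), data["utt1"].shape)  # → ['utt1'] (100, 40)
```

Each array in the archive is one utterance's feature matrix (frames × features); the sample consumes files of this shape, while decoding the resulting scores into text is left to external postprocessing.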

2 changes: 1 addition & 1 deletion samples/python/classification_sample_async/README.md
@@ -1,7 +1,7 @@
# Image Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README}

This sample demonstrates how to do inference of image classification models using Asynchronous Inference Request API.
Models with only 1 input and output are supported.
Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/classification_sample_async).

The following Python API is used in the application:

2 changes: 1 addition & 1 deletion samples/python/hello_classification/README.md
@@ -1,7 +1,7 @@
# Hello Classification Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_classification_README}

This sample demonstrates how to do inference of image classification models using Synchronous Inference Request API.
Models with only 1 input and output are supported.
Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_classification).

The following Python API is used in the application:

2 changes: 1 addition & 1 deletion samples/python/hello_query_device/README.md
@@ -1,6 +1,6 @@
# Hello Query Device Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README}

This sample demonstrates how to show OpenVINO™ Runtime devices and prints their metrics and default configuration values using [Query Device API feature](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md).
This sample demonstrates how to list OpenVINO™ Runtime devices and print their metrics and default configuration values, using the [Query Device API feature](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). Source code for this application is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_query_device).

The following Python API is used in the application:

2 changes: 1 addition & 1 deletion samples/python/hello_reshape_ssd/README.md
@@ -1,7 +1,7 @@
# Hello Reshape SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README}

This sample demonstrates how to do synchronous inference of object detection models using [Shape Inference feature](../../../docs/OV_Runtime_UG/ShapeInference.md).
Models with only 1 input and output are supported.
Models with only one input and output are supported. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_reshape_ssd).

The following Python API is used in the application:

2 changes: 1 addition & 1 deletion samples/python/model_creation_sample/README.md
@@ -1,6 +1,6 @@
# Model Creation Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_model_creation_sample_README}

This sample demonstrates how to run inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file, the model is created from the source code on the fly.
This sample demonstrates how to run inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file, the model is created from the source code on the fly. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/model_creation_sample).

The following OpenVINO Python API is used in the application:

2 changes: 1 addition & 1 deletion samples/python/speech_sample/README.md
@@ -1,6 +1,6 @@
# Automatic Speech Recognition Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_speech_sample_README}

This sample demonstrates how to do a Synchronous Inference of acoustic model based on Kaldi\* neural models and speech feature vectors.
This sample demonstrates how to do synchronous inference of an acoustic model based on Kaldi neural models and speech feature vectors. Source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/speech_sample).

The sample works with Kaldi ARK or Numpy* uncompressed NPZ files, so it does not cover an end-to-end speech recognition scenario (speech to text), requiring additional preprocessing (feature extraction) to get a feature vector from a speech signal, as well as postprocessing (decoding) to produce text from scores.
