
Docs: Add source code links to OpenVINO Samples #11803

Merged (32 commits, Jun 8, 2022)

Commits (the changes shown below are from 17 of the 32 commits):
2a3ff17
Docs: Add links to Samples source code on GitHub
EdjeElectronics Jun 6, 2022
21d6b78
Add link to source code on GitHub
EdjeElectronics Jun 6, 2022
648946b
Add link to source code on GitHub
EdjeElectronics Jun 6, 2022
25fe307
Add link to source code on GitHub
EdjeElectronics Jun 6, 2022
386536d
Add link to source code on GitHub
EdjeElectronics Jun 6, 2022
4c84109
Add link to source code on GitHub
EdjeElectronics Jun 6, 2022
366cbac
Add link to source code on GitHub
EdjeElectronics Jun 6, 2022
1fa1902
Add link to source code on GitHub
EdjeElectronics Jun 6, 2022
700b4af
Add link to source code on GitHub
EdjeElectronics Jun 7, 2022
a826c92
Add link to source code on GitHub
EdjeElectronics Jun 7, 2022
de04cc1
Add link to source code on GitHub
EdjeElectronics Jun 7, 2022
c21cbf9
Add link to source code on GitHub
EdjeElectronics Jun 7, 2022
6a6e7dc
Add link to source code on GitHub
EdjeElectronics Jun 7, 2022
6986ed9
Add link to source code on GitHub
EdjeElectronics Jun 7, 2022
5c729be
Add link to source code on GitHub
EdjeElectronics Jun 7, 2022
f85b553
Add link to source code on GitHub
EdjeElectronics Jun 7, 2022
fe63a29
Add link to source code on GitHub
EdjeElectronics Jun 7, 2022
9881196
Update docs/OV_Runtime_UG/Samples_Overview.md
kblaszczak-intel Jun 8, 2022
27fe869
Update samples/c/hello_classification/README.md
kblaszczak-intel Jun 8, 2022
65f3deb
Update samples/c/hello_nv12_input_classification/README.md
kblaszczak-intel Jun 8, 2022
c31c665
Update samples/cpp/classification_sample_async/README.md
kblaszczak-intel Jun 8, 2022
0ad1508
Update samples/cpp/hello_classification/README.md
kblaszczak-intel Jun 8, 2022
ddeb8a4
Update samples/cpp/hello_nv12_input_classification/README.md
kblaszczak-intel Jun 8, 2022
aeede6a
Update samples/python/classification_sample_async/README.md
kblaszczak-intel Jun 8, 2022
d5c6a23
Update samples/python/hello_classification/README.md
kblaszczak-intel Jun 8, 2022
c3f8c34
Update samples/python/hello_query_device/README.md
kblaszczak-intel Jun 8, 2022
c08c562
Update samples/python/hello_reshape_ssd/README.md
kblaszczak-intel Jun 8, 2022
ca8ca1d
Update samples/python/speech_sample/README.md
kblaszczak-intel Jun 8, 2022
55a74e8
Update samples/cpp/hello_query_device/README.md
kblaszczak-intel Jun 8, 2022
463dd04
Update samples/cpp/speech_sample/README.md
kblaszczak-intel Jun 8, 2022
e01841b
Update samples/cpp/hello_reshape_ssd/README.md
kblaszczak-intel Jun 8, 2022
64ad9dd
Update samples/cpp/model_creation_sample/README.md
kblaszczak-intel Jun 8, 2022
2 changes: 2 additions & 0 deletions docs/OV_Runtime_UG/Samples_Overview.md
@@ -35,6 +35,8 @@ If you install OpenVINO™ Runtime, sample applications for C, C++, and Python
* `<INSTALL_DIR>/samples/cpp`
* `<INSTALL_DIR>/samples/python`

The source code for the samples is also available in the [OpenVINO™ samples repository on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples). If you installed OpenVINO™ Runtime using PyPI, the samples are not installed locally and must be accessed through GitHub.

The applications include:

- **Speech Sample** - Acoustic model inference based on Kaldi neural networks and speech feature vectors.
2 changes: 1 addition & 1 deletion samples/c/hello_classification/README.md
@@ -1,6 +1,6 @@
# Hello Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_classification_README}

This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and input auto-resize feature.
This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and input auto-resize feature. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_classification).

Hello Classification C sample application demonstrates how to use the following Inference Engine C API in applications:

2 changes: 1 addition & 1 deletion samples/c/hello_nv12_input_classification/README.md
@@ -1,6 +1,6 @@
# Hello NV12 Input Classification C Sample {#openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README}

This sample demonstrates how to execute an inference of image classification networks like AlexNet with images in NV12 color format using Synchronous Inference Request API.
This sample demonstrates how to execute an inference of image classification networks like AlexNet with images in NV12 color format using Synchronous Inference Request API. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/c/hello_nv12_input_classification).

Hello NV12 Input Classification C Sample demonstrates how to use the NV12 automatic input pre-processing API of the Inference Engine in your applications:

2 changes: 1 addition & 1 deletion samples/cpp/classification_sample_async/README.md
@@ -1,7 +1,7 @@
# Image Classification Async C++ Sample {#openvino_inference_engine_samples_classification_sample_async_README}

This sample demonstrates how to do inference of image classification models using Asynchronous Inference Request API.
Models with only one input and output are supported.
Models with only one input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/classification_sample_async).

In addition to regular images, the sample also supports single-channel `ubyte` images as an input for LeNet model.

2 changes: 1 addition & 1 deletion samples/cpp/hello_classification/README.md
@@ -1,7 +1,7 @@
# Hello Classification C++ Sample {#openvino_inference_engine_samples_hello_classification_README}

This sample demonstrates how to do inference of image classification models using Synchronous Inference Request API.
Models with only one input and output are supported.
Models with only one input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_classification).

The following C++ API is used in the application:

2 changes: 1 addition & 1 deletion samples/cpp/hello_nv12_input_classification/README.md
@@ -1,6 +1,6 @@
# Hello NV12 Input Classification C++ Sample {#openvino_inference_engine_samples_hello_nv12_input_classification_README}

This sample demonstrates how to execute an inference of image classification models with images in NV12 color format using Synchronous Inference Request API.
This sample demonstrates how to execute an inference of image classification models with images in NV12 color format using Synchronous Inference Request API. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_nv12_input_classification).

The following C++ API is used in the application:

2 changes: 1 addition & 1 deletion samples/cpp/hello_query_device/README.md
@@ -1,6 +1,6 @@
# Hello Query Device C++ Sample {#openvino_inference_engine_samples_hello_query_device_README}

This sample demonstrates how to execute an query OpenVINO™ Runtime devices, prints their metrics and default configuration values, using [Properties API](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md).
This sample demonstrates how to query OpenVINO™ Runtime devices and print their metrics and default configuration values, using [Properties API](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_query_device).

The following C++ API is used in the application:

2 changes: 1 addition & 1 deletion samples/cpp/hello_reshape_ssd/README.md
@@ -1,7 +1,7 @@
# Hello Reshape SSD C++ Sample {#openvino_inference_engine_samples_hello_reshape_ssd_README}

This sample demonstrates how to do synchronous inference of object detection models using [input reshape feature](../../../docs/OV_Runtime_UG/ShapeInference.md).
Models with only one input and output are supported.
Models with only one input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/hello_reshape_ssd).

The following C++ API is used in the application:

2 changes: 1 addition & 1 deletion samples/cpp/model_creation_sample/README.md
@@ -1,6 +1,6 @@
# Model Creation C++ Sample {#openvino_inference_engine_samples_model_creation_sample_README}

This sample demonstrates how to execute an synchronous inference using [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly which uses weights from LeNet classification model, which is known to work well on digit classification tasks.
This sample demonstrates how to execute a synchronous inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/model_creation_sample).

You do not need an XML file to create a model. The API of ov::Model allows creating a model on the fly from the source code.

2 changes: 1 addition & 1 deletion samples/cpp/speech_sample/README.md
@@ -1,6 +1,6 @@
# Automatic Speech Recognition C++ Sample {#openvino_inference_engine_samples_speech_sample_README}

This sample demonstrates how to execute an Asynchronous Inference of acoustic model based on Kaldi\* neural networks and speech feature vectors.
This sample demonstrates how to execute an Asynchronous Inference of an acoustic model based on Kaldi\* neural networks and speech feature vectors. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/cpp/speech_sample).

The sample works with Kaldi ARK or Numpy* uncompressed NPZ files, so it does not cover an end-to-end speech recognition scenario (speech to text), requiring additional preprocessing (feature extraction) to get a feature vector from a speech signal, as well as postprocessing (decoding) to produce text from scores.

2 changes: 1 addition & 1 deletion samples/python/classification_sample_async/README.md
@@ -1,7 +1,7 @@
# Image Classification Async Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_classification_sample_async_README}

This sample demonstrates how to do inference of image classification models using Asynchronous Inference Request API.
Models with only 1 input and output are supported.
Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/classification_sample_async).

The following Python API is used in the application:

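For context (not part of this PR's diff), a minimal sketch of what asynchronous classification with the 2022.1 `openvino.runtime` Python API can look like; the model path, random input batch, and job count are placeholders:

```python
import numpy as np
from openvino.runtime import AsyncInferQueue, Core

core = Core()
# "model.xml" is a placeholder; use any single-input classification model.
compiled = core.compile_model(core.read_model("model.xml"), "CPU")

results = {}

def on_done(request, frame_id):
    # Store the first output of the finished request, keyed by frame index.
    results[frame_id] = request.get_output_tensor(0).data.copy()

queue = AsyncInferQueue(compiled, jobs=4)   # pool of 4 parallel infer requests
queue.set_callback(on_done)

frames = np.random.rand(8, 1, 3, 224, 224).astype(np.float32)  # stand-in input batch
for i, frame in enumerate(frames):
    queue.start_async({0: frame}, userdata=i)
queue.wait_all()
print({i: int(np.argmax(r)) for i, r in results.items()})
```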
2 changes: 1 addition & 1 deletion samples/python/hello_classification/README.md
@@ -1,7 +1,7 @@
# Hello Classification Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_classification_README}

This sample demonstrates how to do inference of image classification models using Synchronous Inference Request API.
Models with only 1 input and output are supported.
Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_classification).

The following Python API is used in the application:

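Again outside the diff, a minimal sketch of the synchronous flow the Hello Classification samples describe, assuming the 2022.1 `openvino.runtime` Python API; the model path and input shape are illustrative:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")               # placeholder model path
compiled = core.compile_model(model, "CPU")

image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a preprocessed image
result = compiled([image])[compiled.output(0)]              # one synchronous inference
print("top class:", int(np.argmax(result)))
```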
2 changes: 1 addition & 1 deletion samples/python/hello_query_device/README.md
@@ -1,6 +1,6 @@
# Hello Query Device Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README}

This sample demonstrates how to show OpenVINO™ Runtime devices and prints their metrics and default configuration values using [Query Device API feature](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md).
This sample demonstrates how to show OpenVINO™ Runtime devices and print their metrics and default configuration values using [Query Device API feature](../../../docs/OV_Runtime_UG/supported_plugins/config_properties.md). The source code for this application is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/hello_query_device).

The following Python API is used in the application:

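For reference, a short sketch of the device query the Hello Query Device samples perform, assuming the 2022.1 `openvino.runtime` Python API:

```python
from openvino.runtime import Core

core = Core()
for device in core.available_devices:
    # FULL_DEVICE_NAME is one of the read-only properties every device reports.
    full_name = core.get_property(device, "FULL_DEVICE_NAME")
    print(f"{device}: {full_name}")
```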
2 changes: 1 addition & 1 deletion samples/python/hello_reshape_ssd/README.md
@@ -1,7 +1,7 @@
# Hello Reshape SSD Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_hello_reshape_ssd_README}

This sample demonstrates how to do synchronous inference of object detection models using [Shape Inference feature](../../../docs/OV_Runtime_UG/ShapeInference.md).
Models with only 1 input and output are supported.
Models with only 1 input and output are supported. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_reshape_ssd).

The following Python API is used in the application:

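Not from the sample itself, a minimal sketch of the reshape-before-compile step that the Hello Reshape SSD samples rely on, assuming the 2022.1 `openvino.runtime` Python API; the model path and target shape are illustrative:

```python
import numpy as np
from openvino.runtime import Core, PartialShape

core = Core()
model = core.read_model("ssd_model.xml")                       # placeholder single-input SSD model
model.reshape({model.input().any_name: PartialShape([1, 3, 480, 640])})  # match the frame size
compiled = core.compile_model(model, "CPU")

frame = np.zeros((1, 3, 480, 640), dtype=np.float32)           # stand-in for a real frame
detections = compiled([frame])[compiled.output(0)]
print(detections.shape)
```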
2 changes: 1 addition & 1 deletion samples/python/model_creation_sample/README.md
@@ -1,6 +1,6 @@
# Model Creation Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_model_creation_sample_README}

This sample demonstrates how to run inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file, the model is created from the source code on the fly.
This sample demonstrates how to run inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file; the model is created from the source code on the fly. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/model_creation_sample).

The following OpenVINO Python API is used in the application:

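As a rough illustration (not the LeNet network the sample builds), constructing a tiny model on the fly with the 2022.1 `openvino.runtime` opset API might look like this; the layer shapes and zero-filled weights are placeholders:

```python
import numpy as np
from openvino.runtime import Core, Model, opset8 as ops

data = ops.parameter([1, 3, 32, 32], np.float32, name="data")        # network input
weights = ops.constant(np.zeros((8, 3, 3, 3), dtype=np.float32))     # real code would load trained weights
conv = ops.convolution(data, weights, strides=[1, 1],
                       pads_begin=[1, 1], pads_end=[1, 1], dilations=[1, 1])
relu = ops.relu(conv)
model = Model([relu], [data], "tiny_net")                            # no XML file needed

compiled = Core().compile_model(model, "CPU")
print(compiled.output(0).shape)
```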
2 changes: 1 addition & 1 deletion samples/python/speech_sample/README.md
@@ -1,6 +1,6 @@
# Automatic Speech Recognition Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_speech_sample_README}

This sample demonstrates how to do a Synchronous Inference of acoustic model based on Kaldi\* neural models and speech feature vectors.
This sample demonstrates how to do a Synchronous Inference of an acoustic model based on Kaldi\* neural models and speech feature vectors. The source code for this example is also available [on GitHub](https://github.com/openvinotoolkit/openvino/tree/master/samples/python/speech_sample).

The sample works with Kaldi ARK or Numpy* uncompressed NPZ files, so it does not cover an end-to-end speech recognition scenario (speech to text), requiring additional preprocessing (feature extraction) to get a feature vector from a speech signal, as well as postprocessing (decoding) to produce text from scores.

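Finally, also outside the diff, a rough sketch of feeding NPZ feature files to a compiled acoustic model with the 2022.1 `openvino.runtime` Python API; the file names are placeholders, and the real sample additionally splits each utterance into fixed-size frame batches:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("acoustic_model.xml"), "CPU")  # placeholder model

features = np.load("features.npz")        # uncompressed NPZ: one feature matrix per utterance
for name in features.files:
    utterance = features[name].astype(np.float32)       # shape: [frames, feature_dim]
    scores = compiled([utterance])[compiled.output(0)]
    print(name, scores.shape)
```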