@endsphinxdirective

To download the GoogleNet v1 Caffe* model to the `models` folder:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      omz_downloader --name googlenet-v1 --output_dir ~/models

.. tab:: Windows

   .. code-block:: bat

      omz_downloader --name googlenet-v1 --output_dir %USERPROFILE%\Documents\models

.. tab:: macOS

   .. code-block:: sh

      omz_downloader --name googlenet-v1 --output_dir ~/models

@endsphinxdirective
After the download completes, your screen shows output similar to the following, with the paths of the downloaded files:
@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      ###############|| Downloading models ||###############

      ========= Downloading /home/username/models/public/googlenet-v1/googlenet-v1.prototxt

      ========= Downloading /home/username/models/public/googlenet-v1/googlenet-v1.caffemodel
      ... 100%, 4834 KB, 3157 KB/s, 1 seconds passed

      ###############|| Post processing ||###############

      ========= Replacing text in /home/username/models/public/googlenet-v1/googlenet-v1.prototxt =========

.. tab:: Windows

   .. code-block:: bat

      ################|| Downloading models ||################

      ========== Downloading C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.prototxt
      ... 100%, 9 KB, ? KB/s, 0 seconds passed

      ========== Downloading C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.caffemodel
      ... 100%, 4834 KB, 571 KB/s, 8 seconds passed

      ################|| Post-processing ||################

      ========== Replacing text in C:\Users\username\Documents\models\public\googlenet-v1\googlenet-v1.prototxt

.. tab:: macOS

   .. code-block:: sh

      ###############|| Downloading models ||###############

      ========= Downloading /Users/username/models/public/googlenet-v1/googlenet-v1.prototxt
      ... 100%, 9 KB, 44058 KB/s, 0 seconds passed

      ========= Downloading /Users/username/models/public/googlenet-v1/googlenet-v1.caffemodel
      ... 100%, 4834 KB, 4877 KB/s, 0 seconds passed

      ###############|| Post processing ||###############

      ========= Replacing text in /Users/username/models/public/googlenet-v1/googlenet-v1.prototxt =========

@endsphinxdirective
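Before moving on, you can confirm that both Caffe files landed where expected. The snippet below is a minimal sketch for Linux/macOS, assuming the default `~/models` output path used above; the `check_model_files` helper name is illustrative, not part of the toolkit:

```shell
# Sketch: verify that the downloaded GoogleNet v1 Caffe files exist.
# Assumes the ~/models output path from the Linux/macOS commands above.
check_model_files() {
    model_dir="$1"
    for f in googlenet-v1.prototxt googlenet-v1.caffemodel; do
        if [ -f "$model_dir/$f" ]; then
            echo "OK: $f"
        else
            echo "MISSING: $f"
        fi
    done
}

check_model_files "$HOME/models/public/googlenet-v1"
```

On Windows, point the helper at the `%USERPROFILE%\Documents\models\public\googlenet-v1` path from a Unix-like shell, or do the equivalent check in the file browser.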
REQUIRED: `model_name.xml`
REQUIRED: `model_name.bin`
OPTIONAL: `model_name.json`, `model_name.mapping`, etc.

This tutorial uses the public GoogleNet v1 Caffe* model to run the Image Classification Sample. See the example in the Download Models section of this page to learn how to download this model.

The googlenet-v1 model is downloaded in the Caffe* format. You must use the Model Optimizer to convert the model to IR.
Create an `<ir_dir>` directory to contain the model's Intermediate Representation (IR).

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      mkdir ~/ir

.. tab:: Windows

   .. code-block:: bat

      mkdir %USERPROFILE%\Documents\ir

.. tab:: macOS

   .. code-block:: sh

      mkdir ~/ir

@endsphinxdirective

The OpenVINO Runtime can perform inference on different precision formats, such as FP16 and FP32.

Generic Model Optimizer script:

``` sh
mo --input_model <model_dir>/<model_file> --data_type <model_precision> --output_dir <ir_dir>
```

IR files produced by the script are written to the `<ir_dir>` directory.
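To see how the generic script's placeholders map onto real flags, here is a small wrapper sketch. The `convert_to_ir` helper name is hypothetical; it prints the `mo` command as a dry run, since `mo` is only available inside an activated OpenVINO environment. Remove the `echo` to execute it:

```shell
# Hypothetical helper: assemble the Model Optimizer command from its three inputs.
# The echo makes this a dry run; delete it once mo is on your PATH.
convert_to_ir() {
    model_file="$1"   # path to the .caffemodel
    precision="$2"    # FP16 or FP32
    ir_dir="$3"       # output directory for the IR files
    echo mo --input_model "$model_file" --data_type "$precision" --output_dir "$ir_dir"
}

convert_to_ir ~/models/public/googlenet-v1/googlenet-v1.caffemodel FP16 ~/ir
```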
The command with most placeholders filled in and FP16 precision:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      mo --input_model ~/models/public/googlenet-v1/googlenet-v1.caffemodel --data_type FP16 --output_dir ~/ir

.. tab:: Windows

   .. code-block:: bat

      mo --input_model %USERPROFILE%\Documents\models\public\googlenet-v1\googlenet-v1.caffemodel --data_type FP16 --output_dir %USERPROFILE%\Documents\ir

.. tab:: macOS

   .. code-block:: sh

      mo --input_model ~/models/public/googlenet-v1/googlenet-v1.caffemodel --data_type FP16 --output_dir ~/ir

@endsphinxdirective
Many sources are available from which you can download video media to use the code samples and demo applications.

As an alternative, the Intel® Distribution of OpenVINO™ toolkit includes several sample images and videos that you can use for running code samples and demo applications:

- [Sample images and video](https://storage.openvinotoolkit.org/data/test_data/)
- [Sample videos](https://github.com/intel-iot-devkit/sample-videos)
## Step 4: Run Inference on the Sample

### Run the Image Classification Code Sample

To run the **Image Classification** code sample with an input image using the IR model:
1. Set up the OpenVINO environment variables:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      source <INSTALL_DIR>/setupvars.sh

.. tab:: Windows

   .. code-block:: bat

      <INSTALL_DIR>\setupvars.bat

.. tab:: macOS

   .. code-block:: sh

      source <INSTALL_DIR>/setupvars.sh

@endsphinxdirective
2. Go to the code samples build directory:

@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      cd ~/inference_engine_samples_build/intel64/Release

.. tab:: Windows

   .. code-block:: bat

      cd %USERPROFILE%\Documents\Intel\OpenVINO\inference_engine_samples_build\intel64\Release

.. tab:: macOS

   .. code-block:: sh

      cd ~/inference_engine_samples_build/intel64/Release

@endsphinxdirective
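If the sample later fails to find the OpenVINO libraries, step 1 is the usual culprit. `setupvars.sh` exports `INTEL_OPENVINO_DIR`, so a quick check can be sketched like this (the `check_openvino_env` helper name is illustrative):

```shell
# Sketch: confirm the OpenVINO environment variables are set in this shell.
check_openvino_env() {
    if [ -n "${INTEL_OPENVINO_DIR:-}" ]; then
        echo "OpenVINO environment set: $INTEL_OPENVINO_DIR"
    else
        echo "OpenVINO environment NOT set; source setupvars.sh first"
    fi
}

check_openvino_env
```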
The following commands run the Image Classification Code Sample using the [`dog.bmp`](https://storage.openvinotoolkit.org/data/test_data/images/224x224/dog.bmp) file as an input image and the IR-format model from the `ir` directory, on different hardware devices:
**CPU:**
@sphinxdirective
.. tab:: Linux

   .. code-block:: sh

      ./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d CPU

.. tab:: Windows

   .. code-block:: bat

      .\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d CPU

.. tab:: macOS

   .. code-block:: sh

      ./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d CPU

@endsphinxdirective
**GPU:**
> **NOTE**: Running inference on Intel® Processor Graphics (GPU) requires [additional hardware configuration steps](../install_guides/configurations-for-intel-gpu.md), as described earlier on this page. Running on GPU is not compatible with macOS*.
@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      ./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d GPU

.. tab:: Windows

   .. code-block:: bat

      .\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d GPU

@endsphinxdirective
**MYRIAD:**
> **NOTE**: Running inference on VPU devices (Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires [additional hardware configuration steps](../install_guides/configurations-for-ncs2.md), as described earlier on this page.
@sphinxdirective

.. tab:: Linux

   .. code-block:: sh

      ./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d MYRIAD

.. tab:: Windows

   .. code-block:: bat

      .\classification_sample_async.exe -i %USERPROFILE%\Downloads\dog.bmp -m %USERPROFILE%\Documents\ir\googlenet-v1.xml -d MYRIAD

.. tab:: macOS

   .. code-block:: sh

      ./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d MYRIAD

@endsphinxdirective
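The three per-device invocations above differ only in the `-d` argument, so they can be folded into one loop. This Linux/macOS sketch prints each command as a dry run so that a missing GPU or MYRIAD device does not abort the loop; remove the `echo` to execute:

```shell
# Sketch: build the sample command once per target device (dry run via echo).
for device in CPU GPU MYRIAD; do
    echo ./classification_sample_async -i ~/Downloads/dog.bmp -m ~/ir/googlenet-v1.xml -d "$device"
done
```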
When the sample application completes, you see the label and confidence for the top 10 categories on the display. Below is a sample output with inference results on CPU:
@sphinxdirective

.. code-block:: sh

   Top 10 results:

   Image dog.bmp

   classid probability label
   ------- ----------- -----
   156     0.6875963   Blenheim spaniel
   215     0.0868125   Brittany spaniel
   218     0.0784114   Welsh springer spaniel
   212     0.0597296   English setter
   217     0.0212105   English springer, English springer spaniel
   219     0.0194193   cocker spaniel, English cocker spaniel, cocker
   247     0.0086272   Saint Bernard, St Bernard
   157     0.0058511   papillon
   216     0.0057589   clumber, clumber spaniel
   154     0.0052615   Pekinese, Pekingese, Peke

@endsphinxdirective
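When scripting around the sample, the top-1 prediction can be pulled out of this output with standard tools. The sketch below feeds a captured fragment of the table above through `awk`; the heredoc stands in for a real saved log:

```shell
# Sketch: extract the top-1 class id and probability from saved sample output.
cat > /tmp/sample_output.txt <<'EOF'
classid probability label
------- ----------- -----
156     0.6875963   Blenheim spaniel
215     0.0868125   Brittany spaniel
EOF

# Row 3 is the first result row (rows 1-2 are the header and separator).
awk 'NR==3 { printf "top-1: classid=%s prob=%s\n", $1, $2 }' /tmp/sample_output.txt
# prints: top-1: classid=156 prob=0.6875963
```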