diff --git a/docs/HOWTO/Custom_Layers_Guide.md b/docs/HOWTO/Custom_Layers_Guide.md index 6ab7b1bb65581d..f9fd16145a747a 100644 --- a/docs/HOWTO/Custom_Layers_Guide.md +++ b/docs/HOWTO/Custom_Layers_Guide.md @@ -337,7 +337,7 @@ operation for the CPU plugin. The code of the library is described in the [Exte In order to build the extension run the following:
```bash mkdir build && cd build -source /opt/intel/openvino/bin/setupvars.sh +source /opt/intel/openvino_2021/bin/setupvars.sh cmake .. -DCMAKE_BUILD_TYPE=Release make --jobs=$(nproc) ``` diff --git a/docs/IE_DG/Deep_Learning_Inference_Engine_DevGuide.md b/docs/IE_DG/Deep_Learning_Inference_Engine_DevGuide.md index d4162a72cecb05..e8ea28038341f0 100644 --- a/docs/IE_DG/Deep_Learning_Inference_Engine_DevGuide.md +++ b/docs/IE_DG/Deep_Learning_Inference_Engine_DevGuide.md @@ -1,88 +1,120 @@ # Inference Engine Developer Guide {#openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide} -## Introduction to the OpenVINO™ Toolkit - -The OpenVINO™ toolkit is a comprehensive toolkit that you can use to develop and deploy vision-oriented solutions on -Intel® platforms. Vision-oriented means the solutions use images or videos to perform specific tasks. -A few of the solutions use cases include autonomous navigation, digital surveillance cameras, robotics, -and mixed-reality headsets. - -The OpenVINO™ toolkit: - -* Enables CNN-based deep learning inference on the edge -* Supports heterogeneous execution across an Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2 -* Speeds time-to-market via an easy-to-use library of computer vision functions and pre-optimized kernels -* Includes optimized calls for computer vision standards including OpenCV\*, OpenCL™, and OpenVX\* - -The OpenVINO™ toolkit includes the following components: - -* Intel® Deep Learning Deployment Toolkit (Intel® DLDT) - - [Deep Learning Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) — A cross-platform command-line tool for importing models and - preparing them for optimal execution with the Deep Learning Inference Engine. The Model Optimizer supports converting Caffe*, - TensorFlow*, MXNet*, Kaldi*, ONNX* models. - - [Deep Learning Inference Engine](inference_engine_intro.md) — A unified API to allow high performance inference on many hardware types - including Intel® CPU, Intel® Processor Graphics, Intel® FPGA, Intel® Neural Compute Stick 2. - - [nGraph](../nGraph_DG/nGraph_dg.md) — graph representation and manipulation engine which is used to represent a model inside Inference Engine and allows the run-time model construction without using Model Optimizer. -* [OpenCV](https://docs.opencv.org/) — OpenCV* community version compiled for Intel® hardware. -Includes PVL libraries for computer vision. -* Drivers and runtimes for OpenCL™ version 2.1 -* [Intel® Media SDK](https://software.intel.com/en-us/media-sdk) -* [OpenVX*](https://software.intel.com/en-us/cvsdk-ovx-guide) — Intel's implementation of OpenVX* -optimized for running on Intel® hardware (CPU, GPU, IPU). -* [Demos and samples](Samples_Overview.md). - - -This Guide provides overview of the Inference Engine describing the typical workflow for performing +> **NOTE:** [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019). + +This Guide provides an overview of the Inference Engine describing the typical workflow for performing inference of a pre-trained and optimized deep learning model and a set of sample applications. 
-> **NOTES:** -> - Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in run-time using nGraph API. To learn about how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_group_intel). -> - [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019). +> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in run-time using the nGraph API. To learn how to use the Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_group_intel). + +After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for given input data. + +Inference Engine is a set of C++ libraries providing a common API to deliver inference solutions on the platform of your choice: CPU, GPU, or VPU. Use the Inference Engine API to read the Intermediate Representation, set the input and output formats, and execute the model on devices. While the C++ libraries are the primary implementation, C libraries and Python bindings are also available. + +For the Intel® Distribution of OpenVINO™ toolkit, Inference Engine binaries are delivered within release packages. + +The open source version is available in the [OpenVINO™ toolkit GitHub repository](https://github.com/openvinotoolkit/openvino) and can be built for supported platforms using the Inference Engine Build Instructions. + +To learn how to use the Inference Engine API in your application, see the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation. + +For the complete API Reference, see the [Inference Engine API References](./api_references.html) section. + +Inference Engine uses a plugin architecture. An Inference Engine plugin is a software component that contains the complete implementation for inference on a certain Intel® hardware device: CPU, GPU, VPU, etc. Each plugin implements the unified API and provides additional hardware-specific APIs.
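The read-the-IR and device-plugin flow above can be seen end to end in a few lines of code. The snippet below is a minimal, illustrative sketch only: the model path `model.xml` and the printed messages are placeholders, not part of the product documentation, and it assumes the IR weights file `model.bin` sits next to the XML file.

```cpp
#include <inference_engine.hpp>

#include <iostream>
#include <string>

int main() {
    // The Core object loads and manages the available device plugins (CPU, GPU, MYRIAD, ...).
    InferenceEngine::Core core;
    for (const std::string& device : core.GetAvailableDevices()) {
        std::cout << "Available device: " << device << std::endl;
    }

    // Read an Intermediate Representation produced by the Model Optimizer.
    // "model.xml" is a placeholder path; the "model.bin" weights file is located automatically.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
    std::cout << "Loaded network: " << network.getName() << std::endl;
    return 0;
}
```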
+ +## Modules in the Inference Engine component +### Core Inference Engine Libraries ### + +Your application must link to the core Inference Engine libraries: +* Linux* OS: + - `libinference_engine.so`, which depends on `libinference_engine_transformations.so`, `libtbb.so`, `libtbbmalloc.so` and `libngraph.so` +* Windows* OS: + - `inference_engine.dll`, which depends on `inference_engine_transformations.dll`, `tbb.dll`, `tbbmalloc.dll` and `ngraph.dll` +* macOS*: + - `libinference_engine.dylib`, which depends on `libinference_engine_transformations.dylib`, `libtbb.dylib`, `libtbbmalloc.dylib` and `libngraph.dylib` + +The required C++ header files are located in the `include` directory. + +These libraries contain the classes to: +* Create an Inference Engine Core object to work with devices and read a network (InferenceEngine::Core) +* Manipulate network information (InferenceEngine::CNNNetwork) +* Execute and pass inputs and outputs (InferenceEngine::ExecutableNetwork and InferenceEngine::InferRequest) + +### Plugin Libraries to Read a Network Object ### + +Starting from the 2020.4 release, the Inference Engine introduced a concept of `CNNNetwork` reader plugins. Such plugins are loaded dynamically by the Inference Engine at runtime depending on the file format: +* Linux* OS: + - `libinference_engine_ir_reader.so` to read a network from IR + - `libinference_engine_onnx_reader.so` to read a network from ONNX model format +* Windows* OS: + - `inference_engine_ir_reader.dll` to read a network from IR + - `inference_engine_onnx_reader.dll` to read a network from ONNX model format + +### Device-Specific Plugin Libraries ### + +For each supported target device, Inference Engine provides a plugin — a DLL/shared library that contains the complete implementation for inference on this particular device. The following plugins are available: + +| Plugin | Device Type | +| ------- | ----------------------------- | +|CPU | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE | +|GPU | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics | +|MYRIAD | Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X | +|GNA | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor, Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, Intel® Core™ i3-1000G4 Processor | +|HETERO | Automatic splitting of a network inference between several devices (for example, if a device doesn't support certain layers)| +|MULTI | Simultaneous inference of the same network on several devices in parallel| + +The table below shows the plugin libraries and additional dependencies for Linux, Windows and macOS platforms.
+ +| Plugin | Library name for Linux | Dependency libraries for Linux | Library name for Windows | Dependency libraries for Windows | Library name for macOS | Dependency libraries for macOS | +|--------|-----------------------------|-------------------------------------------------------------|--------------------------|--------------------------------------------------------------------------------------------------------|------------------------------|---------------------------------------------| +| CPU | `libMKLDNNPlugin.so` | `libinference_engine_lp_transformations.so` | `MKLDNNPlugin.dll` | `inference_engine_lp_transformations.dll` | `libMKLDNNPlugin.so` | `inference_engine_lp_transformations.dylib` | +| GPU | `libclDNNPlugin.so` | `libinference_engine_lp_transformations.so`, `libOpenCL.so` | `clDNNPlugin.dll` | `OpenCL.dll`, `inference_engine_lp_transformations.dll` | Is not supported | - | +| MYRIAD | `libmyriadPlugin.so` | `libusb.so`, | `myriadPlugin.dll` | `usb.dll` | `libmyriadPlugin.so` | `libusb.dylib` | +| HDDL | `libHDDLPlugin.so` | `libbsl.so`, `libhddlapi.so`, `libmvnc-hddl.so` | `HDDLPlugin.dll` | `bsl.dll`, `hddlapi.dll`, `json-c.dll`, `libcrypto-1_1-x64.dll`, `libssl-1_1-x64.dll`, `mvnc-hddl.dll` | Is not supported | - | +| GNA | `libGNAPlugin.so` | `libgna.so`, | `GNAPlugin.dll` | `gna.dll` | Is not supported | - | +| HETERO | `libHeteroPlugin.so` | Same as for selected plugins | `HeteroPlugin.dll` | Same as for selected plugins | `libHeteroPlugin.so` | Same as for selected plugins | +| MULTI | `libMultiDevicePlugin.so` | Same as for selected plugins | `MultiDevicePlugin.dll` | Same as for selected plugins | `libMultiDevicePlugin.so` | Same as for selected plugins | +> **NOTE**: All plugin libraries also depend on core Inference Engine libraries. -## Table of Contents +Make sure those libraries are in your computer's path or in the place you pointed to in the plugin loader. Make sure each plugin's related dependencies are in the: -* [Inference Engine API Changes History](API_Changes.md) +* Linux: `LD_LIBRARY_PATH` +* Windows: `PATH` +* macOS: `DYLD_LIBRARY_PATH` -* [Introduction to Inference Engine](inference_engine_intro.md) +On Linux and macOS, use the script `bin/setupvars.sh` to set the environment variables. -* [Understanding Inference Engine Memory Primitives](Memory_primitives.md) +On Windows, run the `bin\setupvars.bat` batch file to set the environment variables. -* [Introduction to Inference Engine Device Query API](InferenceEngine_QueryAPI.md) +To learn more about supported devices and corresponding plugins, see the [Supported Devices](supported_plugins/Supported_Devices.md) chapter. -* [Adding Your Own Layers to the Inference Engine](Extensibility_DG/Intro.md) +## Common Workflow for Using the Inference Engine API -* [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) +The common workflow contains the following steps: -* [[DEPRECATED] Migration from Inference Engine Plugin API to Core API](Migration_CoreAPI.md) +1. **Create Inference Engine Core object** - Create an `InferenceEngine::Core` object to work with different devices, all device plugins are managed internally by the `Core` object. Register extensions with custom nGraph operations (`InferenceEngine::Core::AddExtension`). -* [Introduction to Performance Topics](Intro_to_Performance.md) +2. 
**Read the Intermediate Representation** - Using the `InferenceEngine::Core` class, read an Intermediate Representation file into an object of the `InferenceEngine::CNNNetwork` class. This class represents the network in the host memory. -* [Inference Engine Python API Overview](../../inference-engine/ie_bridges/python/docs/api_overview.md) +3. **Prepare input and output formats** - After loading the network, specify the input and output precision and layout on the network. For these specifications, use the `InferenceEngine::CNNNetwork::getInputsInfo()` and `InferenceEngine::CNNNetwork::getOutputsInfo()` methods. -* [Using Dynamic Batching feature](DynamicBatching.md) +4. Pass per-device loading configurations specific to this device (`InferenceEngine::Core::SetConfig`), and register extensions to this device (`InferenceEngine::Core::AddExtension`). -* [Using Static Shape Infer feature](ShapeInference.md) +5. **Compile and Load Network to device** - Use the `InferenceEngine::Core::LoadNetwork()` method with a specific device (e.g. `CPU`, `GPU`) to compile and load the network on the device. Pass in the per-target load configuration for this compilation and load operation. -* [Using Low-Precision 8-bit Integer Inference](Int8Inference.md) +6. **Set input data** - With the network loaded, you have an `InferenceEngine::ExecutableNetwork` object. Use this object to create an `InferenceEngine::InferRequest` in which you signal the input buffers to use for input and output. Specify device-allocated memory and copy it into the device memory directly, or tell the device to use your application memory to save a copy. -* [Using Bfloat16 Inference](Bfloat16Inference.md) +7. **Execute** - With the input and output memory now defined, choose your execution mode: -* Utilities to Validate Your Converted Model - * [Using Cross Check Tool for Per-Layer Comparison Between Plugins](../../inference-engine/tools/cross_check_tool/README.md) + * Synchronously - `InferenceEngine::InferRequest::Infer()` method. Blocks until inference is completed. + * Asynchronously - `InferenceEngine::InferRequest::StartAsync()` method. Check status with the `InferenceEngine::InferRequest::Wait()` method (0 timeout), wait, or specify a completion callback. -* [Supported Devices](supported_plugins/Supported_Devices.md) - * [GPU](supported_plugins/CL_DNN.md) - * [CPU](supported_plugins/CPU.md) - * [VPU](supported_plugins/VPU.md) - * [MYRIAD](supported_plugins/MYRIAD.md) - * [HDDL](supported_plugins/HDDL.md) - * [Heterogeneous execution](supported_plugins/HETERO.md) - * [GNA](supported_plugins/GNA.md) - * [MULTI](supported_plugins/MULTI.md) +8. **Get the output** - After inference is completed, get the output memory or read the memory you provided earlier. Do this with the `InferenceEngine::InferRequest::GetBlob()` method. A condensed code sketch of this workflow is shown below. -* [Pre-Trained Models](@ref omz_models_group_intel) +## Video: Inference Engine Concept +[![](https://img.youtube.com/vi/e6R13V8nbak/0.jpg)](https://www.youtube.com/watch?v=e6R13V8nbak) + -* [Known Issues](Known_Issues_Limitations.md) +## Further Reading -**Typical Next Step:** [Introduction to Inference Engine](inference_engine_intro.md) +For more details on the Inference Engine API, refer to the [Integrating Inference Engine in Your Application](Integrate_with_customer_application_new_API.md) documentation.
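The following is a condensed sketch of the eight workflow steps above for a synchronous request. It is illustrative only: it assumes an IR named `model.xml` with a single input and a single output, and the device name, precision, and layout values are placeholders to be replaced with ones appropriate for your model.

```cpp
#include <inference_engine.hpp>

#include <string>

int main() {
    // 1. Create the Inference Engine Core object; it manages the device plugins internally.
    InferenceEngine::Core core;

    // 2. Read the Intermediate Representation into a CNNNetwork object (host memory).
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // 3. Prepare the input and output formats: precision and layout.
    InferenceEngine::InputsDataMap inputsInfo = network.getInputsInfo();
    const std::string inputName = inputsInfo.begin()->first;
    inputsInfo.begin()->second->setPrecision(InferenceEngine::Precision::U8);
    inputsInfo.begin()->second->setLayout(InferenceEngine::Layout::NCHW);

    InferenceEngine::OutputsDataMap outputsInfo = network.getOutputsInfo();
    const std::string outputName = outputsInfo.begin()->first;
    outputsInfo.begin()->second->setPrecision(InferenceEngine::Precision::FP32);

    // 4.-5. Configure, compile, and load the network to a device ("CPU" is a placeholder name).
    InferenceEngine::ExecutableNetwork executableNetwork = core.LoadNetwork(network, "CPU");

    // 6. Create an infer request and obtain the input blob to fill with application data.
    InferenceEngine::InferRequest inferRequest = executableNetwork.CreateInferRequest();
    InferenceEngine::Blob::Ptr inputBlob = inferRequest.GetBlob(inputName);
    // ... copy the pre-processed input (for example, an image) into inputBlob here ...

    // 7. Execute synchronously.
    inferRequest.Infer();

    // 8. Get the output blob and read the inference results.
    InferenceEngine::Blob::Ptr outputBlob = inferRequest.GetBlob(outputName);
    if (auto memoryBlob = InferenceEngine::as<InferenceEngine::MemoryBlob>(outputBlob)) {
        auto mappedMemory = memoryBlob->rmap();
        const float* results = mappedMemory.as<const float*>();
        (void)results;  // post-process the results according to the model semantics
    }
    return 0;
}
```

For the asynchronous mode, the same request object exposes the `StartAsync()` and `Wait()` methods described in step 7.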
diff --git a/docs/IE_DG/Samples_Overview.md b/docs/IE_DG/Samples_Overview.md index a89b56761ade88..d3d749549a68d5 100644 --- a/docs/IE_DG/Samples_Overview.md +++ b/docs/IE_DG/Samples_Overview.md @@ -205,7 +205,7 @@ vi /.bashrc 2. Add this line to the end of the file: ```sh -source /opt/intel/openvino/bin/setupvars.sh +source /opt/intel/openvino_2021/bin/setupvars.sh ``` 3. Save and close the file: press the **Esc** key, type `:wq` and press the **Enter** key. @@ -242,4 +242,4 @@ sample, read the sample documentation by clicking the sample name in the samples list above. ## See Also -* [Introduction to Inference Engine](inference_engine_intro.md) +* [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md) diff --git a/docs/IE_DG/ShapeInference.md b/docs/IE_DG/ShapeInference.md index 0d8bf23ca3cd68..93b27c621b50ce 100644 --- a/docs/IE_DG/ShapeInference.md +++ b/docs/IE_DG/ShapeInference.md @@ -66,8 +66,8 @@ Shape collision during shape propagation may be a sign that a new shape does not Changing the model input shape may result in intermediate operations shape collision. Examples of such operations: -- [`Reshape` operation](../ops/shape/Reshape_1.md) with a hard-coded output shape value -- [`MatMul` operation](../ops/matrix/MatMul_1.md) with the `Const` second input cannot be resized by spatial dimensions due to operation semantics +- [Reshape](../ops/shape/Reshape_1.md) operation with a hard-coded output shape value +- [MatMul](../ops/matrix/MatMul_1.md) operation with the `Const` second input cannot be resized by spatial dimensions due to operation semantics Model structure and logic should not change significantly after model reshaping. - The Global Pooling operation is commonly used to reduce output feature map of classification models output. diff --git a/docs/IE_DG/inference_engine_intro.md b/docs/IE_DG/inference_engine_intro.md index 41e8711e366acb..4859ea11da0172 100644 --- a/docs/IE_DG/inference_engine_intro.md +++ b/docs/IE_DG/inference_engine_intro.md @@ -1,5 +1,11 @@ -Introduction to Inference Engine {#openvino_docs_IE_DG_inference_engine_intro} -================================ +# Introduction to Inference Engine {#openvino_docs_IE_DG_inference_engine_intro} + +> **NOTE:** [Intel® System Studio](https://software.intel.com/en-us/system-studio) is an all-in-one, cross-platform tool suite, purpose-built to simplify system bring-up and improve system and IoT device application performance on Intel® platforms. If you are using the Intel® Distribution of OpenVINO™ with Intel® System Studio, go to [Get Started with Intel® System Studio](https://software.intel.com/en-us/articles/get-started-with-openvino-and-intel-system-studio-2019). + +This Guide provides an overview of the Inference Engine describing the typical workflow for performing +inference of a pre-trained and optimized deep learning model and a set of sample applications. + +> **NOTE:** Before you perform inference with the Inference Engine, your models should be converted to the Inference Engine format using the Model Optimizer or built directly in run-time using nGraph API. To learn about how to use Model Optimizer, refer to the [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). To learn about the pre-trained and optimized models delivered with the OpenVINO™ toolkit, refer to [Pre-Trained Models](@ref omz_models_intel_index). 
After you have used the Model Optimizer to create an Intermediate Representation (IR), use the Inference Engine to infer the result for a given input data. diff --git a/docs/IE_DG/supported_plugins/MULTI.md b/docs/IE_DG/supported_plugins/MULTI.md index a6b4aaefc9f1c9..167c79b3c7eb65 100644 --- a/docs/IE_DG/supported_plugins/MULTI.md +++ b/docs/IE_DG/supported_plugins/MULTI.md @@ -92,11 +92,18 @@ Notice that until R2 you had to calculate number of requests in your application Notice that every OpenVINO sample that supports "-d" (which stays for "device") command-line option transparently accepts the multi-device. The [Benchmark Application](../../../inference-engine/samples/benchmark_app/README.md) is the best reference to the optimal usage of the multi-device. As discussed multiple times earlier, you don't need to setup number of requests, CPU streams or threads as the application provides optimal out of the box performance. Below is example command-line to evaluate HDDL+GPU performance with that: -```bash -$ ./benchmark_app –d MULTI:HDDL,GPU –m -i -niter 1000 + +```sh +./benchmark_app –d MULTI:HDDL,GPU –m -i -niter 1000 ``` Notice that you can use the FP16 IR to work with multi-device (as CPU automatically upconverts it to the fp32) and rest of devices support it naturally. Also notice that no demos are (yet) fully optimized for the multi-device, by means of supporting the OPTIMAL_NUMBER_OF_INFER_REQUESTS metric, using the GPU streams/throttling, and so on. +## Video: MULTI Plugin +[![](https://img.youtube.com/vi/xbORYFEmrqU/0.jpg)](https://www.youtube.com/watch?v=xbORYFEmrqU) + + ## See Also * [Supported Devices](Supported_Devices.md) + + diff --git a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md index cd9245c3e69646..c8b7f2bcb98e82 100644 --- a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md +++ b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md @@ -111,3 +111,16 @@ Model Optimizer produces an Intermediate Representation (IR) of the network, whi * [Known Issues](Known_Issues_Limitations.md) **Typical Next Step:** [Preparing and Optimizing your Trained Model with Model Optimizer](prepare_model/Prepare_Trained_Model.md) + +## Video: Model Optimizer Concept + +[![](https://img.youtube.com/vi/Kl1ptVb7aI8/0.jpg)](https://www.youtube.com/watch?v=Kl1ptVb7aI8) + + +## Video: Model Optimizer Basic Operation +[![](https://img.youtube.com/vi/BBt1rseDcy0/0.jpg)](https://www.youtube.com/watch?v=BBt1rseDcy0) + + +## Video: Choosing the Right Precision +[![](https://img.youtube.com/vi/RF8ypHyiKrY/0.jpg)](https://www.youtube.com/watch?v=RF8ypHyiKrY) + \ No newline at end of file diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md index 9a608d8e607b00..d0342efdccd30d 100644 --- a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md +++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md @@ -367,6 +367,10 @@ Refer to [Supported Framework Layers ](../Supported_Frameworks_Layers.md) for th The Model Optimizer provides explanatory messages if it is unable to run to completion due to issues like typographical errors, incorrectly used options, or other issues. The message describes the potential cause of the problem and gives a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md). The FAQ has instructions on how to resolve most issues. 
The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong. +## Video: Converting a TensorFlow Model +[![](https://img.youtube.com/vi/QW6532LtiTc/0.jpg)](https://www.youtube.com/watch?v=QW6532LtiTc) + + ## Summary In this document, you learned: diff --git a/docs/benchmarks/performance_benchmarks.md b/docs/benchmarks/performance_benchmarks.md index fedb3a923f7ca8..7969b2929ffecb 100644 --- a/docs/benchmarks/performance_benchmarks.md +++ b/docs/benchmarks/performance_benchmarks.md @@ -10,252 +10,3 @@ Use the links below to review the benchmarking results for each alternative: * [OpenVINO™ Model Server Benchmark Results](performance_benchmarks_ovms.md) Performance for a particular application can also be evaluated virtually using [Intel® DevCloud for the Edge](https://devcloud.intel.com/edge/), a remote development environment with access to Intel® hardware and the latest versions of the Intel® Distribution of the OpenVINO™ Toolkit. [Learn more](https://devcloud.intel.com/edge/get_started/devcloud/) or [Register here](https://inteliot.force.com/DevcloudForEdge/s/). - -\htmlonly - - - - - - - - - - -\endhtmlonly - - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - - -\htmlonly - -\endhtmlonly - -\htmlonly - -\endhtmlonly - - -## Platform Configurations - -Intel® Distribution of OpenVINO™ toolkit performance benchmark numbers are based on release 2021.2. - -Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. Performance results are based on testing as of December 9, 2020 and may not reflect all publicly available updates. See configuration disclosure for details. No product can be absolutely secure. - -Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex). - -Your costs and results may vary. - -© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. - -Intel optimizations, for Intel compilers or other products, may not optimize to the same degree for non-Intel products. - -Testing by Intel done on: see test date for each HW platform below. 
- -**CPU Inference Engines** - -| | Intel® Xeon® E-2124G | Intel® Xeon® W1290P | Intel® Xeon® Silver 4216R | -| ------------------------------- | ---------------------- | --------------------------- | ---------------------------- | -| Motherboard | ASUS* WS C246 PRO | ASUS* WS W480-ACE | Intel® Server Board S2600STB | -| CPU | Intel® Xeon® E-2124G CPU @ 3.40GHz | Intel® Xeon® W-1290P CPU @ 3.70GHz | Intel® Xeon® Silver 4216R CPU @ 2.20GHz | -| Hyper Threading | OFF | ON | ON | -| Turbo Setting | ON | ON | ON | -| Memory | 2 x 16 GB DDR4 2666MHz | 4 x 16 GB DDR4 @ 2666MHz |12 x 32 GB DDR4 2666MHz | -| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | -| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic | -| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc. | Intel Corporation | -| BIOS Version | 0904 | 607 | SE5C620.86B.02.01.
0009.092820190230 | -| BIOS Release | April 12, 2019 | May 29, 2020 | September 28, 2019 | -| BIOS Settings | Select optimized default settings,
save & exit | Select optimized default settings,
save & exit | Select optimized default settings,
change power policy
to "performance",
save & exit | -| Batch size | 1 | 1 | 1 -| Precision | INT8 | INT8 | INT8 -| Number of concurrent inference requests | 4 | 5 | 32 -| Test Date | December 9, 2020 | December 9, 2020 | December 9, 2020 -| Power dissipation, TDP in Watt | [71](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html#tab-blade-1-0-1) | [125](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) | [125](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | -| CPU Price on September 29, 2020, USD
Prices may vary | [213](https://ark.intel.com/content/www/us/en/ark/products/134854/intel-xeon-e-2124g-processor-8m-cache-up-to-4-50-ghz.html) | [539](https://ark.intel.com/content/www/us/en/ark/products/199336/intel-xeon-w-1290p-processor-20m-cache-3-70-ghz.html) |[1,002](https://ark.intel.com/content/www/us/en/ark/products/193394/intel-xeon-silver-4216-processor-22m-cache-2-10-ghz.html) | - -**CPU Inference Engines (continue)** - -| | Intel® Xeon® Gold 5218T | Intel® Xeon® Platinum 8270 | -| ------------------------------- | ---------------------------- | ---------------------------- | -| Motherboard | Intel® Server Board S2600STB | Intel® Server Board S2600STB | -| CPU | Intel® Xeon® Gold 5218T CPU @ 2.10GHz | Intel® Xeon® Platinum 8270 CPU @ 2.70GHz | -| Hyper Threading | ON | ON | -| Turbo Setting | ON | ON | -| Memory | 12 x 32 GB DDR4 2666MHz | 12 x 32 GB DDR4 2933MHz | -| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | -| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | -| BIOS Vendor | Intel Corporation | Intel Corporation | -| BIOS Version | SE5C620.86B.02.01.
0009.092820190230 | SE5C620.86B.02.01.
0009.092820190230 | -| BIOS Release | September 28, 2019 | September 28, 2019 | -| BIOS Settings | Select optimized default settings,
change power policy to "performance",
save & exit | Select optimized default settings,
change power policy to "performance",
save & exit | -| Batch size | 1 | 1 | -| Precision | INT8 | INT8 | -| Number of concurrent inference requests |32 | 52 | -| Test Date | December 9, 2020 | December 9, 2020 | -| Power dissipation, TDP in Watt | [105](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html#tab-blade-1-0-1) | [205](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html#tab-blade-1-0-1) | -| CPU Price on September 29, 2020, USD
Prices may vary | [1,349](https://ark.intel.com/content/www/us/en/ark/products/193953/intel-xeon-gold-5218t-processor-22m-cache-2-10-ghz.html) | [7,405](https://ark.intel.com/content/www/us/en/ark/products/192482/intel-xeon-platinum-8270-processor-35-75m-cache-2-70-ghz.html) | - - -**CPU Inference Engines (continue)** - -| | Intel® Core™ i7-8700T | Intel® Core™ i9-10920X | Intel® Core™ i9-10900TE
(iEi Flex BX210AI)| 11th Gen Intel® Core™ i7-1185G7 | -| -------------------- | ----------------------------------- |--------------------------------------| ---------------------------------------------|---------------------------------| -| Motherboard | GIGABYTE* Z370M DS3H-CF | ASUS* PRIME X299-A II | iEi / B595 | Intel Corporation
internal/Reference
Validation Platform | -| CPU | Intel® Core™ i7-8700T CPU @ 2.40GHz | Intel® Core™ i9-10920X CPU @ 3.50GHz | Intel® Core™ i9-10900TE CPU @ 1.80GHz | 11th Gen Intel® Core™ i7-1185G7 @ 3.00GHz | -| Hyper Threading | ON | ON | ON | ON | -| Turbo Setting | ON | ON | ON | ON | -| Memory | 4 x 16 GB DDR4 2400MHz | 4 x 16 GB DDR4 2666MHz | 2 x 8 GB DDR4 @ 2400MHz | 2 x 8 GB DDR4 3200MHz | -| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | -| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.8.0-05-generic | 5.8.0-05-generic | -| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | American Megatrends Inc.* | Intel Corporation | -| BIOS Version | F11 | 505 | Z667AR10 | TGLSFWI1.R00.3425.
A00.2010162309 | -| BIOS Release | March 13, 2019 | December 17, 2019 | July 15, 2020 | October 16, 2020 | -| BIOS Settings | Select optimized default settings,
set OS type to "other",
save & exit | Default Settings | Default Settings | Default Settings | -| Batch size | 1 | 1 | 1 | 1 | -| Precision | INT8 | INT8 | INT8 | INT8 | -| Number of concurrent inference requests |4 | 24 | 5 | 4 | -| Test Date | December 9, 2020 | December 9, 2020 | December 9, 2020 | December 9, 2020 | -| Power dissipation, TDP in Watt | [35](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html#tab-blade-1-0-1) | [165](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [35](https://ark.intel.com/content/www/us/en/ark/products/203901/intel-core-i9-10900te-processor-20m-cache-up-to-4-60-ghz.html) | [28](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html#tab-blade-1-0-1) | -| CPU Price on September 29, 2020, USD
Prices may vary | [303](https://ark.intel.com/content/www/us/en/ark/products/129948/intel-core-i7-8700t-processor-12m-cache-up-to-4-00-ghz.html) | [700](https://ark.intel.com/content/www/us/en/ark/products/198012/intel-core-i9-10920x-x-series-processor-19-25m-cache-3-50-ghz.html) | [444](https://ark.intel.com/content/www/us/en/ark/products/203901/intel-core-i9-10900te-processor-20m-cache-up-to-4-60-ghz.html) | [426](https://ark.intel.com/content/www/us/en/ark/products/208664/intel-core-i7-1185g7-processor-12m-cache-up-to-4-80-ghz-with-ipu.html#tab-blade-1-0-0) | - - -**CPU Inference Engines (continue)** - -| | Intel® Core™ i5-8500 | Intel® Core™ i5-10500TE | Intel® Core™ i5-10500TE
(iEi Flex-BX210AI)| -| -------------------- | ---------------------------------- | ----------------------------------- |-------------------------------------- | -| Motherboard | ASUS* PRIME Z370-A | GIGABYTE* Z490 AORUS PRO AX | iEi / B595 | -| CPU | Intel® Core™ i5-8500 CPU @ 3.00GHz | Intel® Core™ i5-10500TE CPU @ 2.30GHz | Intel® Core™ i5-10500TE CPU @ 2.30GHz | -| Hyper Threading | OFF | ON | ON | -| Turbo Setting | ON | ON | ON | -| Memory | 2 x 16 GB DDR4 2666MHz | 2 x 16 GB DDR4 @ 2666MHz | 1 x 8 GB DDR4 @ 2400MHz | -| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | -| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | 5.3.0-24-generic | -| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | American Megatrends Inc.* | -| BIOS Version | 2401 | F3 | Z667AR10 | -| BIOS Release | July 12, 2019 | March 25, 2020 | July 17, 2020 | -| BIOS Settings | Select optimized default settings,
save & exit | Select optimized default settings,
set OS type to "other",
save & exit | Default Settings | -| Batch size | 1 | 1 | 1 | -| Precision | INT8 | INT8 | INT8 | -| Number of concurrent inference requests | 3 | 4 | 4 | -| Test Date | December 9, 2020 | December 9, 2020 | December 9, 2020 | -| Power dissipation, TDP in Watt | [65](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html#tab-blade-1-0-1)| [35](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) | [35](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) | -| CPU Price on September 29, 2020, USD
Prices may vary | [192](https://ark.intel.com/content/www/us/en/ark/products/129939/intel-core-i5-8500-processor-9m-cache-up-to-4-10-ghz.html) | [195](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) | [195](https://ark.intel.com/content/www/us/en/ark/products/203891/intel-core-i5-10500te-processor-12m-cache-up-to-3-70-ghz.html) | - - -**CPU Inference Engines (continue)** - -| | Intel Atom® x5-E3940 | Intel® Core™ i3-8100 | -| -------------------- | ---------------------------------- |----------------------------------- | -| Motherboard | | GIGABYTE* Z390 UD | -| CPU | Intel Atom® Processor E3940 @ 1.60GHz | Intel® Core™ i3-8100 CPU @ 3.60GHz | -| Hyper Threading | OFF | OFF | -| Turbo Setting | ON | OFF | -| Memory | 1 x 8 GB DDR3 1600MHz | 4 x 8 GB DDR4 2400MHz | -| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | -| Kernel Version | 5.3.0-24-generic | 5.3.0-24-generic | -| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | -| BIOS Version | 5.12 | F8 | -| BIOS Release | September 6, 2017 | May 24, 2019 | -| BIOS Settings | Default settings | Select optimized default settings,
set OS type to "other",
save & exit | -| Batch size | 1 | 1 | -| Precision | INT8 | INT8 | -| Number of concurrent inference requests | 4 | 4 | -| Test Date | December 9, 2020 | December 9, 2020 | -| Power dissipation, TDP in Watt | [9.5](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [65](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html#tab-blade-1-0-1)| -| CPU Price on September 29, 2020, USD
Prices may vary | [34](https://ark.intel.com/content/www/us/en/ark/products/96485/intel-atom-x5-e3940-processor-2m-cache-up-to-1-80-ghz.html) | [117](https://ark.intel.com/content/www/us/en/ark/products/126688/intel-core-i3-8100-processor-6m-cache-3-60-ghz.html) | - - - -**Accelerator Inference Engines** - -| | Intel® Neural Compute Stick 2 | Intel® Vision Accelerator Design
with Intel® Movidius™ VPUs (Mustang-V100-MX8) | -| --------------------------------------- | ------------------------------------- | ------------------------------------- | -| VPU | 1 X Intel® Movidius™ Myriad™ X MA2485 | 8 X Intel® Movidius™ Myriad™ X MA2485 | -| Connection | USB 2.0/3.0 | PCIe X4 | -| Batch size | 1 | 1 | -| Precision | FP16 | FP16 | -| Number of concurrent inference requests | 4 | 32 | -| Power dissipation, TDP in Watt | 2.5 | [30](https://www.mouser.com/ProductDetail/IEI/MUSTANG-V100-MX8-R10?qs=u16ybLDytRaZtiUUvsd36w%3D%3D) | -| CPU Price, USD
Prices may vary | [69](https://ark.intel.com/content/www/us/en/ark/products/140109/intel-neural-compute-stick-2.html) (from December 9, 2020) | [214](https://www.arrow.com/en/products/mustang-v100-mx8-r10/iei-technology?gclid=Cj0KCQiA5bz-BRD-ARIsABjT4ng1v1apmxz3BVCPA-tdIsOwbEjTtqnmp_rQJGMfJ6Q2xTq6ADtf9OYaAhMUEALw_wcB) (from December 9, 2020) | -| Host Computer | Intel® Core™ i7 | Intel® Core™ i5 | -| Motherboard | ASUS* Z370-A II | Uzelinfo* / US-E1300 | -| CPU | Intel® Core™ i7-8700 CPU @ 3.20GHz | Intel® Core™ i5-6600 CPU @ 3.30GHz | -| Hyper Threading | ON | OFF | -| Turbo Setting | ON | ON | -| Memory | 4 x 16 GB DDR4 2666MHz | 2 x 16 GB DDR4 2400MHz | -| Operating System | Ubuntu* 18.04 LTS | Ubuntu* 18.04 LTS | -| Kernel Version | 5.0.0-23-generic | 5.0.0-23-generic | -| BIOS Vendor | American Megatrends Inc.* | American Megatrends Inc.* | -| BIOS Version | 411 | 5.12 | -| BIOS Release | September 21, 2018 | September 21, 2018 | -| Test Date | December 9, 2020 | December 9, 2020 | - -Please follow this link for more detailed configuration descriptions: [Configuration Details](https://docs.openvinotoolkit.org/resources/benchmark_files/system_configurations_2021.2.html) - -\htmlonly - -
-

-\endhtmlonly -Results may vary. For workloads and configurations visit: [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex) and [Legal Information](../Legal_Information.md). -\htmlonly -

-
-\endhtmlonly diff --git a/docs/doxygen/ie_docs.xml b/docs/doxygen/ie_docs.xml index 172e38b2ebc644..9ef9073e409a1e 100644 --- a/docs/doxygen/ie_docs.xml +++ b/docs/doxygen/ie_docs.xml @@ -257,7 +257,6 @@ limitations under the License. - diff --git a/docs/doxygen/openvino_docs.xml b/docs/doxygen/openvino_docs.xml index ae0888b1f357e8..92238645a05348 100644 --- a/docs/doxygen/openvino_docs.xml +++ b/docs/doxygen/openvino_docs.xml @@ -169,6 +169,7 @@ limitations under the License. + @@ -184,7 +185,15 @@ limitations under the License. - + + + + + + + + + diff --git a/docs/get_started/get_started_raspbian.md b/docs/get_started/get_started_raspbian.md index c454084a2abcf1..5f3baf87d2f638 100644 --- a/docs/get_started/get_started_raspbian.md +++ b/docs/get_started/get_started_raspbian.md @@ -62,7 +62,7 @@ Follow the steps below to run pre-trained Face Detection network using Inference ``` 2. Build the Object Detection Sample with the following command: ```sh - cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp + cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp make -j2 object_detection_sample_ssd ``` 3. Download the pre-trained Face Detection model with the [Model Downloader tool](@ref omz_tools_downloader): diff --git a/docs/how_tos/how-to-links.md b/docs/how_tos/how-to-links.md index 2f1840690ba3bc..f263f22b5d236c 100644 --- a/docs/how_tos/how-to-links.md +++ b/docs/how_tos/how-to-links.md @@ -44,7 +44,6 @@ To learn about what is *custom operation* and how to work with them in the Deep [![](https://img.youtube.com/vi/Kl1ptVb7aI8/0.jpg)](https://www.youtube.com/watch?v=Kl1ptVb7aI8) - ## Computer Vision with Intel [![](https://img.youtube.com/vi/FZZD4FCvO9c/0.jpg)](https://www.youtube.com/watch?v=FZZD4FCvO9c) diff --git a/docs/index.md b/docs/index.md index f7031596066386..ee0739a1e1ecd4 100644 --- a/docs/index.md +++ b/docs/index.md @@ -83,7 +83,7 @@ The Inference Engine's plug-in architecture can be extended to meet other specia Intel® Distribution of OpenVINO™ toolkit includes the following components: - [Deep Learning Model Optimizer](MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) - A cross-platform command-line tool for importing models and preparing them for optimal execution with the Inference Engine. The Model Optimizer imports, converts, and optimizes models, which were trained in popular frameworks, such as Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX*. -- [Deep Learning Inference Engine](IE_DG/inference_engine_intro.md) - A unified API to allow high performance inference on many hardware types including Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ vision processing unit (VPU). +- [Deep Learning Inference Engine](IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) - A unified API to allow high performance inference on many hardware types including Intel® CPU, Intel® Integrated Graphics, Intel® Neural Compute Stick 2, Intel® Vision Accelerator Design with Intel® Movidius™ vision processing unit (VPU). - [Inference Engine Samples](IE_DG/Samples_Overview.md) - A set of simple console applications demonstrating how to use the Inference Engine in your applications. 
- [Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) - A web-based graphical environment that allows you to easily use various sophisticated OpenVINO™ toolkit components. - [Post-Training Optimization tool](@ref pot_README) - A tool to calibrate a model and then execute it in the INT8 precision. diff --git a/docs/install_guides/installing-openvino-apt.md b/docs/install_guides/installing-openvino-apt.md index 4d1ac17074d853..665186969912da 100644 --- a/docs/install_guides/installing-openvino-apt.md +++ b/docs/install_guides/installing-openvino-apt.md @@ -12,8 +12,8 @@ The following components are installed with the OpenVINO runtime package: | Component | Description| |-----------|------------| -| [Inference Engine](../IE_DG/inference_engine_intro.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. | -| [OpenCV*](https://docs.opencv.org/master/ | OpenCV* community version compiled for Intel® hardware. | +| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. | +| [OpenCV*](https://docs.opencv.org/master/) | OpenCV* community version compiled for Intel® hardware. | | Deep Learning Streamer (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). | ## Included with Developer Package @@ -23,7 +23,7 @@ The following components are installed with the OpenVINO developer package: | Component | Description| |-----------|------------| | [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. 
Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. | -| [Inference Engine](../IE_DG/inference_engine_intro.md) | The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications.| +| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications.| | [OpenCV*](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware | | [Sample Applications](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to use the Inference Engine in your applications. | | [Demo Applications](@ref omz_demos) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use cases. | diff --git a/docs/install_guides/installing-openvino-docker-linux.md b/docs/install_guides/installing-openvino-docker-linux.md index 38d23610f3b903..1360144b8f6176 100644 --- a/docs/install_guides/installing-openvino-docker-linux.md +++ b/docs/install_guides/installing-openvino-docker-linux.md @@ -10,8 +10,8 @@ This guide provides the steps for creating a Docker* image with Intel® Distribu - Ubuntu\* 18.04 long-term support (LTS), 64-bit - Ubuntu\* 20.04 long-term support (LTS), 64-bit -- CentOS\* 7 -- RHEL\* 8 +- CentOS\* 7.6 +- Red Hat* Enterprise Linux* 8.2 (64 bit) **Host Operating Systems** @@ -143,7 +143,7 @@ RUN /bin/mkdir -p '/usr/local/lib' && \ WORKDIR /opt/libusb-1.0.22/ RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \ - cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \ + cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \ ldconfig ``` - **CentOS 7**: @@ -174,11 +174,11 @@ RUN /bin/mkdir -p '/usr/local/lib' && \ /bin/mkdir -p '/usr/local/include/libusb-1.0' && \ /usr/bin/install -c -m 644 libusb.h '/usr/local/include/libusb-1.0' && \ /bin/mkdir -p '/usr/local/lib/pkgconfig' && \ - printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino/bin/setupvars.sh + printf "\nexport LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:/usr/local/lib\n" >> /opt/intel/openvino_2021/bin/setupvars.sh WORKDIR /opt/libusb-1.0.22/ RUN /usr/bin/install -c -m 644 libusb-1.0.pc '/usr/local/lib/pkgconfig' && \ - cp /opt/intel/openvino/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \ + cp /opt/intel/openvino_2021/deployment_tools/inference_engine/external/97-myriad-usbboot.rules /etc/udev/rules.d/ && \ ldconfig ``` 2. Run the Docker* image: diff --git a/docs/install_guides/installing-openvino-linux-ivad-vpu.md b/docs/install_guides/installing-openvino-linux-ivad-vpu.md index ab2962542d8544..cd86804307c7fe 100644 --- a/docs/install_guides/installing-openvino-linux-ivad-vpu.md +++ b/docs/install_guides/installing-openvino-linux-ivad-vpu.md @@ -11,9 +11,9 @@ For Intel® Vision Accelerator Design with Intel® Movidius™ VPUs, the followi 1. Set the environment variables: ```sh -source /opt/intel/openvino/bin/setupvars.sh +source /opt/intel/openvino_2021/bin/setupvars.sh ``` -> **NOTE**: The `HDDL_INSTALL_DIR` variable is set to `/deployment_tools/inference_engine/external/hddl`. 
If you installed the Intel® Distribution of OpenVINO™ to the default install directory, the `HDDL_INSTALL_DIR` was set to `/opt/intel/openvino//deployment_tools/inference_engine/external/hddl`. +> **NOTE**: The `HDDL_INSTALL_DIR` variable is set to `/deployment_tools/inference_engine/external/hddl`. If you installed the Intel® Distribution of OpenVINO™ to the default install directory, the `HDDL_INSTALL_DIR` was set to `/opt/intel/openvino_2021//deployment_tools/inference_engine/external/hddl`. 2. Install dependencies: ```sh @@ -52,7 +52,7 @@ E: [ncAPI] [ 965618] [MainThread] ncDeviceOpen:677 Failed to find a device, ```sh kill -9 $(pidof hddldaemon autoboot) pidof hddldaemon autoboot # Make sure none of them is alive -source /opt/intel/openvino/bin/setupvars.sh +source /opt/intel/openvino_2021/bin/setupvars.sh ${HDDL_INSTALL_DIR}/bin/bsl_reset ``` diff --git a/docs/install_guides/installing-openvino-linux.md b/docs/install_guides/installing-openvino-linux.md index 7c6644dca87590..955a50a0bae8fb 100644 --- a/docs/install_guides/installing-openvino-linux.md +++ b/docs/install_guides/installing-openvino-linux.md @@ -22,7 +22,7 @@ The Intel® Distribution of OpenVINO™ toolkit for Linux\*: | Component | Description | |-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. 
Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. | -| [Inference Engine](../IE_DG/inference_engine_intro.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. | +| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. | | Intel® Media SDK | Offers access to hardware accelerated video codecs and frame processing | | [OpenCV](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware | | [Inference Engine Code Samples](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to utilize specific OpenVINO capabilities in an application and how to perform specific tasks, such as loading a model, running inference, querying specific device capabilities, and more. | @@ -49,7 +49,6 @@ Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_I **Hardware** * 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors -* Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell) * 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake) * Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake) * Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1) @@ -67,6 +66,7 @@ Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_I **Operating Systems** - Ubuntu 18.04.x long-term support (LTS), 64-bit +- Ubuntu 20.04.0 long-term support (LTS), 64-bit - CentOS 7.6, 64-bit (for target only) - Yocto Project v3.0, 64-bit (for target only and requires modifications) diff --git a/docs/install_guides/installing-openvino-macos.md b/docs/install_guides/installing-openvino-macos.md index 1ac002ecd64485..0797d625ca8a16 100644 --- a/docs/install_guides/installing-openvino-macos.md +++ b/docs/install_guides/installing-openvino-macos.md @@ -24,7 +24,7 @@ The following components are installed by default: | Component | Description | | :-------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | This tool imports, converts, and optimizes models, which were trained in popular frameworks, to a format usable by Intel tools, especially the Inference Engine.
Popular frameworks include Caffe*, TensorFlow*, MXNet\*, and ONNX\*. | -| [Inference Engine](../IE_DG/inference_engine_intro.md) | This is the engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. | +| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | This is the engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. | | [OpenCV\*](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware | | [Sample Applications](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to use the Inference Engine in your applications. | | [Demos](@ref omz_demos) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use-cases | @@ -53,7 +53,6 @@ The development and target platforms have the same requirements, but you can sel > **NOTE**: The current version of the Intel® Distribution of OpenVINO™ toolkit for macOS* supports inference on Intel CPUs and Intel® Neural Compute Sticks 2 only. * 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors -* Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell) * 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake) * Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake) * Intel® Neural Compute Stick 2 diff --git a/docs/install_guides/installing-openvino-raspbian.md b/docs/install_guides/installing-openvino-raspbian.md index a2f9a2ba9e3a86..0695ef9e772ca9 100644 --- a/docs/install_guides/installing-openvino-raspbian.md +++ b/docs/install_guides/installing-openvino-raspbian.md @@ -18,7 +18,7 @@ The OpenVINO toolkit for Raspbian OS is an archive with pre-installed header fil | Component | Description | | :-------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| [Inference Engine](../IE_DG/inference_engine_intro.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. | +| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. | | [OpenCV\*](https://docs.opencv.org/master/) | OpenCV\* community version compiled for Intel® hardware. | | [Sample Applications](../IE_DG/Samples_Overview.md) | A set of simple console applications demonstrating how to use Intel's Deep Learning Inference Engine in your applications. | @@ -94,12 +94,12 @@ CMake is installed. Continue to the next section to set the environment variable You must update several environment variables before you can compile and run OpenVINO toolkit applications. Run the following script to temporarily set the environment variables: ```sh -source /opt/intel/openvino/bin/setupvars.sh +source /opt/intel/openvino_2021/bin/setupvars.sh ``` **(Optional)** The OpenVINO environment variables are removed when you close the shell. 
As an option, you can permanently set the environment variables as follows: ```sh -echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc +echo "source /opt/intel/openvino_2021/bin/setupvars.sh" >> ~/.bashrc ``` To test your change, open a new terminal. You will see the following: @@ -118,11 +118,11 @@ Continue to the next section to add USB rules for Intel® Neural Compute Stick 2 Log out and log in for it to take effect. 2. If you didn't modify `.bashrc` to permanently set the environment variables, run `setupvars.sh` again after logging in: ```sh - source /opt/intel/openvino/bin/setupvars.sh + source /opt/intel/openvino_2021/bin/setupvars.sh ``` 3. To perform inference on the Intel® Neural Compute Stick 2, install the USB rules running the `install_NCS_udev_rules.sh` script: ```sh - sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh + sh /opt/intel/openvino_2021/install_dependencies/install_NCS_udev_rules.sh ``` 4. Plug in your Intel® Neural Compute Stick 2. @@ -138,7 +138,7 @@ Follow the next steps to run pre-trained Face Detection network using Inference ``` 2. Build the Object Detection Sample: ```sh - cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples/cpp + cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp ``` ```sh make -j2 object_detection_sample_ssd diff --git a/docs/install_guides/installing-openvino-windows.md b/docs/install_guides/installing-openvino-windows.md index 69ce426a29992c..56e963d1ea40e8 100644 --- a/docs/install_guides/installing-openvino-windows.md +++ b/docs/install_guides/installing-openvino-windows.md @@ -19,7 +19,7 @@ Your installation is complete when these are all completed: - [Microsoft Visual Studio* 2019 with MSBuild](http://visualstudio.microsoft.com/downloads/) - [CMake 3.14 or higher 64-bit](https://cmake.org/download/) - [Python **3.6** - **3.8** 64-bit](https://www.python.org/downloads/windows/) - > **IMPORTANT**: As part of this installation, make sure you click the option to add the application to your `PATH` environment variable. + > **IMPORTANT**: As part of this installation, make sure you click the option **[Add Python 3.x to PATH](https://docs.python.org/3/using/windows.html#installation-steps)** to add Python to your `PATH` environment variable. 3. Set Environment Variables @@ -57,7 +57,7 @@ The following components are installed by default: | Component | Description | |:---------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| |[Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) |This tool imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine.
NOTE: Popular frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
-|[Inference Engine](../IE_DG/inference_engine_intro.md) |This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
+|[Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) |This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
 |[OpenCV\*](https://docs.opencv.org/master/) |OpenCV* community version compiled for Intel® hardware |
 |[Inference Engine Samples](../IE_DG/Samples_Overview.md) |A set of simple console applications demonstrating how to use Intel's Deep Learning Inference Engine in your applications. |
 | [Demos](@ref omz_demos) | A set of console applications that demonstrate how you can use the Inference Engine in your applications to solve specific use-cases |
@@ -82,7 +82,6 @@ Proceed to an [easy installation from Docker](@ref workbench_docs_Workbench_DG_I
 
 **Hardware**
 
 * 6th to 11th generation Intel® Core™ processors and Intel® Xeon® processors
-* Intel® Xeon® processor E family (formerly code named Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
 * 3rd generation Intel® Xeon® Scalable processor (formerly code named Cooper Lake)
 * Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
 * Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
@@ -133,12 +132,9 @@ The screen example below indicates you are missing two dependencies:
 
 You must update several environment variables before you can compile and run OpenVINO™ applications. Open the Command Prompt, and run the `setupvars.bat` batch file to temporarily set your environment variables:
 
 ```sh
-cd C:\Program Files (x86)\Intel\openvino_2021\bin\
-```
-
-```sh
-setupvars.bat
+"C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat"
 ```
+> **IMPORTANT**: Windows PowerShell* is not recommended for running the configuration commands. Use the Command Prompt instead.
 
 (Optional): OpenVINO toolkit environment variables are removed when you close the Command Prompt window. As an option, you can permanently set the environment variables manually.
 
@@ -313,7 +309,7 @@ Use these steps to update your Windows `PATH` if a command you execute returns a
 
 5. If you need to add CMake to the `PATH`, browse to the directory in which you installed CMake. The default directory is `C:\Program Files\CMake`.
 
-6. If you need to add Python to the `PATH`, browse to the directory in which you installed Python. The default directory is `C:\Users\\AppData\Local\Programs\Python\Python36\Python`.
+6. If you need to add Python to the `PATH`, browse to the directory in which you installed Python. The default directory is `C:\Users\\AppData\Local\Programs\Python\Python36\Python`. Note that the `AppData` folder is hidden by default. To view hidden files and folders, see the [Windows 10 instructions](https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-10-97fbc472-c603-9d90-91d0-1166d1d9f4b5).
 
 7. Click **OK** repeatedly to close each screen.
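After updating the `PATH`, a quick way to confirm the change took effect is to open a new Command Prompt and query the tools directly. The commands below are an illustrative check, not part of the installer or the documented output.

```sh
rem Illustrative check: run in a new Command Prompt after editing PATH
where python
where cmake
python --version
cmake --version
```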
@@ -349,7 +345,7 @@ To learn more about converting deep learning models, go to:
 
 - [Intel Distribution of OpenVINO Toolkit home page](https://software.intel.com/en-us/openvino-toolkit)
 - [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
-- [Introduction to Inference Engine](../IE_DG/inference_engine_intro.md)
+- [Introduction to Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
 - [Inference Engine Developer Guide](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)
 - [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
 - [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
diff --git a/docs/install_guides/installing-openvino-yum.md b/docs/install_guides/installing-openvino-yum.md
index 68147078f64a45..27e464d1b84bd5 100644
--- a/docs/install_guides/installing-openvino-yum.md
+++ b/docs/install_guides/installing-openvino-yum.md
@@ -14,7 +14,7 @@ The following components are installed with the OpenVINO runtime package:
 
 | Component | Description|
 |-----------|------------|
-| [Inference Engine](../IE_DG/inference_engine_intro.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
+| [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md)| The engine that runs a deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
 | [OpenCV*](https://docs.opencv.org/master/) | OpenCV* community version compiled for Intel® hardware. |
 | Deep Learning Stream (DL Streamer) | Streaming analytics framework, based on GStreamer, for constructing graphs of media analytics components. For the DL Streamer documentation, see [DL Streamer Samples](@ref gst_samples_README), [API Reference](https://openvinotoolkit.github.io/dlstreamer_gst/), [Elements](https://github.com/opencv/gst-video-analytics/wiki/Elements), [Tutorial](https://github.com/opencv/gst-video-analytics/wiki/DL%20Streamer%20Tutorial). |
diff --git a/docs/install_guides/movidius-setup-guide.md b/docs/install_guides/movidius-setup-guide.md
index 421dfbab4024a2..c26ebbda38d9de 100644
--- a/docs/install_guides/movidius-setup-guide.md
+++ b/docs/install_guides/movidius-setup-guide.md
@@ -46,7 +46,7 @@ The `hddldaemon` is a system service, a binary executable that is run to manage
 
 `` refers to the following default OpenVINO™ Inference Engine directories:
 - **Linux:**
 ```
- /opt/intel/openvino/inference_engine
+ /opt/intel/openvino_2021/inference_engine
 ```
 - **Windows:**
 ```
diff --git a/docs/ovsa/ovsa_get_started.md b/docs/ovsa/ovsa_get_started.md
index e99ee69239fbb2..18fdf94a885518 100644
--- a/docs/ovsa/ovsa_get_started.md
+++ b/docs/ovsa/ovsa_get_started.md
@@ -589,7 +589,7 @@ The Model Hosting components install the OpenVINO™ Security Add-on Runtime Doc
 
 This section requires interactions between the Model Developer/Independent Software vendor and the User. All roles must complete all applicable set up steps and installation steps before beginning this section.
 
-This document uses the [face-detection-retail-0004](@ref omz_models_intel_face_detection_retail_0004_description_face_detection_retail_0004) model as an example.
+This document uses the [face-detection-retail-0004](@ref omz_models_model_face_detection_retail_0004) model as an example.
 
 The following figure describes the interactions between the Model Developer, Independent Software Vendor, and User.
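For reference, the example model mentioned above can be fetched with the Open Model Zoo downloader before starting the walkthrough. The tool location below is an assumption based on a default 2021 installation layout and may differ on your system; the output directory is a placeholder.

```sh
# Illustrative sketch (downloader path assumed): fetch the example model used in this section
cd /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader
python3 downloader.py --name face-detection-retail-0004 -o ~/models
```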
diff --git a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md index fe0cb7f6d485bb..d2260d87d473aa 100644 --- a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md +++ b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md @@ -1,15 +1,15 @@ -# Hello Reshape SSD C++ Sample {#openvino_inference_engine_samples_hello_reshape_ssd_README} +# Hello Reshape SSD Python Sample {#openvino_inference_engine_samples_python_hello_reshape_ssd_README} This topic demonstrates how to run the Hello Reshape SSD application, which does inference using object detection -networks like SSD-VGG. The sample shows how to use [Shape Inference feature](../../../docs/IE_DG/ShapeInference.md). +networks like SSD-VGG. The sample shows how to use [Shape Inference feature](../../../../../docs/IE_DG/ShapeInference.md). -> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md). +> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](../../../../../docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md). ## Running To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader). -> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). +> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md). > > The sample accepts models in ONNX format (.onnx) that do not require preprocessing. @@ -25,6 +25,6 @@ of the detected objects along with the respective confidence values and the coor rectangles to the standard output stream. ## See Also -* [Using Inference Engine Samples](../../../docs/IE_DG/Samples_Overview.md) +* [Using Inference Engine Samples](../../../../../docs/IE_DG/Samples_Overview.md) * [Model Downloader](@ref omz_tools_downloader) -* [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) +* [Model Optimizer](../../../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
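As a worked example of the conversion note above, the sketch below converts an ONNX SSD model to the Inference Engine IR format (\*.xml + \*.bin) and reverses the input channels for a model trained on RGB images. The model file name and output directory are placeholders, and the Model Optimizer location is an assumption based on a default 2021 installation.

```sh
# Illustrative sketch (file names are placeholders): produce IR files with reversed input channels
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer
python3 mo.py --input_model ssd_model.onnx --reverse_input_channels --output_dir ./ir
```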