diff --git a/CMakeLists.txt b/CMakeLists.txt index 6c2f770b60aa5b..2187deb8e8ce24 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -48,7 +48,7 @@ endif() project(OpenVINO DESCRIPTION "OpenVINO toolkit" - HOMEPAGE_URL "https://docs.openvino.ai/2024/home.html" + HOMEPAGE_URL "https://docs.openvino.ai/2025/index.html" LANGUAGES C CXX) find_package(OpenVINODeveloperScripts REQUIRED diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index c30ce12665ab33..42f778b5d847da 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -52,7 +52,7 @@ product better. Since the market of computing devices is constantly evolving, OpenVINO is always open to extending its support for new hardware. If you want to run inference on a device that is currently not supported, you can see how to develop a new plugin for it in the - [Plugin Developer Guide](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html). + [Plugin Developer Guide](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html). ### Improve documentation diff --git a/README.md b/README.md index 9ed2d4690e39e9..55514d9ec11e3d 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@ Open-source software toolkit for optimizing and deploying deep learning models.

- DocumentationBlogKey FeaturesTutorialsIntegrationsBenchmarksGenerative AI + DocumentationBlogKey FeaturesTutorialsIntegrationsBenchmarksGenerative AI

[![PyPI Status](https://badge.fury.io/py/openvino.svg)](https://badge.fury.io/py/openvino) @@ -24,22 +24,22 @@ Open-source software toolkit for optimizing and deploying deep learning models. - **Broad Platform Compatibility**: Reduce resource demands and efficiently deploy on a range of platforms from edge to cloud. OpenVINO™ supports inference on CPU (x86, ARM), GPU (OpenCL capable, integrated and discrete) and AI accelerators (Intel NPU). - **Community and Ecosystem**: Join an active community contributing to the enhancement of deep learning performance across various domains. -Check out the [OpenVINO Cheat Sheet](https://docs.openvino.ai/2024/_static/download/OpenVINO_Quick_Start_Guide.pdf) and [Key Features](https://docs.openvino.ai/2024/about-openvino/key-features.html) for a quick reference. +Check out the [OpenVINO Cheat Sheet](https://docs.openvino.ai/2025/_static/download/OpenVINO_Quick_Start_Guide.pdf) and [Key Features](https://docs.openvino.ai/2025/about-openvino/key-features.html) for a quick reference. ## Installation -[Get your preferred distribution of OpenVINO](https://docs.openvino.ai/2024/get-started/install-openvino.html) or use this command for quick installation: +[Get your preferred distribution of OpenVINO](https://docs.openvino.ai/2025/get-started/install-openvino.html) or use this command for quick installation: ```sh pip install -U openvino ``` -Check [system requirements](https://docs.openvino.ai/2024/about-openvino/system-requirements.html) and [supported devices](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) for detailed information. +Check [system requirements](https://docs.openvino.ai/2025/about-openvino/system-requirements.html) and [supported devices](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) for detailed information. ## Tutorials and Examples -[OpenVINO Quickstart example](https://docs.openvino.ai/2024/get-started.html) will walk you through the basics of deploying your first model. +[OpenVINO Quickstart example](https://docs.openvino.ai/2025/get-started.html) will walk you through the basics of deploying your first model. Learn how to optimize and deploy popular models with the [OpenVINO Notebooks](https://github.com/openvinotoolkit/openvino_notebooks)📚: - [Create an LLM-powered Chatbot using OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llm-chatbot/llm-chatbot-generate-api.ipynb) @@ -48,7 +48,7 @@ Learn how to optimize and deploy popular models with the [OpenVINO Notebooks](ht - [Multimodal assistant with LLaVa and OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llava-multimodal-chatbot/llava-multimodal-chatbot-genai.ipynb) - [Automatic speech recognition using Whisper and OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/whisper-asr-genai/whisper-asr-genai.ipynb) -Discover more examples in the [OpenVINO Samples (Python & C++)](https://docs.openvino.ai/2024/learn-openvino/openvino-samples.html) and [Notebooks (Python)](https://docs.openvino.ai/2024/learn-openvino/interactive-tutorials-python.html). +Discover more examples in the [OpenVINO Samples (Python & C++)](https://docs.openvino.ai/2025/learn-openvino/openvino-samples.html) and [Notebooks (Python)](https://docs.openvino.ai/2025/learn-openvino/interactive-tutorials-python.html). 
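After installing, a quick sanity check confirms the runtime can see your hardware (a minimal sketch; the device list varies by machine):

```python
import openvino as ov

core = ov.Core()
# Lists the inference devices OpenVINO detects on this machine, e.g. ['CPU', 'GPU']
print(core.available_devices)
```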
Here are easy-to-follow code examples demonstrating how to run PyTorch and TensorFlow model inference using OpenVINO: @@ -96,11 +96,11 @@ data = np.random.rand(1, 224, 224, 3) output = compiled_model({0: data}) ``` -OpenVINO supports the CPU, GPU, and NPU [devices](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes.html) and works with models from PyTorch, TensorFlow, ONNX, TensorFlow Lite, PaddlePaddle, and JAX/Flax [frameworks](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html). It includes [APIs](https://docs.openvino.ai/2024/api/api_reference.html) in C++, Python, C, NodeJS, and offers the GenAI API for optimized model pipelines and performance. +OpenVINO supports the CPU, GPU, and NPU [devices](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes.html) and works with models from PyTorch, TensorFlow, ONNX, TensorFlow Lite, PaddlePaddle, and JAX/Flax [frameworks](https://docs.openvino.ai/2025/openvino-workflow/model-preparation.html). It includes [APIs](https://docs.openvino.ai/2025/api/api_reference.html) in C++, Python, C, NodeJS, and offers the GenAI API for optimized model pipelines and performance. ## Generative AI with OpenVINO -Get started with the OpenVINO GenAI [installation](https://docs.openvino.ai/2024/get-started/install-openvino/install-openvino-genai.html) and refer to the [detailed guide](https://docs.openvino.ai/2024/openvino-workflow-generative/generative-inference.html) to explore the capabilities of Generative AI using OpenVINO. +Get started with the OpenVINO GenAI [installation](https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-genai.html) and refer to the [detailed guide](https://docs.openvino.ai/2025/openvino-workflow-generative/generative-inference.html) to explore the capabilities of Generative AI using OpenVINO. Learn how to run LLMs and GenAI with [Samples](https://github.com/openvinotoolkit/openvino.genai/tree/master/samples) in the [OpenVINO™ GenAI repo](https://github.com/openvinotoolkit/openvino.genai). See GenAI in action with Jupyter notebooks: [LLM-powered Chatbot](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llm-chatbot/README.md) and [LLM Instruction-following pipeline](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llm-question-answering/README.md). @@ -122,7 +122,7 @@ Learn how to run LLMs and GenAI with [Samples](https://github.com/openvinotoolki ### Integrations - [🤗Optimum Intel](https://github.com/huggingface/optimum-intel) - grab and use models leveraging OpenVINO within the Hugging Face API. -- [Torch.compile](https://docs.openvino.ai/2024/openvino-workflow/torch-compile.html) - use OpenVINO for Python-native applications by JIT-compiling code into optimized kernels. +- [Torch.compile](https://docs.openvino.ai/2025/openvino-workflow/torch-compile.html) - use OpenVINO for Python-native applications by JIT-compiling code into optimized kernels. - [OpenVINO LLMs inference and serving with vLLM​](https://docs.vllm.ai/en/stable/getting_started/openvino-installation.html) - enhance vLLM's fast and easy model serving with the OpenVINO backend. - [OpenVINO Execution Provider for ONNX Runtime](https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html) - use OpenVINO as a backend with your existing ONNX Runtime code. 
- [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/llm/openvino/) - build context-augmented GenAI applications with the LlamaIndex framework and enhance runtime performance with OpenVINO. @@ -133,7 +133,7 @@ Check out the [Awesome OpenVINO](https://github.com/openvinotoolkit/awesome-open ## Performance -Explore [OpenVINO Performance Benchmarks](https://docs.openvino.ai/2024/about-openvino/performance-benchmarks.html) to discover the optimal hardware configurations and plan your AI deployment based on verified data. +Explore [OpenVINO Performance Benchmarks](https://docs.openvino.ai/2025/about-openvino/performance-benchmarks.html) to discover the optimal hardware configurations and plan your AI deployment based on verified data. ## Contribution and Support @@ -149,7 +149,7 @@ You can ask questions and get support on: ## Resources -* [Release Notes](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino.html) +* [Release Notes](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino.html) * [OpenVINO Blog](https://blog.openvino.ai/) * [OpenVINO™ toolkit on Medium](https://medium.com/@openvino) @@ -164,7 +164,7 @@ You can opt-out at any time by running the command: opt_in_out --opt_out ``` -More Information is available at [OpenVINO™ Telemetry](https://docs.openvino.ai/2024/about-openvino/additional-resources/telemetry.html). +More information is available at [OpenVINO™ Telemetry](https://docs.openvino.ai/2025/about-openvino/additional-resources/telemetry.html). ## License diff --git a/docs/RELEASE.MD b/docs/RELEASE.MD index b345431f3f2bcf..5f7769e06b51a7 100644 --- a/docs/RELEASE.MD +++ b/docs/RELEASE.MD @@ -13,7 +13,7 @@ This phase takes 2-4 weeks and involves scoping the backlog, prioritizing it, an ### Execution (development of new features) - [OpenVINO Contributing Guide](https://github.com/openvinotoolkit/openvino/blob/master/CONTRIBUTING.md) -- [Code Contribution Guide](https://docs.openvino.ai/2024/about-openvino/contributing/code-contribution-guide.html) +- [Code Contribution Guide](https://docs.openvino.ai/2025/about-openvino/contributing/code-contribution-guide.html) - [OpenVINO First Good Issue](https://github.com/openvinotoolkit/openvino/issues/17502) ### Stabilization (Feature Freeze, Code Freeze milestones) @@ -25,5 +25,5 @@ This phase takes 2-4 weeks and involves scoping the backlog, prioritizing it, an - After Code Freeze, the testing team can perform final regression testing to ensure that recent changes have not introduced new bugs and that the software meets the required quality standards. ### Distribution -- OpenVINO has different types of build distribution: Regular releases, Long-Term Support, Pre-release releases, Nightly builds. Read more here: [OpenVINO Release Policy](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/release-policy.html) +- OpenVINO has different types of build distribution: Regular releases, Long-Term Support, Pre-release releases, Nightly builds. Read more here: [OpenVINO Release Policy](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/release-policy.html) - Different distribution channels are supported. 
Explore different options here: [OpenVINO Download](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html) diff --git a/docs/articles_en/about-openvino/contributing.rst b/docs/articles_en/about-openvino/contributing.rst index f14e5f58249259..430b5cd1ef1b3a 100644 --- a/docs/articles_en/about-openvino/contributing.rst +++ b/docs/articles_en/about-openvino/contributing.rst @@ -89,7 +89,7 @@ PR. This way, it will be easier for other developers to track changes. If you want to run inference on a device that is currently not supported, you can see how to develop a new plugin for it in the -`Plugin Developer Guide `__. +`Plugin Developer Guide `__. :fas:`file-alt` Improve documentation diff --git a/docs/articles_en/about-openvino/key-features.rst b/docs/articles_en/about-openvino/key-features.rst index 7e4ffab3cbb2ec..9aab39245fffe0 100644 --- a/docs/articles_en/about-openvino/key-features.rst +++ b/docs/articles_en/about-openvino/key-features.rst @@ -17,7 +17,7 @@ Easy Integration | With the OpenVINO GenAI, you can run generative models with just a few lines of code. Check out the GenAI guide for instructions on how to do it. -| `Python / C++ / C / NodeJS APIs `__ +| `Python / C++ / C / NodeJS APIs `__ | OpenVINO offers the C++ API as a complete set of available methods. For less resource-critical solutions, the Python API provides almost full coverage, while C and NodeJS ones are limited to the methods most basic for their typical environments. The NodeJS API, is still in its diff --git a/docs/articles_en/about-openvino/performance-benchmarks/generative-ai-performance.rst b/docs/articles_en/about-openvino/performance-benchmarks/generative-ai-performance.rst index 1f111563a4f29a..ab241c3947dc76 100644 --- a/docs/articles_en/about-openvino/performance-benchmarks/generative-ai-performance.rst +++ b/docs/articles_en/about-openvino/performance-benchmarks/generative-ai-performance.rst @@ -56,7 +56,7 @@ The tables below list the key performance indicators for inference on built-in G .. grid-item:: - .. button-link:: https://docs.openvino.ai/2024/_static/download/benchmarking_genai_platform_list.pdf + .. button-link:: https://docs.openvino.ai/2025/_static/download/benchmarking_genai_platform_list.pdf :color: primary :outline: :expand: diff --git a/docs/articles_en/documentation/openvino-ecosystem.rst b/docs/articles_en/documentation/openvino-ecosystem.rst index fbd4b6e53240a3..d0760f31469235 100644 --- a/docs/articles_en/documentation/openvino-ecosystem.rst +++ b/docs/articles_en/documentation/openvino-ecosystem.rst @@ -33,7 +33,7 @@ models. Check the LLM-powered Chatbot Jupyter notebook to see how GenAI works. | **Neural Network Compression Framework** | :bdg-link-dark:`Github ` - :bdg-link-success:`User Guide ` + :bdg-link-success:`User Guide ` A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization, and sparsity algorithms to PyTorch @@ -43,7 +43,7 @@ and TensorFlow models during training. | **OpenVINO Model Server** | :bdg-link-dark:`Github ` - :bdg-link-success:`User Guide ` + :bdg-link-success:`User Guide ` A high-performance system that can be used to access the host models via request to the model server. @@ -52,7 +52,7 @@ server. 
| **OpenVINO Notebooks** | :bdg-link-dark:`Github ` - :bdg-link-success:`Jupyter Notebook Collection ` + :bdg-link-success:`Jupyter Notebook Collection ` A collection of Jupyter notebooks for learning and experimenting with the OpenVINO™ Toolkit. |hr| @@ -68,7 +68,7 @@ without the need to convert. | **OpenVINO Training Extensions** | :bdg-link-dark:`Github ` - :bdg-link-success:`Overview Page ` + :bdg-link-success:`Overview Page ` A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference. @@ -77,7 +77,7 @@ toolkit for optimized inference. | **OpenVINO Security Addon** | :bdg-link-dark:`Github ` - :bdg-link-success:`User Guide ` + :bdg-link-success:`User Guide ` A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution. @@ -86,7 +86,7 @@ secure model execution. | **Datumaro** | :bdg-link-dark:`Github ` - :bdg-link-success:`Overview Page ` + :bdg-link-success:`Overview Page ` A framework and a CLI tool for building, transforming, and analyzing datasets. |hr| diff --git a/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library.rst b/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library.rst index 7a82099fcede73..b318695faae42a 100644 --- a/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library.rst +++ b/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library.rst @@ -94,6 +94,6 @@ Detailed Guides API References ############## -* `OpenVINO Plugin API `__ -* `OpenVINO Transformation API `__ +* `OpenVINO Plugin API `__ +* `OpenVINO Transformation API `__ diff --git a/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/plugin-api-references.rst b/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/plugin-api-references.rst index cc7eb7ced9cd38..ed20af46be3bcf 100644 --- a/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/plugin-api-references.rst +++ b/docs/articles_en/documentation/openvino-extensibility/openvino-plugin-library/plugin-api-references.rst @@ -15,6 +15,6 @@ Plugin API Reference The guides below provides extra API references needed for OpenVINO plugin development: -* `OpenVINO Plugin API `__ -* `OpenVINO Transformation API `__ +* `OpenVINO Plugin API `__ +* `OpenVINO Transformation API `__ diff --git a/docs/articles_en/get-started.rst b/docs/articles_en/get-started.rst index 9b46cc416605f3..a394d8f23973bd 100644 --- a/docs/articles_en/get-started.rst +++ b/docs/articles_en/get-started.rst @@ -30,7 +30,7 @@ GET STARTED For a quick reference, check out -`the Quick Start Guide [pdf] `__ +`the Quick Start Guide [pdf] `__ .. _quick-start-example: diff --git a/docs/articles_en/get-started/configurations/genai-dependencies.rst b/docs/articles_en/get-started/configurations/genai-dependencies.rst index 13e28107f69d63..bfb05b09be88c1 100644 --- a/docs/articles_en/get-started/configurations/genai-dependencies.rst +++ b/docs/articles_en/get-started/configurations/genai-dependencies.rst @@ -19,7 +19,7 @@ is used instead. Mixing different ABIs is not possible as doing so will result i To try OpenVINO GenAI with different dependencies versions (which are **not** prebuilt packages as archives or python wheels), build OpenVINO GenAI library from -`Source `__. +`Source `__. 
Additional Resources ####################### diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-genai.rst b/docs/articles_en/get-started/install-openvino/install-openvino-genai.rst index 026a76f2ee86d7..215c50c2c0653a 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-genai.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-genai.rst @@ -14,7 +14,7 @@ and `LLM Instruction-following pipeline `__. OpenVINO GenAI is available for installation via PyPI and Archive distributions. -A `detailed guide `__ +A `detailed guide `__ on how to build OpenVINO GenAI is available in the OpenVINO GenAI repository. PyPI Installation diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-npm.rst b/docs/articles_en/get-started/install-openvino/install-openvino-npm.rst index 5060ccfc654229..bab314e071acfe 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-npm.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-npm.rst @@ -32,7 +32,7 @@ Installing OpenVINO Node.js .. note:: The *openvino-node* npm package runs in Node.js environment only and provides - a subset of `OpenVINO Runtime C++ API `__. + a subset of `OpenVINO Runtime C++ API `__. What's Next? #################### diff --git a/docs/articles_en/get-started/install-openvino/install-openvino-pip.rst b/docs/articles_en/get-started/install-openvino/install-openvino-pip.rst index cd3fd41fed03e0..c6031ab798cde4 100644 --- a/docs/articles_en/get-started/install-openvino/install-openvino-pip.rst +++ b/docs/articles_en/get-started/install-openvino/install-openvino-pip.rst @@ -141,7 +141,7 @@ the following tutorials. .. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif :width: 400 -Try the `Python Quick Start Example `__ +Try the `Python Quick Start Example `__ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser. @@ -152,9 +152,9 @@ Get started with Python Visit the :doc:`Tutorials <../../../learn-openvino/interactive-tutorials-python>` page for more Jupyter Notebooks to get you started with OpenVINO, such as: -* `OpenVINO Python API Tutorial `__ -* `Basic image classification program with Hello Image Classification `__ -* `Convert a PyTorch model and use it for image background removal `__ +* `OpenVINO Python API Tutorial `__ +* `Basic image classification program with Hello Image Classification `__ +* `Convert a PyTorch model and use it for image background removal `__ diff --git a/docs/articles_en/learn-openvino/openvino-samples/get-started-demos.rst b/docs/articles_en/learn-openvino/openvino-samples/get-started-demos.rst index 4d7d94efddb898..e3d73833b8f093 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/get-started-demos.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/get-started-demos.rst @@ -262,7 +262,7 @@ You need a model that is specific for your inference task. You can get it from o Convert the Model -------------------- -If Your model requires conversion, check the `article `__ for information how to do it. +If your model requires conversion, check the `article `__ for information on how to do it. .. 
_download-media: diff --git a/docs/articles_en/learn-openvino/openvino-samples/hello-classification.rst b/docs/articles_en/learn-openvino/openvino-samples/hello-classification.rst index 7a9a7d449d628d..219365e2bc0d7f 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/hello-classification.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/hello-classification.rst @@ -258,7 +258,7 @@ Additional Resources - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` - :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` -- `OpenVINO Runtime C API `__ +- `OpenVINO Runtime C API `__ - `Hello Classification Python Sample on Github `__ - `Hello Classification C++ Sample on Github `__ - `Hello Classification C Sample on Github `__ diff --git a/docs/articles_en/learn-openvino/openvino-samples/hello-nv12-input-classification.rst b/docs/articles_en/learn-openvino/openvino-samples/hello-nv12-input-classification.rst index 3d1c069e2c8cb1..3298a8625e6bfe 100644 --- a/docs/articles_en/learn-openvino/openvino-samples/hello-nv12-input-classification.rst +++ b/docs/articles_en/learn-openvino/openvino-samples/hello-nv12-input-classification.rst @@ -209,6 +209,6 @@ Additional Resources - :doc:`Get Started with Samples ` - :doc:`Using OpenVINO Samples <../openvino-samples>` - :doc:`Convert a Model <../../openvino-workflow/model-preparation/convert-model-to-ir>` -- `API Reference `__ +- `API Reference `__ - `Hello NV12 Input Classification C++ Sample on Github `__ - `Hello NV12 Input Classification C Sample on Github `__ diff --git a/docs/articles_en/openvino-workflow-generative.rst b/docs/articles_en/openvino-workflow-generative.rst index 5ac880ace110c3..d37f357dc167b3 100644 --- a/docs/articles_en/openvino-workflow-generative.rst +++ b/docs/articles_en/openvino-workflow-generative.rst @@ -38,7 +38,7 @@ options: text generation loop, tokenization, and scheduling, offering ease of use and high performance. - `Check out the OpenVINO GenAI Quick-start Guide [PDF] `__ + `Check out the OpenVINO GenAI Quick-start Guide [PDF] `__ .. tab-item:: Optimum Intel (Hugging Face integration) diff --git a/docs/articles_en/openvino-workflow-generative/inference-with-genai.rst b/docs/articles_en/openvino-workflow-generative/inference-with-genai.rst index 7e26f0891f779a..1f5e766b8951a4 100644 --- a/docs/articles_en/openvino-workflow-generative/inference-with-genai.rst +++ b/docs/articles_en/openvino-workflow-generative/inference-with-genai.rst @@ -919,7 +919,7 @@ The use case described here regards the following OpenVINO GenAI API classes: * streamer_base - an abstract base class for creating streamers. * tokenizer - the tokenizer class for text encoding and decoding. -Learn more from the `GenAI API reference `__. +Learn more from the `GenAI API reference `__. 
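As a rough sketch of how these classes come together (the model directory below is a placeholder for an LLM already exported to the OpenVINO format, e.g. via optimum-intel; install the API with `pip install openvino-genai`):

```python
import openvino_genai as ov_genai

# "ov_llm_dir" is a hypothetical directory holding an OpenVINO-format LLM
pipe = ov_genai.LLMPipeline("ov_llm_dir", "CPU")

# The pipeline wraps tokenization, the generation loop, and decoding
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```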
Additional Resources #################### diff --git a/docs/articles_en/openvino-workflow/deployment-locally/integrate-openvino-with-ubuntu-snap.rst b/docs/articles_en/openvino-workflow/deployment-locally/integrate-openvino-with-ubuntu-snap.rst index c47fe7f17d84ed..3a7ffb7c991bb8 100644 --- a/docs/articles_en/openvino-workflow/deployment-locally/integrate-openvino-with-ubuntu-snap.rst +++ b/docs/articles_en/openvino-workflow/deployment-locally/integrate-openvino-with-ubuntu-snap.rst @@ -152,11 +152,11 @@ Method 3 (Recommended): User Application Snap based on OpenVINO Debian Packages ############################################################################### OpenVINO toolkit is also distributed via the -`APT repository `__, +`APT repository `__, which can be used in the snaps. Third-party apt repositories can be added to the snap's snapcraft.yaml (`see the snapcraft guide `__). -1. Download the `GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB `__: +1. Download the `GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB `__: .. code-block:: sh diff --git a/docs/articles_en/openvino-workflow/model-optimization-guide/compressing-models-during-training.rst b/docs/articles_en/openvino-workflow/model-optimization-guide/compressing-models-during-training.rst index 9e18707a7f39cb..f543b5d459175c 100644 --- a/docs/articles_en/openvino-workflow/model-optimization-guide/compressing-models-during-training.rst +++ b/docs/articles_en/openvino-workflow/model-optimization-guide/compressing-models-during-training.rst @@ -71,7 +71,7 @@ NNCF provides some state-of-the-art compression methods that are still in the ex stages of development and are only recommended for expert developers. These include: * Mixed-precision quantization. -* Sparsity (check out the `Sparsity-Aware Training notebook `__). +* Sparsity (check out the `Sparsity-Aware Training notebook `__). * Movement Pruning (Movement Sparsity). To learn `more about these methods `__, diff --git a/docs/articles_en/openvino-workflow/model-optimization-guide/quantizing-models-post-training/basic-quantization-flow.rst b/docs/articles_en/openvino-workflow/model-optimization-guide/quantizing-models-post-training/basic-quantization-flow.rst index 62c10e52266ec9..75b94741339b93 100644 --- a/docs/articles_en/openvino-workflow/model-optimization-guide/quantizing-models-post-training/basic-quantization-flow.rst +++ b/docs/articles_en/openvino-workflow/model-optimization-guide/quantizing-models-post-training/basic-quantization-flow.rst @@ -149,7 +149,7 @@ If you have not already installed OpenVINO developer tools, install it with ``pi :language: python :fragment: [inference] -TorchFX models can utilize OpenVINO optimizations using `torch.compile(..., backend="openvino") `__ functionality: +TorchFX models can utilize OpenVINO optimizations using `torch.compile(..., backend="openvino") `__ functionality: .. tab-set:: diff --git a/docs/articles_en/openvino-workflow/running-inference/changing-input-shape.rst b/docs/articles_en/openvino-workflow/running-inference/changing-input-shape.rst index 39658fcefca109..3daf95ccb2cc18 100644 --- a/docs/articles_en/openvino-workflow/running-inference/changing-input-shape.rst +++ b/docs/articles_en/openvino-workflow/running-inference/changing-input-shape.rst @@ -21,7 +21,7 @@ The reshape method ######################## The reshape method is used as ``ov::Model::reshape`` in C++ and -`Model.reshape `__ +`Model.reshape `__ in Python. The method updates input shapes and propagates them down to the outputs of the model through all intermediate layers. 
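As a rough Python sketch of that call (the IR path and shapes here are illustrative, not from the article):

```python
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # hypothetical IR path

# Propagate a new static input shape through the whole model,
# assuming a single-input NCHW model
model.reshape([1, 3, 320, 320])

compiled = core.compile_model(model, "CPU")
```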
The code below is an example of how to set a new batch size with the ``reshape`` method: @@ -206,8 +206,8 @@ Additional Resources #################### * :doc:`Extensibility documentation <../../documentation/openvino-extensibility>` - describes a special mechanism in OpenVINO that allows adding support of shape inference for custom operations. -* `ov::Model::reshape `__ - in OpenVINO Runtime C++ API -* `Model.reshape `__ - in OpenVINO Runtime Python API. +* `ov::Model::reshape `__ - in OpenVINO Runtime C++ API +* `Model.reshape `__ - in OpenVINO Runtime Python API. * :doc:`Dynamic Shapes ` * :doc:`OpenVINO samples <../../learn-openvino/openvino-samples>` * :doc:`Preprocessing API ` diff --git a/docs/articles_en/openvino-workflow/running-inference/dynamic-shapes.rst b/docs/articles_en/openvino-workflow/running-inference/dynamic-shapes.rst index b9978f3767562e..64c86a0635fa80 100644 --- a/docs/articles_en/openvino-workflow/running-inference/dynamic-shapes.rst +++ b/docs/articles_en/openvino-workflow/running-inference/dynamic-shapes.rst @@ -175,7 +175,7 @@ The lower and/or upper bounds of a dynamic dimension can also be specified. They .. tab-item:: C :sync: c - The dimension bounds can be coded as arguments for `ov_dimension `__, as shown in these examples: + The dimension bounds can be coded as arguments for `ov_dimension `__, as shown in these examples: .. doxygensnippet:: docs/articles_en/assets/snippets/ov_dynamic_shapes.c :language: cpp diff --git a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device/remote-tensor-api-gpu-plugin.rst b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device/remote-tensor-api-gpu-plugin.rst index ce243dbd87f9ae..37c9261085b968 100644 --- a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device/remote-tensor-api-gpu-plugin.rst +++ b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/gpu-device/remote-tensor-api-gpu-plugin.rst @@ -669,6 +669,6 @@ To see pseudo-code of usage examples, refer to the sections below. See Also ####################################### -* `ov::Core `__ -* `ov::RemoteTensor `__ +* `ov::Core `__ +* `ov::RemoteTensor `__ diff --git a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.rst b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.rst index e641192ae9bd0e..3a644adb158522 100644 --- a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.rst +++ b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/hetero-execution.rst @@ -16,8 +16,8 @@ Its purpose is to: Execution via the heterogeneous mode can be divided into two independent steps: -1. Setting hardware affinity to operations (`ov::Core::query_model `__ is used internally by the Hetero device). -2. Compiling a model to the Heterogeneous device assumes splitting the model to parts, compiling them on the specified devices (via `ov::device::priorities `__), and executing them in the Heterogeneous mode. The model is split to subgraphs in accordance with the affinities, where a set of connected operations with the same affinity is to be a dedicated subgraph. Each subgraph is compiled on a dedicated device and multiple `ov::CompiledModel `__ objects are made, which are connected via automatically allocated intermediate tensors. +1. 
Setting hardware affinity to operations (`ov::Core::query_model `__ is used internally by the Hetero device). +2. Compiling a model to the Heterogeneous device assumes splitting the model to parts, compiling them on the specified devices (via `ov::device::priorities `__), and executing them in the Heterogeneous mode. The model is split to subgraphs in accordance with the affinities, where a set of connected operations with the same affinity is to be a dedicated subgraph. Each subgraph is compiled on a dedicated device and multiple `ov::CompiledModel `__ objects are made, which are connected via automatically allocated intermediate tensors. If you set pipeline parallelism (via ``ov::hint::model_distribution_policy``), the model is split into multiple stages, and each stage is assigned to a different device. The output of one stage is fed as input to the next stage. @@ -51,7 +51,7 @@ Manual and Automatic Modes for Assigning Affinities The Manual Mode +++++++++++++++++++++ -It assumes setting affinities explicitly for all operations in the model using `ov::Node::get_rt_info `__ with the ``"affinity"`` key. +It assumes setting affinities explicitly for all operations in the model using `ov::Node::get_rt_info `__ with the ``"affinity"`` key. If you assign specific operation to a specific device, make sure that the device actually supports the operation. Randomly selecting operations and setting affinities may lead to decrease in model accuracy. To avoid that, try to set the related operations or subgraphs of this operation to the same affinity, such as the constant operation that will be folded into this operation. @@ -158,12 +158,12 @@ Importantly, the automatic mode will not work if any operation in a model has it .. note:: - `ov::Core::query_model `__ does not depend on affinities set by a user. Instead, it queries for an operation support based on device capabilities. + `ov::Core::query_model `__ does not depend on affinities set by a user. Instead, it queries for an operation support based on device capabilities. Configure fallback devices ########################## -If you want different devices in Hetero execution to have different device-specific configuration options, you can use the special helper property `ov::device::properties `__: +If you want different devices in Hetero execution to have different device-specific configuration options, you can use the special helper property `ov::device::properties `__: .. 
tab-set:: diff --git a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/npu-device/remote-tensor-api-npu-plugin.rst b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/npu-device/remote-tensor-api-npu-plugin.rst index c960a57124a28a..dafab4ecc980b7 100644 --- a/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/npu-device/remote-tensor-api-npu-plugin.rst +++ b/docs/articles_en/openvino-workflow/running-inference/inference-devices-and-modes/npu-device/remote-tensor-api-npu-plugin.rst @@ -130,6 +130,6 @@ For possible low-level properties and their description, refer to the header fil Additional Resources #################### -* `ov::Core `__ -* `ov::RemoteTensor `__ +* `ov::Core `__ +* `ov::RemoteTensor `__ diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview.rst index 1562165916e576..df4b47c44688c6 100644 --- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview.rst +++ b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/layout-api-overview.rst @@ -22,7 +22,7 @@ Below is a list of cases where input/output layout is important: * Doing the same operations as used during the model conversion phase. For more information, refer to the: * :doc:`Convert to OpenVINO <../../../model-preparation/convert-model-to-ir>` - * `OpenVINO Model Conversion Tutorial `__ + * `OpenVINO Model Conversion Tutorial `__ * Improving the readability of a model input and output. diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details/integrate-save-preprocessing-use-case.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details/integrate-save-preprocessing-use-case.rst index 45c25b2bce21a0..fe42e1d735174d 100644 --- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details/integrate-save-preprocessing-use-case.rst +++ b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimize-preprocessing/preprocessing-api-details/integrate-save-preprocessing-use-case.rst @@ -102,7 +102,7 @@ Additional Resources * :doc:`Layout API overview <../layout-api-overview>` * :doc:`Model Caching Overview <../../optimizing-latency/model-caching-overview>` * :doc:`Model Preparation <../../../../model-preparation>` -* The `ov::preprocess::PrePostProcessor `__ C++ class documentation -* The `ov::pass::Serialize `__ - pass to serialize model to XML/BIN +* The `ov::preprocess::PrePostProcessor `__ C++ class documentation +* The `ov::pass::Serialize `__ - pass to serialize model to XML/BIN * The ``ov::set_batch`` - update batch dimension for a given model diff --git a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst index b1b6da190a0192..d9ff757befd77f 100644 --- a/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst +++ 
b/docs/articles_en/openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview.rst @@ -14,13 +14,13 @@ a common workflow consists of the following steps: 1. | **Create a Core object**: | First step to manage available devices and read model objects 2. | **Read the Intermediate Representation**: - | Read an Intermediate Representation file into the `ov::Model `__ object + | Read an Intermediate Representation file into the `ov::Model `__ object 3. | **Prepare inputs and outputs**: | If needed, manipulate precision, memory layout, size or color format 4. | **Set configuration**: | Add device-specific loading configurations to the device 5. | **Compile and Load Network to device**: - | Use the `ov::Core::compile_model() `__ method with a specific device + | Use the `ov::Core::compile_model() `__ method with a specific device 6. | **Set input data**: | Specify input tensor 7. | **Execute**: diff --git a/docs/articles_en/openvino-workflow/running-inference/stateful-models/obtaining-stateful-openvino-model.rst b/docs/articles_en/openvino-workflow/running-inference/stateful-models/obtaining-stateful-openvino-model.rst index 0ad6530cb61188..48e9671f3b6d99 100644 --- a/docs/articles_en/openvino-workflow/running-inference/stateful-models/obtaining-stateful-openvino-model.rst +++ b/docs/articles_en/openvino-workflow/running-inference/stateful-models/obtaining-stateful-openvino-model.rst @@ -257,7 +257,7 @@ To apply LowLatency2 Transformation, follow the instruction below: In such a case, trim non-reshapable layers via :doc:`Conversion Parameters <../../model-preparation/conversion-parameters>`: - ``--input`` and ``--output``. For example, check the `OpenVINO Model Conversion Tutorial `__. + ``--input`` and ``--output``. For example, check the `OpenVINO Model Conversion Tutorial `__. As for the parameter and the problematic constant in the picture above, it can be trimmed by using the ``--input Reshape_layer_name`` command-line option. The problematic diff --git a/docs/articles_en/openvino-workflow/torch-compile.rst b/docs/articles_en/openvino-workflow/torch-compile.rst index d398704a819edc..22cdffbc0ad589 100644 --- a/docs/articles_en/openvino-workflow/torch-compile.rst +++ b/docs/articles_en/openvino-workflow/torch-compile.rst @@ -334,7 +334,7 @@ Model Quantization and Weights Compression Model quantization and weights compression are effective methods for accelerating model inference and reducing memory consumption, with minimal impact on model accuracy. The `torch.compile` OpenVINO backend supports two key model optimization APIs: -1. Neural Network Compression Framework (`NNCF `__). NNCF offers advanced algorithms for post-training quantization and weights compression in the OpenVINO toolkit. +1. Neural Network Compression Framework (`NNCF `__). NNCF offers advanced algorithms for post-training quantization and weights compression in the OpenVINO toolkit. 2. PyTorch 2 export quantization. A general-purpose API designed for quantizing models captured by ``torch.export``. @@ -344,7 +344,7 @@ NNCF is the recommended approach for model quantization and weights compression. 
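For context, both optimization paths sit on top of the same baseline ``torch.compile`` usage (a sketch with a toy model standing in for a real network):

```python
import torch
import openvino.torch  # noqa: F401 -- registers the "openvino" backend

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.ReLU(),
)

# Supported subgraphs are JIT-compiled into OpenVINO kernels
compiled = torch.compile(model, backend="openvino")
out = compiled(torch.randn(1, 3, 224, 224))
```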
NNCF Model Optimization Support (Preview) +++++++++++++++++++++++++++++++++++++++++++++ -The Neural Network Compression Framework (`NNCF `__) implements advanced quantization and weights compression algorithms, which can be applied to ``torch.fx.GraphModule`` to speed up inference +The Neural Network Compression Framework (`NNCF `__) implements advanced quantization and weights compression algorithms, which can be applied to ``torch.fx.GraphModule`` to speed up inference and decrease memory consumption. Model quantization example: @@ -381,7 +381,7 @@ Model weights compression example: NNCF unlocks the full potential of low-precision OpenVINO kernels due to the placement of quantizers designed specifically for the OpenVINO. Advanced algorithms like ``SmoothQuant`` or ``BiasCorrection`` allow further metrics improvement while minimizing the outputs discrepancies between the original and compressed models. -For further details, please see the `documentation `__ +For further details, please see the `documentation `__ and a `tutorial `__. Support for PyTorch 2 export quantization (Preview) diff --git a/docs/dev/cmake_options_for_custom_compilation.md b/docs/dev/cmake_options_for_custom_compilation.md index 5ace401ce091c6..bf4a78975a7e93 100644 --- a/docs/dev/cmake_options_for_custom_compilation.md +++ b/docs/dev/cmake_options_for_custom_compilation.md @@ -186,7 +186,7 @@ In this case OpenVINO CMake scripts take `TBBROOT` environment variable into acc [pugixml]:https://pugixml.org/ [ONNX]:https://onnx.ai/ [protobuf]:https://github.com/protocolbuffers/protobuf -[OpenVINO Runtime Introduction]:https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application.html +[OpenVINO Runtime Introduction]:https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application.html [PDPD]:https://github.com/PaddlePaddle/Paddle [TensorFlow]:https://www.tensorflow.org/ [TensorFlow Lite]:https://www.tensorflow.org/lite diff --git a/docs/dev/debug_capabilities.md b/docs/dev/debug_capabilities.md index 033de450d7ffc7..13c63dfb2f4877 100644 --- a/docs/dev/debug_capabilities.md +++ b/docs/dev/debug_capabilities.md @@ -2,7 +2,7 @@ OpenVINO components provides different debug capabilities, to get more information please read: -* [OpenVINO Model Debug Capabilities](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html#model-debugging-capabilities) +* [OpenVINO Model Debug Capabilities](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html#model-debugging-capabilities) * [OpenVINO Pass Manager Debug Capabilities](#todo) ## See also diff --git a/docs/dev/index.md b/docs/dev/index.md index cef96f4aa1003e..04d8cdd2e58f06 100644 --- a/docs/dev/index.md +++ b/docs/dev/index.md @@ -102,7 +102,7 @@ The OpenVINO Repository includes the following components. Click on the componen OpenVINO Components include: - * [OpenVINO™ Runtime](https://docs.openvino.ai/2024/openvino-workflow/running-inference.html) - is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. + * [OpenVINO™ Runtime](https://docs.openvino.ai/2025/openvino-workflow/running-inference.html) - is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. 
* [core](../../src/core) - provides the base API for model representation and modification. * [inference](../../src/inference) - provides an API to infer models on the device. * [transformations](../../src/common/transformations) - contains the set of common transformations which are used in OpenVINO plugins. @@ -110,9 +110,9 @@ OpenVINO Components include: * [bindings](../../src/bindings) - contains all available OpenVINO bindings which are maintained by the OpenVINO team. * [c](../../src/bindings/c) - C API for OpenVINO™ Runtime * [python](../../src/bindings/python) - Python API for OpenVINO™ Runtime -* [Plugins](../../src/plugins) - contains OpenVINO plugins which are maintained in open-source by the OpenVINO team. For more information, take a look at the [list of supported devices](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html). +* [Plugins](../../src/plugins) - contains OpenVINO plugins which are maintained in open-source by the OpenVINO team. For more information, take a look at the [list of supported devices](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html). * [Frontends](../../src/frontends) - contains available OpenVINO frontends that allow reading models from the native framework format. -* [OpenVINO Model Converter (OVC)](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html) - is a cross-platform command-line tool that facilitates the transition between training and deployment environments, and adjusts deep learning models for optimal execution on end-point target devices. +* [OpenVINO Model Converter (OVC)](https://docs.openvino.ai/2025/openvino-workflow/model-preparation.html) - is a cross-platform command-line tool that facilitates the transition between training and deployment environments, and adjusts deep learning models for optimal execution on end-point target devices. * [Samples](https://github.com/openvinotoolkit/openvino/tree/master/samples) - applications in C, C++ and Python languages that show basic OpenVINO use cases. 
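As a small illustration of the conversion step OVC performs, its Python-API equivalent looks roughly like this (the ONNX path is a placeholder):

```python
import openvino as ov

# Convert a framework model; TensorFlow, PaddlePaddle, etc. work the same way
ov_model = ov.convert_model("model.onnx")

# Save as OpenVINO IR (model.xml + model.bin) for deployment
ov.save_model(ov_model, "model.xml")
```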
#### OpenVINO Component Structure diff --git a/docs/openvino_sphinx_theme/openvino_sphinx_theme/templates/navbar-nav.html b/docs/openvino_sphinx_theme/openvino_sphinx_theme/templates/navbar-nav.html index 47c024f91fc2c0..324a174efd174e 100644 --- a/docs/openvino_sphinx_theme/openvino_sphinx_theme/templates/navbar-nav.html +++ b/docs/openvino_sphinx_theme/openvino_sphinx_theme/templates/navbar-nav.html @@ -40,7 +40,7 @@ diff --git a/docs/snippets/src/main.cpp b/docs/snippets/src/main.cpp index 4c51cf46f75b5a..815222a32fe4da 100644 --- a/docs/snippets/src/main.cpp +++ b/docs/snippets/src/main.cpp @@ -42,7 +42,7 @@ ov::CompiledModel compiled_model = core.compile_model("model.tflite", "AUTO"); auto create_model = []() { std::shared_ptr model; // To construct a model, please follow - // https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html + // https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html return model; }; std::shared_ptr model = create_model(); diff --git a/docs/snippets/src/main.py b/docs/snippets/src/main.py index 5d5b39dbb1177c..d17f9350f9936e 100644 --- a/docs/snippets/src/main.py +++ b/docs/snippets/src/main.py @@ -30,7 +30,7 @@ def create_model(): # This example shows how to create ov::Function # # To construct a model, please follow - # https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html + # https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html data = ov.opset8.parameter([3, 1, 2], ov.Type.f32) res = ov.opset8.result(data) return ov.Model([res], [data], "model") diff --git a/docs/sphinx_setup/_static/html/footer.html b/docs/sphinx_setup/_static/html/footer.html index 75e046fa72fefc..cb9b07a426875a 100644 --- a/docs/sphinx_setup/_static/html/footer.html +++ b/docs/sphinx_setup/_static/html/footer.html @@ -110,13 +110,13 @@
    diff --git a/docs/sphinx_setup/conf.py b/docs/sphinx_setup/conf.py index ffe00effa610b4..69f0ea93a31d30 100644 --- a/docs/sphinx_setup/conf.py +++ b/docs/sphinx_setup/conf.py @@ -48,7 +48,7 @@ except ImportError: autodoc_mock_imports.append("openvino_genai") - + breathe_projects = { "openvino": "../xml/" } @@ -66,7 +66,7 @@ } -# html_baseurl = 'https://docs.openvino.ai/2024/' +# html_baseurl = 'https://docs.openvino.ai/2025/' # -- Sitemap configuration --------------------------------------------------- diff --git a/docs/sphinx_setup/index.rst b/docs/sphinx_setup/index.rst index b4e1039248f3a0..7fcedfa29666b9 100644 --- a/docs/sphinx_setup/index.rst +++ b/docs/sphinx_setup/index.rst @@ -11,8 +11,8 @@ generative AI, video, audio, and language with models from popular frameworks li TensorFlow, ONNX, and more. Convert and optimize models, and deploy across a mix of Intel® hardware and environments, on-premises and on-device, in the browser or in the cloud. -| Check out the `OpenVINO Cheat Sheet [PDF] `__ -| Check out the `GenAI Quick-start Guide [PDF] `__ +| Check out the `OpenVINO Cheat Sheet [PDF] `__ +| Check out the `GenAI Quick-start Guide [PDF] `__ .. container:: @@ -28,7 +28,7 @@ hardware and environments, on-premises and on-device, in the browser or in the c
  • New GenAI API

    Generative AI in only a few lines of code!

    - Check out our guide + Check out our guide
  • OpenVINO models on Hugging Face!

    @@ -38,12 +38,12 @@ hardware and environments, on-premises and on-device, in the browser or in the c
  • Improved model serving

    OpenVINO Model Server has improved parallel inferencing!

    - Learn more + Learn more
  • OpenVINO via PyTorch 2.0 torch.compile()

    Use OpenVINO directly in PyTorch-native applications!

    - Learn more + Learn more
diff --git a/samples/c/hello_classification/README.md b/samples/c/hello_classification/README.md index f5d34e5d6820e6..06add9b48fc80b 100644 --- a/samples/c/hello_classification/README.md +++ b/samples/c/hello_classification/README.md @@ -2,7 +2,7 @@ This sample demonstrates how to execute an inference of image classification networks like AlexNet and GoogLeNet using Synchronous Inference Request API and input auto-resize feature. -For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html) +For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html) ## Requirements @@ -10,9 +10,9 @@ For more detailed information on how this sample works, check the dedicated [art | ---------------------------| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Model Format | OpenVINO Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx) | | Validated images | The sample uses OpenCV\* to [read input image](https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) (\*.bmp, \*.png) | -| Supported devices | [All](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html) | -| Other language realization | [C++](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html), | -| | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html) | +| Supported devices | [All](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html) | +| Other language realization | [C++](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html), | +| | [Python](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html) | Hello Classification C sample application demonstrates how to use the C API from OpenVINO in applications. diff --git a/samples/c/hello_nv12_input_classification/README.md b/samples/c/hello_nv12_input_classification/README.md index a282060c8f9d2b..0c73f957c2b51a 100644 --- a/samples/c/hello_nv12_input_classification/README.md +++ b/samples/c/hello_nv12_input_classification/README.md @@ -4,7 +4,7 @@ This sample demonstrates how to execute an inference of image classification net Hello NV12 Input Classification C Sample demonstrates how to use the NV12 automatic input pre-processing API in your applications. 
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-nv12-input-classification.html) +For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-nv12-input-classification.html) ## Requirements @@ -12,8 +12,8 @@ For more detailed information on how this sample works, check the dedicated [art | ----------------------------| ---------------------------------------------------------------------------------------------------------------------| | Model Format | OpenVINO Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx) | | Validated images | An uncompressed image in the NV12 color format - \*.yuv | -| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) | -| Other language realization | [C++](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-nv12-input-classification.html) | +| Supported devices | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) | +| Other language realization | [C++](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-nv12-input-classification.html) | The following C++ API is used in the application: @@ -27,6 +27,6 @@ The following C++ API is used in the application: | | ``ov_preprocess_preprocess_steps_convert_color`` | | -Basic OpenVINO API is covered by [Hello Classification C sample](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html). +Basic OpenVINO API is covered by [Hello Classification C sample](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html). diff --git a/samples/cpp/benchmark/sync_benchmark/README.md b/samples/cpp/benchmark/sync_benchmark/README.md index 7cbc0f26624fa6..c28387f7716b84 100644 --- a/samples/cpp/benchmark/sync_benchmark/README.md +++ b/samples/cpp/benchmark/sync_benchmark/README.md @@ -2,7 +2,7 @@ This sample demonstrates how to estimate performance of a model using Synchronous Inference Request API. It makes sense to use synchronous inference only in latency oriented scenarios. Models with static input shapes are supported. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos) this sample doesn't have other configurable command line arguments. Feel free to modify sample's source code to try out different options. 
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html) +For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/sync-benchmark.html) ## Requirements @@ -12,8 +12,8 @@ For more detailed information on how this sample works, check the dedicated [art | | [face-detection-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0200) | | Model Format | OpenVINO™ toolkit Intermediate Representation | | | (\*.xml + \*.bin), ONNX (\*.onnx) | -| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) | -| Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html) | +| Supported devices | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) | +| Other language realization | [Python](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/sync-benchmark.html) | The following C++ API is used in the application: diff --git a/samples/cpp/benchmark/throughput_benchmark/README.md b/samples/cpp/benchmark/throughput_benchmark/README.md index bf8e7e6c8b6291..df2d7e23fecddc 100644 --- a/samples/cpp/benchmark/throughput_benchmark/README.md +++ b/samples/cpp/benchmark/throughput_benchmark/README.md @@ -2,9 +2,9 @@ This sample demonstrates how to estimate performance of a model using Asynchronous Inference Request API in throughput mode. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos) this sample doesn't have other configurable command line arguments. Feel free to modify sample's source code to try out different options. -The reported results may deviate from what [benchmark_app](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) reports. One example is model input precision for computer vision tasks. benchmark_app sets ``uint8``, while the sample uses default model precision which is usually ``float32``. +The reported results may deviate from what [benchmark_app](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/benchmark-tool.html) reports. One example is model input precision for computer vision tasks. benchmark_app sets ``uint8``, while the sample uses default model precision which is usually ``float32``. 
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/throughput-benchmark.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/throughput-benchmark.html)

## Requirements

@@ -14,8 +14,8 @@ For more detailed information on how this sample works, check the dedicated [art
|                             | [face-detection-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0200) |
| Model Format                | OpenVINO™ toolkit Intermediate Representation                                                                         |
|                             | (\*.xml + \*.bin), ONNX (\*.onnx)                                                                                     |
-| Supported devices           | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html)                  |
-| Other language realization  | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/throughput-benchmark.html)                     |
+| Supported devices           | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html)                  |
+| Other language realization  | [Python](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/throughput-benchmark.html)                     |

The following C++ API is used in the application:

diff --git a/samples/cpp/benchmark_app/README.md b/samples/cpp/benchmark_app/README.md
index 1f9ad9d2c2eb4a..47f96fe7d24a27 100644
--- a/samples/cpp/benchmark_app/README.md
+++ b/samples/cpp/benchmark_app/README.md
@@ -2,14 +2,14 @@
This page demonstrates how to use the Benchmark C++ Tool to estimate deep learning inference performance on supported devices.

-> **NOTE**: This page describes usage of the C++ implementation of the Benchmark Tool. For the Python implementation, refer to the [Benchmark Python Tool](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) page. The Python version is recommended for benchmarking models that will be used in Python applications, and the C++ version is recommended for benchmarking models that will be used in C++ applications. Both tools have a similar command interface and backend.
+> **NOTE**: This page describes usage of the C++ implementation of the Benchmark Tool. For the Python implementation, refer to the [Benchmark Python Tool](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/benchmark-tool.html) page. The Python version is recommended for benchmarking models that will be used in Python applications, and the C++ version is recommended for benchmarking models that will be used in C++ applications. Both tools have a similar command interface and backend.

-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/benchmark-tool.html)

## Requirements

-To use the C++ benchmark_app, you must first build it following the [Build the Sample Applications](https://docs.openvino.ai/2024/learn-openvino/openvino-samples.html) instructions and then set up paths and environment variables by following the [Get Ready for Running the Sample Applications](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/get-started-demos.html) instructions. Navigate to the directory where the benchmark_app C++ sample binary was built.
+To use the C++ benchmark_app, you must first build it following the [Build the Sample Applications](https://docs.openvino.ai/2025/learn-openvino/openvino-samples.html) instructions and then set up paths and environment variables by following the [Get Ready for Running the Sample Applications](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/get-started-demos.html) instructions. Navigate to the directory where the benchmark_app C++ sample binary was built.

-> **NOTE**: If you installed OpenVINO Runtime using PyPI or Anaconda Cloud, only the [Benchmark Python Tool](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) is available, and you should follow the usage instructions on that page instead.
+> **NOTE**: If you installed OpenVINO Runtime using PyPI or Anaconda Cloud, only the [Benchmark Python Tool](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/benchmark-tool.html) is available, and you should follow the usage instructions on that page instead.

-The benchmarking application works with models in the OpenVINO IR, TensorFlow, TensorFlow Lite, PaddlePaddle, PyTorch and ONNX formats. If you need it, OpenVINO also allows you to [convert your models](https://docs.openvino.ai/2024/documentation/openvino-workflow/model-preparation/convert-model-to-ir.html).
+The benchmarking application works with models in the OpenVINO IR, TensorFlow, TensorFlow Lite, PaddlePaddle, PyTorch and ONNX formats. If needed, OpenVINO also allows you to [convert your models](https://docs.openvino.ai/2025/documentation/openvino-workflow/model-preparation/convert-model-to-ir.html).

diff --git a/samples/cpp/classification_sample_async/README.md b/samples/cpp/classification_sample_async/README.md
index d0b73ec70810e7..df57f5a1a631b2 100644
--- a/samples/cpp/classification_sample_async/README.md
+++ b/samples/cpp/classification_sample_async/README.md
@@ -6,15 +6,15 @@
Models with only one input and output are supported.

In addition to regular images, the sample also supports single-channel ``ubyte`` images as an input for the LeNet model.
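For reference, a minimal sketch of the same asynchronous pattern, written against the `AsyncInferQueue` API used by the Python counterpart of this sample (the `model.xml` path and the zeroed input batch are placeholders):

```python
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")  # hypothetical model path
batch = np.zeros((8, 1, 3, 224, 224), dtype=np.float32)  # stand-in images

# A pool of 4 infer requests; results are delivered through the callback.
queue = ov.AsyncInferQueue(compiled, 4)
queue.set_callback(
    lambda request, userdata: print(userdata, request.get_output_tensor(0).data.argmax())
)

for i, image in enumerate(batch):
    queue.start_async({0: image}, userdata=i)
queue.wait_all()  # block until every request has completed
```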
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/image-classification-async.html) +For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/image-classification-async.html) ## Requirements | Options | Values | | ---------------------------| -------------------------------------------------------------------------------------------------------------------------------------| | Model Format | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx) | -| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) | -| Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/image-classification-async.html) | +| Supported devices | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) | +| Other language realization | [Python](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/image-classification-async.html) | The following C++ API is used in the application: diff --git a/samples/cpp/hello_classification/README.md b/samples/cpp/hello_classification/README.md index 88dc153a25b1e1..88e41818bc7c6f 100644 --- a/samples/cpp/hello_classification/README.md +++ b/samples/cpp/hello_classification/README.md @@ -4,15 +4,15 @@ This sample demonstrates how to do inference of image classification models usin Models with only one input and output are supported. -For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html) +For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html) ## Requirements | Options | Values | | ----------------------------| ------------------------------------------------------------------------------------------------------------------------------| | Model Format | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx) | -| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) | -| Other language realization | [Python, C](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html), | +| Supported devices | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) | +| Other language realization | [Python, C](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html), | The following C++ API is used in the application: diff --git a/samples/cpp/hello_nv12_input_classification/README.md b/samples/cpp/hello_nv12_input_classification/README.md index 89d1b96a6b28c5..df13d52d6a0f0c 100644 --- a/samples/cpp/hello_nv12_input_classification/README.md +++ b/samples/cpp/hello_nv12_input_classification/README.md @@ -2,7 +2,7 @@ This sample demonstrates how to execute an inference of image classification models with images in NV12 color format using Synchronous Inference Request API. 
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-nv12-input-classification.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-nv12-input-classification.html)

## Requirements

@@ -10,8 +10,8 @@ For more detailed information on how this sample works, check the dedicated [art
| ----------------------------| --------------------------------------------------------------------------------------------------------------------------------|
| Model Format                | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)                                                   |
| Validated images            | An uncompressed image in the NV12 color format - \*.yuv                                                                           |
-| Supported devices           | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html)                              |
-| Other language realization  | [C](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-nv12-input-classification.html)                           |
+| Supported devices           | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html)                              |
+| Other language realization  | [C](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-nv12-input-classification.html)                           |

The following C++ API is used in the application:

@@ -26,5 +26,5 @@ The following C++ API is used in the application:
|                                  | ``ov::preprocess::PreProcessSteps::convert_color``     |                                                        |

-Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html).
+Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html).

diff --git a/samples/cpp/hello_query_device/README.md b/samples/cpp/hello_query_device/README.md
index 4f70ed075f5874..1ee005cdce3075 100644
--- a/samples/cpp/hello_query_device/README.md
+++ b/samples/cpp/hello_query_device/README.md
@@ -1,15 +1,15 @@
# Hello Query Device C++ Sample

-This sample demonstrates how to execute an query OpenVINO™ Runtime devices, prints their metrics and default configuration values, using [Properties API](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/query-device-properties.html).
+This sample demonstrates how to query OpenVINO™ Runtime devices and print their metrics and default configuration values using the [Properties API](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/query-device-properties.html).
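The core of the sample boils down to iterating over the available devices and reading their properties. A rough Python equivalent, shown for brevity (the C++ sample uses `ov::Core::get_available_devices` and `ov::Core::get_property` the same way):

```python
import openvino as ov

core = ov.Core()
for device in core.available_devices:
    # FULL_DEVICE_NAME is one of the standard read-only device properties.
    print(device, "-", core.get_property(device, "FULL_DEVICE_NAME"))
```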
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-query-device.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-query-device.html)

## Requirements

| Options                        | Values                                                                                                                         |
| ------------------------------| ----------------------------------------------------------------------------------------------------------------------------|
-| Supported devices              | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html)                          |
-| Other language realization     | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-query-device.html)                               |
+| Supported devices              | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html)                          |
+| Other language realization     | [Python](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-query-device.html)                               |

The following C++ API is used in the application:

@@ -18,4 +18,4 @@ The following C++ API is used in the application:
| Available Devices | ``ov::Core::get_available_devices``, | Get available devices information and configuration for inference |
|                   | ``ov::Core::get_property``           |                                                                    |

-Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html).
+Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html).

diff --git a/samples/cpp/hello_reshape_ssd/README.md b/samples/cpp/hello_reshape_ssd/README.md
index 1359b07fdf27b5..e90730fb6f9129 100644
--- a/samples/cpp/hello_reshape_ssd/README.md
+++ b/samples/cpp/hello_reshape_ssd/README.md
@@ -1,9 +1,9 @@
# Hello Reshape SSD C++ Sample

-This sample demonstrates how to do synchronous inference of object detection models using [input reshape feature](https://docs.openvino.ai/2024/openvino-workflow/running-inference/changing-input-shape.html).
+This sample demonstrates how to do synchronous inference of object detection models using the [input reshape feature](https://docs.openvino.ai/2025/openvino-workflow/running-inference/changing-input-shape.html).

Models with only one input and output are supported.
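The reshape step itself is a one-liner. A minimal Python sketch of the idea (the model path and the target shape are placeholders; the C++ sample calls `ov::Model::reshape` the same way):

```python
import openvino as ov

core = ov.Core()
model = core.read_model("ssd.xml")  # hypothetical model path

# Resize the model's single input to the actual image size before compiling.
model.reshape([1, 3, 544, 544])  # N, C, H, W placeholders
compiled = core.compile_model(model, "CPU")
```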
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-reshape-ssd.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-reshape-ssd.html)

## Requirements

@@ -11,8 +11,8 @@ For more detailed information on how this sample works, check the dedicated [art
| ----------------------------| -----------------------------------------------------------------------------------------------------------------------------------------|
| Validated Models            | [person-detection-retail-0013](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-detection-retail-0013)   |
| Model Format                | OpenVINO™ toolkit Intermediate Representation (\*.xml + \*.bin), ONNX (\*.onnx)                                                            |
-| Supported devices           | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html)                                      |
-| Other language realization  | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-reshape-ssd.html)                                            |
+| Supported devices           | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html)                                      |
+| Other language realization  | [Python](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-reshape-ssd.html)                                            |

The following C++ API is used in the application:

@@ -29,4 +29,4 @@ The following C++ API is used in the application:
|                      | ``ov::preprocess::PreProcessSteps::convert_layout``     |                                                        |

-Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html).
+Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html).

diff --git a/samples/cpp/model_creation_sample/README.md b/samples/cpp/model_creation_sample/README.md
index ec17ab736de543..2bd62e5522657d 100644
--- a/samples/cpp/model_creation_sample/README.md
+++ b/samples/cpp/model_creation_sample/README.md
@@ -1,10 +1,10 @@
# Model Creation C++ Sample

-This sample demonstrates how to execute an synchronous inference using [model](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html) built on the fly which uses weights from LeNet classification model, which is known to work well on digit classification tasks.
+This sample demonstrates how to execute a synchronous inference using a [model](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks.

You do not need an XML file to create a model. The API of ov::Model allows creating a model on the fly from the source code.
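As an illustration of the idea, a toy model can be assembled entirely in code with the opset builders. A minimal Python sketch (the layer sizes are arbitrary placeholders, not the LeNet topology the sample actually builds):

```python
import numpy as np
import openvino as ov
import openvino.runtime.opset8 as ops

# Assemble a tiny conv + relu model in code -- no XML file involved.
param = ops.parameter([1, 3, 32, 32], np.float32, name="data")
weights = ops.constant(np.ones((8, 3, 3, 3), dtype=np.float32))
conv = ops.convolution(param, weights, strides=[1, 1],
                       pads_begin=[1, 1], pads_end=[1, 1], dilations=[1, 1])
relu = ops.relu(conv)
model = ov.Model([relu], [param], "model_on_the_fly")
```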
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/model-creation.html) +For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/model-creation.html) ## Requirements @@ -13,8 +13,8 @@ For more detailed information on how this sample works, check the dedicated [art | Validated Models | LeNet | | Model Format | model weights file (\*.bin) | | Validated images | single-channel ``MNIST ubyte`` images | -| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) | -| Other language realization | [Python](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/model-creation.html) | +| Supported devices | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) | +| Other language realization | [Python](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/model-creation.html) | The following C++ API is used in the application: @@ -43,4 +43,4 @@ The following C++ API is used in the application: | | ``ov::Model``, | | | | ``ov::ParameterVector::vector`` | | -Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html). +Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html). diff --git a/samples/js/node/notebooks/vision-background-removal.nnb b/samples/js/node/notebooks/vision-background-removal.nnb index fea5ba9fc5dd68..48cebf48770356 100644 --- a/samples/js/node/notebooks/vision-background-removal.nnb +++ b/samples/js/node/notebooks/vision-background-removal.nnb @@ -38,7 +38,7 @@ { "language": "typescript", "source": [ - "// Details about this normalization:\n// https://docs.openvino.ai/2024/notebooks/vision-background-removal-with-output.html#load-and-pre-process-input-image\nfunction normalizeImage(imageData, width, height) {\n // Mean and scale values\n const inputMean = [123.675, 116.28, 103.53];\n const inputScale = [58.395, 57.12, 57.375];\n\n const normalizedData = new Float32Array(imageData.length);\n const channels = 3;\n\n for (let i = 0; i < height; i++) {\n for (let j = 0; j < width; j++) {\n for (let c = 0; c < channels; c++) {\n const index = i * width * channels + j * channels + c;\n\n normalizedData[index] =\n (imageData[index] - inputMean[c]) / inputScale[c];\n }\n }\n }\n\n return normalizedData;\n}" + "// Details about this normalization:\n// https://docs.openvino.ai/2025/notebooks/vision-background-removal-with-output.html#load-and-pre-process-input-image\nfunction normalizeImage(imageData, width, height) {\n // Mean and scale values\n const inputMean = [123.675, 116.28, 103.53];\n const inputScale = [58.395, 57.12, 57.375];\n\n const normalizedData = new Float32Array(imageData.length);\n const channels = 3;\n\n for (let i = 0; i < height; i++) {\n for (let j = 0; j < width; j++) {\n for (let c = 0; c < channels; c++) {\n const index = i * width * channels + j * channels + c;\n\n normalizedData[index] =\n (imageData[index] - inputMean[c]) / inputScale[c];\n }\n }\n }\n\n return normalizedData;\n}" ], "outputs": [] }, diff --git a/samples/js/node/vision_background_removal/vision_background_removal.js 
b/samples/js/node/vision_background_removal/vision_background_removal.js
index aabaad1a65b80c..e8687b8abd3bd8 100644
--- a/samples/js/node/vision_background_removal/vision_background_removal.js
+++ b/samples/js/node/vision_background_removal/vision_background_removal.js
@@ -112,7 +112,7 @@ async function main(
 }

 // Details about this normalization:
-// https://docs.openvino.ai/2024/notebooks/vision-background-removal-with-output.html#load-and-pre-process-input-image
+// https://docs.openvino.ai/2025/notebooks/vision-background-removal-with-output.html#load-and-pre-process-input-image
 function normalizeImage(imageData, width, height) {
   // Mean and scale values
   const inputMean = [123.675, 116.28, 103.53];

diff --git a/samples/python/benchmark/bert_benchmark/README.md b/samples/python/benchmark/bert_benchmark/README.md
index 423ba9b9188ffc..b5987b8e417dcb 100644
--- a/samples/python/benchmark/bert_benchmark/README.md
+++ b/samples/python/benchmark/bert_benchmark/README.md
@@ -2,7 +2,7 @@
This sample demonstrates how to estimate performance of a Bert model using Asynchronous Inference Request API. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos), this sample doesn't have configurable command line arguments. Feel free to modify the sample's source code to try out different options.

-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/bert-benchmark.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/bert-benchmark.html)

The sample downloads a model and a tokenizer, exports the model to ONNX, reads the exported model and reshapes it to enforce dynamic input shapes, compiles the resulting model, downloads a dataset, and runs benchmarking on the dataset.

diff --git a/samples/python/benchmark/sync_benchmark/README.md b/samples/python/benchmark/sync_benchmark/README.md
index e620ca60512c6f..f38084b2e6e862 100644
--- a/samples/python/benchmark/sync_benchmark/README.md
+++ b/samples/python/benchmark/sync_benchmark/README.md
@@ -2,7 +2,7 @@
This sample demonstrates how to estimate performance of a model using Synchronous Inference Request API. It makes sense to use synchronous inference only in latency-oriented scenarios. Models with static input shapes are supported. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos), this sample doesn't have other configurable command line arguments. Feel free to modify the sample's source code to try out different options.
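The measurement loop at the heart of the sample is essentially the following. A simplified sketch with a hypothetical `model.xml` and a zeroed input (the real sample adds warm-up runs and fuller statistics):

```python
import time
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")  # hypothetical model path
request = compiled.create_infer_request()
data = np.zeros(list(compiled.input().shape), dtype=np.float32)  # stand-in input

latencies = []
for _ in range(100):
    start = time.perf_counter()
    request.infer({0: data})  # synchronous call: one inference at a time
    latencies.append((time.perf_counter() - start) * 1000)
print(f"Median latency: {sorted(latencies)[len(latencies) // 2]:.2f} ms")
```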
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/sync-benchmark.html)

## Requirements

@@ -12,8 +12,8 @@ For more detailed information on how this sample works, check the dedicated [art
|                             | [face-detection-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0200) |
| Model Format                | OpenVINO™ toolkit Intermediate Representation                                                                         |
|                             | (\*.xml + \*.bin), ONNX (\*.onnx)                                                                                     |
-| Supported devices           | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html)                  |
-| Other language realization  | [C++](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html)                              |
+| Supported devices           | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html)                  |
+| Other language realization  | [C++](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/sync-benchmark.html)                              |

The following Python API is used in the application:

diff --git a/samples/python/benchmark/throughput_benchmark/README.md b/samples/python/benchmark/throughput_benchmark/README.md
index dea73615570921..54a9a1759b0473 100644
--- a/samples/python/benchmark/throughput_benchmark/README.md
+++ b/samples/python/benchmark/throughput_benchmark/README.md
@@ -2,9 +2,9 @@
This sample demonstrates how to estimate performance of a model using Asynchronous Inference Request API in throughput mode. Unlike [demos](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos), this sample doesn't have other configurable command line arguments. Feel free to modify the sample's source code to try out different options.

-The reported results may deviate from what [benchmark_app](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) reports. One example is model input precision for computer vision tasks. benchmark_app sets uint8, while the sample uses default model precision which is usually float32.
+The reported results may deviate from what [benchmark_app](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/benchmark-tool.html) reports. One example is model input precision for computer vision tasks. benchmark_app sets uint8, while the sample uses the model's default precision, which is usually float32.
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/throughput-benchmark.html)

## Requirements

@@ -14,8 +14,8 @@ For more detailed information on how this sample works, check the dedicated [art
|                             | [face-detection-0200](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/face-detection-0200) |
| Model Format                | OpenVINO™ toolkit Intermediate Representation                                                                         |
|                             | (\*.xml + \*.bin), ONNX (\*.onnx)                                                                                     |
-| Supported devices           | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html)                  |
-| Other language realization  | [C++](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/sync-benchmark.html)                              |
+| Supported devices           | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html)                  |
+| Other language realization  | [C++](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/throughput-benchmark.html)                        |

The following Python API is used in the application:

diff --git a/samples/python/classification_sample_async/README.md b/samples/python/classification_sample_async/README.md
index 6de60ed0a192cd..8f99ffe5f88cf6 100644
--- a/samples/python/classification_sample_async/README.md
+++ b/samples/python/classification_sample_async/README.md
@@ -4,24 +4,24 @@ This sample demonstrates how to do inference of image classification models usin
Models with only 1 input and output are supported.

-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/image-classification-async.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/image-classification-async.html)

## Requirements

| Options                    | Values                                                                                                              |
| ---------------------------| -----------------------------------------------------------------------------------------------------------------|
| Model Format               | OpenVINO™ toolkit Intermediate Representation (.xml + .bin), ONNX (.onnx)                                          |
-| Supported devices          | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html)               |
-| Other language realization | [C++](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/image-classification-async.html)               |
+| Supported devices          | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html)               |
+| Other language realization | [C++](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/image-classification-async.html)               |

The following Python API is used in the application:

| Feature            | API                                                                                                                                                                                                      | Description               |
| -------------------| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|
-| Asynchronous Infer | [openvino.runtime.AsyncInferQueue](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html),                                                                 | Do asynchronous inference |
-|                    | 
[openvino.runtime.AsyncInferQueue.set_callback](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html#openvino.runtime.AsyncInferQueue.set_callback), | | -| | [openvino.runtime.AsyncInferQueue.start_async](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html#openvino.runtime.AsyncInferQueue.start_async), | | -| | [openvino.runtime.AsyncInferQueue.wait_all](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html#openvino.runtime.AsyncInferQueue.wait_all), | | -| | [openvino.runtime.InferRequest.results](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.InferRequest.html#openvino.runtime.InferRequest.results) | | +| Asynchronous Infer | [openvino.runtime.AsyncInferQueue](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html), | Do asynchronous inference | +| | [openvino.runtime.AsyncInferQueue.set_callback](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html#openvino.runtime.AsyncInferQueue.set_callback), | | +| | [openvino.runtime.AsyncInferQueue.start_async](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html#openvino.runtime.AsyncInferQueue.start_async), | | +| | [openvino.runtime.AsyncInferQueue.wait_all](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.AsyncInferQueue.html#openvino.runtime.AsyncInferQueue.wait_all), | | +| | [openvino.runtime.InferRequest.results](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.InferRequest.html#openvino.runtime.InferRequest.results) | | -Basic OpenVINO™ Runtime API is covered by [Hello Classification Python Sample](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html). +Basic OpenVINO™ Runtime API is covered by [Hello Classification Python Sample](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html). diff --git a/samples/python/hello_classification/README.md b/samples/python/hello_classification/README.md index ed3c2c4b3b0bef..4a9e99b3c617b4 100644 --- a/samples/python/hello_classification/README.md +++ b/samples/python/hello_classification/README.md @@ -4,32 +4,32 @@ This sample demonstrates how to do inference of image classification models usin Models with only 1 input and output are supported. 
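Condensed, the flow that the API table below documents looks roughly like this. A sketch assuming OpenCV for image loading and hypothetical `model.xml`/`image.jpg` paths:

```python
import cv2  # assumed only for image loading
import numpy as np
import openvino as ov
from openvino.preprocess import PrePostProcessor, ResizeAlgorithm

core = ov.Core()
model = core.read_model("model.xml")  # hypothetical model path
image = cv2.imread("image.jpg")       # hypothetical image, HWC u8
input_tensor = np.expand_dims(image, 0)  # NHWC

# Feed the original-size image; the runtime resizes and converts layout.
ppp = PrePostProcessor(model)
ppp.input().tensor().set_element_type(ov.Type.u8)
ppp.input().tensor().set_layout(ov.Layout("NHWC"))
ppp.input().tensor().set_spatial_static_shape(image.shape[0], image.shape[1])
ppp.input().preprocess().resize(ResizeAlgorithm.RESIZE_LINEAR)
ppp.input().model().set_layout(ov.Layout("NCHW"))
model = ppp.build()

compiled = core.compile_model(model, "CPU")
results = compiled.infer_new_request({0: input_tensor})
```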
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html) +For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html) ## Requirements | Options | Values | | ----------------------------| ----------------------------------------------------------------------------------------------------------------------| | Model Format | OpenVINO™ toolkit Intermediate Representation (.xml + .bin), ONNX (.onnx) | -| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) | -| Other language realization | [C++, C](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html), | +| Supported devices | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) | +| Other language realization | [C++, C](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html), | The following Python API is used in the application: | Feature | API | Description | | ------------------| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------| -| Basic Infer Flow | [openvino.runtime.Core](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Core.html) , | | -| | [openvino.runtime.Core.read_model](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Core.html#openvino.runtime.Core.read_model), | | -| | [openvino.runtime.Core.compile_model](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Core.html#openvino.runtime.Core.compile_model) | Common API to do inference | -| Synchronous Infer | [openvino.runtime.CompiledModel.infer_new_request](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.CompiledModel.html#openvino.runtime.CompiledModel.infer_new_request), | Do synchronous inference | -| Model Operations | [openvino.runtime.Model.inputs](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.inputs), | Managing of model | -| | [openvino.runtime.Model.outputs](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.outputs), | | -| Preprocessing | [openvino.preprocess.PrePostProcessor](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html), | Set image of the original size as input for a model with other input size. 
| -| | [openvino.preprocess.InputTensorInfo.set_element_type](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html#openvino.preprocess.InputTensorInfo.set_element_type), | Resize and layout conversions will be performed automatically by the corresponding plugin just before inference | -| | [openvino.preprocess.InputTensorInfo.set_layout](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html#openvino.preprocess.InputTensorInfo.set_layout), | | -| | [openvino.preprocess.InputTensorInfo.set_spatial_static_shape](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html#openvino.preprocess.InputTensorInfo.set_spatial_static_shape), | | -| | [openvino.preprocess.PreProcessSteps.resize](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.preprocess.PreProcessSteps.html#openvino.preprocess.PreProcessSteps.resize), | | -| | [openvino.preprocess.InputModelInfo.set_layout](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.preprocess.InputModelInfo.html#openvino.preprocess.InputModelInfo.set_layout), | | -| | [openvino.preprocess.OutputTensorInfo.set_element_type](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.preprocess.OutputTensorInfo.html#openvino.preprocess.OutputTensorInfo.set_element_type), | | -| | [openvino.preprocess.PrePostProcessor.build](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html#openvino.preprocess.PrePostProcessor.build) | | +| Basic Infer Flow | [openvino.runtime.Core](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Core.html) , | | +| | [openvino.runtime.Core.read_model](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Core.html#openvino.runtime.Core.read_model), | | +| | [openvino.runtime.Core.compile_model](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Core.html#openvino.runtime.Core.compile_model) | Common API to do inference | +| Synchronous Infer | [openvino.runtime.CompiledModel.infer_new_request](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.CompiledModel.html#openvino.runtime.CompiledModel.infer_new_request), | Do synchronous inference | +| Model Operations | [openvino.runtime.Model.inputs](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.inputs), | Managing of model | +| | [openvino.runtime.Model.outputs](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.outputs), | | +| Preprocessing | [openvino.preprocess.PrePostProcessor](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html), | Set image of the original size as input for a model with other input size. 
|
+|                   | [openvino.preprocess.InputTensorInfo.set_element_type](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html#openvino.preprocess.InputTensorInfo.set_element_type),                 | Resize and layout conversions will be performed automatically by the corresponding plugin just before inference |
+|                   | [openvino.preprocess.InputTensorInfo.set_layout](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html#openvino.preprocess.InputTensorInfo.set_layout),                             |                                                                                                                  |
+|                   | [openvino.preprocess.InputTensorInfo.set_spatial_static_shape](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html#openvino.preprocess.InputTensorInfo.set_spatial_static_shape), |                                                                                                                  |
+|                   | [openvino.preprocess.PreProcessSteps.resize](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.preprocess.PreProcessSteps.html#openvino.preprocess.PreProcessSteps.resize),                                      |                                                                                                                  |
+|                   | [openvino.preprocess.InputModelInfo.set_layout](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.preprocess.InputModelInfo.html#openvino.preprocess.InputModelInfo.set_layout),                                 |                                                                                                                  |
+|                   | [openvino.preprocess.OutputTensorInfo.set_element_type](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.preprocess.OutputTensorInfo.html#openvino.preprocess.OutputTensorInfo.set_element_type),               |                                                                                                                  |
+|                   | [openvino.preprocess.PrePostProcessor.build](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.preprocess.PrePostProcessor.html#openvino.preprocess.PrePostProcessor.build)                                      |                                                                                                                  |

diff --git a/samples/python/hello_query_device/README.md b/samples/python/hello_query_device/README.md
index 9e415d9d1a76f1..375aefee8de9a0 100644
--- a/samples/python/hello_query_device/README.md
+++ b/samples/python/hello_query_device/README.md
@@ -1,20 +1,20 @@
# Hello Query Device Python Sample

-This sample demonstrates how to show OpenVINO™ Runtime devices and prints their metrics and default configuration values using [Query Device API feature](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/query-device-properties.html).
+This sample demonstrates how to list available OpenVINO™ Runtime devices and print their metrics and default configuration values using the [Query Device API feature](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/query-device-properties.html).
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-query-device.html) +For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-query-device.html) ## Requirements | Options | Values | | ----------------------------| --------------------------------------------------------------------------------------------------------| -| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) | -| Other language realization | [C++](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-query-device.html) | +| Supported devices | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) | +| Other language realization | [C++](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-query-device.html) | The following Python API is used in the application: | Feature | API | Description | | --------------| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------| -| Basic | [openvino.runtime.Core](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Core.html) | Common API | -| Query Device | [openvino.runtime.Core.available_devices](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Core.html#openvino.runtime.Core.available_devices) , | Get device properties | +| Basic | [openvino.runtime.Core](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Core.html) | Common API | +| Query Device | [openvino.runtime.Core.available_devices](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Core.html#openvino.runtime.Core.available_devices) , | Get device properties | diff --git a/samples/python/hello_reshape_ssd/README.md b/samples/python/hello_reshape_ssd/README.md index 98d65ce7f601e2..cbdb6d3b21b88e 100644 --- a/samples/python/hello_reshape_ssd/README.md +++ b/samples/python/hello_reshape_ssd/README.md @@ -1,6 +1,6 @@ # Hello Reshape SSD Python Sample -This sample demonstrates how to do synchronous inference of object detection models using [Shape Inference feature](https://docs.openvino.ai/2024/openvino-workflow/running-inference/changing-input-shape.html). +This sample demonstrates how to do synchronous inference of object detection models using [Shape Inference feature](https://docs.openvino.ai/2025/openvino-workflow/running-inference/changing-input-shape.html). Models with only 1 input and output are supported. @@ -10,16 +10,16 @@ Models with only 1 input and output are supported. 
| ----------------------------| ---------------------------------------------------------------------------------------------------------| | Validated Layout | NCHW | | Model Format | OpenVINO™ toolkit Intermediate Representation (.xml + .bin), ONNX (.onnx) | -| Supported devices | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) | -| Other language realization | [C++](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-reshape-ssd.html) | +| Supported devices | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) | +| Other language realization | [C++](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-reshape-ssd.html) | The following Python API is used in the application: | Feature | API | Description | | -----------------| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------| -| Model Operations | [openvino.runtime.Model.reshape](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape), | Managing of model | -| | [openvino.runtime.Model.input](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.input), | | -| | [openvino.runtime.Output.get_any_name](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Output.html#openvino.runtime.Output.get_any_name), | | -| | [openvino.runtime.PartialShape](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.PartialShape.html) | | +| Model Operations | [openvino.runtime.Model.reshape](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.reshape), | Managing of model | +| | [openvino.runtime.Model.input](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.input), | | +| | [openvino.runtime.Output.get_any_name](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Output.html#openvino.runtime.Output.get_any_name), | | +| | [openvino.runtime.PartialShape](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.PartialShape.html) | | -Basic OpenVINO™ Runtime API is covered by [Hello Classification Python* Sample](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html). +Basic OpenVINO™ Runtime API is covered by [Hello Classification Python* Sample](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html). diff --git a/samples/python/model_creation_sample/README.md b/samples/python/model_creation_sample/README.md index 11c2ff4479b3fe..b70e44ee8f52c1 100644 --- a/samples/python/model_creation_sample/README.md +++ b/samples/python/model_creation_sample/README.md @@ -1,8 +1,8 @@ # Model Creation Python Sample -This sample demonstrates how to run inference using a [model](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file, the model is created from the source code on the fly. 
+This sample demonstrates how to run inference using a [model](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file; the model is created from the source code on the fly.

-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/model-creation.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/model-creation.html)

## Requirements

@@ -10,24 +10,24 @@ For more detailed information on how this sample works, check the dedicated [art
| ----------------------------| ------------------------------------------------------------------------------------------------------------|
| Validated Models            | LeNet                                                                                                         |
| Model Format                | Model weights file (\*.bin)                                                                                   |
-| Supported devices           | [All](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html)         |
-| Other language realization  | [C++](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/model-creation.html)                     |
+| Supported devices           | [All](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html)         |
+| Other language realization  | [C++](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/model-creation.html)                     |

The following OpenVINO Python API is used in the application:

| Feature           | API                                                                                                                                                          | Description                                                |
| ------------------| ----------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------|
-| Model Operations  | [openvino.runtime.Model](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Model.html) ,                                        | Managing of model                                          |
-|                   | [openvino.runtime.set_batch](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.set_batch.html) ,                                |                                                            |
-|                   | [openvino.runtime.Model.input](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.input)       |                                                            |
-| Opset operations  | [openvino.runtime.op.Parameter](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.op.Parameter.html),                           | Description of a model topology using OpenVINO Python API  |
-|                   | [openvino.runtime.op.Constant](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.op.Constant.html) ,                            |                                                            |
-|                   | [openvino.runtime.opset8.convolution](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.opset8.convolution.html) ,              |                                                            |
-|                   | [openvino.runtime.opset8.add](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.opset8.add.html) ,                              |                                                            |
-|                   | [openvino.runtime.opset1.max_pool](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.opset1.max_pool.html) ,                    |                                                            |
-|                   | [openvino.runtime.opset8.reshape](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.opset8.reshape.html) ,                      |                                                            |
-|                   | [openvino.runtime.opset8.matmul](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.opset8.matmul.html) ,                        |                                                            |
-|                   | 
[openvino.runtime.opset8.relu](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.opset8.relu.html) , | | -| | [openvino.runtime.opset8.softmax](https://docs.openvino.ai/2024/api/ie_python_api/_autosummary/openvino.runtime.opset8.softmax.html) | | +| Model Operations | [openvino.runtime.Model](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Model.html) , | Managing of model | +| | [openvino.runtime.set_batch](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.set_batch.html) , | | +| | [openvino.runtime.Model.input](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.Model.html#openvino.runtime.Model.input) | | +| Opset operations | [openvino.runtime.op.Parameter](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.op.Parameter.html), | Description of a model topology using OpenVINO Python API | +| | [openvino.runtime.op.Constant](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.op.Constant.html) , | | +| | [openvino.runtime.opset8.convolution](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.opset8.convolution.html) , | | +| | [openvino.runtime.opset8.add](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.opset8.add.html) , | | +| | [openvino.runtime.opset1.max_pool](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.opset1.max_pool.html) , | | +| | [openvino.runtime.opset8.reshape](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.opset8.reshape.html) , | | +| | [openvino.runtime.opset8.matmul](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.opset8.matmul.html) , | | +| | [openvino.runtime.opset8.relu](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.opset8.relu.html) , | | +| | [openvino.runtime.opset8.softmax](https://docs.openvino.ai/2025/api/ie_python_api/_autosummary/openvino.runtime.opset8.softmax.html) | | -Basic OpenVINO™ Runtime API is covered by [Hello Classification Python* Sample](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-classification.html). +Basic OpenVINO™ Runtime API is covered by [Hello Classification Python* Sample](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-classification.html). diff --git a/src/README.md b/src/README.md index 91def8625756ac..ae301797d22b32 100644 --- a/src/README.md +++ b/src/README.md @@ -59,7 +59,7 @@ OpenVINO provides bindings for different languages. To get the full list of supp ## Core developer topics * [OpenVINO architecture](./docs/architecture.md) - * [Plugin Development](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html) + * [Plugin Development](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html) * [Thread safety](#todo) * [Performance](#todo) diff --git a/src/bindings/c/README.md b/src/bindings/c/README.md index 702167ac9a2f9b..ad0c1562cdb705 100644 --- a/src/bindings/c/README.md +++ b/src/bindings/c/README.md @@ -25,14 +25,14 @@ People from the [openvino-c-api-maintainers](https://github.com/orgs/openvinotoo OpenVINO C API has the following structure: * [docs](./docs) contains developer documentation for OpenVINO C APIs. - * [include](./include) contains all provided C API headers. [Learn more](https://docs.openvino.ai/2024/api/api_reference.html). 
+ * [include](./include) contains all provided C API headers. [Learn more](https://docs.openvino.ai/2025/api/api_reference.html). * [src](./src) contains the implementations of all C APIs. * [tests](./tests) contains all tests for OpenVINO C APIs. [Learn more](./docs/how_to_write_unit_test.md). ## Tutorials -* [How to integrate OpenVINO C API with Your Application](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application.html) +* [How to integrate OpenVINO C API with Your Application](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application.html) * [How to wrap OpenVINO objects with C](./docs/how_to_wrap_openvino_objects_with_c.md) * [How to wrap OpenVINO interfaces with C](./docs/how_to_wrap_openvino_interfaces_with_c.md) * [Samples implemented by OpenVINO C API](../../../samples/c/) @@ -46,5 +46,5 @@ See [CONTRIBUTING](../../../CONTRIBUTING.md) for details. ## See also * [OpenVINO™ README](../../../README.md) - * [OpenVINO Runtime C API User Guide](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application.html) + * [OpenVINO Runtime C API User Guide](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application.html) diff --git a/src/bindings/c/docs/how_to_wrap_openvino_interfaces_with_c.md b/src/bindings/c/docs/how_to_wrap_openvino_interfaces_with_c.md index b0e73784bb9070..a84970594f2c1f 100644 --- a/src/bindings/c/docs/how_to_wrap_openvino_interfaces_with_c.md +++ b/src/bindings/c/docs/how_to_wrap_openvino_interfaces_with_c.md @@ -78,4 +78,4 @@ The tensor create needs to specify the shape info, so C shape need to be convert ## See also * [OpenVINO™ README](../../../../README.md) * [C API developer guide](../README.md) - * [C API Reference](https://docs.openvino.ai/2024/api/api_reference.html) + * [C API Reference](https://docs.openvino.ai/2025/api/api_reference.html) diff --git a/src/bindings/c/docs/how_to_wrap_openvino_objects_with_c.md b/src/bindings/c/docs/how_to_wrap_openvino_objects_with_c.md index 190157c98c0922..e42ff8f6a68bec 100644 --- a/src/bindings/c/docs/how_to_wrap_openvino_objects_with_c.md +++ b/src/bindings/c/docs/how_to_wrap_openvino_objects_with_c.md @@ -73,4 +73,4 @@ https://github.com/openvinotoolkit/openvino/blob/d96c25844d6cfd5ad131539c8a09282 ## See also * [OpenVINO™ README](../../../../README.md) * [C API developer guide](../README.md) - * [C API Reference](https://docs.openvino.ai/2024/api/api_reference.html) \ No newline at end of file + * [C API Reference](https://docs.openvino.ai/2025/api/api_reference.html) \ No newline at end of file diff --git a/src/bindings/c/docs/how_to_write_unit_test.md b/src/bindings/c/docs/how_to_write_unit_test.md index b694cb97be42ae..90fe663d1947bc 100644 --- a/src/bindings/c/docs/how_to_write_unit_test.md +++ b/src/bindings/c/docs/how_to_write_unit_test.md @@ -14,5 +14,5 @@ https://github.com/openvinotoolkit/openvino/blob/d96c25844d6cfd5ad131539c8a09282 ## See also * [OpenVINO™ README](../../../../README.md) * [C API developer guide](../README.md) - * [C API Reference](https://docs.openvino.ai/2024/api/api_reference.html) + * [C API Reference](https://docs.openvino.ai/2025/api/api_reference.html) diff --git a/src/bindings/js/node/README.md b/src/bindings/js/node/README.md index 355a54838b061c..7b3b33ee8383b5 100644 --- a/src/bindings/js/node/README.md +++ b/src/bindings/js/node/README.md @@ -20,7 +20,7 @@ Use the **openvino-node** 
package: const { addon: ov } = require('openvino-node'); ``` -Refer to the complete description of the `addon` API in the [documentation](https://docs.openvino.ai/2024/api/nodejs_api/addon.html). +Refer to the complete description of the `addon` API in the [documentation](https://docs.openvino.ai/2025/api/nodejs_api/addon.html). See the [samples](https://github.com/openvinotoolkit/openvino/blob/master/samples/js/node/README.md) for more details on how to use it. @@ -38,7 +38,7 @@ To use the package in development of Electron applications on Windows, make sure ## Documentation & Samples -- [OpenVINO™ Node.js API](https://docs.openvino.ai/2024/api/nodejs_api/nodejs_api.html) +- [OpenVINO™ Node.js API](https://docs.openvino.ai/2025/api/nodejs_api/nodejs_api.html) - [OpenVINO™ Node.js Bindings Examples of Usage](https://github.com/openvinotoolkit/openvino/blob/master/samples/js/node/README.md) ## Live Sample diff --git a/src/bindings/python/README.md b/src/bindings/python/README.md index c6e2b595ba61a4..0e920c2b809f50 100644 --- a/src/bindings/python/README.md +++ b/src/bindings/python/README.md @@ -41,8 +41,8 @@ If you want to contribute to OpenVINO Python API, here is the list of learning m * [OpenVINO™ README](../../../README.md) * [OpenVINO™ Core Components](../../README.md) -* [OpenVINO™ Python API Reference](https://docs.openvino.ai/2024/api/ie_python_api/api.html) -* [OpenVINO™ Python API Advanced Inference](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/python-api-advanced-inference.html) -* [OpenVINO™ Python API Exclusives](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/python-api-exclusives.html) +* [OpenVINO™ Python API Reference](https://docs.openvino.ai/2025/api/ie_python_api/api.html) +* [OpenVINO™ Python API Advanced Inference](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/python-api-advanced-inference.html) +* [OpenVINO™ Python API Exclusives](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/python-api-exclusives.html) * [pybind11 repository](https://github.com/pybind/pybind11) * [pybind11 documentation](https://pybind11.readthedocs.io/en/stable/) diff --git a/src/bindings/python/docs/build.md b/src/bindings/python/docs/build.md index 36aecd4350d2d5..324b3f6407941a 100644 --- a/src/bindings/python/docs/build.md +++ b/src/bindings/python/docs/build.md @@ -18,7 +18,7 @@ To learn more about wheels and their use cases, check out the article [What Are OpenVINO can be built based on specific virtual environments such as [venv](https://docs.python.org/3/tutorial/venv.html), [virtualenv](https://virtualenv.pypa.io/en/latest/) or [pyenv](https://github.com/pyenv/pyenv). It is highly recommended to use virtual environments during development. They improve development process and allow better management of Python versions and packages. 
-*Note: Supported Python versions can be found in ["System Requirements"](https://docs.openvino.ai/nightly/about-openvino/release-notes-openvino/system-requirements.html).* +*Note: Supported Python versions can be found in ["System Requirements"](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/system-requirements.html).* ### Example: using pyenv with OpenVINO™ on Linux based system diff --git a/src/bindings/python/src/openvino/preprocess/README.md b/src/bindings/python/src/openvino/preprocess/README.md index 53b3c9367bacc1..eba70927aa7733 100644 --- a/src/bindings/python/src/openvino/preprocess/README.md +++ b/src/bindings/python/src/openvino/preprocess/README.md @@ -55,6 +55,6 @@ If you have any questions, feature requests or want us to review your PRs, send * [OpenVINO™ README](../../../README.md) * [OpenVINO™ Core Components](../../README.md) -* [OpenVINO™ Python API Reference](https://docs.openvino.ai/2024/api/ie_python_api/api.html) -* [OpenVINO™ Python API Advanced Inference](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/python-api-advanced-inference.html) -* [OpenVINO™ Python API Exclusives](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/python-api-exclusives.html) +* [OpenVINO™ Python API Reference](https://docs.openvino.ai/2025/api/ie_python_api/api.html) +* [OpenVINO™ Python API Advanced Inference](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/python-api-advanced-inference.html) +* [OpenVINO™ Python API Exclusives](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/python-api-exclusives.html) diff --git a/src/bindings/python/wheel/setup.py b/src/bindings/python/wheel/setup.py index 620ce30f33dbca..1b3bed6220b7c5 100644 --- a/src/bindings/python/wheel/setup.py +++ b/src/bindings/python/wheel/setup.py @@ -792,7 +792,7 @@ def concat_files(input_files, output_file): long_description_md = WORKING_DIR / "build" / "pypi-openvino-rt.md" long_description_md.parent.mkdir(exist_ok=True) concat_files(md_files, long_description_md) - docs_url = "https://docs.openvino.ai/nightly/index.html" + docs_url = "https://docs.openvino.ai/2025/index.html" OPENVINO_VERSION = WHEEL_VERSION[0:8] setup( diff --git a/src/common/transformations/src/transformations/sdpa_to_paged_attention/state_management_pattern.cpp b/src/common/transformations/src/transformations/sdpa_to_paged_attention/state_management_pattern.cpp index f282baf355d06e..26f79693ac5ff6 100644 --- a/src/common/transformations/src/transformations/sdpa_to_paged_attention/state_management_pattern.cpp +++ b/src/common/transformations/src/transformations/sdpa_to_paged_attention/state_management_pattern.cpp @@ -329,7 +329,7 @@ ov::pass::StateManagementPattern::StateManagementPattern(ParameterVector& kv_par auto sdpa_node = pattern_map.at(pattern_map.count(sdpa_with_4_inputs) ? 
sdpa_with_4_inputs : sdpa_with_5_inputs).get_node();
     // E and Ev are from the SDPA specification at
-    // https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets/operation-specs/sequence/scaled-dot-product-attention.html
+    // https://docs.openvino.ai/2025/documentation/openvino-ir-format/operation-sets/operation-specs/sequence/scaled-dot-product-attention.html
     auto E = sdpa_node->get_input_tensor(1).get_partial_shape()[-1];
     auto Ev = sdpa_node->get_input_tensor(2).get_partial_shape()[-1];  // in common case may not match E
diff --git a/src/core/README.md b/src/core/README.md
index c9df3328fb6b9e..f67e8ac7ae21dd 100644
--- a/src/core/README.md
+++ b/src/core/README.md
@@ -2,7 +2,7 @@
 
 OpenVINO Core is a part of the OpenVINO Runtime library. The component is responsible for:
 
- * Model representation - component provides classes for manipulation with models inside the OpenVINO Runtime. For more information please read [Model representation in OpenVINO Runtime User Guide](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html)
+ * Model representation - the component provides classes for manipulating models inside the OpenVINO Runtime. For more information, read the [Model representation in OpenVINO Runtime User Guide](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html)
  * Operation representation - contains all out-of-the-box supported OpenVINO operations and opsets. For more information, read the [Operations enabling flow guide](./docs/operation_enabling_flow.md).
  * Model modification - the component provides base classes that allow you to develop transformation passes for model modification. For more information, read the [Transformation enabling flow guide](#todo).
 
@@ -26,7 +26,7 @@ OpenVINO Core has the next structure:
 ## Tutorials
 
  * [How to add new operations](./docs/operation_enabling_flow.md).
- * [How to add OpenVINO Extension](https://docs.openvino.ai/2024/documentation/openvino-extensibility.html). This document is based on the [template_extension](./template_extension/new/).
+ * [How to add OpenVINO Extension](https://docs.openvino.ai/2025/documentation/openvino-extensibility.html). This document is based on the [template_extension](./template_extension/new/).
  * [How to debug the component](./docs/debug_capabilities.md).
 
 ## See also
diff --git a/src/core/docs/api_details.md b/src/core/docs/api_details.md
index 1d1e888b5799fd..e3557efb3e8b82 100644
--- a/src/core/docs/api_details.md
+++ b/src/core/docs/api_details.md
@@ -17,7 +17,7 @@ OpenVINO Core API contains two folders:
 
 ## Main structures for model representation
 
-* `ov::Model` is located in [openvino/core/model.hpp](../include/openvino/core/model.hpp) and provides API for model representation. For more details, read [OpenVINO Model Representation Guide](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html).
+* `ov::Model` is located in [openvino/core/model.hpp](../include/openvino/core/model.hpp) and provides the API for model representation. For more details, read the [OpenVINO Model Representation Guide](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html).
 * `ov::Node` is a base class for all OpenVINO operations; the class is located in [openvino/core/node.hpp](../include/openvino/core/node.hpp).
* `ov::Shape` and `ov::PartialShape` classes represent shapes in OpenVINO, these classes are located in the [openvino/core/shape.hpp](../include/openvino/core/shape.hpp) and [openvino/core/partial_shape.hpp](../include/openvino/core/partial_shape.hpp) respectively. For more information, read [OpenVINO Shapes representation](./shape_propagation.md#openvino-shapes-representation). * `ov::element::Type` class represents element type for OpenVINO Tensors and Operations. The class is located in the [openvino/core/type/element_type.hpp](../include/openvino/core/type/element_type.hpp). diff --git a/src/core/docs/debug_capabilities.md b/src/core/docs/debug_capabilities.md index cdfa503f2e9d46..fb7914cfd4af66 100644 --- a/src/core/docs/debug_capabilities.md +++ b/src/core/docs/debug_capabilities.md @@ -2,7 +2,7 @@ OpenVINO Core contains a set of different debug capabilities that make developer life easier by collecting information about object statuses during OpenVINO Runtime execution and reporting this information to the developer. -* OpenVINO Model debug capabilities are described in the [OpenVINO Model User Guide](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html#model-debug-capabilities). +* OpenVINO Model debug capabilities are described in the [OpenVINO Model User Guide](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html#model-debug-capabilities). ## See also * [OpenVINO™ Core README](../README.md) diff --git a/src/frontends/ir/README.md b/src/frontends/ir/README.md index 080cf76f2cc9c2..d1eb03c9e7cdcc 100644 --- a/src/frontends/ir/README.md +++ b/src/frontends/ir/README.md @@ -11,7 +11,7 @@ flowchart LR openvino(openvino library) ir--Read ir---ir_fe ir_fe--Create ov::Model--->openvino - click ir "https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets.html" + click ir "https://docs.openvino.ai/2025/documentation/openvino-ir-format/operation-sets.html" ``` The primary function of the OpenVINO IR Frontend is to load an OpenVINO IR into memory. diff --git a/src/frontends/paddle/README.md b/src/frontends/paddle/README.md index 810c84e03d687d..ef2eb2b68d8c83 100644 --- a/src/frontends/paddle/README.md +++ b/src/frontends/paddle/README.md @@ -21,7 +21,7 @@ OpenVINO Paddle Frontend has the following structure: ## Debug capabilities -Developers can use OpenVINO Model debug capabilities that are described in the [OpenVINO Model User Guide](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html#model-debug-capabilities). +Developers can use OpenVINO Model debug capabilities that are described in the [OpenVINO Model User Guide](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application/model-representation.html#model-debug-capabilities). ## Tutorials diff --git a/src/frontends/pytorch/README.md b/src/frontends/pytorch/README.md index 07fb1fc2abf89f..8c34b5017f2ca1 100644 --- a/src/frontends/pytorch/README.md +++ b/src/frontends/pytorch/README.md @@ -115,7 +115,7 @@ In rare cases, converting PyTorch operations requires transformation. The main difference between transformation and translation is that transformation works on the graph rather than on the `NodeContext` of a single operation. 
This means that some functionality provided by `NodeContext` is not accessible in transformation and usually
-requires working with `PtFramworkNode` directly. [General rules](https://docs.openvino.ai/2024/documentation/openvino-extensibility/transformation-api.html)
+requires working with `PtFrameworkNode` directly. [General rules](https://docs.openvino.ai/2025/documentation/openvino-extensibility/transformation-api.html)
 for writing transformations also apply to PT FE transformations.
 
 ### PyTorch Frontend Layer Tests
 
@@ -264,7 +264,7 @@ and we will see `torch.randn_like` function call on that line.
 
 Some operations can be translated incorrectly. For example, PyTorch allows passing
 different data types to an operation, while OpenVINO usually requires the same type for all inputs of the operation (more information about what types
-OpenVINO operation can accept can be found in [documentation](https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets/operation-specs.html)).
+an OpenVINO operation can accept can be found in the [documentation](https://docs.openvino.ai/2025/documentation/openvino-ir-format/operation-sets/operation-specs.html)).
 PyTorch has set rules for type alignment; to solve this issue, the PyTorch Frontend
 has the `align_eltwise_input_types` helper function, which aligns the types of two inputs. If this function is not used when needed, or if it is used incorrectly, that
diff --git a/src/frontends/tensorflow/README.md b/src/frontends/tensorflow/README.md
index d6aea567cc100a..6713b5a2b8f6fd 100644
--- a/src/frontends/tensorflow/README.md
+++ b/src/frontends/tensorflow/README.md
@@ -139,15 +139,15 @@ The main rules for loaders implementation:
 In rare cases, TensorFlow operation conversion requires two transformations (`Loader` and `Internal Transformation`).
 In the first step, `Loader` must convert a TF operation into [Internal Operation](../tensorflow_common/helper_ops) that is used temporarily by the conversion pipeline.
-The internal operation implementation must also contain the `validate_and_infer_types()` method as similar to [OpenVINO Core](https://docs.openvino.ai/2024/api/c_cpp_api/group__ov__ops__cpp__api.html) operations.
+The internal operation implementation must also contain the `validate_and_infer_types()` method, similar to [OpenVINO Core](https://docs.openvino.ai/2025/api/c_cpp_api/group__ov__ops__cpp__api.html) operations.
 
 Here is an example of an implementation for the internal operation `SparseFillEmptyRows` used to convert Wide and Deep models.
 
 https://github.com/openvinotoolkit/openvino/blob/7f3c95c161bc78ab2aefa6eab8b008142fb945bc/src/frontends/tensorflow/src/helper_ops/sparse_fill_empty_rows.hpp#L17-L55
 
 In the second step, `Internal Transformation` based on `ov::pass::MatcherPass` must convert sub-graphs with internal operations into sub-graphs consisting only of the OpenVINO opset.
-For more information about `ov::pass::MatcherPass` based transformations and their development, read [Overview of Transformations API](https://docs.openvino.ai/2024/documentation/openvino-extensibility/transformation-api.html)
-and [OpenVINO Matcher Pass](https://docs.openvino.ai/2024/documentation/openvino-extensibility/transformation-api/matcher-pass.html) documentation.
+For more information about `ov::pass::MatcherPass` based transformations and their development, read [Overview of Transformations API](https://docs.openvino.ai/2025/documentation/openvino-extensibility/transformation-api.html) +and [OpenVINO Matcher Pass](https://docs.openvino.ai/2025/documentation/openvino-extensibility/transformation-api/matcher-pass.html) documentation. The internal transformation must be called in the `ov::frontend::tensorflow::FrontEnd::normalize()` method. It is important to check the order of applying internal transformations to avoid situations when some internal operation breaks a graph pattern with an internal operation for another internal transformation. diff --git a/src/frontends/tensorflow/src/frontend.cpp b/src/frontends/tensorflow/src/frontend.cpp index e4e35c42b08b35..86f96f48be8908 100644 --- a/src/frontends/tensorflow/src/frontend.cpp +++ b/src/frontends/tensorflow/src/frontend.cpp @@ -470,7 +470,7 @@ std::shared_ptr FrontEnd::convert(const ov::frontend::InputModel::Ptr "provides conversion extension(s): " << unsupported_ops_from_tokenizers << ". Install OpenVINO Tokenizers, refer to the documentation: " - "https://docs.openvino.ai/2024/openvino-workflow-generative/ov-tokenizers.html \n"; + "https://docs.openvino.ai/2025/openvino-workflow-generative/ov-tokenizers.html \n"; } } diff --git a/src/inference/docs/api_details.md b/src/inference/docs/api_details.md index 7e9df2925804f6..c3ccf1874bb599 100644 --- a/src/inference/docs/api_details.md +++ b/src/inference/docs/api_details.md @@ -8,12 +8,12 @@ OpenVINO Inference API contains two folders: Public OpenVINO Inference API defines global header [openvino/openvino.hpp](../include/openvino/openvino.hpp) which includes all common OpenVINO headers. All Inference components are placed inside the [openvino/runtime](../include/openvino/runtime) folder. -To learn more about the Inference API usage, read [How to integrate OpenVINO with your application](https://docs.openvino.ai/2024/openvino-workflow/running-inference/integrate-openvino-with-your-application.html). +To learn more about the Inference API usage, read [How to integrate OpenVINO with your application](https://docs.openvino.ai/2025/openvino-workflow/running-inference/integrate-openvino-with-your-application.html). The diagram with dependencies is presented on the [OpenVINO Architecture page](../../docs/architecture.md#openvino-inference-pipeline). ## Components of OpenVINO Developer API -OpenVINO Developer API is required for OpenVINO plugin development. This process is described in the [OpenVINO Plugin Development Guide](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html). +OpenVINO Developer API is required for OpenVINO plugin development. This process is described in the [OpenVINO Plugin Development Guide](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html). ## See also * [OpenVINO™ Core README](../README.md) diff --git a/src/plugins/auto/README.md b/src/plugins/auto/README.md index 35c93308409e08..1892c93b1c117b 100644 --- a/src/plugins/auto/README.md +++ b/src/plugins/auto/README.md @@ -20,7 +20,7 @@ The AUTO plugin follows the OpenVINO™ plugin architecture and consists of seve * [src](./src/) - folder contains sources of the AUTO plugin. * [tests](./tests/) - tests for Auto Plugin components. -Learn more in the [OpenVINO™ Plugin Developer Guide](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html). 
+Learn more in the [OpenVINO™ Plugin Developer Guide](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html).
 
 ## Architecture
 The diagram below shows an overview of the components responsible for the basic inference flow:
diff --git a/src/plugins/auto/docs/architecture.md b/src/plugins/auto/docs/architecture.md
index 30ecfc4b429221..3a2d6735ced69e 100644
--- a/src/plugins/auto/docs/architecture.md
+++ b/src/plugins/auto/docs/architecture.md
@@ -8,7 +8,7 @@ AUTO is a meta plugin in OpenVINO that doesn’t bind to a specific type of hard
 The logic behind the choice is as follows:
 * Check what supported devices are available.
-* Check performance hint of input setting (For detailed information of performance hint, please read more on the [ov::hint::PerformanceMode](https://docs.openvino.ai/2024/openvino-workflow/running-inference/optimize-inference/high-level-performance-hints.html)).
+* Check the performance hint in the input settings (for detailed information on performance hints, see [ov::hint::PerformanceMode](https://docs.openvino.ai/2025/openvino-workflow/running-inference/optimize-inference/high-level-performance-hints.html)).
 * Check the precisions of the input model.
 * Select the highest-priority device capable of supporting the given model for the LATENCY and THROUGHPUT hints, or select all devices capable of supporting the given model for the CUMULATIVE THROUGHPUT hint.
 * If the model’s precision is FP32 but there is no device capable of supporting it, offload the model to a device supporting FP16.
@@ -21,7 +21,7 @@ The AUTO plugin is also the default plugin for OpenVINO, if the user does not se
 Compiling the model to accelerator-optimized kernels may take some time. When AUTO selects one accelerator, it can start inference with the system's CPU by default, as it provides very low latency and can start inference with no additional delays. While the CPU is performing inference, AUTO continues to load the model to the device best suited for the purpose and transfers the task to it when ready.
 
-![alt text](https://docs.openvino.ai/2024/_images/autoplugin_accelerate.svg "AUTO cuts first inference latency (FIL) by running inference on the CPU until the GPU is ready")
+![alt text](https://docs.openvino.ai/2025/_images/autoplugin_accelerate.svg "AUTO cuts first inference latency (FIL) by running inference on the CPU until the GPU is ready")
 
 The user can disable this acceleration feature by excluding CPU from the priority list or disabling `ov::intel_auto::enable_startup_fallback`. Its default value is `true`.
diff --git a/src/plugins/auto/docs/integration.md b/src/plugins/auto/docs/integration.md
index 334b414bf92309..b720f4614263ea 100644
--- a/src/plugins/auto/docs/integration.md
+++ b/src/plugins/auto/docs/integration.md
@@ -1,7 +1,7 @@
 # AUTO Plugin Integration
 
 ## Implement a New Plugin
-Refer to [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html) for detailed information on how to implement a new plugin.
+Refer to the [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html) for detailed information on how to implement a new plugin.
 
 Implementing the query model method `ov::IPlugin::query_model()` is recommended, as it is important for AUTO to make decisions quickly and save selection time.
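To make the selection logic above concrete from the user's side, here is a minimal Python sketch (not part of the plugin sources) that compiles a model on AUTO with an explicit performance hint; the model path is a placeholder assumption:

```python
# Minimal sketch: let AUTO choose device(s) according to a performance hint.
# "model.xml" is a placeholder path, not a file shipped with the plugin.
import openvino as ov
import openvino.properties.hint as hints

core = ov.Core()
model = core.read_model("model.xml")

# LATENCY/THROUGHPUT -> AUTO selects the single highest-priority capable device;
# CUMULATIVE_THROUGHPUT -> AUTO selects all capable devices.
compiled = core.compile_model(
    model,
    "AUTO",
    {hints.performance_mode: hints.PerformanceMode.CUMULATIVE_THROUGHPUT},
)
print(compiled.get_property(ov.properties.execution_devices))
```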
diff --git a/src/plugins/intel_cpu/docs/fake_quantize.md b/src/plugins/intel_cpu/docs/fake_quantize.md
index 5364571a56b110..b234c1f56b45fa 100644
--- a/src/plugins/intel_cpu/docs/fake_quantize.md
+++ b/src/plugins/intel_cpu/docs/fake_quantize.md
@@ -1,5 +1,5 @@
 # FakeQuantize in OpenVINO
-https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets/operation-specs/quantization/fake-quantize-1.html
+https://docs.openvino.ai/2025/documentation/openvino-ir-format/operation-sets/operation-specs/quantization/fake-quantize-1.html
 
 definition:
 ```
diff --git a/src/plugins/intel_cpu/docs/internal_cpu_plugin_optimization.md b/src/plugins/intel_cpu/docs/internal_cpu_plugin_optimization.md
index b5e5ae66920a1b..04cf62ba5a6f8b 100644
--- a/src/plugins/intel_cpu/docs/internal_cpu_plugin_optimization.md
+++ b/src/plugins/intel_cpu/docs/internal_cpu_plugin_optimization.md
@@ -3,7 +3,7 @@
 The CPU plugin supports several graph optimization algorithms, such as fusing or removing layers.
 Refer to the sections below for details.
 
-> **NOTE**: For layer descriptions, see the [IR Notation Reference](https://docs.openvino.ai/2024/documentation/openvino-ir-format/operation-sets/available-opsets.html).
+> **NOTE**: For layer descriptions, see the [IR Notation Reference](https://docs.openvino.ai/2025/documentation/openvino-ir-format/operation-sets/available-opsets.html).
 
 ## Fusing Convolution and Simple Layers
diff --git a/src/plugins/intel_gpu/docs/gpu_plugin_driver_troubleshooting.md b/src/plugins/intel_gpu/docs/gpu_plugin_driver_troubleshooting.md
index 47f6a3b76ae5bb..2f63765ccab7e9 100644
--- a/src/plugins/intel_gpu/docs/gpu_plugin_driver_troubleshooting.md
+++ b/src/plugins/intel_gpu/docs/gpu_plugin_driver_troubleshooting.md
@@ -28,7 +28,7 @@ Some Intel® CPUs might not have integrated GPU, so if you want to run OpenVINO
 
 ## 2. Make sure that OpenCL® Runtime is installed
 
-OpenCL runtime is a part of the GPU driver on Windows, but on Linux it should be installed separately. For the installation tips, refer to [OpenVINO docs](https://docs.openvino.ai/2024/get-started/install-openvino/install-openvino-linux.html) and [OpenCL Compute Runtime docs](https://github.com/intel/compute-runtime/tree/master/opencl/doc).
+The OpenCL runtime is a part of the GPU driver on Windows, but on Linux it should be installed separately. For installation tips, refer to the [OpenVINO docs](https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-linux.html) and the [OpenCL Compute Runtime docs](https://github.com/intel/compute-runtime/tree/master/opencl/doc).
 To get support for Intel® Iris® Xe MAX Graphics on Linux, follow the [driver installation guide](https://dgpu-docs.intel.com/devices/iris-xe-max-graphics/index.html).
 
 ## 3. Make sure that user has all required permissions to work with GPU device
@@ -61,7 +61,7 @@ For more details, see the [OpenCL on Linux](https://github.com/bashbaug/OpenCLPa
 
 ## 7. If you are using dGPU with XMX, ensure that HW_MATMUL feature is recognized
 
-OpenVINO contains *hello_query_device* sample application: [link](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/hello-query-device.html)
+OpenVINO contains the *hello_query_device* sample application: [link](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/hello-query-device.html).
 With this sample, you can check whether the Intel XMX (Xe Matrix Extension) feature is properly recognized. This is a hardware feature that accelerates matrix operations and is available on some discrete GPUs.
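Besides running the sample, the same check can be scripted through the Python API; the sketch below queries the device capabilities property, assuming XMX support is reported as the `GPU_HW_MATMUL` capability string (verify against your driver's actual output):

```python
# Minimal sketch: query GPU capabilities instead of parsing hello_query_device
# output. The "GPU_HW_MATMUL" capability string is an assumption to verify.
import openvino as ov
import openvino.properties.device as device

core = ov.Core()
if "GPU" in core.available_devices:
    caps = core.get_property("GPU", device.capabilities)
    print("GPU capabilities:", caps)
    print("XMX matmul acceleration:", "GPU_HW_MATMUL" in caps)
else:
    print("No GPU device was detected by OpenVINO.")
```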
diff --git a/src/plugins/intel_gpu/docs/source_code_structure.md b/src/plugins/intel_gpu/docs/source_code_structure.md
index 3e2be4df7c98f5..531fe23d1292f2 100644
--- a/src/plugins/intel_gpu/docs/source_code_structure.md
+++ b/src/plugins/intel_gpu/docs/source_code_structure.md
@@ -5,7 +5,7 @@ but at some point clDNN became a part of OpenVINO, so now it's a part of overall
 via embedding of [oneDNN library](https://github.com/oneapi-src/oneDNN)
 
 OpenVINO GPU plugin is responsible for:
- 1. [IE Plugin API](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html) implementation.
+ 1. [IE Plugin API](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html) implementation.
 2. Translation of a model from the common IE semantic (`ov::Function`) into a plugin-specific one (`cldnn::topology`), which is then compiled into
 a GPU graph representation (`cldnn::network`).
 3. Implementation of the OpenVINO operation set for Intel® GPU.
diff --git a/src/plugins/intel_npu/tools/compile_tool/README.md b/src/plugins/intel_npu/tools/compile_tool/README.md
index 65b30eec8aef50..ac7d5019a57ca1 100644
--- a/src/plugins/intel_npu/tools/compile_tool/README.md
+++ b/src/plugins/intel_npu/tools/compile_tool/README.md
@@ -3,12 +3,12 @@
 
 This page demonstrates how to use NPU Compile Tool to convert OpenVINO™ Intermediate Representation (IR) of an AI model or a model in ONNX format to a "blob" file that is compiled by NPU NN Compiler and serialized to the format accessible for NPU Driver and NPU Runtime to execute.
 
-## Description
+## Description 
 
 Compile tool is a C++ application that enables you to compile a model for inference on a specific device and export the compiled representation to a binary file.
-With this tool, you can compile a model using supported OpenVINO Runtime devices on a machine that does not have the physical device connected, i.e. without NPU driver and Runtime loading, and then transfer a generated file to any machine with the target inference device available.
+With this tool, you can compile a model using supported OpenVINO Runtime devices on a machine that does not have the physical device connected, i.e. without NPU driver and Runtime loading, and then transfer the generated file to any machine with the target inference device available. 
 
-Using Compile Tool is not a basic approach to end-to-end execution and/or application but mostly suitable for debugging and validation and some specific use cases. If one is looking for the standard way of reducing application startup delays by exporting and reusing the compiled model automatically, refer to [Model Caching article](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes/npu-device.html#model-caching)
+The Compile Tool is not the standard approach to end-to-end execution in an application; it is mostly suitable for debugging, validation, and some specific use cases. If you are looking for the standard way of reducing application startup delays by exporting and reusing the compiled model automatically, refer to the [Model Caching article](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes/npu-device.html#model-caching).
 
 ## Workflow of the Compile tool
 
@@ -18,17 +18,17 @@ First, the application reads command-line parameters and loads a model to the Op
 
 ### Within NPU Plugin build
 
-See [How to build](https://github.com/openvinotoolkit/openvino/wiki#how-to-build). If `ENABLE_INTEL_NPU=ON` is provided, no additional steps are required for Compile Tool. It will be built unconditionally with every NPU Plugin build. It can be found in `bin` folder.
+See [How to build](https://github.com/openvinotoolkit/openvino/wiki#how-to-build). If `ENABLE_INTEL_NPU=ON` is provided, no additional steps are required for Compile Tool. It will be built unconditionally with every NPU Plugin build. It can be found in the `bin` folder.
 
 If you need to configure a release package layout and have Compile Tool in it, use `cmake --install <build_dir> --component npu_internal` from your `build` folder. After installation, the compile_tool executable can be found in the `<install_dir>/tools/compile_tool` folder.
 
 ### Standalone build
 
-#### Prerequisites
+#### Prerequisites 
-* [OpenVINO™ Runtime release package](https://docs.openvino.ai/2024/get-started/install-openvino.html)
+* [OpenVINO™ Runtime release package](https://docs.openvino.ai/2025/get-started/install-openvino.html)
 
 #### Build instructions
-1. Download and install OpenVINO™ Runtime package
+1. Download and install the OpenVINO™ Runtime package 
 2. Build Compile Tool
 ```sh
 mkdir compile_tool_build && cd compile_tool_build
 cmake --build . --config Release
 cmake --install . --prefix <install_dir>
 ```
- > Note 1: command line instruction might differ on different platforms (e.g. Windows cmd)
- > Note 2: this example is based on OpenVINO Archive distribution. If you have chosen another installation method, specifying OpenVINO_DIR and calling `setupvars` script might not be needed. Refer [documentation](https://docs.openvino.ai/2024/get-started/install-openvino.html) for details.
+ > Note 1: command line instruction might differ on different platforms (e.g. Windows cmd)
+ > Note 2: this example is based on OpenVINO Archive distribution. If you have chosen another installation method, specifying OpenVINO_DIR and calling the `setupvars` script might not be needed. Refer to the [documentation](https://docs.openvino.ai/2025/get-started/install-openvino.html) for details.
 > Note 3: `<install_dir>` can be any directory on your filesystem that you want to use for installation, including `<openvino_install_dir>` if you wish to extend the OpenVINO package
 3. Verify the installation
 ```sh
 source <install_dir>/setupvars.sh
 <install_dir>/tools/compile_tool/compile_tool -h
 ```
- > Note 1: command line might differ depending on your platform
- > Note 2: this example is based on OpenVINO Archive distribution. If you have chosen another installation method, calling setupvars might not be needed. Refer [documentation](https://docs.openvino.ai/2024/get-started/install-openvino.html) for details.
+ > Note 1: command line might differ depending on your platform
+ > Note 2: this example is based on OpenVINO Archive distribution. If you have chosen another installation method, calling setupvars might not be needed. Refer to the [documentation](https://docs.openvino.ai/2025/get-started/install-openvino.html) for details.
Successful build will show the information about Compile Tool CLI options
 
-## How to run
+## How to run 
 
 Running the application with the `-h` option yields the following usage message:
 ```
diff --git a/src/plugins/intel_npu/tools/single-image-test/README.md b/src/plugins/intel_npu/tools/single-image-test/README.md
index 185b1c018b658c..b03d4ea75be823 100644
--- a/src/plugins/intel_npu/tools/single-image-test/README.md
+++ b/src/plugins/intel_npu/tools/single-image-test/README.md
@@ -1,9 +1,9 @@
 # NPU Single Image Test Tool
 
-This page demostrates how to use NPU Single Image Test Tool for end-to-end accuracy validation on a single image or input file with OpenVINO™ Intermediate Representation (IR) of an AI model or a model in ONNX format.
+This page demonstrates how to use NPU Single Image Test Tool for end-to-end accuracy validation on a single image or input file with OpenVINO™ Intermediate Representation (IR) of an AI model or a model in ONNX format. 
 
-## Description
+## Description 
 
 Single Image Test Tool is a C++ application that enables you to pass an OpenVINO IR or ONNX model, or a pre-compiled blob, plus a single image or any other file compatible with the model inputs, and get two sets of files (CPU outputs and NPU outputs) that can be compared later or right after the inference if the `-run_test` option is passed.
 
@@ -16,14 +16,14 @@ Using Single Image Test is not a basic approach to end-to-end validation or coll
 
 ### Within NPU Plugin build
 
-See [How to build](https://github.com/openvinotoolkit/openvino/wiki#how-to-build). If `ENABLE_INTEL_NPU=ON` is provided and `OpenCV` project is linked to the current cmake project, no additional steps are required for Single Image Test. It will be built unconditionally with every NPU Plugin build. It can be found in `bin` folder.
+See [How to build](https://github.com/openvinotoolkit/openvino/wiki#how-to-build). If `ENABLE_INTEL_NPU=ON` is provided and the `OpenCV` project is linked to the current cmake project, no additional steps are required for Single Image Test. It will be built unconditionally with every NPU Plugin build. It can be found in the `bin` folder. 
 
 If you need to configure a release package layout and have Single Image Test in it, use `cmake --install <build_dir> --component npu_internal` from your `build` folder. After installation, the single-image-test executable can be found in the `<install_dir>/tools/single-image-test` folder.
 
 ### Standalone build
 
-#### Prerequisites
+#### Prerequisites 
-* [OpenVINO™ Runtime release package](https://docs.openvino.ai/2024/get-started/install-openvino.html)
+* [OpenVINO™ Runtime release package](https://docs.openvino.ai/2025/get-started/install-openvino.html)
 * [OpenCV: Open Source Computer Vision Library release package](https://opencv.org/get-started/)
 
 #### Build instructions
@@ -37,10 +37,10 @@ If you need to configure a release package layout and have Single Image Test in
 cmake --build . --config Release
 cmake --install . --prefix <install_dir>
 ```
- > Note 1: command line instruction might differ on different platforms (e.g. Windows cmd)
- > Note 2: this example is based on OpenVINO Archive distribution. If you have chosen another installation method, specifying OpenVINO_DIR and calling setupvars might not be needed. Refer [documentation](https://docs.openvino.ai/2024/get-started/install-openvino.html) for details.
- > Note 3: depending on OpenCV installation method, there might not be a need to specify OpenCV_DIR.
- > Note 4: depending on OpenCV version, cmake configs might be located somewhere else. You need to specify a directory that contains `OpenCVConfig.cmake` file
+ > Note 1: command line instruction might differ on different platforms (e.g. Windows cmd)
+ > Note 2: this example is based on OpenVINO Archive distribution. If you have chosen another installation method, specifying OpenVINO_DIR and calling setupvars might not be needed. Refer to the [documentation](https://docs.openvino.ai/2025/get-started/install-openvino.html) for details.
+ > Note 3: depending on OpenCV installation method, there might not be a need to specify OpenCV_DIR.
+ > Note 4: depending on OpenCV version, cmake configs might be located somewhere else. You need to specify a directory that contains the `OpenCVConfig.cmake` file.
 > Note 5: `<install_dir>` can be any directory on your filesystem that you want to use for installation, including `<openvino_install_dir>` if you wish to extend the OpenVINO package
 1. Verify the installation
 ```sh
@@ -48,14 +48,14 @@ If you need to configure a release package layout and have Single Image Test in
 source setup_vars_opencv4.sh
 <install_dir>/tools/single-image-test/single-image-test -help
 ```
- > Note 1: command line might differ depending on your platform
- > Note 2: depending on OpenCV installation method, there might not be a need to call setupvars.
- > Note 3: this example is based on OpenVINO Archive distribution. If you have chosen another installation method, calling setupvars might not be needed. Refer [documentation](https://docs.openvino.ai/2024/get-started/install-openvino.html) for details.
+ > Note 1: command line might differ depending on your platform
+ > Note 2: depending on OpenCV installation method, there might not be a need to call setupvars.
+ > Note 3: this example is based on OpenVINO Archive distribution. If you have chosen another installation method, calling setupvars might not be needed. Refer to the [documentation](https://docs.openvino.ai/2025/get-started/install-openvino.html) for details.
Successful build will show the information about Single Image Test Tool CLI options -## How to run +## How to run Running the application with the `-help` option yields the following usage message: ``` @@ -172,7 +172,7 @@ For example, to run inference with mobilenet-v2 model on Intel® Core™ Ultra N Parameters: Network file: mobilenet-v2.xml Input file(s): validation-set/224x224/watch.bmp - Output compiled network file: + Output compiled network file: Color format: RGB Input precision: FP16 Output precision: FP16 @@ -181,14 +181,14 @@ For example, to run inference with mobilenet-v2 model on Intel® Core™ Ultra N Model input layout: NCHW Model output layout: NC Img as binary: 0 - Bin input file precision: + Bin input file precision: Device: CPU - Config file: + Config file: Run test: 0 Performance counters: 0 - Mean_values [channel1,channel2,channel3] - Scale_values [channel1,channel2,channel3] - Log level: + Mean_values [channel1,channel2,channel3] + Scale_values [channel1,channel2,channel3] + Log level: Run single image test Load network mobilenet-v2.xml @@ -247,7 +247,7 @@ For example, to run inference with mobilenet-v2 model on Intel® Core™ Ultra N Parameters: Network file: mobilenet-v2.blob Input file(s): validation-set/224x224/watch.bmp - Output compiled network file: + Output compiled network file: Color format: RGB Input precision: FP16 Output precision: FP16 @@ -256,13 +256,13 @@ For example, to run inference with mobilenet-v2 model on Intel® Core™ Ultra N Model input layout: NCHW Model output layout: NC Img as binary: 0 - Bin input file precision: + Bin input file precision: Device: NPU Config file: mobilenet-v2.conf Run test: 1 Performance counters: 0 - Mean_values [channel1,channel2,channel3] - Scale_values [channel1,channel2,channel3] + Mean_values [channel1,channel2,channel3] + Scale_values [channel1,channel2,channel3] Mode: classification Top K: 1 Tolerance: 0.6 diff --git a/src/plugins/proxy/README.md b/src/plugins/proxy/README.md index 27286e565abf2c..a78469cda30de5 100644 --- a/src/plugins/proxy/README.md +++ b/src/plugins/proxy/README.md @@ -47,5 +47,5 @@ After the creation the proxy plugin has next properties: * [OpenVINO Core Components](../../README.md) * [OpenVINO Plugins](../README.md) * [Developer documentation](../../../docs/dev/index.md) - * [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html) + * [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html) diff --git a/src/plugins/template/README.md b/src/plugins/template/README.md index fb0afb2442a1a4..b6e04e5aae2912 100644 --- a/src/plugins/template/README.md +++ b/src/plugins/template/README.md @@ -35,11 +35,11 @@ $ make -j8 ## Tutorials -* [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html) +* [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html) ## See also * [OpenVINO™ README](../../../README.md) * [OpenVINO Core Components](../../README.md) * [OpenVINO Plugins](../README.md) * [Developer documentation](../../../docs/dev/index.md) - * [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html) + * [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html) diff --git 
a/tools/benchmark_tool/README.md b/tools/benchmark_tool/README.md
index fec7f801d308d5..d300b5cb727128 100644
--- a/tools/benchmark_tool/README.md
+++ b/tools/benchmark_tool/README.md
@@ -2,13 +2,13 @@
 
 This page demonstrates how to use the Benchmark Python Tool to estimate deep learning inference performance on supported devices.
 
-> **NOTE**: This page describes usage of the Python implementation of the Benchmark Tool. For the C++ implementation, refer to the [Benchmark C++ Tool](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) page. The Python version is recommended for benchmarking models that will be used in Python applications, and the C++ version is recommended for benchmarking models that will be used in C++ applications. Both tools have a similar command interface and backend.
+> **NOTE**: This page describes usage of the Python implementation of the Benchmark Tool. For the C++ implementation, refer to the [Benchmark C++ Tool](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/benchmark-tool.html) page. The Python version is recommended for benchmarking models that will be used in Python applications, and the C++ version is recommended for benchmarking models that will be used in C++ applications. Both tools have a similar command interface and backend.
 
-For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html)
+For more detailed information on how this sample works, check the dedicated [article](https://docs.openvino.ai/2025/learn-openvino/openvino-samples/benchmark-tool.html).
 
 ## Requirements
 
-The Python benchmark_app is automatically installed when you install OpenVINO Developer Tools using [PyPI](https://docs.openvino.ai/2024/get-started/install-openvino/install-openvino-pip.html) Before running ``benchmark_app``, make sure the ``openvino_env`` virtual environment is activated, and navigate to the directory where your model is located.
+The Python benchmark_app is automatically installed when you install OpenVINO Developer Tools using [PyPI](https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-pip.html). Before running ``benchmark_app``, make sure the ``openvino_env`` virtual environment is activated, and navigate to the directory where your model is located.
 
 The benchmarking application works with models in the OpenVINO IR (``model.xml`` and ``model.bin``) and ONNX (``model.onnx``) formats.
-Make sure to [convert your models](https://docs.openvino.ai/2024/openvino-workflow/model-preparation/convert-model-to-ir.html) if necessary.
+Make sure to [convert your models](https://docs.openvino.ai/2025/openvino-workflow/model-preparation/convert-model-to-ir.html) if necessary.
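For first-time users, a typical scripted invocation might look like the following sketch; the model path, device, and 10-second duration are placeholder assumptions, and `benchmark_app` must already be on the `PATH` from the installation described above:

```python
# Minimal sketch: drive the Python benchmark_app from a script.
# "model.xml", "CPU", and the 10-second run are illustrative choices.
import subprocess

subprocess.run(
    ["benchmark_app", "-m", "model.xml", "-d", "CPU", "-t", "10"],
    check=True,
)
```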