[DOCS] Menu Restructuring for 2025 - pass 5 (#28748)
Moving the `Compatibility and Support` and `Tool Ecosystem` sections in the
documentation menu.
sgolebiewski-intel authored Jan 30, 2025
1 parent 61b6da8 commit 9930aea
Showing 25 changed files with 25 additions and 25 deletions.
docs/articles_en/about-openvino.rst (3 additions, 3 deletions)

@@ -7,7 +7,7 @@ About OpenVINO

 about-openvino/key-features
 about-openvino/performance-benchmarks
-about-openvino/compatibility-and-support
+OpenVINO Ecosystem <about-openvino/openvino-ecosystem>
 about-openvino/contributing
 Release Notes <about-openvino/release-notes-openvino>

@@ -42,8 +42,8 @@ Along with the primary components of model optimization and runtime, the toolkit
 * `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__ - a tool for enhanced OpenVINO™ inference, providing a performance boost with minimal accuracy drop.
 * :doc:`OpenVINO Notebooks <get-started/learn-openvino/interactive-tutorials-python>` - Jupyter Python notebooks, which demonstrate key features of the toolkit.
 * `OpenVINO Model Server <https://github.com/openvinotoolkit/model_server>`__ - a server that enables scalability via a serving microservice.
-* :doc:`OpenVINO Training Extensions <documentation/openvino-ecosystem/openvino-training-extensions>` – a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
-* :doc:`Dataset Management Framework (Datumaro) <documentation/openvino-ecosystem/datumaro>` - a tool to build, transform, and analyze datasets.
+* :doc:`OpenVINO Training Extensions <about-openvino/openvino-ecosystem/openvino-training-extensions>` – a convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
+* :doc:`Dataset Management Framework (Datumaro) <about-openvino/openvino-ecosystem/datumaro>` - a tool to build, transform, and analyze datasets.

 Community
 ##############################################################
@@ -68,7 +68,7 @@ without the need to convert.

 | **OpenVINO Training Extensions**
 | :bdg-link-dark:`Github <https://github.com/openvinotoolkit/training_extensions>`
-:bdg-link-success:`Overview Page <https://docs.openvino.ai/2025/documentation/openvino-ecosystem/openvino-training-extensions.html>`
+:bdg-link-success:`Overview Page <https://docs.openvino.ai/2025/about-openvino/openvino-ecosystem/openvino-training-extensions.html>`
 A convenient environment to train Deep Learning models and convert them using the OpenVINO™
 toolkit for optimized inference.
@@ -77,7 +77,7 @@

 | **OpenVINO Security Addon**
 | :bdg-link-dark:`Github <https://github.com/openvinotoolkit/security_addon>`
-:bdg-link-success:`User Guide <https://docs.openvino.ai/2025/documentation/openvino-ecosystem/openvino-security-add-on.html>`
+:bdg-link-success:`User Guide <https://docs.openvino.ai/2025/about-openvino/openvino-ecosystem/openvino-security-add-on.html>`
 A solution for Model Developers and Independent Software Vendors to use secure packaging and
 secure model execution.
@@ -86,7 +86,7 @@

 | **Datumaro**
 | :bdg-link-dark:`Github <https://github.com/openvinotoolkit/datumaro>`
-:bdg-link-success:`Overview Page <https://docs.openvino.ai/2025/documentation/openvino-ecosystem/datumaro.html>`
+:bdg-link-success:`Overview Page <https://docs.openvino.ai/2025/about-openvino/openvino-ecosystem/datumaro.html>`
 A framework and a CLI tool for building, transforming, and analyzing datasets.
 |hr|
@@ -165,7 +165,7 @@ as:
 benchmark_app -m <model> -d <device> -i <input>
-Each of the :doc:`OpenVINO supported devices <../compatibility-and-support/supported-devices>`
+Each of the :doc:`OpenVINO supported devices <../../documentation/compatibility-and-support/supported-devices>`
 offers performance settings that contain command-line equivalents in the Benchmark app.

 While these settings provide really low-level control for the optimal model performance on a
@@ -97,7 +97,7 @@ Performance Information F.A.Q.

 Intel partners with vendors all over the world. For a list of Hardware Manufacturers, see the
 `Intel® AI: In Production Partners & Solutions Catalog <https://www.intel.com/content/www/us/en/internet-of-things/ai-in-production/partners-solutions-catalog.html>`__.
-For more details, see the :doc:`Supported Devices <../compatibility-and-support/supported-devices>` article.
+For more details, see the :doc:`Supported Devices <../../documentation/compatibility-and-support/supported-devices>` article.


 .. dropdown:: How can I optimize my models for better performance or accuracy?
docs/articles_en/documentation.rst (3 additions, 3 deletions)

@@ -13,18 +13,18 @@ Documentation

 API Reference <api/api_reference>
 OpenVINO IR format and Operation Sets <documentation/openvino-ir-format>
-Tool Ecosystem <documentation/openvino-ecosystem>
+Compatibility and Support <documentation/compatibility-and-support>
-Legacy Features <documentation/legacy-features>
 OpenVINO Extensibility <documentation/openvino-extensibility>
 OpenVINO™ Security <documentation/openvino-security>
+Legacy Features <documentation/legacy-features>


 This section provides reference documents that guide you through the OpenVINO toolkit workflow, from preparing models, through optimizing them, to deploying them in your own deep learning applications.

 | :doc:`API Reference doc path <api/api_reference>`
 | A collection of reference articles for the OpenVINO C++, C, and Python APIs.
-| :doc:`OpenVINO Ecosystem <documentation/openvino-ecosystem>`
+| :doc:`OpenVINO Ecosystem <about-openvino/openvino-ecosystem>`
 | Apart from the core components, OpenVINO offers tools, plugins, and expansions revolving around it, even if not constituting necessary parts of its workflow. This section gives you an overview of what makes up the OpenVINO toolkit.
 | :doc:`OpenVINO Extensibility Mechanism <documentation/openvino-extensibility>`
@@ -13,7 +13,7 @@ deep learning models:
 :doc:`NPU <../../openvino-workflow/running-inference/inference-devices-and-modes/npu-device>`.

 | For their usage guides, see :doc:`Devices and Modes <../../openvino-workflow/running-inference/inference-devices-and-modes>`.
-| For a detailed list of devices, see :doc:`System Requirements <../release-notes-openvino/system-requirements>`.
+| For a detailed list of devices, see :doc:`System Requirements <../../about-openvino/release-notes-openvino/system-requirements>`.

 Besides running inference with a specific device,
@@ -43,7 +43,7 @@ Feature Support and API Coverage
 :doc:`Multi-stream execution <../../openvino-workflow/running-inference/optimize-inference/optimizing-throughput>` Yes Yes No
 :doc:`Model caching <../../openvino-workflow/running-inference/optimize-inference/optimizing-latency/model-caching-overview>` Yes Partial Yes
 :doc:`Dynamic shapes <../../openvino-workflow/running-inference/dynamic-shapes>` Yes Partial No
-:doc:`Import/Export <../../documentation/openvino-ecosystem>` Yes Yes Yes
+:doc:`Import/Export <../../about-openvino/openvino-ecosystem>` Yes Yes Yes
 :doc:`Preprocessing acceleration <../../openvino-workflow/running-inference/optimize-inference/optimize-preprocessing>` Yes Yes No
 :doc:`Stateful models <../../openvino-workflow/running-inference/stateful-models>` Yes Yes Yes
 :doc:`Extensibility <../../documentation/openvino-extensibility>` Yes Yes No
docs/articles_en/documentation/openvino-extensibility.rst (1 addition, 1 deletion)

@@ -25,7 +25,7 @@ OpenVINO Extensibility Mechanism

 The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including
 TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle. The list of supported operations differs for each of the supported frameworks.
-To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <../about-openvino/compatibility-and-support/supported-operations>`.
+To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <../documentation/compatibility-and-support/supported-operations>`.

 Custom operations, which are not included in the list, are not recognized by OpenVINO out of the box. The need for a custom operation may appear in two cases:

docs/articles_en/documentation/openvino-security.rst (1 addition, 1 deletion)

@@ -8,7 +8,7 @@ with encryption or other security tools.
 Actual security and privacy requirements depend on your unique deployment scenario.
 This section provides general guidance on using OpenVINO tools and libraries securely.
 The main security measure for OpenVINO is its
-:doc:`Security Add-on <openvino-ecosystem/openvino-security-add-on>`. You can find its description
+:doc:`Security Add-on <../about-openvino/openvino-ecosystem/openvino-security-add-on>`. You can find its description
 in the Ecosystem section.

 .. _encrypted-models:
@@ -62,7 +62,7 @@ Supported ONNX Layers
 #####################

 For the list of supported standard layers, refer to the
-:doc:`Supported Operations <../../about-openvino/compatibility-and-support/supported-operations>`
+:doc:`Supported Operations <../../documentation/compatibility-and-support/supported-operations>`
 page.

 Additional Resources
@@ -152,7 +152,7 @@ Supported PaddlePaddle Layers
 #############################

 For the list of supported standard layers, refer to the
-:doc:`Supported Operations <../../about-openvino/compatibility-and-support/supported-operations>`
+:doc:`Supported Operations <../../documentation/compatibility-and-support/supported-operations>`
 page.

@@ -42,7 +42,7 @@ Supported TensorFlow Lite Layers
 ###################################

 For the list of supported standard layers, refer to the
-:doc:`Supported Operations <../../about-openvino/compatibility-and-support/supported-operations>`
+:doc:`Supported Operations <../../documentation/compatibility-and-support/supported-operations>`
 page.

 Supported TensorFlow Lite Models
@@ -383,7 +383,7 @@ Supported TensorFlow and TensorFlow 2 Keras Layers
 ##################################################

 For the list of supported standard layers, refer to the
-:doc:`Supported Operations <../../about-openvino/compatibility-and-support/supported-operations>`
+:doc:`Supported Operations <../../documentation/compatibility-and-support/supported-operations>`
 page.

@@ -22,7 +22,7 @@ To learn more about dynamic shapes in runtime, refer to the
 you can visit `Hugging Face <https://huggingface.co/models>`__.

 The OpenVINO Runtime API may present certain limitations in inferring models with undefined
-dimensions on some hardware. See the :doc:`Feature support matrix <../../about-openvino/compatibility-and-support/supported-devices>`
+dimensions on some hardware. See the :doc:`Feature support matrix <../../documentation/compatibility-and-support/supported-devices>`
 for reference. In this case, the ``input`` parameter and the
 :doc:`reshape method <../running-inference/changing-input-shape>` can help to resolve undefined
 dimensions.
@@ -190,7 +190,7 @@ For the same reason, it is not recommended to leave dimensions as undefined, wit

 When specifying bounds, the lower bound is not as important as the upper one. The upper bound allows inference devices to allocate memory for intermediate tensors more precisely. It also allows using fewer tuned kernels for different sizes.
 More precisely, the benefit of specifying the lower or upper bound is device-dependent.
-Depending on the plugin, specifying the upper bounds can be required. For information about dynamic shapes support on different devices, refer to the :doc:`feature support table <../../about-openvino/compatibility-and-support/supported-devices>`.
+Depending on the plugin, specifying the upper bounds can be required. For information about dynamic shapes support on different devices, refer to the :doc:`feature support table <../../documentation/compatibility-and-support/supported-devices>`.

 If the lower and upper bounds for a dimension are known, it is recommended to specify them, even if a plugin can execute a model without the bounds.

@@ -8,7 +8,7 @@ High-level Performance Hints
 an inference device.


-Even though all :doc:`supported devices <../../../about-openvino/compatibility-and-support/supported-devices>` in OpenVINO™ offer low-level performance settings, utilizing them is not recommended outside of very few cases.
+Even though all :doc:`supported devices <../../../documentation/compatibility-and-support/supported-devices>` in OpenVINO™ offer low-level performance settings, utilizing them is not recommended outside of very few cases.
 The preferred way to configure performance in OpenVINO Runtime is using performance hints. This is a future-proof solution fully compatible with the :doc:`automatic device selection inference mode <../inference-devices-and-modes/auto-device-selection>` and designed with *portability* in mind.

 The hints also set the direction of the configuration in the right order. Instead of mapping the application needs to the low-level performance settings, and keeping an associated application logic to configure each possible device separately, the hints express a target scenario with a single config key and let the *device* configure itself in response.
@@ -32,7 +32,7 @@ Performance Hints: How It Works

 Internally, every device "translates" the value of the hint to the actual performance settings.
 For example, the ``ov::hint::PerformanceMode::THROUGHPUT`` selects the number of CPU or GPU streams.
-Additionally, the optimal batch size is selected for the GPU and the :doc:`automatic batching <../inference-devices-and-modes/automatic-batching>` is applied whenever possible. To check whether the device supports it, refer to the :doc:`Supported devices <../../../about-openvino/compatibility-and-support/supported-devices>` article.
+Additionally, the optimal batch size is selected for the GPU and the :doc:`automatic batching <../inference-devices-and-modes/automatic-batching>` is applied whenever possible. To check whether the device supports it, refer to the :doc:`Supported devices <../../../documentation/compatibility-and-support/supported-devices>` article.

 The resulting (device-specific) settings can be queried back from the instance of the ``ov::CompiledModel``.
 Be aware that the ``benchmark_app`` outputs the actual settings for the ``THROUGHPUT`` hint. See the example of the output below:
docs/notebooks/auto-device-with-output.rst (1 addition, 1 deletion)

@@ -6,7 +6,7 @@ device <https://docs.openvino.ai/2025/openvino-workflow/running-inference/infere
 (or AUTO for short) selects the most suitable device for inference by
 considering the model precision, power efficiency, and processing
 capability of the available `compute
-devices <https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html>`__.
+devices <https://docs.openvino.ai/2025/documentation/compatibility-and-support/supported-devices.html>`__.
 The model precision (such as ``FP32``, ``FP16``, ``INT8``, etc.) is the
 first consideration to filter out the devices that cannot run the
 network efficiently.
docs/sphinx_setup/index.rst (1 addition, 1 deletion)

@@ -196,5 +196,5 @@ Key Features
 LEARN OPENVINO <learn-openvino>
 HOW TO USE - MAIN WORKFLOW <openvino-workflow>
 HOW TO USE - GENERATIVE AI WORKFLOW <openvino-workflow-generative>
-DOCUMENTATION <documentation>
+REFERENCE DOCUMENTATION <documentation>
 ABOUT OPENVINO <about-openvino>
