diff --git a/CMakeLists.txt b/CMakeLists.txt
index 6c2f770b60aa5b..2187deb8e8ce24 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -48,7 +48,7 @@ endif()
 
 project(OpenVINO
         DESCRIPTION "OpenVINO toolkit"
-        HOMEPAGE_URL "https://docs.openvino.ai/2024/home.html"
+        HOMEPAGE_URL "https://docs.openvino.ai/2025/index.html"
         LANGUAGES C CXX)
 
 find_package(OpenVINODeveloperScripts REQUIRED
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index c30ce12665ab33..42f778b5d847da 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -52,7 +52,7 @@ product better.
    Since the market of computing devices is constantly evolving, OpenVINO is
    always open to extending its support for new hardware. If you want to run
    inference on a device that is currently not supported, you can see how to
    develop a new plugin for it in the
-   [Plugin Developer Guide](https://docs.openvino.ai/2024/documentation/openvino-extensibility/openvino-plugin-library.html).
+   [Plugin Developer Guide](https://docs.openvino.ai/2025/documentation/openvino-extensibility/openvino-plugin-library.html).
 
 ### Improve documentation
diff --git a/README.md b/README.md
index 9ed2d4690e39e9..55514d9ec11e3d 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@
 Open-source software toolkit for optimizing and deploying deep learning models.
 
- Documentation • Blog • Key Features • Tutorials • Integrations • Benchmarks • Generative AI
+ Documentation • Blog • Key Features • Tutorials • Integrations • Benchmarks • Generative AI
 
 [![PyPI Status](https://badge.fury.io/py/openvino.svg)](https://badge.fury.io/py/openvino)
@@ -24,22 +24,22 @@ Open-source software toolkit for optimizing and deploying deep learning models.
 - **Broad Platform Compatibility**: Reduce resource demands and efficiently deploy on a range of platforms from edge to cloud. OpenVINO™ supports inference on CPU (x86, ARM), GPU (OpenCL capable, integrated and discrete) and AI accelerators (Intel NPU).
 - **Community and Ecosystem**: Join an active community contributing to the enhancement of deep learning performance across various domains.
 
-Check out the [OpenVINO Cheat Sheet](https://docs.openvino.ai/2024/_static/download/OpenVINO_Quick_Start_Guide.pdf) and [Key Features](https://docs.openvino.ai/2024/about-openvino/key-features.html) for a quick reference.
+Check out the [OpenVINO Cheat Sheet](https://docs.openvino.ai/2025/_static/download/OpenVINO_Quick_Start_Guide.pdf) and [Key Features](https://docs.openvino.ai/2025/about-openvino/key-features.html) for a quick reference.
 
 ## Installation
 
-[Get your preferred distribution of OpenVINO](https://docs.openvino.ai/2024/get-started/install-openvino.html) or use this command for quick installation:
+[Get your preferred distribution of OpenVINO](https://docs.openvino.ai/2025/get-started/install-openvino.html) or use this command for quick installation:
 
 ```sh
 pip install -U openvino
 ```
 
-Check [system requirements](https://docs.openvino.ai/2024/about-openvino/system-requirements.html) and [supported devices](https://docs.openvino.ai/2024/about-openvino/compatibility-and-support/supported-devices.html) for detailed information.
+Check [system requirements](https://docs.openvino.ai/2025/about-openvino/system-requirements.html) and [supported devices](https://docs.openvino.ai/2025/about-openvino/compatibility-and-support/supported-devices.html) for detailed information.
 
 ## Tutorials and Examples
 
-[OpenVINO Quickstart example](https://docs.openvino.ai/2024/get-started.html) will walk you through the basics of deploying your first model.
+[OpenVINO Quickstart example](https://docs.openvino.ai/2025/get-started.html) will walk you through the basics of deploying your first model.
 
 Learn how to optimize and deploy popular models with the [OpenVINO Notebooks](https://github.com/openvinotoolkit/openvino_notebooks)📚:
 - [Create an LLM-powered Chatbot using OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llm-chatbot/llm-chatbot-generate-api.ipynb)
@@ -48,7 +48,7 @@ Learn how to optimize and deploy popular models with the [OpenVINO Notebooks](ht
 - [Multimodal assistant with LLaVa and OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llava-multimodal-chatbot/llava-multimodal-chatbot-genai.ipynb)
 - [Automatic speech recognition using Whisper and OpenVINO](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/whisper-asr-genai/whisper-asr-genai.ipynb)
 
-Discover more examples in the [OpenVINO Samples (Python & C++)](https://docs.openvino.ai/2024/learn-openvino/openvino-samples.html) and [Notebooks (Python)](https://docs.openvino.ai/2024/learn-openvino/interactive-tutorials-python.html).
+Discover more examples in the [OpenVINO Samples (Python & C++)](https://docs.openvino.ai/2025/learn-openvino/openvino-samples.html) and [Notebooks (Python)](https://docs.openvino.ai/2025/learn-openvino/interactive-tutorials-python.html).
 
 Here are easy-to-follow code examples demonstrating how to run PyTorch and TensorFlow model inference using OpenVINO:
 
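The README's examples themselves sit in the elided lines between these hunks; only the last two lines of the TensorFlow snippet survive as context in the hunk that follows. For orientation, here is a minimal sketch of the flow those lines belong to. The MobileNetV2 model choice and the "CPU" device string are illustrative assumptions, not part of this diff:

```python
import numpy as np
import openvino as ov
import tensorflow as tf

# Illustrative stand-in model (assumption): any Keras model taking 1x224x224x3 input works.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Convert the TensorFlow model into OpenVINO's in-memory representation.
ov_model = ov.convert_model(model)

# Compile for a target device; "CPU" is assumed here, "GPU", "NPU", or "AUTO" also work.
compiled_model = ov.compile_model(ov_model, "CPU")

# Run inference on random data; these two lines appear verbatim as context below.
data = np.random.rand(1, 224, 224, 3)
output = compiled_model({0: data})
```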
@@ -96,11 +96,11 @@ data = np.random.rand(1, 224, 224, 3)
 output = compiled_model({0: data})
 ```
 
-OpenVINO supports the CPU, GPU, and NPU [devices](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes.html) and works with models from PyTorch, TensorFlow, ONNX, TensorFlow Lite, PaddlePaddle, and JAX/Flax [frameworks](https://docs.openvino.ai/2024/openvino-workflow/model-preparation.html). It includes [APIs](https://docs.openvino.ai/2024/api/api_reference.html) in C++, Python, C, NodeJS, and offers the GenAI API for optimized model pipelines and performance.
+OpenVINO supports the CPU, GPU, and NPU [devices](https://docs.openvino.ai/2025/openvino-workflow/running-inference/inference-devices-and-modes.html) and works with models from PyTorch, TensorFlow, ONNX, TensorFlow Lite, PaddlePaddle, and JAX/Flax [frameworks](https://docs.openvino.ai/2025/openvino-workflow/model-preparation.html). It includes [APIs](https://docs.openvino.ai/2025/api/api_reference.html) in C++, Python, C, NodeJS, and offers the GenAI API for optimized model pipelines and performance.
 
 ## Generative AI with OpenVINO
 
-Get started with the OpenVINO GenAI [installation](https://docs.openvino.ai/2024/get-started/install-openvino/install-openvino-genai.html) and refer to the [detailed guide](https://docs.openvino.ai/2024/openvino-workflow-generative/generative-inference.html) to explore the capabilities of Generative AI using OpenVINO.
+Get started with the OpenVINO GenAI [installation](https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-genai.html) and refer to the [detailed guide](https://docs.openvino.ai/2025/openvino-workflow-generative/generative-inference.html) to explore the capabilities of Generative AI using OpenVINO.
 
 Learn how to run LLMs and GenAI with [Samples](https://github.com/openvinotoolkit/openvino.genai/tree/master/samples) in the [OpenVINO™ GenAI repo](https://github.com/openvinotoolkit/openvino.genai).
 See GenAI in action with Jupyter notebooks: [LLM-powered Chatbot](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llm-chatbot/README.md) and [LLM Instruction-following pipeline](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llm-question-answering/README.md).
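The hunk above only links to the GenAI guides, so a minimal sketch of what the GenAI API looks like in practice may help. The local `TinyLlama` model directory, the export command named in the comment, and the "CPU" device are assumptions, not content from this diff:

```python
import openvino_genai as ov_genai

# Assumes an LLM already exported to OpenVINO IR in a local folder, e.g. via
# `optimum-cli export openvino --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 TinyLlama`.
models_path = "TinyLlama"

# Build a text-generation pipeline on the chosen device ("CPU" is an assumption).
pipe = ov_genai.LLMPipeline(models_path, "CPU")

# Generate up to 100 new tokens for the prompt.
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```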
@@ -122,7 +122,7 @@ Learn how to run LLMs and GenAI with [Samples](https://github.com/openvinotoolki
 ### Integrations
 
 - [🤗Optimum Intel](https://github.com/huggingface/optimum-intel) - grab and use models leveraging OpenVINO within the Hugging Face API.
-- [Torch.compile](https://docs.openvino.ai/2024/openvino-workflow/torch-compile.html) - use OpenVINO for Python-native applications by JIT-compiling code into optimized kernels.
+- [Torch.compile](https://docs.openvino.ai/2025/openvino-workflow/torch-compile.html) - use OpenVINO for Python-native applications by JIT-compiling code into optimized kernels.
 - [OpenVINO LLMs inference and serving with vLLM](https://docs.vllm.ai/en/stable/getting_started/openvino-installation.html) - enhance vLLM's fast and easy model serving with the OpenVINO backend.
 - [OpenVINO Execution Provider for ONNX Runtime](https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html) - use OpenVINO as a backend with your existing ONNX Runtime code.
 - [LlamaIndex](https://docs.llamaindex.ai/en/stable/examples/llm/openvino/) - build context-augmented GenAI applications with the LlamaIndex framework and enhance runtime performance with OpenVINO.
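The Torch.compile bullet in the hunk above refers to the `openvino` backend that registers itself with PyTorch's JIT compiler. A minimal sketch of that wiring, with a toy model and the device option as assumptions:

```python
import torch

# Toy eager-mode model (assumption: any torch.nn.Module works).
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)

# JIT-compile through the OpenVINO backend (available once the openvino package is installed).
compiled = torch.compile(model, backend="openvino", options={"device": "CPU"})

# The first call compiles optimized kernels; subsequent calls reuse them.
out = compiled(torch.randn(1, 64))
```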
@@ -133,7 +133,7 @@ Check out the [Awesome OpenVINO](https://github.com/openvinotoolkit/awesome-open
 
 ## Performance
 
-Explore [OpenVINO Performance Benchmarks](https://docs.openvino.ai/2024/about-openvino/performance-benchmarks.html) to discover the optimal hardware configurations and plan your AI deployment based on verified data.
+Explore [OpenVINO Performance Benchmarks](https://docs.openvino.ai/2025/about-openvino/performance-benchmarks.html) to discover the optimal hardware configurations and plan your AI deployment based on verified data.
 
 ## Contribution and Support
 
@@ -149,7 +149,7 @@ You can ask questions and get support on:
 
 ## Resources
 
-* [Release Notes](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino.html)
+* [Release Notes](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino.html)
 * [OpenVINO Blog](https://blog.openvino.ai/)
 * [OpenVINO™ toolkit on Medium](https://medium.com/@openvino)
 
@@ -164,7 +164,7 @@ You can opt-out at any time by running the command:
 opt_in_out --opt_out
 ```
 
-More Information is available at [OpenVINO™ Telemetry](https://docs.openvino.ai/2024/about-openvino/additional-resources/telemetry.html).
+More Information is available at [OpenVINO™ Telemetry](https://docs.openvino.ai/2025/about-openvino/additional-resources/telemetry.html).
 
 ## License
 
diff --git a/docs/RELEASE.MD b/docs/RELEASE.MD
index b345431f3f2bcf..5f7769e06b51a7 100644
--- a/docs/RELEASE.MD
+++ b/docs/RELEASE.MD
@@ -13,7 +13,7 @@ This phase takes 2-4 weeks and involves scoping the backlog, prioritizing it, an
 ### Execution (development of new features)
 
 - [OpenVINO Contributing Guide](https://github.com/openvinotoolkit/openvino/blob/master/CONTRIBUTING.md)
-- [Code Contribution Guide](https://docs.openvino.ai/2024/about-openvino/contributing/code-contribution-guide.html)
+- [Code Contribution Guide](https://docs.openvino.ai/2025/about-openvino/contributing/code-contribution-guide.html)
 - [OpenVINO First Good Issue](https://github.com/openvinotoolkit/openvino/issues/17502)
 
 ### Stabilization (Feature Freeze, Code Freeze milestones)
@@ -25,5 +25,5 @@ This phase takes 2-4 weeks and involves scoping the backlog, prioritizing it, an
 - After Code Freeze, the testing team can perform final regression testing to ensure that recent changes have not introduced new bugs and that the software meets the required quality standards.
 
 ### Distribution
-- OpenVINO has different types of build distribution: Regular releases, Long-Term Support, Pre-release releases, Nightly builds. Read more here: [OpenVINO Release Policy](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/release-policy.html)
+- OpenVINO has different types of build distribution: Regular releases, Long-Term Support, Pre-release releases, Nightly builds. Read more here: [OpenVINO Release Policy](https://docs.openvino.ai/2025/about-openvino/release-notes-openvino/release-policy.html)
 - Different distribution channels are supported. Explore different options here: [OpenVINO Download](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html)
diff --git a/docs/articles_en/about-openvino/contributing.rst b/docs/articles_en/about-openvino/contributing.rst
index f14e5f58249259..430b5cd1ef1b3a 100644
--- a/docs/articles_en/about-openvino/contributing.rst
+++ b/docs/articles_en/about-openvino/contributing.rst
@@ -89,7 +89,7 @@ PR. This way, it will be easier for other developers to track changes.
 
 If you want to run inference on a device that is currently not supported,
 you can see how to develop a new plugin for it in the
-`Plugin Developer Guide