diff --git a/README.md b/README.md index f83c9d759b359..652268ec29cac 100644 --- a/README.md +++ b/README.md @@ -77,7 +77,7 @@ pip install vllm Visit our [documentation](https://vllm.readthedocs.io/en/latest/) to learn more. - [Installation](https://vllm.readthedocs.io/en/latest/getting_started/installation.html) - [Quickstart](https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html) -- [Supported Models](https://vllm.readthedocs.io/en/latest/models/supported_models.html) +- [List of Supported Models](https://vllm.readthedocs.io/en/latest/models/supported_models.html) ## Contributing diff --git a/docs/source/serving/architecture_helm_deployment.png b/docs/source/assets/deployment/architecture_helm_deployment.png similarity index 100% rename from docs/source/serving/architecture_helm_deployment.png rename to docs/source/assets/deployment/architecture_helm_deployment.png diff --git a/docs/source/contributing/dockerfile/dockerfile.md b/docs/source/contributing/dockerfile/dockerfile.md index 7ffec83333d7d..38ea956ba8dfb 100644 --- a/docs/source/contributing/dockerfile/dockerfile.md +++ b/docs/source/contributing/dockerfile/dockerfile.md @@ -1,7 +1,7 @@ # Dockerfile We provide a Dockerfile to construct the image for running an OpenAI compatible server with vLLM. -More information about deploying with Docker can be found [here](../../serving/deploying_with_docker.md). +More information about deploying with Docker can be found [here](#deployment-docker). Below is a visual representation of the multi-stage Dockerfile. The build graph contains the following nodes: diff --git a/docs/source/contributing/model/registration.md b/docs/source/contributing/model/registration.md index cf1cdb0c9de0f..fe5aa94c52896 100644 --- a/docs/source/contributing/model/registration.md +++ b/docs/source/contributing/model/registration.md @@ -3,7 +3,7 @@ # Model Registration vLLM relies on a model registry to determine how to run each model. -A list of pre-registered architectures can be found on the [Supported Models](#supported-models) page. +A list of pre-registered architectures can be found [here](#supported-models). If your model is not on this list, you must register it to vLLM. This page provides detailed instructions on how to do so. @@ -16,7 +16,7 @@ This gives you the ability to modify the codebase and test your model. After you have implemented your model (see [tutorial](#new-model-basic)), put it into the directory. Then, add your model class to `_VLLM_MODELS` in so that it is automatically registered upon importing vLLM. You should also include an example HuggingFace repository for this model in to run the unit tests. -Finally, update the [Supported Models](#supported-models) documentation page to promote your model! +Finally, update our [list of supported models](#supported-models) to promote your model! ```{important} The list of models in each section should be maintained in alphabetical order. 
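For out-of-tree models that cannot live in the vLLM source tree, the registry can also be extended at runtime via `ModelRegistry`. The snippet below is a minimal sketch, assuming a hypothetical `YourModelForCausalLM` class that already implements the vLLM model interface from the tutorial:

```python
# Minimal out-of-tree registration sketch. `your_package.modeling` and
# `YourModelForCausalLM` are placeholders for your own implementation.
from vllm import LLM, ModelRegistry

from your_package.modeling import YourModelForCausalLM

# Map the name listed under "architectures" in config.json to your class.
ModelRegistry.register_model("YourModelForCausalLM", YourModelForCausalLM)

# Once registered, the model can be loaded like any built-in architecture.
llm = LLM(model="your-org/your-model", trust_remote_code=True)
print(llm.generate("Hello, my name is"))
```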
diff --git a/docs/source/serving/deploying_with_docker.md b/docs/source/deployment/docker.md similarity index 98% rename from docs/source/serving/deploying_with_docker.md rename to docs/source/deployment/docker.md index 844bd27800c7a..2df1aca27f1e6 100644 --- a/docs/source/serving/deploying_with_docker.md +++ b/docs/source/deployment/docker.md @@ -1,6 +1,6 @@ -(deploying-with-docker)= +(deployment-docker)= -# Deploying with Docker +# Using Docker ## Use vLLM's Official Docker Image diff --git a/docs/source/serving/deploying_with_bentoml.md b/docs/source/deployment/frameworks/bentoml.md similarity index 89% rename from docs/source/serving/deploying_with_bentoml.md rename to docs/source/deployment/frameworks/bentoml.md index dfa0de4f0f6d7..ea0b5d1d4c93b 100644 --- a/docs/source/serving/deploying_with_bentoml.md +++ b/docs/source/deployment/frameworks/bentoml.md @@ -1,6 +1,6 @@ -(deploying-with-bentoml)= +(deployment-bentoml)= -# Deploying with BentoML +# BentoML [BentoML](https://github.com/bentoml/BentoML) allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-compliant image and deploy it on Kubernetes. diff --git a/docs/source/serving/deploying_with_cerebrium.md b/docs/source/deployment/frameworks/cerebrium.md similarity index 98% rename from docs/source/serving/deploying_with_cerebrium.md rename to docs/source/deployment/frameworks/cerebrium.md index 950064c8c1b10..be018dfb75d7a 100644 --- a/docs/source/serving/deploying_with_cerebrium.md +++ b/docs/source/deployment/frameworks/cerebrium.md @@ -1,6 +1,6 @@ -(deploying-with-cerebrium)= +(deployment-cerebrium)= -# Deploying with Cerebrium +# Cerebrium ```{raw} html

diff --git a/docs/source/serving/deploying_with_dstack.md b/docs/source/deployment/frameworks/dstack.md similarity index 98% rename from docs/source/serving/deploying_with_dstack.md rename to docs/source/deployment/frameworks/dstack.md index 381f5f786ca2c..4142c1d9f1f60 100644 --- a/docs/source/serving/deploying_with_dstack.md +++ b/docs/source/deployment/frameworks/dstack.md @@ -1,6 +1,6 @@ -(deploying-with-dstack)= +(deployment-dstack)= -# Deploying with dstack +# dstack ```{raw} html

diff --git a/docs/source/serving/deploying_with_helm.md b/docs/source/deployment/frameworks/helm.md similarity index 98% rename from docs/source/serving/deploying_with_helm.md rename to docs/source/deployment/frameworks/helm.md index 7286a0a88968f..18ed293191468 100644 --- a/docs/source/serving/deploying_with_helm.md +++ b/docs/source/deployment/frameworks/helm.md @@ -1,6 +1,6 @@ -(deploying-with-helm)= +(deployment-helm)= -# Deploying with Helm +# Helm A Helm chart to deploy vLLM for Kubernetes @@ -38,7 +38,7 @@ chart **including persistent volumes** and deletes the release. ## Architecture -```{image} architecture_helm_deployment.png +```{image} /assets/deployment/architecture_helm_deployment.png ``` ## Values diff --git a/docs/source/deployment/frameworks/index.md b/docs/source/deployment/frameworks/index.md new file mode 100644 index 0000000000000..6a59131d36618 --- /dev/null +++ b/docs/source/deployment/frameworks/index.md @@ -0,0 +1,13 @@ +# Using other frameworks + +```{toctree} +:maxdepth: 1 + +bentoml +cerebrium +dstack +helm +lws +skypilot +triton +``` diff --git a/docs/source/serving/deploying_with_lws.md b/docs/source/deployment/frameworks/lws.md similarity index 91% rename from docs/source/serving/deploying_with_lws.md rename to docs/source/deployment/frameworks/lws.md index 22bab419eaca3..349fa83fbcb9d 100644 --- a/docs/source/serving/deploying_with_lws.md +++ b/docs/source/deployment/frameworks/lws.md @@ -1,6 +1,6 @@ -(deploying-with-lws)= +(deployment-lws)= -# Deploying with LWS +# LWS LeaderWorkerSet (LWS) is a Kubernetes API that aims to address common deployment patterns of AI/ML inference workloads. A major use case is for multi-host/multi-node distributed inference. diff --git a/docs/source/serving/run_on_sky.md b/docs/source/deployment/frameworks/skypilot.md similarity index 98% rename from docs/source/serving/run_on_sky.md rename to docs/source/deployment/frameworks/skypilot.md index 115873ae49292..f02a943026922 100644 --- a/docs/source/serving/run_on_sky.md +++ b/docs/source/deployment/frameworks/skypilot.md @@ -1,6 +1,6 @@ -(on-cloud)= +(deployment-skypilot)= -# Deploying and scaling up with SkyPilot +# SkyPilot ```{raw} html

@@ -12,9 +12,9 @@ vLLM can be **run and scaled to multiple service replicas on clouds and Kubernet ## Prerequisites -- Go to the [HuggingFace model page](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and request access to the model {code}`meta-llama/Meta-Llama-3-8B-Instruct`. +- Go to the [HuggingFace model page](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and request access to the model `meta-llama/Meta-Llama-3-8B-Instruct`. - Check that you have installed SkyPilot ([docs](https://skypilot.readthedocs.io/en/latest/getting-started/installation.html)). -- Check that {code}`sky check` shows clouds or Kubernetes are enabled. +- Check that `sky check` shows clouds or Kubernetes are enabled. ```console pip install skypilot-nightly diff --git a/docs/source/serving/deploying_with_triton.md b/docs/source/deployment/frameworks/triton.md similarity index 87% rename from docs/source/serving/deploying_with_triton.md rename to docs/source/deployment/frameworks/triton.md index 9b0a6f1d54ae8..94d87120159c6 100644 --- a/docs/source/serving/deploying_with_triton.md +++ b/docs/source/deployment/frameworks/triton.md @@ -1,5 +1,5 @@ -(deploying-with-triton)= +(deployment-triton)= -# Deploying with NVIDIA Triton +# NVIDIA Triton The [Triton Inference Server](https://github.com/triton-inference-server) hosts a tutorial demonstrating how to quickly deploy a simple [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) model using vLLM. Please see [Deploying a vLLM model in Triton](https://github.com/triton-inference-server/tutorials/blob/main/Quick_Deploy/vLLM/README.md#deploying-a-vllm-model-in-triton) for more details. diff --git a/docs/source/deployment/integrations/index.md b/docs/source/deployment/integrations/index.md new file mode 100644 index 0000000000000..d47ede8967547 --- /dev/null +++ b/docs/source/deployment/integrations/index.md @@ -0,0 +1,9 @@ +# External Integrations + +```{toctree} +:maxdepth: 1 + +kserve +kubeai +llamastack +``` diff --git a/docs/source/serving/deploying_with_kserve.md b/docs/source/deployment/integrations/kserve.md similarity index 85% rename from docs/source/serving/deploying_with_kserve.md rename to docs/source/deployment/integrations/kserve.md index feaeb5d0ec8a2..c780fd74e8f55 100644 --- a/docs/source/serving/deploying_with_kserve.md +++ b/docs/source/deployment/integrations/kserve.md @@ -1,6 +1,6 @@ -(deploying-with-kserve)= +(deployment-kserve)= -# Deploying with KServe +# KServe vLLM can be deployed with [KServe](https://github.com/kserve/kserve) on Kubernetes for highly scalable distributed model serving. diff --git a/docs/source/serving/deploying_with_kubeai.md b/docs/source/deployment/integrations/kubeai.md similarity index 93% rename from docs/source/serving/deploying_with_kubeai.md rename to docs/source/deployment/integrations/kubeai.md index 3609d7e05acd3..2f5772e075d87 100644 --- a/docs/source/serving/deploying_with_kubeai.md +++ b/docs/source/deployment/integrations/kubeai.md @@ -1,6 +1,6 @@ -(deploying-with-kubeai)= +(deployment-kubeai)= -# Deploying with KubeAI +# KubeAI [KubeAI](https://github.com/substratusai/kubeai) is a Kubernetes operator that enables you to deploy and manage AI models on Kubernetes. It provides a simple and scalable way to deploy vLLM in production. Functionality such as scale-from-zero, load based autoscaling, model caching, and much more is provided out of the box with zero external dependencies. 
diff --git a/docs/source/serving/serving_with_llamastack.md b/docs/source/deployment/integrations/llamastack.md similarity index 95% rename from docs/source/serving/serving_with_llamastack.md rename to docs/source/deployment/integrations/llamastack.md index 71dadca7ad47c..474d2bdfa9580 100644 --- a/docs/source/serving/serving_with_llamastack.md +++ b/docs/source/deployment/integrations/llamastack.md @@ -1,6 +1,6 @@ -(run-on-llamastack)= +(deployment-llamastack)= -# Serving with Llama Stack +# Llama Stack vLLM is also available via [Llama Stack](https://github.com/meta-llama/llama-stack) . diff --git a/docs/source/serving/deploying_with_k8s.md b/docs/source/deployment/k8s.md similarity index 99% rename from docs/source/serving/deploying_with_k8s.md rename to docs/source/deployment/k8s.md index 5f9b0e4f55ecc..760214e112fba 100644 --- a/docs/source/serving/deploying_with_k8s.md +++ b/docs/source/deployment/k8s.md @@ -1,6 +1,6 @@ -(deploying-with-k8s)= +(deployment-k8s)= -# Deploying with Kubernetes +# Using Kubernetes Using Kubernetes to deploy vLLM is a scalable and efficient way to serve machine learning models. This guide will walk you through the process of deploying vLLM with Kubernetes, including the necessary prerequisites, steps for deployment, and testing. diff --git a/docs/source/serving/deploying_with_nginx.md b/docs/source/deployment/nginx.md similarity index 99% rename from docs/source/serving/deploying_with_nginx.md rename to docs/source/deployment/nginx.md index a1f00d8536465..a58f791c2997b 100644 --- a/docs/source/serving/deploying_with_nginx.md +++ b/docs/source/deployment/nginx.md @@ -1,6 +1,6 @@ (nginxloadbalancer)= -# Deploying with Nginx Loadbalancer +# Using Nginx This document shows how to launch multiple vLLM serving containers and use Nginx to act as a load balancer between the servers. diff --git a/docs/source/design/arch_overview.md b/docs/source/design/arch_overview.md index 2f1280c047672..5e0dd021ad02e 100644 --- a/docs/source/design/arch_overview.md +++ b/docs/source/design/arch_overview.md @@ -57,7 +57,7 @@ More API details can be found in the {doc}`Offline Inference The code for the `LLM` class can be found in . -### OpenAI-compatible API server +### OpenAI-Compatible API Server The second primary interface to vLLM is via its OpenAI-compatible API server. This server can be started using the `vllm serve` command. diff --git a/docs/source/features/disagg_prefill.md b/docs/source/features/disagg_prefill.md index 05226f2dec87c..645dc60807dd3 100644 --- a/docs/source/features/disagg_prefill.md +++ b/docs/source/features/disagg_prefill.md @@ -1,8 +1,12 @@ (disagg-prefill)= -# Disaggregated prefilling (experimental) +# Disaggregated Prefilling (experimental) -This page introduces you the disaggregated prefilling feature in vLLM. This feature is experimental and subject to change. +This page introduces you to the disaggregated prefilling feature in vLLM. + +```{note} +This feature is experimental and subject to change. +``` ## Why disaggregated prefilling? 
diff --git a/docs/source/features/spec_decode.md b/docs/source/features/spec_decode.md index 8c52c97a41e48..bc8a0aa14dc5a 100644 --- a/docs/source/features/spec_decode.md +++ b/docs/source/features/spec_decode.md @@ -1,6 +1,6 @@ (spec-decode)= -# Speculative decoding +# Speculative Decoding ```{warning} Please note that speculative decoding in vLLM is not yet optimized and does diff --git a/docs/source/getting_started/installation/gpu-rocm.md b/docs/source/getting_started/installation/gpu-rocm.md index 796911d7305a6..e36b92513e31d 100644 --- a/docs/source/getting_started/installation/gpu-rocm.md +++ b/docs/source/getting_started/installation/gpu-rocm.md @@ -148,7 +148,7 @@ $ export PYTORCH_ROCM_ARCH="gfx90a;gfx942" $ python3 setup.py develop ``` -This may take 5-10 minutes. Currently, {code}`pip install .` does not work for ROCm installation. +This may take 5-10 minutes. Currently, `pip install .` does not work for ROCm installation. ```{tip} - Triton flash attention is used by default. For benchmarking purposes, it is recommended to run a warm up step before collecting perf numbers. diff --git a/docs/source/getting_started/installation/hpu-gaudi.md b/docs/source/getting_started/installation/hpu-gaudi.md index 94de169f51a73..1d50cef3bdc83 100644 --- a/docs/source/getting_started/installation/hpu-gaudi.md +++ b/docs/source/getting_started/installation/hpu-gaudi.md @@ -82,7 +82,7 @@ $ python setup.py develop ## Supported Features -- [Offline batched inference](#offline-batched-inference) +- [Offline inference](#offline-inference) - Online inference via [OpenAI-Compatible Server](#openai-compatible-server) - HPU autodetection - no need to manually select device within vLLM - Paged KV cache with algorithms enabled for Intel Gaudi accelerators diff --git a/docs/source/getting_started/quickstart.md b/docs/source/getting_started/quickstart.md index ff216f8af30f9..3f9556165ece4 100644 --- a/docs/source/getting_started/quickstart.md +++ b/docs/source/getting_started/quickstart.md @@ -2,20 +2,20 @@ # Quickstart -This guide will help you quickly get started with vLLM to: +This guide will help you quickly get started with vLLM to perform: -- [Run offline batched inference](#offline-batched-inference) -- [Run OpenAI-compatible inference](#openai-compatible-server) +- [Offline batched inference](#quickstart-offline) +- [Online inference using OpenAI-compatible server](#quickstart-online) ## Prerequisites - OS: Linux - Python: 3.9 -- 3.12 -- GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100, etc.) ## Installation -You can install vLLM using pip. It's recommended to use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to create and manage Python environments. +If you are using NVIDIA GPUs, you can install vLLM using [pip](https://pypi.org/project/vllm/) directly. +It's recommended to use [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html) to create and manage Python environments. ```console $ conda create -n myenv python=3.10 -y @@ -23,9 +23,11 @@ $ conda activate myenv $ pip install vllm ``` -Please refer to the [installation documentation](#installation-index) for more details on installing vLLM. +```{note} +For non-CUDA platforms, please refer [here](#installation-index) for specific instructions on how to install vLLM. 
+``` -(offline-batched-inference)= +(quickstart-offline)= ## Offline Batched Inference @@ -73,7 +75,7 @@ for output in outputs: print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` -(openai-compatible-server)= +(quickstart-online)= ## OpenAI-Compatible Server diff --git a/docs/source/index.md b/docs/source/index.md index 4bc40bf0f5e41..c335155bd6e14 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -65,32 +65,14 @@ getting_started/troubleshooting getting_started/faq ``` -```{toctree} -:caption: Serving -:maxdepth: 1 - -serving/openai_compatible_server -serving/deploying_with_docker -serving/deploying_with_k8s -serving/deploying_with_helm -serving/deploying_with_nginx -serving/distributed_serving -serving/metrics -serving/integrations -serving/tensorizer -serving/runai_model_streamer -serving/engine_args -serving/env_vars -serving/usage_stats -``` - ```{toctree} :caption: Models :maxdepth: 1 -models/supported_models models/generative_models models/pooling_models +models/supported_models +models/extensions/index ``` ```{toctree} @@ -99,7 +81,6 @@ models/pooling_models features/quantization/index features/lora -features/multimodal_inputs features/tool_calling features/structured_outputs features/automatic_prefix_caching @@ -108,6 +89,32 @@ features/spec_decode features/compatibility_matrix ``` +```{toctree} +:caption: Inference and Serving +:maxdepth: 1 + +serving/offline_inference +serving/openai_compatible_server +serving/multimodal_inputs +serving/distributed_serving +serving/metrics +serving/engine_args +serving/env_vars +serving/usage_stats +serving/integrations/index +``` + +```{toctree} +:caption: Deployment +:maxdepth: 1 + +deployment/docker +deployment/k8s +deployment/nginx +deployment/frameworks/index +deployment/integrations/index +``` + ```{toctree} :caption: Performance :maxdepth: 1 diff --git a/docs/source/models/extensions/index.md b/docs/source/models/extensions/index.md new file mode 100644 index 0000000000000..cff09d12eba47 --- /dev/null +++ b/docs/source/models/extensions/index.md @@ -0,0 +1,8 @@ +# Built-in Extensions + +```{toctree} +:maxdepth: 1 + +runai_model_streamer +tensorizer +``` diff --git a/docs/source/serving/runai_model_streamer.md b/docs/source/models/extensions/runai_model_streamer.md similarity index 98% rename from docs/source/serving/runai_model_streamer.md rename to docs/source/models/extensions/runai_model_streamer.md index d4269050ff574..fe2701194a604 100644 --- a/docs/source/serving/runai_model_streamer.md +++ b/docs/source/models/extensions/runai_model_streamer.md @@ -1,6 +1,6 @@ (runai-model-streamer)= -# Loading Models with Run:ai Model Streamer +# Loading models with Run:ai Model Streamer Run:ai Model Streamer is a library to read tensors in concurrency, while streaming it to GPU memory. Further reading can be found in [Run:ai Model Streamer Documentation](https://github.com/run-ai/runai-model-streamer/blob/master/docs/README.md). 
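As a rough sketch of how the streamer is typically wired into vLLM (the `runai_streamer` load format and the `concurrency` tunable below are assumptions; consult the linked documentation for the options your version actually supports):

```python
# Sketch: loading weights through Run:ai Model Streamer instead of the default loader.
# Assumes the streamer extra is installed, e.g. `pip install "vllm[runai]"`.
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.2-3B-Instruct",
    load_format="runai_streamer",
    # Extra loader settings are passed through model_loader_extra_config;
    # "concurrency" controls how many threads stream tensors (assumed key).
    model_loader_extra_config={"concurrency": 16},
)
```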
diff --git a/docs/source/serving/tensorizer.md b/docs/source/models/extensions/tensorizer.md similarity index 95% rename from docs/source/serving/tensorizer.md rename to docs/source/models/extensions/tensorizer.md index d3dd29d48f730..42ed5c795dd27 100644 --- a/docs/source/serving/tensorizer.md +++ b/docs/source/models/extensions/tensorizer.md @@ -1,6 +1,6 @@ (tensorizer)= -# Loading Models with CoreWeave's Tensorizer +# Loading models with CoreWeave's Tensorizer vLLM supports loading models with [CoreWeave's Tensorizer](https://docs.coreweave.com/coreweave-machine-learning-and-ai/inference/tensorizer). vLLM model tensors that have been serialized to disk, an HTTP/HTTPS endpoint, or S3 endpoint can be deserialized diff --git a/docs/source/models/supported_models.md b/docs/source/models/supported_models.md index 94a8849f7edcd..590bea992d1fc 100644 --- a/docs/source/models/supported_models.md +++ b/docs/source/models/supported_models.md @@ -1,9 +1,9 @@ (supported-models)= -# Supported Models +# List of Supported Models vLLM supports generative and pooling models across various tasks. -If a model supports more than one task, you can set the task via the {code}`--task` argument. +If a model supports more than one task, you can set the task via the `--task` argument. For each task, we list the model architectures that have been implemented in vLLM. Alongside each architecture, we include some popular models that use it. @@ -14,8 +14,8 @@ Alongside each architecture, we include some popular models that use it. By default, vLLM loads models from [HuggingFace (HF) Hub](https://huggingface.co/models). -To determine whether a given model is supported, you can check the {code}`config.json` file inside the HF repository. -If the {code}`"architectures"` field contains a model architecture listed below, then it should be supported in theory. +To determine whether a given model is supported, you can check the `config.json` file inside the HF repository. +If the `"architectures"` field contains a model architecture listed below, then it should be supported in theory. ````{tip} The easiest way to check if your model is really supported at runtime is to run the program below: @@ -48,7 +48,7 @@ To use models from [ModelScope](https://www.modelscope.cn) instead of HuggingFac $ export VLLM_USE_MODELSCOPE=True ``` -And use with {code}`trust_remote_code=True`. +And use with `trust_remote_code=True`. ```python from vllm import LLM @@ -420,15 +420,15 @@ you should explicitly specify the task type to ensure that the model is used in ``` ```{note} -{code}`ssmits/Qwen2-7B-Instruct-embed-base` has an improperly defined Sentence Transformers config. -You should manually set mean pooling by passing {code}`--override-pooler-config '{"pooling_type": "MEAN"}'`. +`ssmits/Qwen2-7B-Instruct-embed-base` has an improperly defined Sentence Transformers config. +You should manually set mean pooling by passing `--override-pooler-config '{"pooling_type": "MEAN"}'`. ``` ```{note} -Unlike base Qwen2, {code}`Alibaba-NLP/gte-Qwen2-7B-instruct` uses bi-directional attention. -You can set {code}`--hf-overrides '{"is_causal": false}'` to change the attention mask accordingly. +Unlike base Qwen2, `Alibaba-NLP/gte-Qwen2-7B-instruct` uses bi-directional attention. +You can set `--hf-overrides '{"is_causal": false}'` to change the attention mask accordingly. 
-On the other hand, its 1.5B variant ({code}`Alibaba-NLP/gte-Qwen2-1.5B-instruct`) uses causal attention +On the other hand, its 1.5B variant (`Alibaba-NLP/gte-Qwen2-1.5B-instruct`) uses causal attention despite being described otherwise on its model card. ``` @@ -468,8 +468,8 @@ If your model is not in the above list, we will try to automatically convert the {func}`vllm.model_executor.models.adapters.as_reward_model`. By default, we return the hidden states of each token directly. ```{important} -For process-supervised reward models such as {code}`peiyi9979/math-shepherd-mistral-7b-prm`, the pooling config should be set explicitly, -e.g.: {code}`--override-pooler-config '{"pooling_type": "STEP", "step_tag_id": 123, "returned_token_ids": [456, 789]}'`. +For process-supervised reward models such as `peiyi9979/math-shepherd-mistral-7b-prm`, the pooling config should be set explicitly, +e.g.: `--override-pooler-config '{"pooling_type": "STEP", "step_tag_id": 123, "returned_token_ids": [456, 789]}'`. ``` #### Classification (`--task classify`) @@ -537,13 +537,13 @@ The following modalities are supported depending on the model: - **V**ideo - **A**udio -Any combination of modalities joined by {code}`+` are supported. +Any combination of modalities joined by `+` is supported. -- e.g.: {code}`T + I` means that the model supports text-only, image-only, and text-with-image inputs. +- e.g.: `T + I` means that the model supports text-only, image-only, and text-with-image inputs. -On the other hand, modalities separated by {code}`/` are mutually exclusive. +On the other hand, modalities separated by `/` are mutually exclusive. -- e.g.: {code}`T / I` means that the model supports text-only and image-only inputs, but not text-with-image inputs. +- e.g.: `T / I` means that the model supports text-only and image-only inputs, but not text-with-image inputs. See [this page](#multimodal-inputs) on how to pass multi-modal inputs to the model. @@ -731,8 +731,8 @@ See [this page](#generative-models) for more information on how to use generativ + Multiple items can be inputted per text prompt for this modality. ````{important} -To enable multiple multi-modal items per text prompt, you have to set {code}`limit_mm_per_prompt` (offline inference) -or {code}`--limit-mm-per-prompt` (online inference). For example, to enable passing up to 4 images per text prompt: +To enable multiple multi-modal items per text prompt, you have to set `limit_mm_per_prompt` (offline inference) +or `--limit-mm-per-prompt` (online inference). For example, to enable passing up to 4 images per text prompt: ```python llm = LLM( @@ -751,11 +751,11 @@ vLLM currently only supports adding LoRA to the language backbone of multimodal ``` ```{note} -To use {code}`TIGER-Lab/Mantis-8B-siglip-llama3`, you have pass {code}`--hf_overrides '{"architectures": ["MantisForConditionalGeneration"]}'` when running vLLM. +To use `TIGER-Lab/Mantis-8B-siglip-llama3`, you have to pass `--hf_overrides '{"architectures": ["MantisForConditionalGeneration"]}'` when running vLLM. ``` ```{note} -The official {code}`openbmb/MiniCPM-V-2` doesn't work yet, so we need to use a fork ({code}`HwwwH/MiniCPM-V-2`) for now. +The official `openbmb/MiniCPM-V-2` doesn't work yet, so we need to use a fork (`HwwwH/MiniCPM-V-2`) for now. 
For more details, please see: ``` @@ -770,7 +770,7 @@ you should explicitly specify the task type to ensure that the model is used in #### Text Embedding (`--task embed`) -Any text generation model can be converted into an embedding model by passing {code}`--task embed`. +Any text generation model can be converted into an embedding model by passing `--task embed`. ```{note} To get the best results, you should use pooling models that are specifically trained as such. @@ -818,7 +818,7 @@ At vLLM, we are committed to facilitating the integration and support of third-p 2. **Best-Effort Consistency**: While we aim to maintain a level of consistency between the models implemented in vLLM and other frameworks like transformers, complete alignment is not always feasible. Factors like acceleration techniques and the use of low-precision computations can introduce discrepancies. Our commitment is to ensure that the implemented models are functional and produce sensible results. ```{tip} -When comparing the output of {code}`model.generate` from HuggingFace Transformers with the output of {code}`llm.generate` from vLLM, note that the former reads the model's generation config file (i.e., [generation_config.json](https://github.com/huggingface/transformers/blob/19dabe96362803fb0a9ae7073d03533966598b17/src/transformers/generation/utils.py#L1945)) and applies the default parameters for generation, while the latter only uses the parameters passed to the function. Ensure all sampling parameters are identical when comparing outputs. +When comparing the output of `model.generate` from HuggingFace Transformers with the output of `llm.generate` from vLLM, note that the former reads the model's generation config file (i.e., [generation_config.json](https://github.com/huggingface/transformers/blob/19dabe96362803fb0a9ae7073d03533966598b17/src/transformers/generation/utils.py#L1945)) and applies the default parameters for generation, while the latter only uses the parameters passed to the function. Ensure all sampling parameters are identical when comparing outputs. ``` 3. **Issue Resolution and Model Updates**: Users are encouraged to report any bugs or issues they encounter with third-party models. Proposed fixes should be submitted via PRs, with a clear explanation of the problem and the rationale behind the proposed solution. If a fix for one model impacts another, we rely on the community to highlight and address these cross-model dependencies. Note: for bugfix PRs, it is good etiquette to inform the original author to seek their feedback. diff --git a/docs/source/serving/distributed_serving.md b/docs/source/serving/distributed_serving.md index 6fbc1ea104678..b1703249d7224 100644 --- a/docs/source/serving/distributed_serving.md +++ b/docs/source/serving/distributed_serving.md @@ -18,13 +18,13 @@ After adding enough GPUs and nodes to hold the model, you can run vLLM first, wh There is one edge case: if the model fits in a single node with multiple GPUs, but the number of GPUs cannot divide the model size evenly, you can use pipeline parallelism, which splits the model along layers and supports uneven splits. In this case, the tensor parallel size should be 1 and the pipeline parallel size should be the number of GPUs. ``` -## Details for Distributed Inference and Serving +## Running vLLM on a single node vLLM supports distributed tensor-parallel and pipeline-parallel inference and serving. Currently, we support [Megatron-LM's tensor parallel algorithm](https://arxiv.org/pdf/1909.08053.pdf). 
We manage the distributed runtime with either [Ray](https://github.com/ray-project/ray) or python native multiprocessing. Multiprocessing can be used when deploying on a single node, multi-node inferencing currently requires Ray. -Multiprocessing will be used by default when not running in a Ray placement group and if there are sufficient GPUs available on the same node for the configured {code}`tensor_parallel_size`, otherwise Ray will be used. This default can be overridden via the {code}`LLM` class {code}`distributed_executor_backend` argument or {code}`--distributed-executor-backend` API server argument. Set it to {code}`mp` for multiprocessing or {code}`ray` for Ray. It's not required for Ray to be installed for the multiprocessing case. +Multiprocessing will be used by default when not running in a Ray placement group and if there are sufficient GPUs available on the same node for the configured `tensor_parallel_size`, otherwise Ray will be used. This default can be overridden via the `LLM` class `distributed_executor_backend` argument or `--distributed-executor-backend` API server argument. Set it to `mp` for multiprocessing or `ray` for Ray. It's not required for Ray to be installed for the multiprocessing case. -To run multi-GPU inference with the {code}`LLM` class, set the {code}`tensor_parallel_size` argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs: +To run multi-GPU inference with the `LLM` class, set the `tensor_parallel_size` argument to the number of GPUs you want to use. For example, to run inference on 4 GPUs: ```python from vllm import LLM @@ -32,14 +32,14 @@ llm = LLM("facebook/opt-13b", tensor_parallel_size=4) output = llm.generate("San Franciso is a") ``` -To run multi-GPU serving, pass in the {code}`--tensor-parallel-size` argument when starting the server. For example, to run API server on 4 GPUs: +To run multi-GPU serving, pass in the `--tensor-parallel-size` argument when starting the server. For example, to run API server on 4 GPUs: ```console $ vllm serve facebook/opt-13b \ $ --tensor-parallel-size 4 ``` -You can also additionally specify {code}`--pipeline-parallel-size` to enable pipeline parallelism. For example, to run API server on 8 GPUs with pipeline parallelism and tensor parallelism: +You can also additionally specify `--pipeline-parallel-size` to enable pipeline parallelism. For example, to run API server on 8 GPUs with pipeline parallelism and tensor parallelism: ```console $ vllm serve gpt2 \ @@ -47,7 +47,7 @@ $ --tensor-parallel-size 4 \ $ --pipeline-parallel-size 2 ``` -## Multi-Node Inference and Serving +## Running vLLM on multiple nodes If a single node does not have enough GPUs to hold the model, you can run the model using multiple nodes. It is important to make sure the execution environment is the same on all nodes, including the model path, the Python environment. The recommended way is to use docker images to ensure the same environment, and hide the heterogeneity of the host machines via mapping them into the same docker configuration. 
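To make the executor backend choice described above explicit in code, here is a minimal single-node sketch (the model name is only an example; `"mp"` can be swapped for `"ray"`):

```python
# Sketch: forcing the multiprocessing backend for single-node tensor parallelism.
from vllm import LLM

llm = LLM(
    "facebook/opt-13b",
    tensor_parallel_size=4,             # one rank per GPU on this node
    distributed_executor_backend="mp",  # "ray" is required for multi-node runs
)
output = llm.generate("San Francisco is a")
```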
diff --git a/docs/source/serving/integrations.md b/docs/source/serving/integrations.md deleted file mode 100644 index d214c77254257..0000000000000 --- a/docs/source/serving/integrations.md +++ /dev/null @@ -1,17 +0,0 @@ -# Integrations - -```{toctree} -:maxdepth: 1 - -run_on_sky -deploying_with_kserve -deploying_with_kubeai -deploying_with_triton -deploying_with_bentoml -deploying_with_cerebrium -deploying_with_lws -deploying_with_dstack -serving_with_langchain -serving_with_llamaindex -serving_with_llamastack -``` diff --git a/docs/source/serving/integrations/index.md b/docs/source/serving/integrations/index.md new file mode 100644 index 0000000000000..371c284981ce9 --- /dev/null +++ b/docs/source/serving/integrations/index.md @@ -0,0 +1,8 @@ +# External Integrations + +```{toctree} +:maxdepth: 1 + +langchain +llamaindex +``` diff --git a/docs/source/serving/serving_with_langchain.md b/docs/source/serving/integrations/langchain.md similarity index 82% rename from docs/source/serving/serving_with_langchain.md rename to docs/source/serving/integrations/langchain.md index 96bd5943f3d64..49ff6e0c32a72 100644 --- a/docs/source/serving/serving_with_langchain.md +++ b/docs/source/serving/integrations/langchain.md @@ -1,10 +1,10 @@ -(run-on-langchain)= +(serving-langchain)= -# Serving with Langchain +# LangChain -vLLM is also available via [Langchain](https://github.com/langchain-ai/langchain) . +vLLM is also available via [LangChain](https://github.com/langchain-ai/langchain) . -To install langchain, run +To install LangChain, run ```console $ pip install langchain langchain_community -q diff --git a/docs/source/serving/serving_with_llamaindex.md b/docs/source/serving/integrations/llamaindex.md similarity index 74% rename from docs/source/serving/serving_with_llamaindex.md rename to docs/source/serving/integrations/llamaindex.md index 98859d8e3f828..9961c181d7e1c 100644 --- a/docs/source/serving/serving_with_llamaindex.md +++ b/docs/source/serving/integrations/llamaindex.md @@ -1,10 +1,10 @@ -(run-on-llamaindex)= +(serving-llamaindex)= -# Serving with llama_index +# LlamaIndex -vLLM is also available via [llama_index](https://github.com/run-llama/llama_index) . +vLLM is also available via [LlamaIndex](https://github.com/run-llama/llama_index) . -To install llamaindex, run +To install LlamaIndex, run ```console $ pip install llama-index-llms-vllm -q diff --git a/docs/source/serving/metrics.md b/docs/source/serving/metrics.md index 2dc78643f6d8f..e6ded2e6dd465 100644 --- a/docs/source/serving/metrics.md +++ b/docs/source/serving/metrics.md @@ -4,7 +4,7 @@ vLLM exposes a number of metrics that can be used to monitor the health of the system. These metrics are exposed via the `/metrics` endpoint on the vLLM OpenAI compatible API server. 
-You can start the server using Python, or using [Docker](deploying_with_docker.md): +You can start the server using Python, or using [Docker](#deployment-docker): ```console $ vllm serve unsloth/Llama-3.2-1B-Instruct diff --git a/docs/source/features/multimodal_inputs.md b/docs/source/serving/multimodal_inputs.md similarity index 95% rename from docs/source/features/multimodal_inputs.md rename to docs/source/serving/multimodal_inputs.md index 4f45a9f448cf0..0efa09f2869ca 100644 --- a/docs/source/features/multimodal_inputs.md +++ b/docs/source/serving/multimodal_inputs.md @@ -18,7 +18,7 @@ To input multi-modal data, follow this schema in {class}`vllm.inputs.PromptType` ### Image -You can pass a single image to the {code}`'image'` field of the multi-modal dictionary, as shown in the following examples: +You can pass a single image to the `'image'` field of the multi-modal dictionary, as shown in the following examples: ```python llm = LLM(model="llava-hf/llava-1.5-7b-hf") @@ -122,21 +122,21 @@ for o in outputs: ### Video -You can pass a list of NumPy arrays directly to the {code}`'video'` field of the multi-modal dictionary +You can pass a list of NumPy arrays directly to the `'video'` field of the multi-modal dictionary instead of using multi-image input. Full example: ### Audio -You can pass a tuple {code}`(array, sampling_rate)` to the {code}`'audio'` field of the multi-modal dictionary. +You can pass a tuple `(array, sampling_rate)` to the `'audio'` field of the multi-modal dictionary. Full example: ### Embedding To input pre-computed embeddings belonging to a data type (i.e. image, video, or audio) directly to the language model, -pass a tensor of shape {code}`(num_items, feature_size, hidden_size of LM)` to the corresponding field of the multi-modal dictionary. +pass a tensor of shape `(num_items, feature_size, hidden_size of LM)` to the corresponding field of the multi-modal dictionary. ```python # Inference with image embeddings as input @@ -294,7 +294,7 @@ $ export VLLM_IMAGE_FETCH_TIMEOUT= ### Video -Instead of {code}`image_url`, you can pass a video file via {code}`video_url`. Here is a simple example using [LLaVA-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf). +Instead of `image_url`, you can pass a video file via `video_url`. Here is a simple example using [LLaVA-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf). First, launch the OpenAI-compatible server: @@ -418,7 +418,7 @@ result = chat_completion_from_base64.choices[0].message.content print("Chat completion output from input audio:", result) ``` -Alternatively, you can pass {code}`audio_url`, which is the audio counterpart of {code}`image_url` for image input: +Alternatively, you can pass `audio_url`, which is the audio counterpart of `image_url` for image input: ```python chat_completion_from_url = client.chat.completions.create( diff --git a/docs/source/serving/offline_inference.md b/docs/source/serving/offline_inference.md new file mode 100644 index 0000000000000..83178f7811825 --- /dev/null +++ b/docs/source/serving/offline_inference.md @@ -0,0 +1,79 @@ +(offline-inference)= + +# Offline Inference + +You can run vLLM in your own code on a list of prompts. + +The offline API is based on the {class}`~vllm.LLM` class. +To initialize the vLLM engine, create a new instance of `LLM` and specify the model to run. 
+ +For example, the following code downloads the [`facebook/opt-125m`](https://huggingface.co/facebook/opt-125m) model from HuggingFace +and runs it in vLLM using the default configuration. + +```python +llm = LLM(model="facebook/opt-125m") +``` + +After initializing the `LLM` instance, you can perform model inference using various APIs. +The available APIs depend on the type of model that is being run: + +- [Generative models](#generative-models) output logprobs which are sampled from to obtain the final output text. +- [Pooling models](#pooling-models) output their hidden states directly. + +Please refer to the above pages for more details about each API. + +```{seealso} +[API Reference](/dev/offline_inference/offline_index) +``` + +## Configuration Options + +This section lists the most common options for running the vLLM engine. +For a full list, refer to the [Engine Arguments](#engine-args) page. + +### Reducing memory usage + +Large models might cause your machine to run out of memory (OOM). Here are some options that help alleviate this problem. + +#### Tensor Parallelism (TP) + +Tensor parallelism (`tensor_parallel_size` option) can be used to split the model across multiple GPUs. + +The following code splits the model across 2 GPUs. + +```python +llm = LLM(model="ibm-granite/granite-3.1-8b-instruct", + tensor_parallel_size=2) +``` + +```{important} +To ensure that vLLM initializes CUDA correctly, you should avoid calling related functions (e.g. {func}`torch.cuda.set_device`) +before initializing vLLM. Otherwise, you may run into an error like `RuntimeError: Cannot re-initialize CUDA in forked subprocess`. + +To control which devices are used, please instead set the `CUDA_VISIBLE_DEVICES` environment variable. +``` + +#### Quantization + +Quantized models take less memory at the cost of lower precision. + +Statically quantized models can be downloaded from HF Hub (some popular ones are available at [Neural Magic](https://huggingface.co/neuralmagic)) +and used directly without extra configuration. + +Dynamic quantization is also supported via the `quantization` option -- see [here](#quantization-index) for more details. + +#### Context length and batch size + +You can further reduce memory usage by limiting the context length of the model (`max_model_len` option) +and the maximum batch size (`max_num_seqs` option). + +```python +llm = LLM(model="adept/fuyu-8b", + max_model_len=2048, + max_num_seqs=2) +``` + +### Performance optimization and tuning + +You can potentially improve the performance of vLLM by tuning various options. +Please refer to [this guide](#optimization-and-tuning) for more details. diff --git a/docs/source/serving/openai_compatible_server.md b/docs/source/serving/openai_compatible_server.md index 97e9879075570..1e5ea6357d202 100644 --- a/docs/source/serving/openai_compatible_server.md +++ b/docs/source/serving/openai_compatible_server.md @@ -1,8 +1,10 @@ -# OpenAI Compatible Server +(openai-compatible-server)= -vLLM provides an HTTP server that implements OpenAI's [Completions](https://platform.openai.com/docs/api-reference/completions) and [Chat](https://platform.openai.com/docs/api-reference/chat) API, and more! +# OpenAI-Compatible Server -You can start the server via the [`vllm serve`](#vllm-serve) command, or through [Docker](deploying_with_docker.md): +vLLM provides an HTTP server that implements OpenAI's [Completions API](https://platform.openai.com/docs/api-reference/completions), [Chat API](https://platform.openai.com/docs/api-reference/chat), and more! 
+ +You can start the server via the [`vllm serve`](#vllm-serve) command, or through [Docker](#deployment-docker): ```bash vllm serve NousResearch/Meta-Llama-3-8B-Instruct --dtype auto --api-key token-abc123 ``` diff --git a/docs/source/serving/usage_stats.md b/docs/source/serving/usage_stats.md index 3d02fbab9216e..cfc3cb2576873 100644 --- a/docs/source/serving/usage_stats.md +++ b/docs/source/serving/usage_stats.md @@ -45,7 +45,7 @@ You can preview the collected data by running the following command: tail ~/.config/vllm/usage_stats.json ``` -## Opt-out of Usage Stats Collection +## Opting out You can opt-out of usage stats collection by setting the `VLLM_NO_USAGE_STATS` or `DO_NOT_TRACK` environment variable, or by creating a `~/.config/vllm/do_not_track` file:
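For example, a small sketch of applying either documented opt-out mechanism from Python before vLLM is imported (the variable name and file path come from the text above):

```python
# Sketch: opting out of usage stats collection programmatically.
import os
from pathlib import Path

# Documented environment variable opt-out; set it before vLLM starts.
os.environ["VLLM_NO_USAGE_STATS"] = "1"

# Equivalent marker-file opt-out.
marker = Path.home() / ".config" / "vllm" / "do_not_track"
marker.parent.mkdir(parents=True, exist_ok=True)
marker.touch()
```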