[Feat] add imagePullSecrets option to helm chart #179

Open

wants to merge 1 commit into base: main
Conversation

kalantar

An image pull secret is needed to pull a vLLM image from a private repository. This PR introduces a small change to the helm chart that allows an image pull secret (imagePullSecret) to be specified in a modelSpec entry, alongside the repository and tag.
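
A minimal sketch of how the new field might be used in values.yaml; the exact key name and placement (a per-model imagePullSecret string inside modelSpec) are taken from the description above rather than from the diff, and registry.example.com and registry-credentials are hypothetical placeholders for a private registry and a pre-created kubernetes.io/dockerconfigjson Secret:

modelSpec:
  - name: "opt125m"
    repository: "registry.example.com/private/vllm-openai"  # image in a private registry
    tag: "latest"
    modelURL: "facebook/opt-125m"
    replicaCount: 1
    requestCPU: 6
    requestMemory: "16Gi"
    requestGPU: 1
    # Assumed new field: the name of an existing image pull Secret in the
    # release namespace, used by the pods that pull this model's image.
    imagePullSecret: "registry-credentials"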

FIX #xxxx (link existing issues this PR will resolve)

BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE


  • Make sure the code changes pass the pre-commit checks.
  • Sign off your commit by using -s when doing git commit.
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].
Detailed Checklist (Click to Expand)

Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Please try to classify PRs so the type of change is easy to understand. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
  • [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • Pass all linter checks. Please use pre-commit to format your code. See README.md for installation.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Please include sufficient tests to ensure the change stays correct and robust. This includes both unit tests and integration tests.

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.

What to Expect for the Reviews

We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.

ApostaC (Collaborator) left a comment:

Can you edit the values.yaml a bit to add the imagePullSecrets into the comments?
See here:

# modelSpec - configuring multiple serving engine deployments that run different models
# Each entry in the modelSpec array should contain the following fields:
# - name: (string) The name of the model, e.g., "example-model"
# - repository: (string) The repository of the model, e.g., "vllm/vllm-openai"
# - tag: (string) The tag of the model, e.g., "latest"
# - modelURL: (string) The URL of the model, e.g., "facebook/opt-125m"
#
# - replicaCount: (int) The number of replicas for the model, e.g., 1
# - requestCPU: (int) The number of CPUs requested for the model, e.g., 6
# - requestMemory: (string) The amount of memory requested for the model, e.g., "16Gi"
# - requestGPU: (int) The number of GPUs requested for the model, e.g., 1
#
# - pvcStorage: (Optional, string) The amount of storage requested for the model, e.g., "50Gi".
# - pvcAccessMode: (Optional, list) The access mode policy for the mounted volume, e.g., ["ReadWriteOnce"]
# - storageClass: (Optional, string) The storage class of the PVC, e.g., "", default is ""
# - pvcMatchLabels: (Optional, map) The labels to match the PVC, e.g., {model: "opt125m"}
#
# - vllmConfig: (optional, map) The configuration for the vLLM model, supported options are:
#   - enablePrefixCaching: (optional, bool) Enable prefix caching, e.g., false
#   - enableChunkedPrefill: (optional, bool) Enable chunked prefill, e.g., false
#   - maxModelLen: (optional, int) The maximum model length, e.g., 16384
#   - dtype: (optional, string) The data type, e.g., "bfloat16"
#   - tensorParallelSize: (optional, int) The degree of tensor parallelism, e.g., 2
#   - extraArgs: (optional, list) Extra command line arguments to pass to vLLM, e.g., ["--disable-log-requests"]
#
# - lmcacheConfig: (optional, map) The configuration of the LMCache for KV offloading, supported options are:
#   - enabled: (optional, bool) Enable LMCache, e.g., true
#   - cpuOffloadingBufferSize: (optional, string) The CPU offloading buffer size, e.g., "30"
#
# - hf_token: (optional, string) The Hugging Face token for this model
#
# - env: (optional, list) The environment variables to set in the container, e.g., your HF_TOKEN
#
# - nodeSelectorTerms: (optional, list) The node selector terms to match the nodes
#
# - shmSize: (optional, string) The size of the shared memory, e.g., "20Gi"
#
# Example:
# modelSpec:
#   - name: "mistral"
#     repository: "lmcache/vllm-openai"
#     tag: "latest"
#     modelURL: "mistralai/Mistral-7B-Instruct-v0.2"
#     replicaCount: 1
#
#     requestCPU: 10
#     requestMemory: "64Gi"
#     requestGPU: 1
#
#     pvcStorage: "50Gi"
#     pvcAccessMode:
#       - ReadWriteOnce
#     pvcMatchLabels:
#       model: "mistral"
#
#     vllmConfig:
#       enableChunkedPrefill: false
#       enablePrefixCaching: false
#       maxModelLen: 16384
#       dtype: "bfloat16"
#       extraArgs: ["--disable-log-requests", "--gpu-memory-utilization", "0.8"]
#
#     lmcacheConfig:
#       enabled: true
#       cpuOffloadingBufferSize: "30"
#
#     hf_token: <HUGGING_FACE_TOKEN>
#
#     nodeSelectorTerms:
#       - matchExpressions:
#           - key: nvidia.com/gpu.product
#             operator: "In"
#             values:
#               - "NVIDIA-RTX-A6000"

Otherwise LGTM! Thanks for the contribution 🎉!
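
If the comment block is extended as requested, the added lines might look roughly like the following; the exact wording and key name are assumptions that mirror the style of the existing entries, and registry-credentials / registry.example.com are hypothetical placeholders:

# - imagePullSecret: (optional, string) The name of the secret used to pull the image from a private registry, e.g., "registry-credentials"
#
# Example (private registry):
# modelSpec:
#   - name: "mistral"
#     repository: "registry.example.com/private/vllm-openai"
#     tag: "latest"
#     imagePullSecret: "registry-credentials"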

ApostaC mentioned this pull request on Feb 25, 2025.