
[Frontend] New allowed_token_ids decoding request parameter #6753

Merged
merged 5 commits into vllm-project:main on Jul 29, 2024

Conversation

njhill
Member

@njhill njhill commented Jul 24, 2024

This PR adds a new LogitsProcessor for constraining decoded tokens to a fixed set of token ids. This is needed for some of our classification use cases.

It is exposed via a new allowed_token_ids parameter in the OpenAI completion API (wouldn't be applicable to chat use cases).
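
For context, a logits processor of this kind conceptually masks out every token that is not in the allowed set before sampling. A minimal sketch of the idea (not the actual vLLM implementation; the function name and signature are illustrative):

```python
import torch

def make_allowed_token_ids_processor(allowed_ids: list[int]):
    """Build a logits processor that restricts sampling to `allowed_ids`."""
    allowed = torch.tensor(allowed_ids, dtype=torch.long)

    def process(token_ids: list[int], logits: torch.Tensor) -> torch.Tensor:
        # Set every logit outside the allowed set to -inf so those tokens
        # receive zero probability after softmax.
        mask = torch.full_like(logits, float("-inf"))
        mask[allowed] = 0.0
        return logits + mask

    return process
```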


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which consists of a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337
Member

This reminds me of #5986...

@DarkLight1337 DarkLight1337 self-assigned this Jul 24, 2024
Member

@DarkLight1337 DarkLight1337 left a comment


Overall LGTM, just some minor nits.

@njhill
Member Author

njhill commented Jul 24, 2024

@DarkLight1337 it's a bit different in that this would be used in conjunction with a limited set of specific class tokens and would probably only be used with max_tokens=1; it's not intended for text generation. #5986 is for excluding certain words from generated text output.
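
As an illustration of that classification pattern, a request to a running vLLM OpenAI-compatible server might look like the sketch below (the server URL, model name, prompt, and token ids are placeholders, not values from this PR):

```python
import requests

# Hypothetical class-label token ids obtained from the model's tokenizer,
# e.g. the ids of the tokens "yes" and "no".
CLASS_TOKEN_IDS = [3869, 1939]

response = requests.post(
    "http://localhost:8000/v1/completions",   # assumed local vLLM server
    json={
        "model": "my-classifier-model",        # placeholder model name
        "prompt": "Is this review positive? Review: Great product!\nAnswer:",
        "max_tokens": 1,                       # a single classification token
        "allowed_token_ids": CLASS_TOKEN_IDS,  # restrict decoding to the class tokens
    },
)
print(response.json()["choices"][0]["text"])
```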

@njhill
Member Author

njhill commented Jul 24, 2024

Thanks @DarkLight1337, I'll add a test too.

@mgoin
Member

mgoin commented Jul 25, 2024

Could the same result be achieved with high logit_bias on the token ids?

@njhill
Member Author

njhill commented Jul 25, 2024

@mgoin we thought about this, but it's not really reliable since we don't know what the logits distribution looks like: what value do we use in the general case to ensure that the tokens of interest all end up higher than any other token, without overflowing?
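
To make the difference concrete, here is a small illustrative sketch with made-up logit values: a finite logit_bias only shifts scores and can still be beaten by an unexpectedly large logit, while a hard mask of the kind used by allowed_token_ids guarantees zero probability outside the allowed set.

```python
import torch

logits = torch.tensor([2.0, 30.0, 1.0])   # token 1 happens to have a very large logit
allowed = [0, 2]

# logit_bias approach: add a fixed bias to the allowed tokens.
biased = logits.clone()
biased[allowed] += 20.0                    # bias not large enough here; token 1 still wins
print(torch.softmax(biased, dim=-1))

# allowed_token_ids approach: mask everything else to -inf.
masked = torch.full_like(logits, float("-inf"))
masked[allowed] = logits[allowed]
print(torch.softmax(masked, dim=-1))       # probability mass only on the allowed tokens
```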

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) July 26, 2024 02:26
@github-actions github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jul 26, 2024
@DarkLight1337 DarkLight1337 merged commit 9f69d82 into vllm-project:main Jul 29, 2024
72 checks passed
@njhill njhill deleted the allowed_token_ids branch July 29, 2024 23:38
tjohnson31415 added a commit to tjohnson31415/vllm that referenced this pull request Jul 30, 2024
* upstream/main: (66 commits)
  [Bugfix] Fix PaliGemma MMP (vllm-project#6930)
  [TPU] Fix greedy decoding (vllm-project#6933)
  [Kernel] Tuned int8 kernels for Ada Lovelace (vllm-project#6848)
  [Kernel] Fix marlin divide-by-zero warnings (vllm-project#6904)
  [ci] GHA workflow to remove ready label upon "/notready" comment (vllm-project#6921)
  [Kernel] Remove unused variables in awq/gemm_kernels.cu (vllm-project#6908)
  [Frontend] New `allowed_token_ids` decoding request parameter (vllm-project#6753)
  [Bugfix] Allow vllm to still work if triton is not installed. (vllm-project#6786)
  [TPU] Support tensor parallelism in async llm engine (vllm-project#6891)
  [Kernel] Fix deprecation function warnings squeezellm quant_cuda_kernel (vllm-project#6901)
  [Core] Reduce unnecessary compute when logprobs=None (vllm-project#6532)
  [Kernel] Tuned FP8 Kernels for Ada Lovelace (vllm-project#6677)
  [Model] Initialize support for InternVL2 series models (vllm-project#6514)
  [Misc] Pass cutlass_fp8_supported correctly in fbgemm_fp8 (vllm-project#6871)
  Add Nemotron to PP_SUPPORTED_MODELS (vllm-project#6863)
  [Kernel] Increase precision of GPTQ/AWQ Marlin kernel (vllm-project#6795)
  [TPU] Reduce compilation time & Upgrade PyTorch XLA version  (vllm-project#6856)
  [Docs] Add RunLLM chat widget (vllm-project#6857)
  [Model] Initial support for BLIP-2 (vllm-project#5920)
  [CI/Build][Doc] Update CI and Doc for VLM example changes (vllm-project#6860)
  ...
Xaenalt pushed a commit to Xaenalt/vllm that referenced this pull request Aug 1, 2024
kylesayrs pushed a commit to neuralmagic/vllm that referenced this pull request Aug 17, 2024
@whyiug
Contributor

whyiug commented Oct 23, 2024

Why isn't the chat case applicable?
Additionally, how can this be used with multimodal models? I've encountered some classification cases involving multimodal models. I'd appreciate your guidance. @njhill

Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
KuntaiDu pushed a commit to KuntaiDu/vllm that referenced this pull request Nov 20, 2024
@xymou

xymou commented Nov 23, 2024

How can this be used with chat completions?
