Automatically configure KV cache size #6

Merged
WoosukKwon merged 17 commits into main from autoconfig on Mar 12, 2023
Conversation

@WoosukKwon WoosukKwon (Collaborator) commented Mar 3, 2023

This PR adds an OPT memory analyzer to the system and uses it to automatically determine the KV cache size. A rough sketch of the idea is shown after the lists below.

Tested models:

  • OPT-125M
  • OPT-350M
  • OPT-1.3B
  • OPT-2.7B
  • OPT-6.7B
  • OPT-13B

Tested GPUs:

  • A100
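
The PR's analyzer itself isn't reproduced here; below is a minimal, hedged sketch of the general approach: profile free GPU memory and divide the remaining budget by the per-block footprint of the KV cache. All names, defaults, and the `gpu_memory_utilization` knob are assumptions for illustration, not the PR's actual API.

```python
import torch

def estimate_num_gpu_blocks(
    num_layers: int,
    num_heads: int,
    head_size: int,
    block_size: int = 16,          # tokens per KV cache block (assumed)
    dtype_size: int = 2,           # bytes per element, e.g. FP16
    gpu_memory_utilization: float = 0.95,
) -> int:
    """Estimate how many KV cache blocks fit on the current GPU.

    A sketch only: a real analyzer must also account for peak
    activation memory during a forward pass, which this ignores.
    """
    # One block stores keys *and* values for every layer and head.
    bytes_per_block = (
        2 * num_layers * num_heads * head_size * block_size * dtype_size
    )
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    # Memory already held by weights and other allocations is
    # (total - free); cap overall usage at the utilization fraction.
    budget = total_bytes * gpu_memory_utilization - (total_bytes - free_bytes)
    return max(int(budget // bytes_per_block), 0)

# Example: OPT-6.7B has 32 layers, 32 heads, and head size 128.
# num_blocks = estimate_num_gpu_blocks(num_layers=32, num_heads=32, head_size=128)
```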

@WoosukKwon WoosukKwon merged commit e9d3f2f into main Mar 12, 2023
@WoosukKwon WoosukKwon deleted the autoconfig branch March 12, 2023 07:23
xiangyuT added a commit to xiangyuT/vllm that referenced this pull request Oct 24, 2023
* finish changing scheduler

* finish merge

* fix model

* Fix (vllm-project#5)

* fix problems

* fix

* delete unused params

* remove redundant comments

---------

Co-authored-by: Xiangyu Tian <[email protected]>
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024
slyalin pushed a commit to slyalin/vllm that referenced this pull request Mar 21, 2024
mzusman added a commit to mzusman/vllm that referenced this pull request Apr 16, 2024
Co-authored-by: Mor Zusman <[email protected]>
dtrifiro referenced this pull request in dtrifiro/vllm Apr 26, 2024
[CI/Build] Dockerfile.ubi : Remove test stage
Starmys pushed a commit to Starmys/vllm that referenced this pull request May 20, 2024
@alixiaodi alixiaodi mentioned this pull request Aug 2, 2024
zeroorhero pushed a commit to zeroorhero/vllm that referenced this pull request Sep 23, 2024
heheda12345 added a commit to heheda12345/vllm that referenced this pull request Sep 25, 2024
[Bugfix] Include encoder_prompt_tokens in num_prompt_tokens in UsageInfo