Add the ability to benchmark multiple models concurrently #850
Merged
Conversation
liu-cong requested review from achandrasekar, ahg-g and annapendleton as code owners on October 15, 2024 03:07.
liu-cong commented on Oct 15, 2024.
Review threads (resolved):
- benchmarks/benchmark/tools/profile-generator/container/latency_throughput_curve.sh (outdated)
- benchmarks/benchmark/tools/profile-generator/container/latency_throughput_curve.sh (outdated)
- benchmarks/benchmark/tools/profile-generator/container/benchmark_serving.py
Thanks for sending this out @liu-cong! The change looks good overall and is very useful. @Bslabe123, if you can take a deeper look and make sure the existing cases with JetStream / vLLM continue to work as expected, that would be great.
/hold I am testing the Terraform changes.
I also tested Terraform; looks good to me.
This is useful for benchmarking multiple LoRA adapters; I used this to benchmark the gateway.
- Also fixes latency_throughput_curve.sh to parse non-integer request rates properly.
- Also adds "errors" to the benchmark results.
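As a rough illustration of the pattern this PR describes, here is a minimal asyncio sketch that runs one benchmark loop per model concurrently and counts per-model errors. All names (send_request, benchmark_model) are hypothetical stand-ins, not the actual code added to benchmark_serving.py:

```python
# Illustrative sketch only -- not the PR's actual implementation.
import asyncio

async def send_request(model: str, prompt: str) -> dict:
    # Stand-in for the real HTTP call to the serving endpoint.
    await asyncio.sleep(0.01)
    return {"model": model, "prompt": prompt, "ok": True}

async def benchmark_model(model: str, prompts: list[str]) -> dict:
    # Fire all requests for this model and keep failures instead of raising,
    # so they can be reported as "errors" in the results.
    results = await asyncio.gather(
        *(send_request(model, p) for p in prompts), return_exceptions=True
    )
    errors = sum(1 for r in results if isinstance(r, Exception))
    return {"model": model, "completed": len(results) - errors, "errors": errors}

async def main() -> None:
    models = ["base-model", "lora-adapter-1", "lora-adapter-2"]
    prompts = ["hello"] * 8
    # One benchmark task per model; all models are exercised concurrently,
    # which matches the multi-LoRA use case described above.
    summaries = await asyncio.gather(*(benchmark_model(m, prompts) for m in models))
    for s in summaries:
        print(s)

if __name__ == "__main__":
    asyncio.run(main())
```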
Bslabe123 approved these changes on Oct 22, 2024.
achandrasekar approved these changes on Oct 22, 2024.
leroyjb pushed a commit to leroyjb/ai-on-gke that referenced this pull request on Jan 24, 2025:

…dPlatform#850)
* Add the ability to benchmark multiple models concurrently. This is useful for benchmarking multiple LoRA adapters. Also fixes latency_throughput_curve.sh to parse non-integer request rates properly. Also adds "errors" to the benchmark results.
* Re-sample requests for each model
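The last commit bullet, "Re-sample requests for each model", can be pictured with a small hypothetical sketch: each model draws its own random request sample rather than replaying one shared draw. None of these names come from the PR:

```python
# Illustrative sketch only, assuming prompts are sampled from a fixed dataset.
import random

def sample_requests(dataset: list[str], n: int, seed: int | None = None) -> list[str]:
    """Draw an independent random sample of prompts."""
    return random.Random(seed).sample(dataset, n)

dataset = [f"prompt-{i}" for i in range(1000)]
# Each model gets its own draw instead of sharing one fixed sample.
per_model_requests = {
    model: sample_requests(dataset, 8)
    for model in ["base-model", "lora-adapter-1"]
}
print({m: reqs[:2] for m, reqs in per_model_requests.items()})
```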