[V1] Refactor parallel sampling support #13774
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs would not trigger a full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀 …
Nice work cleaning this up!
We should verify that this overhead is negligible with a quick benchmark.
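That check could be as simple as timing a batch of n=1 generations on this branch and on main and comparing. A minimal sketch (the model choice, prompt, batch size, and token budget are arbitrary assumptions, not part of the PR):

```python
# Rough n=1 latency check: run the same script on main and on this branch and
# compare the elapsed times. Only the relative difference is meaningful.
import time

from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any small model works for a relative check
prompts = ["Hello, my name is"] * 256
params = SamplingParams(n=1, max_tokens=32)

# Warm-up to exclude startup and compilation costs.
llm.generate(prompts, params)

start = time.perf_counter()
llm.generate(prompts, params)
elapsed = time.perf_counter() - start
print(f"n=1 batch of {len(prompts)} prompts: {elapsed:.2f}s")
```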
Thanks @markmc, looks great and I agree doing the aggregation in the output processor is much nicer! Just some minor suggestions to simplify further.
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed from 1aed2ed to 75e9d49
The initial implementation went to great efforts to add parallel sampling as a wrapper at the highest layer of abstraction possible. This resulted in a lot of tricky code to post-process RequestOutputs to aggregate them where necessary.

Instead, it probably makes sense to implement parallel sampling at the layer that actually creates RequestOutput objects - i.e. in OutputProcessor.

To do this, we simply need to allow for fanning out child requests in LLMEngine.add_request(), passing details of the fan-out to OutputProcessor.

This adds some overhead to the n=1 case (see SingularSamplingRequest) in return for significantly less overhead and complication in the parallel sampling case.

Signed-off-by: Mark McLoughlin <[email protected]>
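The fan-out described here could look roughly like the sketch below. This is an illustration of the idea, not the actual vLLM code: `ParentState`, `child_request_id`, and the `engine._add_request(..., parent=...)` hook are hypothetical names.

```python
# Illustrative sketch of fanning out a parallel-sampling request into n child
# requests at add_request() time, so the output processor can aggregate them.
from dataclasses import dataclass, field


@dataclass
class ParentState:
    """Bookkeeping for one user request that fans out into n child requests."""
    request_id: str
    n: int
    child_outputs: dict = field(default_factory=dict)

    def child_request_id(self, index: int) -> str:
        return f"{index}_{self.request_id}"


def add_request(engine, request_id: str, prompt, params) -> None:
    n = getattr(params, "n", 1)
    if n == 1:
        # The n=1 path stays a single request (modulo a thin wrapper).
        engine._add_request(request_id, prompt, params)
        return
    parent = ParentState(request_id, n)
    for idx in range(n):
        # Each child samples one sequence; the output processor later folds the
        # children back into a single RequestOutput keyed by the parent id.
        engine._add_request(parent.child_request_id(idx), prompt, params,
                            parent=parent)
```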
Address PR feedback from Nick. Signed-off-by: Mark McLoughlin <[email protected]>
Fixes: TypeError: RequestOutput.__init__() missing 1 required positional argument: 'outputs' Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Keeping the output aggregator on RequestState was just a silly refactoring mistake - it clearly needs to be on the parent request since we are aggregating across child requests. Also address some PR feedback from Nick to make the logic here less confusing. Signed-off-by: Mark McLoughlin <[email protected]>
Now that we're not returning a RequestOutput for every finished request, we need to perform finished-request handling even when there is no RequestOutput. Signed-off-by: Mark McLoughlin <[email protected]>
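As a rough illustration of the shape of that change (made-up names, not the actual OutputProcessor code): the processing loop has to reap finished requests regardless of whether an output is surfaced to the caller.

```python
# Sketch: a finished child request may produce no RequestOutput (its parent is
# still waiting on siblings), but its per-request state must still be freed.
def process_outputs(request_states, engine_core_outputs):
    request_outputs = []
    finished_ids = []
    for out in engine_core_outputs:
        state = request_states[out.request_id]
        request_output = state.make_request_output(out)  # may return None
        if request_output is not None:
            request_outputs.append(request_output)
        if out.finished:
            # Finished-request handling can no longer be keyed off having a
            # RequestOutput in hand.
            finished_ids.append(out.request_id)
            del request_states[out.request_id]
    return request_outputs, finished_ids
```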
Based on excellent PR feedback from Nick. Signed-off-by: Mark McLoughlin <[email protected]>
Move all the logic for child request output aggregating into parallel_sampling.ParentReq. Signed-off-by: Mark McLoughlin <[email protected]>
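A sketch of what that aggregation might look like; only the ParentReq name and the parallel_sampling module come from this PR, while the methods and fields below are illustrative.

```python
# Sketch of a parallel_sampling.ParentReq-style aggregator: it collects the
# completion from each child request and only emits a combined RequestOutput
# once all n children have finished.
class ParentReq:
    def __init__(self, request_id: str, n: int):
        self.request_id = request_id
        self.n = n
        self.completions = {}  # child index -> completion output

    def record_child_output(self, index: int, completion, finished: bool):
        """Store a child's completion; return an aggregated output when done."""
        if finished:
            self.completions[index] = completion
        if len(self.completions) < self.n:
            return None  # still waiting on sibling child requests
        # All children done: fold them back into one output for the caller.
        ordered = [self.completions[i] for i in range(self.n)]
        return self.make_request_output(ordered)

    def make_request_output(self, completions):
        ...  # build a RequestOutput with `completions` as its .outputs list
```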
Force-pushed from 0153a5b to 60fa08d
The initial implementation in #10980 went to great efforts to add parallel sampling as a wrapper at the highest layer of abstraction possible. This resulted in a lot of tricky code to post-process RequestOutputs to aggregate them where necessary.

Instead, it probably makes sense to implement parallel sampling at the layer that actually creates RequestOutput objects - i.e. in OutputProcessor.

To do this, we simply need to allow for fanning out child requests in LLMEngine.add_request(), passing details of the fan-out to OutputProcessor.

This adds some overhead to the n=1 case (see SingularSamplingRequest) in return for significantly less overhead and complication in the parallel sampling case.
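For callers, the user-visible behaviour is unchanged: a request with n > 1 still comes back as a single RequestOutput whose .outputs list has one entry per sample. A minimal usage sketch (the model name is an arbitrary assumption):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(n=4, max_tokens=16, temperature=0.8)

# Internally this fans out into 4 child requests, but the caller still receives
# one aggregated RequestOutput per prompt.
outputs = llm.generate(["The capital of France is"], params)
assert len(outputs) == 1
assert len(outputs[0].outputs) == 4  # one CompletionOutput per sample
```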