
Add CUDA graph-based all reduce launcher #26

Merged
WoosukKwon merged 5 commits into main from graph on Apr 5, 2023
Conversation

WoosukKwon (Collaborator) commented Apr 5, 2023

Related to #22

This PR uses CUDA graphs to reduce the CPU overhead of the NCCL all-reduce operation.
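
The gist of the approach: record one NCCL all-reduce on a fixed, preallocated buffer into a CUDA graph, then replay that graph on every call, so the per-call CPU cost shrinks to a single graph launch. The sketch below is a rough illustration of that idea, not the code in this PR; the class name, constructor arguments, and the use of torch.distributed.all_reduce are assumptions, and it requires a PyTorch/NCCL combination that supports capturing collectives inside CUDA graphs.

import torch
import torch.distributed as dist


class GraphAllReduceLauncher:
    # Illustrative sketch only; names and structure are assumptions.

    def __init__(self, max_num_tokens: int, hidden_size: int, group=None) -> None:
        self.group = group
        # Static buffer: a CUDA graph replays fixed memory addresses, so the
        # same tensor must be reused for every invocation.
        self.buffer = torch.empty(
            max_num_tokens, hidden_size, dtype=torch.half, device="cuda")
        # Warm up NCCL outside of capture, then record a single all-reduce.
        dist.all_reduce(self.buffer, group=self.group)
        torch.cuda.synchronize()
        self.graph = torch.cuda.CUDAGraph()
        with torch.cuda.graph(self.graph):
            dist.all_reduce(self.buffer, group=self.group)

    def all_reduce(self, x: torch.Tensor) -> torch.Tensor:
        num_tokens = x.shape[0]
        # Copy the input into the captured buffer and replay the graph.
        # Rows beyond num_tokens hold stale data, but only the first
        # num_tokens rows are read back, and those reduce correctly.
        self.buffer[:num_tokens].copy_(x)
        self.graph.replay()
        return self.buffer[:num_tokens].clone()

On each call, the CPU-side work is then a device-to-device copy plus one graph replay instead of the full NCCL launch path.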

WoosukKwon requested a review from zhuohan123 on April 5, 2023 at 09:31.

zhuohan123 (Member) left a comment

LGTM!

self.group = get_tensor_model_parallel_group()
self.buffer = torch.empty(
    size=(max_num_tokens, hidden_size),
    dtype=torch.half,  # FIXME: hardcoded dtype

zhuohan123 (Member) commented on this diff:

Add a dtype argument for this class?

WoosukKwon (Collaborator, Author) replied:

Fixed!
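
A minimal sketch of what the requested fix could look like, not the actual commit: thread a dtype argument through the constructor instead of hardcoding torch.half. The class name and the group argument are placeholders here; the quoted diff obtains the group via get_tensor_model_parallel_group().

import torch


class GraphAllReduceLauncher:
    # Illustrative sketch only, assuming the constructor shown in the diff.

    def __init__(self, max_num_tokens: int, hidden_size: int,
                 dtype: torch.dtype = torch.half, group=None) -> None:
        self.group = group
        self.buffer = torch.empty(
            size=(max_num_tokens, hidden_size),
            dtype=dtype,  # previously hardcoded to torch.half
            device="cuda")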

WoosukKwon merged commit 12659a0 into main on Apr 5, 2023.
WoosukKwon deleted the graph branch on April 5, 2023 at 18:17.
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024
slyalin pushed a commit to slyalin/vllm that referenced this pull request Apr 4, 2024
Disable NPU merged to OV master recently
z103cb referenced this pull request in z103cb/opendatahub_vllm May 16, 2024
Install and configure use of the NCCL version recommended by vLLM via
the [vllm-nccl](https://github.com/vllm-project/vllm-nccl) package. The
install is a little wonky... but this set of changes should work.

Signed-off-by: Travis Johnson <[email protected]>
dtrifiro pushed a commit to dtrifiro/vllm that referenced this pull request May 21, 2024
fxmarty pushed a commit to fxmarty/vllm-public that referenced this pull request May 31, 2024
Update max_context_len for custom paged attention.
tianyil1 pushed a commit to tianyil1/vllm that referenced this pull request Jun 5, 2024
bigPYJ1151 pushed a commit to bigPYJ1151/vllm that referenced this pull request Jun 25, 2024
…inear_fusion_and_prepack

Enable linear fusion/prepack and MOE AWQ fusion
alixiaodi mentioned this pull request Aug 2, 2024