[torch.compile] Adding "torch compile" annotations to some models #9758
Conversation
```diff
@@ -429,6 +430,7 @@ def forward(
         return hidden_states, residual


+@support_torch_compile
 class PhiMoEModel(nn.Module):
```
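For context, here is a minimal sketch of the class-decorator pattern being applied above. It only illustrates the idea (compile the module's `forward` after construction); vLLM's actual `support_torch_compile` does considerably more, e.g. dynamic-shape handling and compilation caching.

```python
# Illustrative sketch of a "compile the forward" class decorator.
# NOT vLLM's actual support_torch_compile implementation.
import torch
from torch import nn


def compile_forward(cls):
    orig_init = cls.__init__

    def __init__(self, *args, **kwargs):
        orig_init(self, *args, **kwargs)
        # Shadow the class-level forward with a compiled bound method.
        self.forward = torch.compile(self.forward)

    cls.__init__ = __init__
    return cls


@compile_forward
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = ToyModel()
print(model(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```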
For this model, it seems that running it directly with `-tp=2` fails. The error is:

```
Failed: Cuda error /workspace/csrc/custom_all_reduce.cuh:336 'invalid argument'
```

Need to investigate this later.
Note: this is unrelated to torch.compile.
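For anyone reproducing the `-tp=2` failure, a minimal run looks roughly like this (a sketch: the checkpoint name is an assumed PhiMoE model, not taken from this PR):

```python
# Hypothetical repro sketch for the tensor-parallel failure above.
# The model name is an assumed PhiMoE checkpoint, not from this PR.
from vllm import LLM, SamplingParams

llm = LLM(model="microsoft/Phi-3.5-MoE-instruct", tensor_parallel_size=2)
outputs = llm.generate(["Hello"], SamplingParams(max_tokens=8))
print(outputs[0].outputs[0].text)
```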
```diff
@@ -360,6 +361,7 @@ def forward(
         return hidden_states


+@support_torch_compile
 class ArcticModel(nn.Module):
```
To run this model successfully on H100, I had to change the config:

```json
"hidden_size": 512,
"intermediate_size": 512,
"num_key_value_heads": 8,
"num_attention_heads": 8,
"num_local_experts": 4,
```

Initially I wanted to simply change `"num_hidden_layers": 35` to `"num_hidden_layers": 2`, but I hit various random illegal-memory-access errors. They might be caused by the fused MoE kernel running with extremely large input sizes.
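As an illustration, one way to produce such a shrunk config for smoke testing is to override fields when loading it with transformers (a sketch: the checkpoint name, `trust_remote_code`, and the save path are assumptions, and you would still need matching randomly initialized weights):

```python
# Sketch: build a tiny Arctic-style config for smoke tests.
# Checkpoint name and save path are placeholders/assumptions.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained(
    "Snowflake/snowflake-arctic-instruct",
    trust_remote_code=True,
    hidden_size=512,
    intermediate_size=512,
    num_key_value_heads=8,
    num_attention_heads=8,
    num_local_experts=4,
)
cfg.save_pretrained("./tiny-arctic")
```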
Thanks for the great efforts!
Signed-off-by: qishuai <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Loc Huynh <[email protected]>
Signed-off-by: Sumit Dubey <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
This PR completes #9589 and #9632 by adding "torch compile" annotations to some MoE models and testing whether they pass compilation.
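For a model that is not yet covered, the annotation itself is a one-line change. A sketch, assuming the decorator is imported from `vllm.compilation.decorators` as in vLLM at the time; `SomeMoEModel` is a hypothetical placeholder:

```python
# Sketch of adopting the annotation in a model file.
# SomeMoEModel is a hypothetical placeholder, not a class from this PR.
from torch import nn

from vllm.compilation.decorators import support_torch_compile


@support_torch_compile
class SomeMoEModel(nn.Module):
    ...
```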