[Bugfix][VLM] Make apply_fp8_linear work with >2D input #9812
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Should we do the same thing in apply_int8_linear?
…#9812) Signed-off-by: Randall Smith <[email protected]>
…#9812) Signed-off-by: NickLucche <[email protected]>
…#9812) Signed-off-by: NickLucche <[email protected]>
…#9812) Signed-off-by: Linkun Chen <[email protected]>
…#9812) Signed-off-by: Loc Huynh <[email protected]>
…#9812) Signed-off-by: Sumit Dubey <[email protected]>
…#9812) Signed-off-by: Maxime Fournioux <[email protected]>
…#9812) Signed-off-by: Tyler Michael Smith <[email protected]>
This is needed to work with quantized vision encoders from VLMs, whose activations are more than two-dimensional. It generalizes the fix proposed in #9800 to all of the cases handled in apply_fp8_linear.
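A minimal sketch of the flatten-then-restore pattern this kind of fix relies on, not vLLM's actual implementation: the quantized GEMM expects 2D input, so leading dimensions are collapsed before the kernel and restored on the output. The helper names (`quantized_gemm_2d`, `apply_quantized_linear_nd`) are hypothetical stand-ins, not vLLM's API.

```python
import torch


def quantized_gemm_2d(x_2d: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for the fp8 GEMM path, which only accepts 2D input.
    return x_2d @ weight.t()


def apply_quantized_linear_nd(x: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # Collapse all leading dimensions, e.g. [batch, seq, hidden] from a vision
    # encoder, into a single batch dimension before calling the 2D kernel.
    x_2d = x.reshape(-1, x.shape[-1])
    out_2d = quantized_gemm_2d(x_2d, weight)
    # Restore the original leading dimensions, keeping the new output width.
    return out_2d.reshape(*x.shape[:-1], out_2d.shape[-1])


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)   # >2D activation, as a ViT-style encoder produces
    w = torch.randn(128, 64)
    y = apply_quantized_linear_nd(x, w)
    print(y.shape)               # torch.Size([2, 16, 128])
```

The same reshape-around-the-kernel approach would apply to other 2D-only quantized GEMM helpers (as raised above for apply_int8_linear).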