[CI/Build] Add support for Python 3.12 #7035
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, please make sure to run full CI, as it is required to merge (or just use auto-merge).
I think this doesn't matter anymore since we shipped the py38 ABI-agnostic version.
@simon-mo This is likely true for installing the built wheel, but before this PR you couldn't build vLLM from source on 3.12 because of the `torch` and `vllm-flash-attn` version requirements.
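For context on why a single py38 wheel installs on 3.12: a minimal sketch, assuming vLLM ships wheels tagged `cp38-abi3` (which this thread implies but does not state outright), showing that such a tag is accepted by any newer CPython. The platform tag is an illustrative assumption; this uses the `packaging` library.

```python
# Sketch: check whether a cp38-abi3 wheel tag is accepted by the running
# interpreter. The manylinux platform tag below is an illustrative assumption.
from packaging.tags import Tag, sys_tags

abi3_tag = Tag("cp38", "abi3", "manylinux2014_x86_64")

# True on CPython >= 3.8 on a matching Linux/x86_64 host, including 3.12,
# which is why the prebuilt wheel installs there without a rebuild.
print(abi3_tag in set(sys_tags()))
```

Building from source is a different matter: the compiled extensions still need 3.12-compatible build dependencies, which is what this PR addresses.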
This builds locally for me using `Python 3.12.4`. This was pending on `torch==2.4.0` and `vllm-flash-attn==2.6.1`, and those have landed now. I have tested installing all CUDA and dev dependencies, along with running a few model tests locally.
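For anyone who wants to repeat this check, here is a minimal sketch of the kind of post-build smoke test described above. The model name (`facebook/opt-125m`) and sampling settings are illustrative assumptions, not the tests the author actually ran.

```python
# Minimal post-build smoke test on a Python 3.12 interpreter. The model and
# sampling parameters are illustrative, not taken from the PR.
import sys

assert sys.version_info[:2] >= (3, 12), "expected Python 3.12+"

import torch
import vllm
from vllm import LLM, SamplingParams

# Confirm the versions this PR was pending on are actually in the environment.
print(f"vllm {vllm.__version__}, torch {torch.__version__}")

llm = LLM(model="facebook/opt-125m")  # a small model keeps the check cheap
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```

If the imports succeed and a completion is printed, the compiled extensions built correctly against the 3.12 interpreter.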
FIX #1218
FIX #6877
FIX #6990