don't depend on cuda-nvcc; use cuda-nvcc-impl to avoid pulling in GCC #38
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR.
Hi! This is the friendly conda-forge automerge bot! I considered the following status checks when analyzing this PR:
Thus the PR was not passing and not merged.
Hm, seems like we got some apparently new incompatibility:
Thanks Axel! 🙏
Have a couple questions below
This is ready except for dealing with CUDA 12.8. I'm waiting for a response in triton-lang/triton#5737 (even though that's for the upcoming 3.2, the same principle would apply to 3.1 as well if my proposed backport is deemed OK). Otherwise we can add some
Let's stick to CUDA 12.6 for the moment. As great as it would be to have CUDA 12.8 here, there is some prep work needed. Will bring this up for discussion later today.
I'm not talking about building anything with 12.8. But currently triton does not have a runtime constraint on the CUDA version (and I'd like to keep it that way). However, that means that we end up pulling in the newest cuda-nvcc (i.e. 12.8) at runtime. It's IMO triton's bug to have a runtime dependence on the full cuda-nvcc metapackage (rather than cuda-nvcc-impl) in the first place.
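For illustration, here is a minimal sketch of what the runtime requirement amounts to. The `shutil.which` lookup and the error message are assumptions for this sketch, not triton's actual code; the point is only that the CUDA toolchain binaries (e.g. `ptxas`, available via the cuda-nvcc-impl dependency chain) are what is needed at runtime, not the cuda-nvcc metapackage that additionally pulls in a host compiler (GCC).

```python
# Hypothetical sketch (not triton's actual lookup code): at runtime the package
# only needs a working `ptxas` from the CUDA toolchain. cuda-nvcc-impl provides
# that; the cuda-nvcc metapackage would additionally drag in GCC, which is not
# needed at runtime.
import shutil


def find_ptxas() -> str:
    """Locate the ptxas binary on PATH (illustrative helper)."""
    ptxas = shutil.which("ptxas")
    if ptxas is None:
        raise RuntimeError(
            "ptxas not found; a CUDA compiler package (e.g. cuda-nvcc-impl) "
            "must be installed at runtime"
        )
    return ptxas


if __name__ == "__main__":
    print(f"Using ptxas at: {find_ptxas()}")
```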
This is the change I'd much rather make, but I wanted to give a bit of time for feedback to arrive in triton-lang/triton#5737.
Ok, for more context: CUDA 12.8 adds 2 new architectures, so am thinking about how we roll that out. In any event, will raise an issue to discuss and link it here.
I don't see how that matters, as long as builds don't ask to build for those architectures. The function I referenced literally says

```python
def ptx_get_version(cuda_version) -> int:
    '''
    Get the highest PTX version supported by the current CUDA driver.
    '''
```

Note "highest". My point is that using ptx 85 should be fine regardless of whether the toolchain is 12.6 or 12.8.
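For context, a simplified sketch of how such a CUDA-version-to-PTX-version mapping typically looks; this is illustrative, not the verbatim upstream function, and the `80 + minor` scheme for 12.x is an assumption. The relevant detail is that the returned value is an upper bound, so emitting PTX 8.5 is acceptable to both a 12.6 and a 12.8 toolchain.

```python
# Illustrative sketch only (not the verbatim triton implementation): map a CUDA
# driver/toolkit version string to the highest PTX ISA version it supports.
def ptx_get_version(cuda_version: str) -> int:
    '''
    Get the highest PTX version supported by the current CUDA driver.
    '''
    major, minor = map(int, cuda_version.split("."))
    if major == 12:
        return 80 + minor  # assumed scheme: 12.6 -> 86, 12.8 -> 88
    if major == 11:
        return 70 + minor
    raise RuntimeError(f"Unsupported CUDA version: {cuda_version}")


# PTX 8.5 falls within the supported range for both toolchains:
assert 85 <= ptx_get_version("12.6") <= ptx_get_version("12.8")
```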
Fixes #37