cudaPackages: bump cudaPackages_11 -> cudaPackages_12 #222778
Will this bump happen after #220341? I am attempting to build 11_8 now on your latest branch to see if it fixes #224150. No idea if xgboost will be happy with 12, but I'll give it a shot. Thanks @SomeoneSerge for keeping me in the loop!
I guess the bump will happen as soon as someone opens the bump PR and runs a nixpkgs-review on it. There's no reason we couldn't first update to 11.8: that would likely cause fewer regressions than 12, and would let you fix xgboost without resorting to overrides. Still, we'd have to run the builds first.
Thanks, xgboost 1.7.5 should be safe once I get it building with 11_8, because I added the version argument to options.
Upgrading to CUDA 12 will happen eventually, but I don't realistically see it happening for another few weeks at least. |
Let's open a bump to 11.8 first.
Sure - what does the bump entail? An update to the default in all-packages.nix and a nixpkgs-review to see if anything downstream breaks?
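For reference, the default switch would be roughly a one-line change in all-packages.nix. This is a sketch, not the exact diff; the surrounding context and the precise attribute wiring in nixpkgs may differ:

```nix
# pkgs/top-level/all-packages.nix (sketch; surrounding context omitted).
# The unversioned alias is what most downstream packages consume, so
# repointing it is what the "bump" amounts to:
cudaPackages = recurseIntoAttrs cudaPackages_12;  # previously cudaPackages_11
```

Everything that references the unversioned `cudaPackages` then picks up the new release, which is exactly what the nixpkgs-review run would exercise.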
I am not familiar with the derivation for pytorch, but I see it accepts nixpkgs' default cudaPackages. Should that be left as-is to build with 11.8, or should it be an option? I was under the impression from my xgboost update that it's better to make the CUDA version configurable.
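The configurable-version pattern mentioned above can be sketched as a derivation that takes the CUDA package set as an argument. This is a minimal hypothetical example, not the actual pytorch or xgboost derivation; the `src` and most build wiring are omitted:

```nix
# package.nix (hypothetical sketch): accept the CUDA package set as an
# argument instead of hard-coding a versioned attribute.
{ lib, stdenv, cmake, cudaPackages }:

stdenv.mkDerivation {
  pname = "xgboost";
  version = "1.7.5";
  # src and configure flags omitted in this sketch

  nativeBuildInputs = [ cmake cudaPackages.cuda_nvcc ];
  buildInputs = [ cudaPackages.cudatoolkit ];
}
```

Callers can then pin a specific release without an overlay, e.g. `xgboost.override { cudaPackages = cudaPackages_11; }`, while the default stays whatever nixpkgs' unversioned `cudaPackages` points at.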
We should've bumped a long time ago; now we need to do it for #259068.
Agreed that we should bump, but why is it needed for #259068?
To avoid spawning extra cudaPackages sets: https://github.com/NixOS/nixpkgs/pull/259068/files#diff-7c765cc9768cdae049f8712b4ef719bc94ec8ec2e8df2dd50ddf14c6acd4e580R13927
Ah right, my own comment all along :P
Hello, what is the state of this? I am currently using …
Hi! An example of how to override the default (do ping Matrix or Discourse for details if needed!):

```nix
import <nixpkgs> {
  config.allowUnfree = true;
  overlays = [
    (final: prev: { cudaPackages = final.cudaPackages_12; })
  ];
}
```
|
@Turakar note the linked PR too: I had a few evaluation errors when I changed the default.
I just made a PR to add …
CC @NixOS/cuda-maintainers
Blockers:
Thanks @ConnorBaker for the pytorch link