Add documentation for installing common pytorch variants #5945
I think the most common difficulty would be to specify the index URL, but I'm glad

```shell
uv add torch --index-url https://download.pytorch.org/whl/cpu          # within a project
uv pip install torch --index-url https://download.pytorch.org/whl/cpu  # otherwise
```
However, I've always used
Available to help :) I am on ARM macOS and
Would also like to point out there's been this Python package (recently recommended by @muellerzr, HuggingFace Accelerate maintainer) to install torch with automatic backend detection. Might be a good candidate for a first uv plugin, I guess?
Sharing my thoughts to help with the discussion. I feel that the biggest challenge here is that the source is platform-dependent. On macOS, PyPI is sufficient. But for Linux and Windows, if one wishes to install a CUDA-enabled version, they need to specify a URL. I would personally already be helped if I could specify platform-dependent sources for packages.

Now the CUDA/not-CUDA question is difficult. I think that, theoretically, optional dependencies can be a way to deal with this. If you want the CUDA version (for any platform), install it as my_package[cuda]. If you only need the CPU version (i.e. GitHub Actions), then install it as my_package[cpu]. But it's more of a flag than an optional dependency. It's either/or. So I'm not sure how to deal with that. Maybe there can be "modes" for dependencies, with a default. And if the "cuda" mode is specified, dependencies are installed based on that.
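The "modes" idea described above can be approximated with mutually exclusive extras. A minimal sketch, assuming a recent uv that supports the `[tool.uv.conflicts]` table (package name and extra names here are placeholders, not from the thread):

```toml
[project]
name = "my-package"
version = "0.1.0"
requires-python = ">=3.12"

[project.optional-dependencies]
cpu = ["torch==2.4.1"]
cuda = ["torch==2.4.1"]

[tool.uv]
# Declare the extras either/or, so requesting both at once is a resolver error
# rather than a silent mix of CPU and CUDA wheels.
conflicts = [
    [{ extra = "cpu" }, { extra = "cuda" }],
]
```

With this in place, `uv sync --extra cpu` and `uv sync --extra cuda` behave like the two "modes", and combining them fails loudly.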
Regarding this: #3397
Example at #7245 (comment)
I've been banging my head against this for a couple days and can only report a list of things that don't work. For reference, my goal is to install
Here's what I've tried so far:
I've also tried all variations of the above with … Anyways, I'm sure there's a way. I just haven't found it yet. I'll keep plugging away. I think my next idea is to define the
Thanks, that’s a good concrete example. I’ll take a look at it (not tonight, but soon) and report back.
Update: it seems that the last version I tried, For anyone trying this in the future, here's the specific
Edit: note that

Edit 2: looking back at my logs, the reason I hadn't gotten this approach to work before was actually because I hadn't wrapped
And I had concluded I was writing my
We had a few releases where we dropped this error on the floor, so it might not have even been visible to you, sadly.
I do think that what you ended up with is right, though:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "torch==2.4.1 ; sys_platform == 'darwin'",
    "torch==2.4.1+cpu ; sys_platform != 'darwin'",
]

[tool.uv]
index-strategy = "unsafe-best-match"
extra-index-url = ["https://download.pytorch.org/whl/cpu"]
```
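To see how those `sys_platform` markers pick one `torch` requirement per platform, here is a small sketch using the `packaging` library (an assumption on my part -- uv has its own resolver, but the marker semantics follow PEP 508):

```python
from packaging.markers import Marker

# The two markers from the dependencies table above.
darwin_only = Marker("sys_platform == 'darwin'")
non_darwin = Marker("sys_platform != 'darwin'")

# Evaluating against an explicit environment shows exactly one of the two
# torch requirements applies on any given platform.
print(darwin_only.evaluate({"sys_platform": "darwin"}))  # True on macOS
print(non_darwin.evaluate({"sys_platform": "darwin"}))   # False on macOS
print(darwin_only.evaluate({"sys_platform": "linux"}))   # False on Linux
print(non_darwin.evaluate({"sys_platform": "linux"}))    # True on Linux
```

So on macOS the plain `torch==2.4.1` wheel from PyPI is selected, while Linux and Windows fall through to `torch==2.4.1+cpu` from the extra index.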
Honestly, I'm not entirely sure -- my vague understanding is that the wheels that could have CUDA support have explicit

One other thing, as I've continued to work on this project: using the above

This is on my MacBook, where I would have expected it to resolve to just

However, if I manually add
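As an aside, a way to avoid `unsafe-best-match` altogether is to pin `torch` to the PyTorch index directly. This is a sketch assuming a recent uv with named indexes and `tool.uv.sources` (the index name is a placeholder I chose):

```toml
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
# `explicit` means this index is only consulted for packages that opt in below,
# so the rest of your dependencies still resolve purely from PyPI.
explicit = true

[tool.uv.sources]
# Only route torch through the PyTorch index off macOS; macOS keeps PyPI.
torch = { index = "pytorch-cpu", marker = "sys_platform != 'darwin'" }
```

Because the index is explicit and scoped to `torch`, there is no cross-index "best match" comparison for other packages.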
Variant 1:
Variant 2:
Any idea why neither is working?
@Leon0402 I think this isn't a uv issue. I accessed it a few minutes ago through the browser and it didn't have the correct content, but it seems to be working now. Do you want to try it again?
@FishAlchemist Issue persists. I also tried
@Leon0402 I am unable to replicate this issue on my machine. Would you consider filing a separate issue and providing more specific details based on the prompts?
Hi, I am facing a problem on macOS with the PyTorch CPU installation (#8358). Are there any suggestions? Thanks!
Hi UV team. First let me thank you for all the work you're doing on the project. I've just started using it and it is FAST and pretty intuitive (especially coming from poetry). For my use case, I'm trying to generate a lock file that would install either a cpu only or gpu enabled version of pytorch when on a linux platform and just regular pypi pytorch when using a mac.
However, I am getting the below error. I'm not sure where the `torch{sys_platform == 'linux' and extra != 'torch-gpu'}` is coming from, since my expectation is that `torch 2.4.1+cpu` is ONLY installed if extra `torch-cpu` is requested and extra `torch-gpu` isn't requested.

I'm mainly looking for any suggestions on how to achieve the end goal, and if anyone from the uv team can help me understand why `torch 2.4.1+cpu` is being considered for `torch{sys_platform == 'linux' and extra != 'torch-gpu'}`
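For the CPU-or-GPU-on-Linux, plain-PyPI-on-macOS goal, one possible shape is per-extra index routing combined with conflicting extras. This is a sketch, not a verified fix for the error above; it assumes a uv version that supports extra-gated `tool.uv.sources` entries, and the index names and CUDA tag (`cu124`) are my choices:

```toml
[project.optional-dependencies]
torch-cpu = ["torch==2.4.1"]
torch-gpu = ["torch==2.4.1"]

[tool.uv.sources]
# Route torch through a different index depending on which extra was requested.
torch = [
    { index = "pytorch-cpu", extra = "torch-cpu" },
    { index = "pytorch-gpu", extra = "torch-gpu" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-gpu"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

[tool.uv]
# The extras are either/or, which expresses "CPU-only XOR GPU" directly
# instead of leaning on `extra != '...'` markers.
conflicts = [[{ extra = "torch-cpu" }, { extra = "torch-gpu" }]]
```

Declaring the conflict up front may sidestep the `extra != 'torch-gpu'` fork entirely, since the resolver never has to consider both extras in one environment.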
I got the same problem, but `uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu` works. It gets uninstalled after every `uv sync`, so I run `uv sync && uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu`. That looks stupid, but is, due to caching, really fast.
Hello there! First real docs PR for uv.

1. I expect this will be rewritten a gazillion times to have a consistent tone with the rest of the docs, despite me trying to stick to it as best as I could. Feel free to edit!
2. I went super verbose, while also providing a callout with a TL;DR on top. Scrap anything you feel is redundant!
3. I placed the guide under `integrations`, since Charlie added the FastAPI integration there.

## Summary

Addresses #5945

## Test Plan

I just looked at the docs on the mkdocs dev server to check that they looked nice. **I could not test that the commands I wrote work** outside of macOS. If someone among the contributors has a Windows/Linux laptop, it should be enough, even for the GPU-supported versions: I expect the installation will just break once torch checks for CUDA (perhaps even at runtime).

---------

Co-authored-by: Santiago Castro <[email protected]>
Co-authored-by: Charlie Marsh <[email protected]>
😭 but it must be done!