
gpu/cpu mutex naming #1059

Open

isuruf opened this issue May 9, 2020 · 14 comments

@isuruf
Member

isuruf commented May 9, 2020

cc @conda-forge/core

As we add more packages, we need to decide on the naming.

Currently most packages use `<pkg>-proc=*=cpu` and `<pkg>-proc=*=gpu`. While it is great to have a uniform naming scheme, I don't particularly like the `-proc` name, since it doesn't mean anything on its own. We should decide on the naming soon.

If we settle on a new name, we can fix the already-existing packages to work with both the new name and the old one.
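For context, this scheme encodes the variant in the build string of a small metapackage, which consuming recipes then pin at run time. A minimal sketch of the run-requirement side (the package name `foo` is hypothetical; the selectors assume the variant is toggled by `cuda_compiler_version`):

```yaml
requirements:
  run:
    # Require the mutex package with a matching build string; because only
    # one build of foo-proc can be installed, the cpu and gpu variants of
    # foo can never end up in the same environment.
    - foo-proc * cpu   # [cuda_compiler_version == "None"]
    - foo-proc * gpu   # [cuda_compiler_version != "None"]
```

Users then pick a variant explicitly with a spec like `foo-proc=*=cpu` (any version, build string `cpu`).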

@xhochy
Member

xhochy commented May 9, 2020

I would prefer the usage of cuda instead of gpu to reflect the technology used.

@isuruf
Member Author

isuruf commented May 9, 2020

Thanks @xhochy for the suggestion. I agree. This will be useful once we have other GPU variants, like AMD ones.

@beckermr
Member

beckermr commented May 9, 2020

This has already come up for xgboost fwiw.

@jakirkham
Member

proc is just short for processor.

@jakirkham
Member

That said, I’m not entirely sure why we want to disrupt the naming scheme. This will cause a headache for people using these packages, for what sound like purely cosmetic reasons. Before making changes here, I’d be interested in hearing about what actual problems this is causing.

@isuruf
Member Author

isuruf commented May 9, 2020

Because as a community we need to come to a consensus on how to do this. There are several different schemes in use: the defaults-channel tensorflow way (`_tflow_select`), the ucx-split way (`ucx-proc`), the pytorch way (`pytorch-cpu`/`pytorch-gpu`), etc.

Btw, we can make this change without disrupting anything, either by adding a compatibility layer for previous packages or by simply not touching them.
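One possible shape for such a compatibility layer is an empty metapackage that keeps the old name installable while pulling in the renamed package. A sketch only, with hypothetical names (`foo`, `foo-gpu`) and an illustrative build string:

```yaml
outputs:
  # Renamed variant: the build string now says "cuda" instead of "gpu".
  - name: foo
    build:
      string: cuda_py38h1234567_0   # illustrative new-style build string

  # Compatibility shim: the old "foo-gpu" name stays installable and
  # simply depends on the corresponding build of the renamed package.
  - name: foo-gpu
    requirements:
      run:
        - {{ pin_subpackage('foo', exact=True) }}
```

Existing environments and install commands that reference the old name keep working, while new users can adopt the new scheme.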

@beckermr
Member

beckermr commented May 9, 2020

Where do we stand on the mutual exclusivity of the variants?

I know in general this is up to the package itself and how it is structured, but a package that wants labeled variants that are not mutually exclusive would have a harder time with a mutex, for example.

@jakirkham
Member

Thus far we have not added a mutex in that case.

@isuruf
Member Author

isuruf commented May 10, 2020

@beckermr, can you explain more? Is it like python2 and python3 in the same env?

@beckermr
Member

My question is not terribly well formulated, but I can imagine, as a user, wanting both the CPU and GPU packages in the same env. Much of this depends on the actual package and how it is structured.

@jakirkham
Member

No, it’s a fair question. This can come up when the GPU bits live in a separate shared object.

h-vetinari added a commit to h-vetinari/faiss-split-feedstock that referenced this issue May 18, 2020
But keep compatibility package names with pytorch-channel.

Xref. conda-forge/conda-forge.github.io#1059
@scopatz
Member

scopatz commented Aug 11, 2020

I also prefer cuda to gpu

@h-vetinari
Member

Since conda-forge/pytorch-cpu-feedstock#22, that feedstock has the following build strings:

outputs:
  - name: pytorch
    build:
      string: cuda{{ cuda_compiler_version | replace('.', '') }}py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}  # [cuda_compiler_version != "None"]
      string: cpu_py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                                      # [cuda_compiler_version == "None"]

I had asked in that PR:

@h-vetinari:
Just checking what the thought was for the build strings here, since I stumbled over them and tracked them down to this PR - I had been thinking about adding the CUDA version to the faiss build strings as well: conda-forge/faiss-split-feedstock@53422a1

Perhaps - like for the mutex naming (cf. conda-forge/conda-forge.github.io#1059) - there should be a standard way to do this? Speaking of mutexes - if I understand correctly, there's now no "proc" package for pytorch to select the type of installation? Does `conda install pytorch=*=cpu` still work with the qualifier in front? (I wanted to try, but there are no CF packages for Windows yet...)

Copying this here because I'd like to have an idea of what should be used before I go forward with the adaptation in conda-forge/faiss-split-feedstock#19.

@leofang
Member

leofang commented Jan 11, 2021

For the cpu/gpu mutex: in this recipe I showed that it's possible to do a per-recipe mutex:
https://github.com/nsls-ii-forge/qulacs-feedstock/blob/b8afd0011e9412703800b1c69f8b85eff00cffd5/recipe/meta.yaml#L56-L63
It's similar to the mpi mutex, except that it's done at the per-recipe level via multiple outputs, so the recipe maintainers have full control. Once this is done, the naming scheme is up to the maintainers, and conda-forge need not choose a global convention. The drawback is that setting up multiple outputs correctly is nontrivial.

But, as discussed above, whether a mutex is required really depends on the package's design and structure. I can imagine CPU and GPU builds coexisting happily in the same package.

For the naming convention: I support cuda for NVIDIA GPUs. I also dislike having proc around, but I can be convinced easily.
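A per-recipe mutex of the kind described above might look roughly like this. This is a sketch only, not the linked recipe verbatim: all names (`foo`, `foo-mutex`) are illustrative, and it assumes the variant is toggled by `cuda_compiler_version`:

```yaml
outputs:
  # The mutex: only one build string of "foo-mutex" can exist in an
  # environment at a time, which makes the cpu and gpu variants of
  # "foo" mutually exclusive.
  - name: foo-mutex
    build:
      string: cpu   # [cuda_compiler_version == "None"]
      string: gpu   # [cuda_compiler_version != "None"]

  # The real package pins the exact matching mutex build at run time.
  - name: foo
    requirements:
      run:
        - {{ pin_subpackage('foo-mutex', exact=True) }}
```

Because both outputs live in one recipe, the maintainers control the names and build strings entirely, which is the "full control" trade-off mentioned above.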
