Determining CUDA Docker image/compiler version #237
We recently added a few Docker images based on NVIDIA's CUDA Docker images. Also we have the beginnings of a shim package to use `nvcc`. The next thing to consider is how we can use these in conjunction with each other. I am curious if anyone has thoughts on how we might do this.

Comments
Should `nvcc` be a CDT?
Why should it be a CDT? It should be a compiler, no?
Oh sure, it should behave like a compiler, but we can't redistribute `nvcc`, so isn't a CDT the right way to declare an external dependency?
This is the problem that the `nvcc` shim package solves. It's similar to how we handle the Visual Studio compiler today. IOW we rely on some VM images to have the compiler (in this case we have Docker images) and we do the bare minimum to explain to conda-build how to use it. Does that make sense?
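To make the analogy concrete, the recipe side would stay purely declarative. A minimal sketch of a `meta.yaml` fragment, assuming the shim package is wired up as the `cuda` compiler (which is exactly the open design question here):

```yaml
requirements:
  build:
    - {{ compiler('c') }}     # toolchain provided by the image (or VS on Windows)
    - {{ compiler('cuda') }}  # would resolve to the nvcc shim package
```

The image (or VM) supplies the actual toolchain; the shim only tells conda-build where to find it.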
To clarify, here's the question I'm thinking about: how do we select the image based on the version of CUDA we want to build against? With Windows we have gotten away with using the same image for all compilers, but we are unable to do this with CUDA. The only case where we have needed multiple images before was the compiler migration. We could borrow this logic for handling this case.
I think I see, like we need an `nvcc` package for each CUDA version?
So we have that part. What we still need is a way to turn a recipe using `{{ compiler('cuda') }}` into CI jobs that run in the matching Docker image.
@jjhelmus, do you have any thoughts on how we should handle this?
`zip_keys` should work. conda-smithy uses the `docker_image` variable in `conda_build_config.yaml` to generate the CI files.
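For illustration, a minimal sketch of that idea in `conda_build_config.yaml` (the image names and version pairings are placeholders, not a settled scheme):

```yaml
# Hypothetical: pair each CUDA version with the image that ships it.
cuda_compiler_version:
  - 9.2
  - 10.0
docker_image:
  - condaforge/linux-anvil-cuda:9.2
  - condaforge/linux-anvil-cuda:10.0
zip_keys:
  -
    - cuda_compiler_version
    - docker_image
```

Zipped keys vary together rather than forming a full cross-product, so each CI job would get the CUDA version and the matching image as one unit.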
cc @mariusvniekerk (as we discussed this last Friday)
I guess the challenge that Marius and I were discussing is that sometimes we have feedstocks that need both CPU and GPU builds.
@isuruf, do you know if conda-build can handle selecting and running the Docker image itself?
Something like, … and you can skip …
No, it doesn't. You'll have to run docker within docker to do that.
In conda-smithy:

```diff
diff --git a/conda_smithy/configure_feedstock.py b/conda_smithy/configure_feedstock.py
index 3e09f2e..5df3fb6 100644
--- a/conda_smithy/configure_feedstock.py
+++ b/conda_smithy/configure_feedstock.py
@@ -305,6 +305,7 @@ def _collapse_subpackage_variants(list_of_metas, root_path):
     }
     all_used_vars.update(always_keep_keys)
     all_used_vars.update(top_level_vars)
+    all_used_vars.add("docker_image_cuda")
 
     used_key_values = {
         key: squished_input_variants[key]
@@ -322,6 +323,9 @@ def _collapse_subpackage_variants(list_of_metas, root_path):
     _trim_unused_zip_keys(used_key_values)
     _trim_unused_pin_run_as_build(used_key_values)
 
+    if "cuda_compiler" not in used_key_values:
+        used_key_values.pop("docker_image_cuda", None)
+
     # to deduplicate potentially zipped keys, we blow out the collection of variables, then
     # do a set operation, then collapse it again
@@ -365,8 +369,11 @@ def finalize_config(config, platform, forge_config):
     """For configs without essential parameters like docker_image
     add fallback value.
     """
-    if platform.startswith("linux") and not "docker_image" in config:
-        config["docker_image"] = [forge_config["docker"]["fallback_image"]]
+    if platform.startswith("linux"):
+        if "docker_image" not in config:
+            config["docker_image"] = [forge_config["docker"]["fallback_image"]]
+        elif "docker_image_cuda" in config:
+            config["docker_image"] = [str(config["docker_image_cuda"][0])]
     return config
```

In `conda_build_config.yaml`:

```yaml
docker_image_cuda:
  - condaforge/linux-anvil-cuda:10.0
  - condaforge/linux-anvil-cuda:9.2
cuda_compiler:
  - nvcc
cuda_compiler_version:
  - 10.0
  - 9.2
zip_keys:
  -
    - cuda_compiler_version
    - docker_image_cuda
```

In `meta.yaml`:

```yaml
requirements:
  build:
    - {{ compiler('cuda') }}
```
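If this works as sketched, each rendered config under `.ci_support/` would then carry one matched pair, roughly like the following (the file name and exact keys are assumptions about conda-smithy's output, not verified):

```yaml
# Hypothetical .ci_support/linux_cuda_compiler_version10.0.yaml
cuda_compiler:
  - nvcc
cuda_compiler_version:
  - '10.0'
docker_image:
  - condaforge/linux-anvil-cuda:10.0
```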
Hi! Thanks for all the great work here! I am looking forward to using this in several upcoming recipes that need GPU support. This is slightly unrelated since it's not about Docker images, but how are you planning to bring GPU capabilities to Windows and OS X? Through separate Azure Pipelines?
Thanks for taking a look @isuruf! 😄
This part feels like a workaround. Hopefully we can come up with a better solution than this. 😉
Interesting. So you would propose using either of two different variants for the Docker image based on whether or not CUDA is used? It's worth noting that some packages will have CPU and GPU builds. So we probably want some way to have both here and select between them based on the build. Will give that some more thought.
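One way that could look, purely as a sketch (using `None` as a sentinel for the CPU-only variant is an assumption, and the default image name is a placeholder):

```yaml
# Hypothetical: a CPU-only variant alongside the GPU ones.
cuda_compiler_version:
  - None   # CPU-only build, uses the default image
  - 9.2
  - 10.0
docker_image:
  - condaforge/linux-anvil-comp7
  - condaforge/linux-anvil-cuda:9.2
  - condaforge/linux-anvil-cuda:10.0
zip_keys:
  -
    - cuda_compiler_version
    - docker_image
```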