Dask-CUDA + RMM (w/CNMeM deprecation) failure #364

Closed
jakirkham opened this issue Aug 12, 2020 · 2 comments · Fixed by #363

Comments

@jakirkham
Member

In a few places and contexts, we have run into issues since RMM PR (rapidsai/rmm#466) was merged. Below is the simplest reproducer I could come up with for this problem. Filing it so we can track it publicly and use it to test fixes (most notably rapidsai/rmm#490).

from dask_cuda import LocalCUDACluster
from distributed import Client, get_worker


def worker_run():
    # Runs on each worker: report which worker we are on, then allocate
    # 1 GiB from the RMM pool configured via rmm_pool_size below.
    worker = get_worker()
    print(f"Running on Dask worker: {worker}")
    import rmm
    db = rmm.DeviceBuffer(size=2**30)


if __name__ == "__main__":
    print("Starting cluster...")
    # One worker per GPU, each asked to initialize a 1 GB RMM memory pool.
    cluster = LocalCUDACluster(rmm_pool_size="1GB")
    print("Cluster:")
    print(cluster)
    print("Starting client...")
    client = Client(cluster)
    print("Client:")
    print(client)
    print("Submitting tasks to workers...")
    client.run(worker_run)
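
For context, passing rmm_pool_size="1GB" asks each worker to set up an RMM memory pool at startup, which is the code path affected by the CNMeM deprecation. A minimal per-worker sketch of roughly what that amounts to, assuming the post-CNMeM rmm Python API (this is not the exact dask-cuda implementation):

import rmm

def setup_rmm_pool():
    # Sketch only: replace the default device memory resource with a
    # pool allocator that reserves roughly 1 GB up front. dask-cuda does
    # the equivalent internally when rmm_pool_size is set.
    rmm.reinitialize(pool_allocator=True, initial_pool_size=10**9)

# Applying it by hand would look like: client.run(setup_rmm_pool)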
@jakirkham
Member Author

Fix being proposed in PR (#363).

@jakirkham
Member Author

Can confirm this works with the new dask-cuda nightlies.
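
For anyone verifying the same thing locally, a quick check is to print the installed package versions in the environment pulling the nightlies (assuming the usual __version__ attributes) before re-running the reproducer:

import dask_cuda
import distributed
import rmm

# Confirm which builds are installed before re-running the reproducer.
print("dask-cuda:", dask_cuda.__version__)
print("distributed:", distributed.__version__)
print("rmm:", rmm.__version__)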
