gc.collect when spilling on demand #771
Conversation
LGTM, thanks @madsbk !
I'm not sure whether the segfault in https://gpuci.gpuopenanalytics.com/job/rapidsai/job/gpuci/job/dask-cuda/job/prb/job/dask-cuda-gpu-test/CUDA=11.0,GPU_LABEL=gpu-a100,LINUX_VER=centos7,PYTHON=3.7/558/display/redirect is due to this PR or not, see below:
It looks like it happened before the actual testing started, so let's rerun and see. |
Codecov Report
@@           Coverage Diff            @@
##        branch-21.12    #771   +/-  ##
========================================
  Coverage           ?   57.60%
========================================
  Files              ?       21
  Lines              ?     2944
  Branches           ?        0
========================================
  Hits               ?     1696
  Misses             ?     1248
  Partials           ?        0
========================================
Continue to review full report at Codecov. |
Apologies, clicked the wrong button and closed by accident. |
rerun tests |
rerun tests |
rerun tests |
@gpucibot merge |
Thanks @madsbk ! |
When spilling on demand, we now call gc.collect().
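For context, a minimal sketch of the idea behind this change is shown below. The names `spill_device_objects` and `out_of_memory_handler`, and the callback registration they imply, are hypothetical placeholders for illustration only, not dask-cuda's actual API; the point is simply where the gc.collect() call fits into a spill-on-demand failure handler.

```python
import gc


def spill_device_objects(nbytes: int) -> int:
    """Placeholder for the real spilling logic.

    A real implementation would move device buffers to host memory until
    roughly `nbytes` have been freed and return the number of bytes spilled.
    """
    return 0  # illustrative stub


def out_of_memory_handler(nbytes: int) -> bool:
    """Hypothetical callback invoked when a device allocation of `nbytes` fails.

    Returning True asks the allocator to retry the failed allocation.
    """
    freed = spill_device_objects(nbytes)

    # Spilled buffers can still be kept alive by Python reference cycles.
    # An explicit gc.collect() breaks those cycles so the underlying device
    # memory is actually released before the allocation is retried.
    gc.collect()

    return freed > 0
```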