Clean cuda.empty_cache usage (#8199)

Conversation
Codecov Report

@@           Coverage Diff           @@
##           master   #8199    +/-  ##
======================================
- Coverage      93%     92%     -1%
======================================
  Files         212     212
  Lines       13588   13664    +76
======================================
- Hits        12626   12619     -7
- Misses        962    1045    +83
Force-pushed from 26ca1d2 to ff38b10
fingers crossed ;)
Just curious: what was the reason we previously did this only for the root device?
We set the default device, so the context manager was unnecessary.
What does this PR do?

The empty_cache function already checks is_initialized: https://github.com/pytorch/pytorch/blob/v1.4.1/torch/cuda/memory.py#L34-L35

And we don't need to set the root device, as the default device has already been set.
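A minimal pure-Python sketch (no GPU or torch install required) of why the caller-side guard is redundant: the is_initialized() check lives inside empty_cache itself, mirroring the linked torch.cuda.memory source. The names `_initialized` and `calls` are hypothetical stand-ins for torch.cuda's internal state, introduced here only for illustration.

```python
# Sketch of the guard-in-callee pattern behind this PR (names hypothetical).
# Mirrors torch/cuda/memory.py: empty_cache() returns early unless CUDA
# has been initialized, so callers need no `if is_initialized():` wrapper.

_initialized = False  # stands in for torch.cuda's internal init flag
calls = []            # records actual backend cache-clear invocations


def is_initialized() -> bool:
    return _initialized


def empty_cache() -> None:
    # The guard is inside the callee, as in torch.cuda.empty_cache.
    if is_initialized():
        calls.append("emptyCache")


# Caller can now invoke empty_cache() unconditionally.
empty_cache()                    # no-op: CUDA never initialized
assert calls == []

_initialized = True              # simulate CUDA becoming initialized
empty_cache()                    # now the cache is actually cleared
assert calls == ["emptyCache"]
```

The same reasoning applies to the device context manager: once the default device is set, wrapping empty_cache in a per-device context adds nothing.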
Before submitting
PR review