[feat] Add better support for predict + ddp 2/3 #7215
Conversation
pytorch_lightning/plugins/training_type/training_type_plugin.py (outdated)
LGTM 😃 small comments
LGTM!
Why do you make returning the predictions optional? What difference does it make?
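For context on why an optional return matters: accumulating every batch's predict_step output in memory (and, with spawn-based plugins, shipping it back to the main process) can be expensive for large prediction runs, so a user whose predict_step already persists results can opt out. A minimal sketch of the intended usage, assuming the return_predictions flag this PR series introduces; the model and data here are hypothetical:

import torch
from pytorch_lightning import LightningModule, Trainer
from torch.utils.data import DataLoader, TensorDataset

class PredictModel(LightningModule):  # hypothetical minimal model
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def predict_step(self, batch, batch_idx, dataloader_idx=None):
        (x,) = batch
        return self.layer(x)

model = PredictModel()
loader = DataLoader(TensorDataset(torch.randn(64, 32)), batch_size=8)
trainer = Trainer(gpus=2, accelerator="ddp")

# Default: each rank accumulates its predict_step outputs and returns them.
predictions = trainer.predict(model, dataloaders=loader)

# Opt out when predict_step already persists results itself (e.g. to disk),
# avoiding the memory cost of collecting every batch's output.
trainer.predict(model, dataloaders=loader, return_predictions=False)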
@property
def use_spawn(self) -> bool:
    return False
I would avoid this. Can we do an isinstance check?
Yes, but I find it less clean.
The problem is that this style of adding properties to identify each plugin will not scale long term, and at the finest granularity it is the same as an isinstance check.
It could be part of the accelerator connector refactor one day; use_spawn could be handled by the connector.
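To make the trade-off concrete, here is roughly what the two styles look like at a call site. This is an illustrative sketch, not code from the PR; DDPSpawnPlugin and TrainingTypePlugin are the real plugin classes, while the helper names are made up:

from pytorch_lightning.plugins import DDPSpawnPlugin, TrainingTypePlugin

def uses_spawn_property(plugin: TrainingTypePlugin) -> bool:
    # Property style (this PR): every plugin carries the flag, so each new
    # spawn-based plugin must remember to override use_spawn.
    return plugin.use_spawn

def uses_spawn_isinstance(plugin: TrainingTypePlugin) -> bool:
    # isinstance style (review suggestion): the knowledge lives at the call
    # site, and subclasses of DDPSpawnPlugin are covered automatically.
    return isinstance(plugin, DDPSpawnPlugin)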
Co-authored-by: Carlos Mocholí <[email protected]>
@@ -75,3 +76,70 @@ def prepare_for_backward(model: DistributedDataParallel, output: Any):
        model.reducer.prepare_for_backward([])
    else:
        model.require_forward_param_sync = False


class UnrepeatedDistributedSampler(DistributedSampler):
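For readers skimming the diff: unlike torch.utils.data.DistributedSampler, which pads the dataset with duplicated samples so that every rank receives the same count, this sampler yields each sample exactly once, so ranks may receive unequal numbers of batches. That would deadlock collective ops during training, but it is exactly what prediction needs, since duplicated samples would corrupt the gathered predictions. A minimal sketch along the lines of the diff (details may differ from the actual implementation):

import torch
from torch.utils.data import DistributedSampler

class UnrepeatedDistributedSampler(DistributedSampler):
    """DistributedSampler variant that never repeats samples to even out ranks."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Rank r takes samples r, r + num_replicas, r + 2 * num_replicas, ...
        # so the trailing ranks may end up with one sample fewer.
        self.num_samples = len(range(self.rank, len(self.dataset), self.num_replicas))
        self.total_size = len(self.dataset)

    def __iter__(self):
        if self.shuffle:
            g = torch.Generator()
            g.manual_seed(self.epoch)
            indices = torch.randperm(len(self.dataset), generator=g).tolist()
        else:
            indices = list(range(len(self.dataset)))
        # Subsample without padding: no index appears on more than one rank.
        indices = indices[self.rank:self.total_size:self.num_replicas]
        assert len(indices) == self.num_samples
        return iter(indices)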
Is NoPaddingDistributedSampler clearer?
Nice cleanup as well @tchaton :)
Any comment here? @tchaton
What does this PR do?
Here is PR 1/2: #7141
This PR adds better support for predict.
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines.
Did you have fun?
Make sure you had fun coding 🙃