Remove training_step returned None user warning when automatic_optimization=False #6339
Comments
Is this not the case already in master? What version are you using?
I am using the stable release 1.2, so if it has since been addressed in master, that is great. In my version this block is nearly the same, but line 743 uses …
@timothybrooks are you training in a distributed env (DDP, DP, ...)?
Yes in DDP.
Hello, I wanted to check in since I am still experiencing this bug in 1.2.6 when `automatic_optimization=False` and the training step returns `None`. Since the issue has been closed for a few weeks, I wanted to confirm that it is still planned to be addressed. Thanks!
@timothybrooks …
Got it, thanks for the update. Looking forward to it!
🐛 Bug

When using manual optimization by setting the property `automatic_optimization=False`, it is not necessary (and sometimes undesirable) for `training_step()` to return a loss output. For example, in the case of GANs or other models with complex multi-optimizer setups, manual optimization can be preferable or necessary for correct behavior. The user warning should therefore be omitted in this case.

Since the user calls the update step themselves (`self.manual_backward(loss); optimizer.step()`), it is not necessary for the training step to return a loss, since PL does not need the loss to update weights. Furthermore, logging the returned loss output is unhelpful for certain complex multi-optimizer setups, since aggregating losses from different optimizers is not desired.
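For concreteness, a minimal sketch of such a `training_step` under manual optimization (the `compute_loss` helper is hypothetical, and the module is assumed to set `self.automatic_optimization = False`):

```python
def training_step(self, batch, batch_idx):
    # Manual optimization: we drive backward() and step() ourselves,
    # so Lightning has no use for a returned loss.
    opt = self.optimizers()
    loss = self.compute_loss(batch)  # hypothetical helper
    opt.zero_grad()
    self.manual_backward(loss)
    opt.step()
    # Intentionally returns None, which currently triggers the UserWarning.
```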
In short, since updating and logging under `automatic_optimization=False` are non-standard and do not always involve returning a loss output, I believe the `UserWarning("Your training_step returned None. Did you forget to return an output?")` should be omitted in the case of manual optimization.

Basic example for GAN:
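A minimal sketch of such a GAN module (layer sizes, hyperparameters, and the assumed `(images, labels)` batch structure are illustrative, not from the original issue):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl


class GAN(pl.LightningModule):
    """Minimal GAN with manual optimization; training_step returns None."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.automatic_optimization = False  # opt out of automatic optimization
        self.latent_dim = latent_dim
        self.generator = nn.Sequential(nn.Linear(latent_dim, 784), nn.Tanh())
        self.discriminator = nn.Linear(784, 1)

    def training_step(self, batch, batch_idx):
        real, _ = batch  # assumes an (images, labels) batch
        real = real.view(real.size(0), -1)
        opt_g, opt_d = self.optimizers()
        ones = torch.ones(real.size(0), 1, device=self.device)
        zeros = torch.zeros(real.size(0), 1, device=self.device)

        # Discriminator update: real vs. detached fakes.
        z = torch.randn(real.size(0), self.latent_dim, device=self.device)
        fake = self.generator(z).detach()
        d_loss = F.binary_cross_entropy_with_logits(
            self.discriminator(real), ones
        ) + F.binary_cross_entropy_with_logits(self.discriminator(fake), zeros)
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()

        # Generator update: fool the discriminator.
        z = torch.randn(real.size(0), self.latent_dim, device=self.device)
        g_loss = F.binary_cross_entropy_with_logits(
            self.discriminator(self.generator(z)), ones
        )
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()

        self.log_dict({"d_loss": d_loss, "g_loss": g_loss})
        # No return value: there is no single loss worth aggregating across
        # the two optimizers, yet Lightning emits the
        # "training_step returned None" UserWarning here.

    def configure_optimizers(self):
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)
        return opt_g, opt_d
```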