[bugfix] Accumulated_gradient and TensorBoard #4738
Conversation
Hello @tchaton! Thanks for updating this PR.
Comment last updated at 2020-11-25 12:01:40 UTC
Codecov Report

@@           Coverage Diff            @@
##           master   #4738    +/-  ##
======================================
  Coverage      93%      93%
======================================
  Files         118      118
  Lines        9031     9033     +2
======================================
+ Hits         8403     8405     +2
  Misses        628      628
Hey @edenlightning, I removed the parameters as you suggested. This PR now contains only the fix for TensorBoard logging when accumulate_grad_batches > 1.
Sure about the API? Also, please update the docs.
LGTM
Hey @williamFalcon, can you review this one?
What does this PR do?
This PR improves the logging display on TensorBoard when using accumulate_grad_batches > 1.
It also introduced a log_epoch_metrics_on_step parameter within the Trainer, but that idea was dropped. Fixes #4304.
[Screenshot: TensorBoard display with log_epoch_metrics_on_step = True]
[Screenshot: TensorBoard display with log_epoch_metrics_on_step = False]
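For context, here is a minimal sketch (not part of the PR itself) of the setup this fix targets: logging a metric from training_step while the Trainer accumulates gradients over several batches. The BoringModel, the random data, and the tb_logs directory are illustrative inventions; accumulate_grad_batches, self.log, and TensorBoardLogger are standard PyTorch Lightning APIs.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger


class BoringModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        loss = self.layer(batch).sum()
        # With accumulate_grad_batches > 1, the step axis for this metric
        # should advance once per optimizer step, not once per micro-batch,
        # which is the display issue this PR addresses.
        self.log("train_loss", loss, on_step=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


# A tensor is a valid map-style dataset: batches here are (8, 32) tensors.
train_data = DataLoader(torch.randn(64, 32), batch_size=8)

trainer = pl.Trainer(
    max_epochs=1,
    accumulate_grad_batches=4,  # gradients accumulated over 4 batches per optimizer step
    logger=TensorBoardLogger("tb_logs"),
)
trainer.fit(BoringModel(), train_data)
```

With this configuration, 64 samples at batch size 8 yield 8 micro-batches but only 2 optimizer steps per epoch, so the logged step count and the TensorBoard x-axis should reflect the latter.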
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines.
Did you have fun?
Make sure you had fun coding 🙃