Make BenchmarkModule compatible with PyTorch Lightning 2.0 #1136
Conversation
Codecov Report
Additional details and impacted files@@ Coverage Diff @@
## master #1136 +/- ##
==========================================
+ Coverage 89.35% 90.68% +1.32%
==========================================
Files 129 131 +2
Lines 5477 5623 +146
==========================================
+ Hits 4894 5099 +205
+ Misses 583 524 -59
... and 4 files with indirect coverage changes. ☔ View full report in Codecov by Sentry.
I don't yet fully understand the design, or why the code is distributed across
on_validation_epoch_start
validation_step
on_validation_epoch_end
I am also a bit unsure about the usage. E.g., before the change, the top1 accuracy was returned by validation_step. Now self.max_accuracy is set by on_validation_epoch_end. Is this required by the new PyTorch Lightning 2.0?
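For context, Lightning 2.0 removed the `outputs` argument from the epoch-end hooks, so modules have to accumulate step results themselves and reduce them at epoch end. A minimal sketch of that pattern in plain Python (class and method bodies are illustrative, not the actual BenchmarkModule code):

```python
# Sketch of the Lightning 2.0 hook pattern: validation_step stores
# per-batch results, on_validation_epoch_end reduces them.
# Names and the (correct, total) representation are assumptions.

class ValidationAccuracySketch:
    def __init__(self):
        self.max_accuracy = 0.0
        self._val_outputs = []  # per-step (correct, total) pairs

    def on_validation_epoch_start(self):
        # Reset the accumulator at the start of each validation epoch.
        self._val_outputs.clear()

    def validation_step(self, correct, total):
        # Store per-batch results instead of returning them.
        self._val_outputs.append((correct, total))

    def on_validation_epoch_end(self):
        # Reduce over all stored step outputs and track the best epoch.
        correct = sum(c for c, _ in self._val_outputs)
        total = sum(t for _, t in self._val_outputs)
        accuracy = correct / total if total else 0.0
        self.max_accuracy = max(self.max_accuracy, accuracy)
```

The explicit on_validation_epoch_start reset matters: without it, outputs from earlier epochs would leak into later reductions.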
Furthermore, I am wondering whether we have a unit test for parts of it.
We use
Added unit tests. I kept tests quite minimal because we know from the benchmarks that it works correctly.
Makes sense to me, thank you for explaining it.
What has changed and why?
The benchmarks are not yet fully compatible with PyTorch Lightning 2.0, because we use the LARS optimizer from PyTorch Lightning Bolts, which is itself incompatible with PyTorch Lightning 2.0: Lightning-Universe/lightning-bolts#962
I guess the simplest solution is to copy one of the LARS implementations into our package, as the code is pretty simple and this avoids an additional dependency.
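The core of a LARS update is indeed small. A hedged single-layer sketch in plain Python (function name, defaults, and the list-based representation are assumptions for illustration, not the Bolts implementation):

```python
import math

def lars_step(weights, grads, lr=0.1, weight_decay=1e-4, eta=0.001):
    """One LARS update for a single layer, on plain Python lists.

    LARS scales the learning rate per layer by a trust ratio:
    eta * ||w|| / (||g|| + weight_decay * ||w||).
    """
    w_norm = math.sqrt(sum(w * w for w in weights))
    g_norm = math.sqrt(sum(g * g for g in grads))
    if w_norm > 0.0 and g_norm > 0.0:
        trust_ratio = eta * w_norm / (g_norm + weight_decay * w_norm)
    else:
        # Degenerate norms: fall back to plain SGD scaling.
        trust_ratio = 1.0
    return [
        w - lr * trust_ratio * (g + weight_decay * w)
        for w, g in zip(weights, grads)
    ]
```

A real implementation would wrap this per-parameter-group logic in a torch.optim.Optimizer subclass and usually adds momentum, but the layer-wise trust ratio above is the part that distinguishes LARS from SGD.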
How was it tested?