Fix/ptl trainer handling #1371
Conversation
Codecov Report: Base: 93.58% // Head: 93.67% // Increases project coverage by +0.08%
Additional details and impacted files
@@ Coverage Diff @@
## master #1371 +/- ##
==========================================
+ Coverage 93.58% 93.67% +0.08%
==========================================
Files 94 94
Lines 9390 9385 -5
==========================================
+ Hits 8788 8791 +3
+ Misses 602 594 -8
☔ View full report at Codecov.
Looks great! I only added very minor comments (and one question regarding the need to redefine dunder methods in ForecastingModel).
Fixes #1363
Summary (Edited)
Various improvements to TorchForecastingModel:
- verbose in fit/predict always gets added to the trainer
- model.model and model.trainer are not saved anymore, by adapting what gets pickled
- map_location and model.to_cpu()
-> Loading to CPU and predicting gives identical results for colab and local
-> Loading to CPU and predicting gives results with negligible difference compared to GPU (diff <= +/- 1e-16)
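The "adapting what gets pickled" point above can be sketched with `__getstate__`/`__setstate__`. This is a minimal illustration with hypothetical attribute names, not Darts' actual implementation: the wrapper drops the live `model` and `trainer` objects before pickling and restores them as `None` on load.

```python
import pickle


class TorchForecastingModelSketch:
    """Minimal sketch (hypothetical names): exclude non-picklable
    runtime objects from the pickled state."""

    # attributes that should never be serialized
    TRANSIENT_ATTRS = ("model", "trainer")

    def __init__(self):
        self.model = object()      # stands in for the LightningModule
        self.trainer = object()    # stands in for the PL Trainer
        self.hyperparams = {"lr": 1e-3}

    def __getstate__(self):
        # Copy the instance dict and drop the transient attributes,
        # so pickle never touches the Trainer or the network.
        state = self.__dict__.copy()
        for attr in self.TRANSIENT_ATTRS:
            state.pop(attr, None)
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        # Transient attributes come back as None; they are expected
        # to be recreated lazily on the next fit/predict call.
        for attr in self.TRANSIENT_ATTRS:
            setattr(self, attr, None)


m = TorchForecastingModelSketch()
restored = pickle.loads(pickle.dumps(m))
print(restored.trainer is None)   # the Trainer was not serialized
print(restored.hyperparams)       # ordinary state survives the round trip
```

This pattern keeps saved checkpoints small and avoids pickle errors on objects (loggers, process handles) that a live Trainer holds.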
Additional Info
@madtoinou, regarding our discussions: we can't avoid storing the Trainer object as an attribute in TorchForecastingModel, as otherwise the reference gets lost (LightningModule just points to the trainer via a property).