Try to avoid saving the PTL trainer (and redefine it in fit() and predict() calls) #1363
Comments
TorchForecastingModel._setup_trainer() could be adjusted to not save the trainer to self, but rather to return a Trainer instance. I have found that I use different trainer args for training than for inference. That said, the trainer args could be saved at model initialization and used to create a trainer for fit() and predict(), with the option to override them when calling fit() or predict(). A sketch of this idea follows below.
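A minimal sketch of what this could look like, assuming hypothetical names (`TorchForecastingModelSketch`, the `trainer_kwargs` parameters); this is not Darts' actual implementation, only an illustration of keeping trainer args on the model and building a fresh `pl.Trainer` per call:

```python
import pytorch_lightning as pl


class TorchForecastingModelSketch:
    """Hypothetical sketch: store trainer kwargs at init instead of a Trainer instance."""

    def __init__(self, **trainer_kwargs):
        # Keep only the arguments; no pl.Trainer object is saved on self.
        self._trainer_kwargs = trainer_kwargs

    def _setup_trainer(self, **overrides) -> pl.Trainer:
        # Build a fresh Trainer on demand, merging stored kwargs with per-call overrides.
        kwargs = {**self._trainer_kwargs, **overrides}
        return pl.Trainer(**kwargs)

    def fit(self, series, trainer_kwargs=None):
        trainer = self._setup_trainer(**(trainer_kwargs or {}))
        # trainer.fit(...) would be called here on the wrapped Lightning module.
        ...

    def predict(self, n, trainer_kwargs=None):
        trainer = self._setup_trainer(**(trainer_kwargs or {}))
        # trainer.predict(...) would be called here, possibly with different
        # trainer args than were used for training.
        ...
```

With this layout, a user could pass e.g. `trainer_kwargs={"accelerator": "cpu"}` to `predict()` after training on GPU, without mutating any trainer state stored on the model.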
Ping @dennisbader
I think that this issue is solved by #1371.
Yes, this will be solved with #1371. @alexcolpitts96, the PR adjusts