Checkpointing of auxiliary objects #635
What I would like to avoid is a custom skorch storing format, even if it's just … If we look at the code for … (lines 1638 to 1652 in 1f6b542), I wonder if we can build on that and allow passing arbitrary …
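As a rough illustration of that idea (this is not skorch's actual API; `save_state_dicts` and its signature are hypothetical), a `save_params`-style helper could accept a mapping of arbitrary names to objects and persist each one's `state_dict`:

```python
import torch
from torch import nn, optim

def save_state_dicts(objects, prefix=""):
    """Hypothetical helper sketching the idea above.

    ``objects`` maps an arbitrary name to anything exposing ``state_dict()``
    (a module, an optimizer, a criterion); each one is written to its own
    prefixed file instead of relying on pickle or a fixed set of names.
    """
    for name, obj in objects.items():
        torch.save(obj.state_dict(), f"{prefix}{name}.pt")

# Works for non-standard names as well, e.g. a second module or criterion.
module = nn.Linear(10, 2)
optimizer = optim.SGD(module.parameters(), lr=0.1)
save_state_dicts({"mymodule": module, "myoptimizer": optimizer}, prefix="cp_")
```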
I agree. Although this use case of saving custom objects has come up a few times. What was the historical purpose of …?
I think that historically, the idea was to give a more general, pickle- and skorch-independent way of storing model parameters. That's why we tried to stick to what PyTorch recommends instead of providing our own storing format, even if that would be more convenient sometimes. At least for me, checkpoints were not even the first concern.
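For context, the PyTorch-recommended approach referred to here is to persist `state_dict`s rather than pickling whole objects; a minimal illustration (file names are placeholders):

```python
import torch
from torch import nn

module = nn.Linear(10, 2)

# Recommended: save only the state dict, which keeps the file format
# independent of pickle and of the exact class/import layout.
torch.save(module.state_dict(), "weights.pt")

# Restoring requires re-creating the module first, then loading the state.
restored = nn.Linear(10, 2)
restored.load_state_dict(torch.load("weights.pt"))
```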
This has been solved via #652.
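For reference, a minimal sketch of what this enables, assuming the `f_criterion` argument on `Checkpoint` that the linked change introduced (exact keyword names and defaults should be checked against the skorch docs):

```python
import torch
from torch import nn
from skorch import NeuralNetClassifier
from skorch.callbacks import Checkpoint

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(20, 2)

    def forward(self, X):
        return torch.softmax(self.dense(X), dim=-1)

# Assumption: Checkpoint accepts f_criterion next to f_params/f_optimizer,
# so a criterion with learnable parameters is saved with the rest.
cp = Checkpoint(
    f_params="params.pt",
    f_optimizer="optimizer.pt",
    f_criterion="criterion.pt",
    monitor="valid_loss_best",
)

net = NeuralNetClassifier(MyModule, callbacks=[cp], max_epochs=5)
```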
As discussed in PR #621, in some circumstances the criterion might change during training (learnable parameters), which makes it desirable to checkpoint it as well.
Additionally, #597 makes it easier to add additional modules, optimizers, and criteria which do not adhere to the standard naming, meaning that checkpointing currently cannot handle such objects either.
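To make the first point concrete, here is a hypothetical criterion with a learnable parameter; if only the module and optimizer were checkpointed, this state would be silently lost on restore:

```python
import torch
from torch import nn

class WeightedMSELoss(nn.Module):
    """Hypothetical criterion whose weighting is itself learned.

    Because ``log_weight`` is an ``nn.Parameter`` updated during training,
    the criterion carries state that a checkpoint needs to capture.
    """

    def __init__(self):
        super().__init__()
        self.log_weight = nn.Parameter(torch.zeros(1))

    def forward(self, y_pred, y_true):
        mse = nn.functional.mse_loss(y_pred, y_true)
        return torch.exp(-self.log_weight) * mse + self.log_weight

criterion = WeightedMSELoss()
print(list(criterion.state_dict()))  # ['log_weight'] -- worth checkpointing
```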
I can see the following options:
1. add `f_criterion` to checkpoint, and
2. instead of saving only the one optimizer/model/criterion, we save all of them in their respective files - we can find those using the prefix and the registered parameters (see the sketch below)

With (1) I'm a little worried that there is yet another parameter that needs saving in the future, resulting in an ever-expanding parameter list for the checkpoint callback. Maybe this is irrational; I'm interested in other opinions.
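A rough sketch of option (2), assuming skorch's convention that fitted components live on attributes ending in an underscore (the discovery logic and file naming here are hypothetical, not an existing skorch function):

```python
import torch

def save_all_components(net, prefix="cp_"):
    """Hypothetical: save every torch component registered on the net.

    Scans instance attributes ending in '_' (skorch's convention for fitted
    components such as module_, optimizer_, criterion_) and writes each
    object that exposes state_dict() to its own prefixed file.
    """
    for name, obj in vars(net).items():
        if name.endswith("_") and hasattr(obj, "state_dict"):
            torch.save(obj.state_dict(), f"{prefix}{name.rstrip('_')}.pt")
```

This would pick up custom, non-standard components automatically, at the cost of the checkpoint layout depending on attribute names rather than on an explicit parameter list.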