Is your feature request related to a current problem? Please describe.
I am trying to load a previously trained NHiTS model and "fine-tune" it on a new dataset. This entails changing the TensorBoard logger, resetting the optimizer and changing its learning rate, removing the LR scheduler, but KEEPING THE WEIGHTS, and then continuing training for a few more epochs on the new data with a small learning rate.
It was a considerable struggle to get this working, and the resulting code is super hacky and ugly:
https://gist.github.com/solalatus/e6329879c5c9479ce60af9f6b0e22bb4
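For context, the workaround amounts to something like the sketch below. This is a minimal illustration, not the gist's actual code: `pretrained_nhits.pt`, `new_series`, and the chunk lengths are placeholders, and `load_weights()` only exists in more recent darts releases; on older versions the internal state_dict has to be copied by hand, which is presumably where the hackiness comes in.

```python
from darts.models import NHiTSModel
from pytorch_lightning.loggers import TensorBoardLogger

# Fresh model instance carrying the fine-tuning hyperparameters:
# new logger, small learning rate, no LR scheduler.
model = NHiTSModel(
    input_chunk_length=168,   # must match the pretrained model
    output_chunk_length=24,
    optimizer_kwargs={"lr": 1e-5},
    lr_scheduler_cls=None,
    pl_trainer_kwargs={"logger": TensorBoardLogger("logs", name="finetune")},
)

# Pull the trained weights into the new instance, then continue training.
model.load_weights("pretrained_nhits.pt")
model.fit(new_series, epochs=10)
```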
Describe proposed solution
Provide a reinit_training() function (or something similar) that allows modifying the trainer / hyperparameters of Torch-based models while keeping the weights intact, enabling new training runs, e.g. for fine-tuning on new data with different optimizers and learning rates.
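A hypothetical usage sketch follows; reinit_training() and its keyword arguments are entirely proposed here, not an existing darts API, and `pretrained_nhits.pt` / `new_series` are placeholders:

```python
import torch
from darts.models import NHiTSModel
from pytorch_lightning.loggers import TensorBoardLogger

# Load the pretrained model, weights and all.
model = NHiTSModel.load("pretrained_nhits.pt")

# Proposed call: rebuild the optimizer, scheduler, and trainer
# while leaving the network weights untouched.
model.reinit_training(
    optimizer_cls=torch.optim.Adam,
    optimizer_kwargs={"lr": 1e-5},   # small LR for fine-tuning
    lr_scheduler_cls=None,           # drop the old scheduler
    pl_trainer_kwargs={"logger": TensorBoardLogger("logs", name="finetune")},
)

model.fit(new_series, epochs=10)     # continues from the pretrained weights
```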
Describe potential alternatives
Additional context
General context is transfer learning WITH FINE-TUNING, so not the widely documented zero-shot scenario: I have a big model trained on a large dataset, and I want to fine-tune it on my small new dataset with a very small learning rate so as not to suffer catastrophic forgetting.