lr_tuner fails if called after tuning batch_size #4616
Comments
Hi! Thanks for your contribution, great first issue!
@SkafteNicki mind taking a look?
It seems the solution is to remove the
Done. First PR on GitHub so be gentle :P (edit: checking out failures on the PR)
@Palzer I can probably find some time tomorrow to look over your work. Thanks for wanting to contribute :]
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!
https://github.com/PyTorchLightning/pytorch-lightning/blob/514cb22bd719e6ca056cacce730c8de875c9dbf6/pytorch_lightning/tuner/lr_finder.py#L411
The line above sets current_step to 2 if the batch_size tuner has already run.
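For context, the step counting around the linked line is roughly the following (a paraphrase of the linked file, not a verbatim quote): the counter is offset by trainer.global_step, which is no longer zero once the batch_size tuner has run.

```python
# Paraphrased sketch of the step counting in the lr finder callback.
# After the batch_size tuner has run, trainer.global_step is already >= 1,
# so the first lr-finder batch is counted as step 2 instead of step 1.
current_step = trainer.global_step + 1
```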
https://github.com/PyTorchLightning/pytorch-lightning/blob/514cb22bd719e6ca056cacce730c8de875c9dbf6/pytorch_lightning/tuner/lr_finder.py#L419-L420
Because the step counter never equals 1, best_loss is still zero when the smoothed loss is compared against it, so the divergence check fires immediately and the tuner terminates and fails.
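Here is a self-contained sketch of the failure mode (the beta and early_stop_threshold values are illustrative, not copied from the library):

```python
# Standalone illustration of why the lr finder stops after one batch
# when the step counter starts at 2 instead of 1.
beta = 0.98                 # loss-smoothing factor (illustrative)
early_stop_threshold = 4.0  # divergence threshold (illustrative)
best_loss = 0.0             # initial value; only ever updated when current_step == 1

current_step = 2            # first lr-finder batch after the batch_size tuner ran
current_loss = 1.3          # any positive training loss

# Smoothed (bias-corrected) loss, as computed by the lr finder callback
avg_loss = (1 - beta) * current_loss
smoothed_loss = avg_loss / (1 - beta ** current_step)

# Divergence check: with best_loss still 0.0 the right-hand side is 0,
# so any positive smoothed loss looks like divergence and the run stops.
diverged = current_step > 1 and smoothed_loss > early_stop_threshold * best_loss
print(diverged)  # True
```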
Also, it looks like this was already supposed to be changed.
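For completeness, a minimal way to trigger the problem looks roughly like this (MyModel is a placeholder for any LightningModule that defines batch_size and lr attributes; the flags shown are the Trainer's built-in tuning options):

```python
# Hypothetical reproduction sketch: scale the batch size first, then run the
# lr finder in the same tuning pass. MyModel is a stand-in for your own module.
import pytorch_lightning as pl

model = MyModel()  # placeholder LightningModule with `batch_size` and `lr` attributes
trainer = pl.Trainer(auto_scale_batch_size="power", auto_lr_find=True)

# tune() runs the batch_size scaler first (advancing trainer.global_step)
# and then the lr finder, which terminates after a single batch as described above.
trainer.tune(model)
```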