🐛 Bug

Running `trainer = pl.Trainer(auto_scale_batch_size=True); trainer.tune(model, datamodule=dm)` succeeds in `pl==1.1.8`, but fails in `pl==1.2.*` (tested both `1.2.0` and `1.2.1`) with the following error:
Traceback (most recent call last):
  File "train.py", line 42, in <module>
    trainer.tune(model, datamodule=dm)
  File "/home/ubuntu/.local/share/virtualenvs/__project__/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1062, in tune
    self.tuner.tune(model, train_dataloader, val_dataloaders, datamodule)
  File "/home/ubuntu/.local/share/virtualenvs/__project__/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py", line 46, in tune
    self.scale_batch_size(
  File "/home/ubuntu/.local/share/virtualenvs/__project__/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py", line 104, in scale_batch_size
    return scale_batch_size(
  File "/home/ubuntu/.local/share/virtualenvs/__project__/lib/python3.8/site-packages/pytorch_lightning/tuner/batch_size_scaling.py", line 79, in scale_batch_size
    raise MisconfigurationException(f'Field {batch_arg_name} not found in both `model` and `model.hparams`')
pytorch_lightning.utilities.exceptions.MisconfigurationException: Field batch_size not found in both `model` and `model.hparams`
Please reproduce using the BoringModel
https://colab.research.google.com/drive/1vgPLCwLg7uACtb3fxVp-t-__NtZ3onsD?usp=sharing
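
For quick reference alongside the Colab, here is a minimal sketch of the kind of setup that hits this; the class names (`RandomDataset`, `DataModule`, `BoringModel`) are illustrative stand-ins, not taken from the notebook:

```python
import torch
from torch.utils.data import DataLoader, Dataset
import pytorch_lightning as pl


class RandomDataset(Dataset):
    """Small synthetic dataset standing in for the BoringModel's data."""

    def __init__(self, size=64, length=256):
        self.data = torch.randn(length, size)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]


class DataModule(pl.LightningDataModule):
    """The datamodule owns `batch_size`, which the tuner adjusted before 1.2."""

    def __init__(self, batch_size=2):
        super().__init__()
        self.batch_size = batch_size

    def train_dataloader(self):
        return DataLoader(RandomDataset(), batch_size=self.batch_size)


class BoringModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(64, 2)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


model = BoringModel()
dm = DataModule()
trainer = pl.Trainer(auto_scale_batch_size=True, max_epochs=1)
# Succeeds on pl==1.1.8; raises the MisconfigurationException above on 1.2.0/1.2.1.
trainer.tune(model, datamodule=dm)
```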
Expected behavior

Prior to `pl==1.2.0`, it successfully detected and tuned the `dm.batch_size` property.

Environment

@colllin Thank you for reporting the issue. I confirmed the bug in 1.2.*, which didn't happen in <1.2. I think this issue should be addressed in #5968 by @awaelchli.
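
Not from the thread, but until #5968 lands, one possible interim workaround (an untested sketch with illustrative names) is to let the LightningModule own `batch_size` and its train dataloader, since the lookup that raises above only consults `model` and `model.hparams`:

```python
import torch
from torch.utils.data import DataLoader
import pytorch_lightning as pl


class LitModelOwningBatchSize(pl.LightningModule):
    """Keeps `batch_size` on the module so the 1.2.* lookup can find it."""

    def __init__(self, batch_size=2):
        super().__init__()
        self.save_hyperparameters()  # exposes batch_size as self.hparams.batch_size
        self.layer = torch.nn.Linear(64, 2)

    def train_dataloader(self):
        # A plain tensor works as a map-style dataset here; the dataloader reads
        # the current hparams.batch_size, which the tuner updates between trials.
        data = torch.randn(256, 64)
        return DataLoader(data, batch_size=self.hparams.batch_size)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


trainer = pl.Trainer(auto_scale_batch_size=True, max_epochs=1)
trainer.tune(LitModelOwningBatchSize())  # no datamodule, so the model-side lookup succeeds
```

This only sidesteps the missing datamodule lookup; it is not meant as a long-term replacement for tuning `dm.batch_size`.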