Evaluation over the validation set #4634
What is the recommended way of performing just one evaluation over the validation set? Basically, I'm looking for the equivalent of trainer.test(...), but for the validation set. Maybe this is possible using trainer.fit(...) and some combination of arguments to Trainer.__init__?

Comments
Is there a difference between your …
Yes, this would be acceptable, but I forgot to specify that I'm using a data module to manage the data loaders. There is no way to pass the data module but override which loader to use, am I wrong? The solution I'll attempt right now is to copy the …
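(A minimal sketch of that kind of workaround, with hypothetical names — EvalOnValDataModule and the toy dataset are not from this thread: the datamodule's test_dataloader simply delegates to val_dataloader, so trainer.test() ends up evaluating on the validation set.)

import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class EvalOnValDataModule(pl.LightningDataModule):
    def setup(self, stage=None):
        # Toy validation data, purely for illustration
        self.val_set = TensorDataset(torch.randn(64, 4), torch.randint(0, 2, (64,)))

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=8)

    def test_dataloader(self):
        # Reuse the validation loader, so trainer.test() evaluates on it
        return self.val_dataloader()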
should we add an …
It sounds good to me. I'm looking at the code of …
@rohitgr7 sounds like a good enhancement (especially if multiple people are requesting it). I see two options: …
I thought this too, but somehow it sounds a bit complex to me.
+1, refactor the …
+1 for option 1 from me too.
cc @PyTorchLightning/core-contributors
I've been experimenting with the changes required to make this work and I just published #4707 with a very rough proof of concept of what will need to be done.
I know there will be a … but in the meantime, here is a workaround:

import pytorch_lightning as pl

# Create the modules (stand-ins for your own LightningModule /
# LightningDataModule subclasses)
model = pl.LightningModule(...)
dm = pl.LightningDataModule(...)

# Manually run the preparation hooks on the DataModule, since the
# Trainer will not do it for us here
dm.prepare_data()
dm.setup()

# Run the test loop, but feed it the validation dataloader
trainer = pl.Trainer(...)
trainer.test(model, test_dataloaders=dm.val_dataloader())
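Note that prepare_data() and setup() are called by hand here: because the dataloader is passed to trainer.test() directly instead of attaching the datamodule, the Trainer never gets a chance to invoke those hooks itself.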
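Postscript: the enhancement discussed above did eventually land in Lightning. A minimal sketch, assuming a release (1.2 or later) where Trainer.validate is available, and with MyModel / MyDataModule as hypothetical names for your own subclasses:

import pytorch_lightning as pl

model = MyModel()    # hypothetical LightningModule subclass
dm = MyDataModule()  # hypothetical LightningDataModule subclass

trainer = pl.Trainer()
# Run a single evaluation epoch over the validation set
trainer.validate(model, datamodule=dm)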