> since the size of my dataset exceeds my total RAM

That's not unusual. Datasets often don't fit into RAM, and that's fine. DataLoaders are designed to load data asynchronously from your hard disk into RAM and from there onto the GPU.
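As a rough sketch of what that looks like in plain PyTorch (the file names and the `.pt`-per-sample layout here are made up for illustration), a map-style `Dataset` that reads each sample from disk on demand keeps memory usage bounded, and a `DataLoader` with `num_workers > 0` prefetches batches in background worker processes:

```python
import torch
from torch.utils.data import Dataset, DataLoader


class DiskBackedDataset(Dataset):
    """Reads one sample from disk per __getitem__ call, so the full
    dataset never has to fit into RAM at once."""

    def __init__(self, sample_paths):
        # Hypothetical list of per-sample file paths on disk.
        self.sample_paths = sample_paths

    def __len__(self):
        return len(self.sample_paths)

    def __getitem__(self, idx):
        # Each worker process loads only the sample it was asked for.
        return torch.load(self.sample_paths[idx])


# num_workers > 0 makes worker processes load batches asynchronously,
# and pin_memory=True speeds up the subsequent host-to-GPU transfer.
loader = DataLoader(
    DiskBackedDataset(sample_paths=["sample_0.pt", "sample_1.pt"]),
    batch_size=32,
    num_workers=4,
    pin_memory=True,
)
```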

> From my understanding, reload_dataloaders_every_epoch=True calls train_dataloader() and val_dataloader() at every epoch, but I don't see the point of doing this if self.mnist_test and self.mnist_train aren't actually being changed.

Yes, but first of all, this is a toy example and doesn't really do anything interesting. Second, even though the dataset does not change, the DataLoader is constructed anew every epoch. You could return a DataLoader with a new…
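As a minimal sketch of where reloading actually pays off (the per-epoch random subsampling below is just one illustrative use I'm assuming, not part of the original example; in newer Lightning versions the flag is named `reload_dataloaders_every_n_epochs`), the point is that `train_dataloader()` can build something different each time it is called:

```python
import torch
from torch.utils.data import DataLoader, Subset
import pytorch_lightning as pl


class ResampledDataModule(pl.LightningDataModule):
    """Illustrative DataModule that draws a fresh random subset of the
    full dataset every time train_dataloader() is called."""

    def __init__(self, full_dataset, subset_size=10_000, batch_size=32):
        super().__init__()
        self.full_dataset = full_dataset
        self.subset_size = subset_size
        self.batch_size = batch_size

    def train_dataloader(self):
        # With reload_dataloaders_every_epoch=True this method runs at the
        # start of every epoch, so each epoch trains on a different subset.
        indices = torch.randperm(len(self.full_dataset))[: self.subset_size]
        return DataLoader(
            Subset(self.full_dataset, indices.tolist()),
            batch_size=self.batch_size,
            shuffle=True,
            num_workers=4,
        )


trainer = pl.Trainer(reload_dataloaders_every_epoch=True)
# trainer.fit(model, datamodule=ResampledDataModule(full_dataset))
```

If `train_dataloader()` always returns the same static dataset, as in the toy example, the reload simply rebuilds an equivalent DataLoader each epoch.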
