
Add Float64 support #2497

Closed
richardk53 opened this issue Jul 4, 2020 · 7 comments · Fixed by #6595
Labels: feature (Is an improvement or enhancement), good first issue (Good for newcomers), help wanted (Open to be worked on), let's do it! (approved to implement)

Comments

@richardk53

Float64 precision is currently not supported.
Why? This should be pretty straightforward, raising an Exception if the device or other configurations are not compatible with it.
Are you planning to add support soon?
What is the best workaround currently? Will it work if I do it manually via model.to(torch.float64) and similar for each dataloader or are there some caveats?
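
The configuration check suggested above could look roughly like the sketch below. This is purely illustrative and not Lightning's actual implementation; the function name, the SUPPORTED_PRECISION tuple, and the use_amp flag are assumptions for the sake of the example.

```python
# Hypothetical sketch of the kind of check the issue suggests (not actual
# Lightning code): accept 64 as a precision value and raise an exception
# when the rest of the configuration cannot honour it.
SUPPORTED_PRECISION = (16, 32, 64)

def validate_precision(precision: int, use_amp: bool) -> None:
    if precision not in SUPPORTED_PRECISION:
        raise ValueError(
            f"precision={precision} is not supported; choose one of {SUPPORTED_PRECISION}"
        )
    if precision == 64 and use_amp:
        # AMP mixes float16/float32, so combining it with float64 is incompatible
        raise ValueError("precision=64 cannot be combined with use_amp=True")
```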

richardk53 added the feature (Is an improvement or enhancement) and help wanted (Open to be worked on) labels on Jul 4, 2020
@github-actions
Contributor

github-actions bot commented Jul 4, 2020

Hi! Thanks for your contribution, great first issue!

@williamFalcon
Contributor

I didn’t know it wasn’t supported, haha. What’s special about 64 that it doesn’t work?

@richardk53
Author

I see ;-)
I thought that the argument precision=64 in the Trainer would take care of casting both the model and the batches to the desired precision, similarly to how it takes care of moving the model and data to the correct device(s).

However, taking Step 1 and Step 2 from the Quickstart in the docs and replacing the trainer with trainer = Trainer(gpus=1, precision=64, use_amp=False) makes no difference; everything is still FP32.
Also, the documentation states that precision should be 16 or 32.

For this example I can simply do model = model.to(torch.float64) and, in train_dataloader, something like dataset = MNIST(os.getcwd(), train=True, download=True, transform=lambda x: transforms.ToTensor()(x).to(torch.float64)) to make it work.

But I think it would be a nice feature to have this done by the Trainer using the precision argument. What do you think?
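
For reference, the manual workaround described in this comment can be written out as a runnable sketch. The LitClassifier module below is a hypothetical stand-in for the Quickstart model, and the batch size, optimizer, and Trainer settings are arbitrary; only the two float64 casts are the point.

```python
import os
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import MNIST
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    # hypothetical stand-in for the Quickstart model
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self.layer(x.view(x.size(0), -1))
        return F.cross_entropy(logits, y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

# Workaround, part 1: cast the model's parameters and buffers to float64.
model = LitClassifier().to(torch.float64)

# Workaround, part 2: cast every image to float64 in the dataset transform.
dataset = MNIST(
    os.getcwd(), train=True, download=True,
    transform=lambda x: transforms.ToTensor()(x).to(torch.float64),
)
train_loader = DataLoader(dataset, batch_size=32)

# The feature requested in this issue would let the Trainer handle these
# casts itself, e.g. Trainer(gpus=1, precision=64), instead of doing them manually.
trainer = pl.Trainer(max_epochs=1)
trainer.fit(model, train_loader)
```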

@williamFalcon
Contributor

oooohhh love the idea...
Want to take a stab at the PR? We'll help you finish it :)

williamFalcon changed the title from "Float64 Precision not supported?" to "Add Float64 support" on Jul 4, 2020
williamFalcon added the let's do it! (approved to implement) label on Jul 4, 2020
@richardk53
Author

sure :)

Borda added the good first issue (Good for newcomers) label on Aug 4, 2020
@Borda
Member

Borda commented Aug 4, 2020

@richardk53 how is it going?

@richardk53
Author

Hey, sorry, haven’t had the time for this yet.
