Add Float64 support #2497
Float64 precision is currently not supported.

Why? This should be pretty straightforward: raise an Exception if the device or other configurations are not compatible with it.

Are you planning to add support soon?

What is the best workaround currently? Will it work if I do it manually via

`model.to(torch.float64)`

and similarly for each dataloader, or are there some caveats?
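The manual workaround asked about above boils down to casting both the model and the data to double precision. A minimal sketch in plain PyTorch, assuming a toy linear model and synthetic data (both illustrative, not from the issue):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy model for illustration; any nn.Module can be cast the same way.
model = torch.nn.Linear(10, 1)

# Cast all parameters and buffers to float64.
model = model.to(torch.float64)

# The data must match, otherwise the forward pass fails with a
# dtype mismatch between the inputs and the float64 weights.
x = torch.randn(32, 10, dtype=torch.float64)
y = torch.randn(32, 1, dtype=torch.float64)
loader = DataLoader(TensorDataset(x, y), batch_size=8)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for xb, yb in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(xb), yb)
    loss.backward()
    optimizer.step()
```

One caveat worth noting: if an existing dataset yields float32 tensors, casting each batch inside the loop (e.g. `xb = xb.double()`) works as well and avoids rebuilding the dataset.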
Comments

Hi! Thanks for your contribution! Great first issue!

I didn't know it wasn't supported, haha. What's special about 64-bit that it doesn't work?

I see ;-) However, taking Step 1 and Step 2 from the Quickstart in the docs and replacing the trainer by […], for this example I can simply do […]. But I think it would be a nice feature to have this done by the Trainer using the precision argument (see the sketch after this thread). What do you think?

oooohhh, love the idea...

sure :)

@richardk53 how is it going?

Hey, sorry, haven't had the time for this yet.
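What the precision-argument idea from the thread could look like: a hypothetical sketch of the requested interface, not Lightning's confirmed behavior at the time of the issue. The `LitRegressor` module, the toy data, and the assumption that `precision=64` triggers the float64 casts are all illustrative:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitRegressor(pl.LightningModule):
    """Hypothetical minimal module, loosely following the Quickstart."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

x = torch.randn(64, 10, dtype=torch.float64)
y = torch.randn(64, 1, dtype=torch.float64)
loader = DataLoader(TensorDataset(x, y), batch_size=8)

# The feature requested in this issue: the Trainer handles the float64
# casts itself (and raises if the accelerator cannot handle float64),
# so user code never calls model.to(torch.float64) directly.
trainer = pl.Trainer(precision=64, max_epochs=1)
trainer.fit(LitRegressor(), loader)
```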