
A bug in the function convert_to_fp32() #172

Closed
xyh97 opened this issue Sep 26, 2021 · 2 comments

Comments


xyh97 commented Sep 26, 2021

Hi, I recently found what looks like a bug in the function convert_to_fp32(). It currently does:

return recursively_apply(_is_fp16_tensor, tensor, test_type=_is_fp16_tensor)

I think the correct call should be:

return recursively_apply(_convert_to_fp32, tensor, test_type=_is_fp16_tensor)

Thanks.

Contributor

reppy4620 commented Sep 26, 2021

I think I hit this error for the same reason.

If fp16=True is set in the Accelerator, the model returns True.

reppy4620 added a commit to reppy4620/accelerate that referenced this issue Sep 26, 2021
reppy4620 mentioned this issue Sep 26, 2021
sgugger pushed a commit that referenced this issue Sep 26, 2021
Collaborator

sgugger commented Sep 26, 2021

Thanks for the report and the hint at the fix. This is fixed by the PR mentioned above; I will do a patch release on Monday that includes it.

sgugger closed this as completed Sep 26, 2021