BFLOAT16 support #3148
Comments
Also, .NET 5 will have Half types.

As a type naming proposal, perhaps `f16b`.

We could do what we do with integer types and allow the creation of arbitrary exponent/mantissa bit-count float types on demand.
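That idea can be sketched concretely. The decoder below is an illustrative Python sketch (not Zig, and not any API proposed in this thread): it treats a float purely as a sign/exponent/mantissa bit layout parameterized by the two bit counts, which is essentially what an arbitrary exponent/mantissa float type would have to specify.

```python
def decode_float(bits: int, exp_bits: int, man_bits: int) -> float:
    """Decode an IEEE-style float with arbitrary exponent/mantissa widths.

    man_bits counts *stored* mantissa bits (excluding the implicit
    leading 1). NaN/infinity handling is omitted for brevity.
    """
    sign = -1.0 if (bits >> (exp_bits + man_bits)) & 1 else 1.0
    exp = (bits >> man_bits) & ((1 << exp_bits) - 1)
    man = bits & ((1 << man_bits) - 1)
    bias = (1 << (exp_bits - 1)) - 1
    if exp == 0:
        # Subnormal: no implicit leading 1, minimum exponent.
        return sign * man * 2.0 ** (1 - bias - man_bits)
    return sign * (1.0 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# One decoder covers f16 (5/10), bfloat16 (8/7), and TF32 (8/10):
assert decode_float(0x3C00, 5, 10) == 1.0   # f16 1.0
assert decode_float(0x3F80, 8, 7) == 1.0    # bfloat16 1.0
assert decode_float(0xC000, 5, 10) == -2.0  # f16 -2.0
```

The same function handles every format discussed here just by changing the two width parameters, which is the appeal of generating such types on demand rather than naming each one.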
Apparently ARM Neoverse V1 will be getting BFLOAT16 support: https://fuse.wikichip.org/news/4564/arm-updates-its-neoverse-roadmap-new-bfloat16-sve-support/

If you do, also add BFLOAT19, AKA TF32. If we are following the Rust naming convention, that would be …

LLVM 11 added support for bfloat16: https://llvm.org/docs/LangRef.html#floating-point-types
BFLOAT16 is a new floating-point format. It's a 16-bit floating-point format with an 8-bit exponent and a 7-bit mantissa (vs. the 5-bit exponent and 11-bit mantissa of a half-precision float, which is currently `f16`), designed for deep learning.
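To make the layout concrete: a bfloat16 is the top 16 bits of an IEEE single, so narrowing is a shift plus rounding and widening is appending zero bits. A minimal sketch (Python, for illustration only; the function names are ours, not anything proposed here):

```python
import struct

def f32_to_bf16(x: float) -> int:
    """Narrow an f32 to bfloat16 bits (round-to-nearest-even)."""
    (u,) = struct.unpack("<I", struct.pack("<f", x))
    # Add 0x7FFF plus the LSB of the kept half, so ties round to even.
    u += 0x7FFF + ((u >> 16) & 1)
    return (u >> 16) & 0xFFFF

def bf16_to_f32(b: int) -> float:
    """Widen bfloat16 bits to f32 by appending 16 zero bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

# 1.0 keeps the f32 exponent (0x7F) and an all-zero mantissa:
assert f32_to_bf16(1.0) == 0x3F80
# Only 7 mantissa bits survive, so 1 + 2**-8 rounds back to exactly 1.0:
assert bf16_to_f32(f32_to_bf16(1.0 + 2**-8)) == 1.0
```

Because bfloat16 shares the f32 exponent width, this conversion never changes the dynamic range, only the precision, which is exactly the trade-off the deep-learning use case wants.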
As a more general issue: how should we add new numeric types going forward? E.g. Unum. With Zig not supporting operator overloading, such types would have to be provided by the core for ergonomic use.