
BFLOAT16 support #3148

Open
daurnimator opened this issue Sep 1, 2019 · 6 comments
Labels
proposal This issue suggests modifications. If it also has the "accepted" label then it is planned.
Milestone

Comments

@daurnimator
Contributor

daurnimator commented Sep 1, 2019

BFLOAT16 is a new floating-point format. It's a 16-bit format with an 8-bit exponent and a 7-bit mantissa (versus the 5-bit exponent and 10-bit mantissa of the IEEE half-precision float, which is currently f16), designed for deep learning.
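
As a minimal sketch of the layout (assuming Zig 0.11+ builtin syntax; the helper names below are only illustrative): bfloat16 is simply the top 16 bits of an IEEE-754 binary32 value, so a round-toward-zero conversion is a truncating shift and the reverse conversion is a widening shift.

```zig
const std = @import("std");

// Illustrative helpers: bfloat16 <-> binary32 via bit shifts (round toward zero).
fn f32ToBf16Bits(x: f32) u16 {
    const bits: u32 = @bitCast(x);
    return @as(u16, @truncate(bits >> 16));
}

fn bf16BitsToF32(bits: u16) f32 {
    return @bitCast(@as(u32, bits) << 16);
}

test "bfloat16 round trip" {
    const x: f32 = 3.140625; // 1.1001001 * 2^1, fits in bfloat16's 7 stored mantissa bits
    try std.testing.expectEqual(x, bf16BitsToF32(f32ToBf16Bits(x)));
}
```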

The bfloat16 format is used in upcoming Intel AI processors (such as the Nervana NNP-L1000), Xeon processors, and Intel FPGAs, as well as Google Cloud TPUs and TensorFlow. Arm Neon and SVE also support the bfloat16 format.

Selected excerpts:

  • The Rust proposal is to call the type f16b.
  • It should always have size 2 and alignment 2 on all platforms.

References:


As a more general issue: how should we add new numeric types going forward? e.g. unums. Since Zig does not support operator overloading, such types would have to be provided by the core language for ergonomic use; a sketch of the problem follows.
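
To illustrate the ergonomics point, here is a rough sketch of a hypothetical userland wrapper (a `BFloat16` struct that is not part of the standard library, assuming Zig 0.11+ builtin syntax): it works, but every arithmetic operation has to be a method call rather than an infix operator.

```zig
const std = @import("std");

// Hypothetical userland bfloat16 wrapper, for illustration only.
const BFloat16 = struct {
    bits: u16,

    fn fromF32(x: f32) BFloat16 {
        const u: u32 = @bitCast(x);
        return .{ .bits = @as(u16, @truncate(u >> 16)) };
    }

    fn toF32(self: BFloat16) f32 {
        return @bitCast(@as(u32, self.bits) << 16);
    }

    // Round-trips through f32; a real implementation would round to nearest even.
    fn add(self: BFloat16, other: BFloat16) BFloat16 {
        return fromF32(self.toF32() + other.toF32());
    }
};

test "userland bfloat16 arithmetic is method calls, not operators" {
    const a = BFloat16.fromF32(1.5);
    const b = BFloat16.fromF32(2.25);
    // `a.add(b)` instead of `a + b`, because Zig has no operator overloading.
    try std.testing.expectEqual(@as(f32, 3.75), a.add(b).toF32());
}
```

A built-in type, by contrast, would allow ordinary `a + b` syntax and could lower to hardware bfloat16 instructions where they exist.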

@daurnimator daurnimator added the enhancement Solving this issue will likely involve adding new logic or components to the codebase. label Sep 1, 2019
@andrewrk andrewrk added proposal This issue suggests modifications. If it also has the "accepted" label then it is planned. and removed enhancement Solving this issue will likely involve adding new logic or components to the codebase. labels Sep 2, 2019
@andrewrk andrewrk added this to the 0.6.0 milestone Sep 2, 2019
@andrewrk andrewrk modified the milestones: 0.6.0, 0.7.0 Feb 11, 2020
@msingle

msingle commented Sep 1, 2020

Also, .NET 5 will have a Half type.

@marnix

marnix commented Sep 4, 2020

As a type naming proposal: perhaps f16_7, i.e. append the number of mantissa/fraction bits? Rationale: less precision -> lower number.

| Short name | Long name | Description |
| --- | --- | --- |
| f16 | f16_10 | IEEE 754 half-precision 16-bit float / .NET Half type |
| f32 | f32_23 | IEEE 754 single-precision 32-bit float |
| f64 | f64_52 | IEEE 754 double-precision 64-bit float |
| (none?) | f16_7 | bfloat16 |
| ? | f19_10 | NVIDIA's TensorFloat |
| ? | f24_16 | AMD's fp24 format |

@tgschultz
Contributor

We could do what we do with integer types and allow the creation of arbitrary exponent/mantissa bitcount float types on demand.

@daurnimator
Contributor Author

daurnimator commented Sep 23, 2020

Apparently ARM Neoverse v1 will be getting BFLOAT16 support: https://fuse.wikichip.org/news/4564/arm-updates-its-neoverse-roadmap-new-bfloat16-sve-support/

@Mouvedia

Mouvedia commented Sep 27, 2020

If you do, also add BFLOAT19, AKA TF32. If we are following the Rust naming convention, that would be f19b.

@andrewrk andrewrk modified the milestones: 0.7.0, 0.8.0 Oct 27, 2020
@zigazeljko
Contributor

LLVM 11 added support for bfloat16: https://llvm.org/docs/LangRef.html#floating-point-types

@andrewrk andrewrk modified the milestones: 0.8.0, 0.9.0 May 19, 2021
@andrewrk andrewrk modified the milestones: 0.9.0, 0.10.0 Nov 23, 2021
@andrewrk andrewrk modified the milestones: 0.10.0, 0.11.0 Apr 16, 2022
@andrewrk andrewrk modified the milestones: 0.11.0, 0.12.0 Apr 9, 2023
@andrewrk andrewrk modified the milestones: 0.13.0, 0.12.0 Jul 9, 2023
@andrewrk andrewrk modified the milestones: 0.14.0, 0.15.0 Feb 9, 2025