deprecate global --math-mode=fast flag? #25028

Closed · vtjnash opened this issue Dec 11, 2017 · 7 comments · Fixed by #41638
Labels
maths Mathematical functions

Comments

@vtjnash (Member) commented Dec 11, 2017

It's invalid for our inference pass to pre-compute floating-point operations and constant-propagate the result if this flag gets enabled. Thus, to maintain soundness, we should be pessimizing the inference optimizations we perform when that flag is set. (Of course, that means that despite the flag's name, this may result in a significant loss of performance.) This is pretty hard to do (realistically, we would probably need to rebuild the sysimg with the flag changed).
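
As a concrete illustration, a minimal sketch of the hazard (constructed here, not from the original comment): re-association, which fast-math permits, changes IEEE results bit-for-bit, so a value that inference folds under one evaluation order can disagree with what the fast-math-compiled code computes at run time.

```julia
a, b, c = 0.1, 0.2, 0.3
(a + b) + c                  # 0.6000000000000001 (left-to-right IEEE result)
a + (b + c)                  # 0.6 (the re-associated result)
(a + b) + c == a + (b + c)   # false — folding one order is unsound once
                             # the compiler is free to pick the other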

(While I don't think anything about this needs to happen before v1.0, I think we should at least consider this flag "experimental" for the release so it can be changed in v1.x as needed.)

(Also, while investigating this report, I noticed that this issue may also extend to @simd, and that muladd is missing from the inference blacklist.)
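
On the muladd point, a minimal sketch of why folding it is unsound: muladd is permitted to lower to either fma(a, b, c) (one rounding) or a * b + c (two roundings), so its value is not a fixed function of its inputs across targets. The inputs below are chosen only for illustration:

```julia
a = 1.0 + 2.0^-52   # nextfloat(1.0)
b = 1.0 - 2.0^-52
c = -1.0
fma(a, b, c)        # -2.0^-104 ≈ -4.93e-32, rounded once
a * b + c           # 0.0, because a * b rounds to 1.0 first
# muladd(a, b, c) may legitimately return either value, so inference
# must not constant-fold it to one of them.
```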

@oscardssmith (Member)

Why can't we precompute floating-point operations if fastmath is enabled? It will yield inconsistent results if inference changes slightly, but that's kind of what you sign up for when you use fastmath.

@vtjnash (Member, Author) commented Dec 11, 2017

We can precompute most of them; we just don't have that machinery built right now, and we can't propagate the results as strongly as we can when fast-math is disabled (most notably, we have to be more conservative about removing branches).
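
A minimal sketch of the branch concern (an example constructed for illustration, not from the thread):

```julia
# Under IEEE rules inference can fold the left-to-right sum to
# 0.6000000000000001, prove the condition true, and delete the branch.
# Fast-math may re-associate the sum to 0.6, flipping the comparison,
# so the branch can no longer be removed soundly.
f() = (0.1 + 0.2) + 0.3 > 0.6 ? "take" : "skip"
f()   # "take" under IEEE evaluation; possibly "skip" under fast-math
```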

@oscardssmith (Member) commented Dec 11, 2017

Why do we have to be conservative? Isn't that exactly the type of thing fastmath allows us to ignore? I feel like I'm missing something really stupid.

@vtjnash (Member, Author) commented Dec 11, 2017

To remove a branch, the compiler first needs to prove that the result of the conditional can't change under any transformation. IEEE math lets us make that guarantee easily, since it prohibits the compiler from doing things that would violate the assumption (like re-associating arithmetic, treating non-finite numbers as UB, or discarding/increasing precision). Under fast-math it's trivial to build examples where you can't predict which code path will be taken. Because this flag enables more UB, the compiler must either be more pessimistic about optimizing (very difficult to get right, and also slow) or just give up on correctness and declare the floating-point variables unpredictable.

Amusingly, this also means that a fully-optimized program compiled under fast-math will not be faster than a fully-optimized program compiled under IEEE rules. (This derives from the fact that every correct fast-math transform could instead have been applied by hand to the original source code, but the reverse doesn't work; e.g. a fast-math program can't be compiled to remove the undefined behavior.)
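
A small sketch of that unpredictability (an illustrative example, not from the thread): since fast-math lets the compiler assume NaN and Inf never occur, even defensive guards can be compiled away.

```julia
# With no-NaN/no-Inf assumptions in effect, the compiler may fold
# isfinite(m) to true and delete this check, even though a non-finite
# value can still arise at run time.
function checked_mean(xs)
    m = sum(xs) / length(xs)
    isfinite(m) || error("non-finite mean")
    return m
end
```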

@oscardssmith (Member)

Would it be possible to do branch prediction as if fast math were off? The only cases where this will change the branch taken are when switching to fastmath changes it. Since fast math says floats can produce incorrect results, this type of result doesn't seem too wrong.

@StefanKarpinski (Member)

It's control flow that's the problem, not concerns about result correctness. The trouble is that the compiler has to be more conservative about some optimizations when it can't be sure that other optimization passes won't break IEEE rules.

@simonbyrne (Contributor)

I think the "right way" to do fastmath is with something like Cassette. It would be interesting to try it now to see how the performance compares.
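
A rough sketch of that idea (assuming Cassette.jl's contextual-dispatch API; the names and method choices here are illustrative, not a tested implementation):

```julia
using Cassette

# A context in which plain Float64 arithmetic is rewritten to the fast
# variants, scoping fast-math to one call tree instead of a global flag.
Cassette.@context FastCtx
Cassette.overdub(::FastCtx, ::typeof(+), x::Float64, y::Float64) =
    Base.FastMath.add_fast(x, y)
Cassette.overdub(::FastCtx, ::typeof(*), x::Float64, y::Float64) =
    Base.FastMath.mul_fast(x, y)

axpy(a, x, y) = a * x + y
Cassette.overdub(FastCtx(), axpy, 2.0, 3.0, 4.0)  # fast-math only here
```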
