BUG: ^(::Float64, ::Union{Int, Float64}) incorrect for very large negative exponents #53881

Closed · KlausC opened this issue Mar 28, 2024 · 10 comments · Fixed by #53967
Labels: bug (indicates an unexpected problem or unintended behavior) · maths (mathematical functions) · regression 1.10 (regression in the 1.10 release)

Comments

KlausC (Contributor) commented Mar 28, 2024

The floating-point power function fails for an edge case of the integer exponent:

julia> 2.0 ^ typemin(Int) # should be 0.0
0.5

julia> (-1.0) ^ typemin(Int) # should be 1.0
-1.0

The bug seems to be here: Base.Math.pow_body(::Float64, ::Integer)
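
For context, the failure is consistent with an integer-negation overflow: pow_body handles a negative exponent by inverting the base (see the rx = inv(x) line quoted below) and negating n, but -typemin(Int) overflows back to typemin(Int). A minimal sketch of the mechanism (the exact pow_body internals may differ):

julia> n = typemin(Int)
-9223372036854775808

julia> -n == n   # two's-complement negation overflows; n stays negative
true

With n still negative after the sign flip, a pow-by-squaring loop guarded by `while n > 1` never runs, and the result degenerates to inv(x), matching 2.0 ^ typemin(Int) == 0.5 above.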

KlausC changed the title from “BUG: Base.Math.pow_body(::Float64, typemin(Int) failing” to “BUG: Base.Math.pow_body(::Float64, typemin(Int)) failing” on Mar 28, 2024
KlausC (Contributor, Author) commented Mar 28, 2024

Further investigation shows that powers with very large integer exponents are unreliable for roughly n < -2^53:

The first column is e (the exponent is -2^e); the second column uses an Int exponent, the third a Float64 exponent, and the fourth is the correct value.

julia> [45:62 prevfloat(1.0).^(-2 .^ (45:62)) prevfloat(1.0).^float(-2 .^ (45:62)) exp.(log(big(prevfloat(1.0))) .* (-2 .^ (45:62)))]
18×4 Matrix{BigFloat}:
 45.0      1.00391           1.00391          1.00391
 46.0      1.00781           1.00781          1.00784
 47.0      1.01562           1.01562          1.01575
 48.0      1.03123           1.03123          1.03174
 49.0      1.06233           1.06233          1.06449
 50.0      1.12352           1.12352          1.13315
 51.0      1.23654           1.23654          1.28403
 52.0      1.35914           1.35914          1.64872
 53.0      1.10235e-07       1.10235e-07      2.71828
 54.0    -54.5981          -54.5981           7.38906
 55.0  -8942.87          -8942.87            54.5982
 56.0     -6.22028e+07      -6.22028e+07   2980.96
 57.0     -1.18444e+15      -1.18444e+15      8.88611e+06
 58.0     -1.9329e+29       -1.9329e+29       7.8963e+13
 59.0     -2.44925e+57      -2.44925e+57      6.23515e+27
 60.0     -1.91951e+113     -1.91951e+113     3.88771e+55
 61.0     -5.82523e+224     -5.82523e+224     1.51143e+111
 62.0     Inf               Inf               2.28441e+222
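
As a sanity check on the reference column: prevfloat(1.0) == 1 - 2^-53, so log(prevfloat(1.0)) ≈ -2^-53, and the true value of prevfloat(1.0)^(-2^e) is approximately exp(2^(e-53)). A small sketch (the 256-bit working precision is an arbitrary choice):

# e = 53, 54, 55, 56 give e^1 ≈ 2.71828, e^2 ≈ 7.38906, e^4 ≈ 54.5982,
# and e^8 ≈ 2980.96, matching the fourth column above.
setprecision(256) do
    [exp(big(2.0)^(e - 53)) for e in 53:56]
end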

KlausC changed the title from “BUG: Base.Math.pow_body(::Float64, typemin(Int)) failing” to “BUG: ^(::Float64, ::Union{Int, Float64}) failing” on Mar 28, 2024
mbauman added the bug and maths labels on Mar 28, 2024
mbauman changed the title from “BUG: ^(::Float64, ::Union{Int, Float64}) failing” to “BUG: ^(::Float64, ::Union{Int, Float64}) incorrect for very large negative exponents” on Mar 28, 2024
mbauman (Member) commented Mar 28, 2024

The issue here is that inv(x) returns the inverse rounded to Float64, with 52 bits of precision. In fact, we start losing 1 ulp of accuracy at -2^27 and it just goes downhill from there. Seems we need a better strategy here.

rx = inv(x)
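
The rounding is easy to observe directly, since fma computes x*rx - 1 exactly (a quick check, not part of pow_body itself):

julia> x = prevfloat(1.0); rx = inv(x);

julia> fma(x, rx, -1.0)   # nonzero residual: rx is not the exact inverse
-1.1102230246251565e-16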

oscardssmith (Member) commented Mar 28, 2024

@mbauman that gets compensated for on this line:

isfinite(x) && (xnlo = -fma(x, rx, -1.) * rx)

I think something must be going wrong in the error compensation though. Investigating now.
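
For reference, that term reconstructs the low half of the inverse: rx + xnlo approximates 1/x to roughly twice Float64 precision. A minimal check, reusing the variable names from pow_body:

x    = prevfloat(1.0)
rx   = inv(x)
xnlo = -fma(x, rx, -1.0) * rx      # low half of 1/x (here 2^-53)
big(rx) + big(xnlo) - inv(big(x))  # leftover error on the order of 2^-106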

oscardssmith added the regression 1.10 label on Mar 28, 2024
oscardssmith modified the milestones: 1.10, 1.11 on Mar 28, 2024
mbauman (Member) commented Mar 28, 2024

Ah, I see. We are still missing compensation somewhere here:

julia> ulps(b, x) = abs(b^x - Float64(big(b)^x))/eps(b^x)
ulps (generic function with 3 methods)

julia> [ulps(prevfloat(1.0), -2^e) for e in 26:54]
29-element Vector{Float64}:
      0.0
      1.0
      2.0
      8.0
     32.0
    128.0
    512.0
   2048.0
   8192.0
  32768.0
 131073.0
 524302.0
      2.097258e6
      8.389461e6
      3.3561258e7
      1.34272348e8
      5.37307984e8
      2.15098174e9
      8.617942835e9
      3.4584177964e10
      1.3924045658e11
      5.6426422088e11
      2.316653480914e12
      9.762831672521e12
      4.3352610026219e13
      2.13851005933614e14
      1.304153939836051e15
      2.0538755963422595e23
      8.723923242651639e15
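
The explosive growth is consistent with error that compounds across the squaring chain. A toy model (an assumption for illustration, not the pow_body code): if x^m carries a relative error of δ, squaring yields x^(2m) with error ≈ 2δ plus about half an ulp of fresh rounding, so dropped compensation terms grow geometrically with the number of squarings:

# δ_k ≈ (2^k - 1)/2 ulps after k uncompensated squarings
err(k) = k == 0 ? 0.0 : 2 * err(k - 1) + 0.5
err(10)   # 511.5 ulps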

KlausC (Contributor, Author) commented Mar 28, 2024

When I fixed the first issue (typemin(Int)), I found that the second (the inaccuracy) disappeared. I will open a PR soon.

oscardssmith (Member) commented Mar 28, 2024

@KlausC that suggests something you were doing is wrong. From what I can tell, this appears to be an issue caused by me skipping a term in the error compensation that is very small but can accumulate to a significant amount. At this point, I believe I have corrected the case of power-of-2 exponents, but not the case of exponents with additional bits set.
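
(The distinction corresponds to the two multiply sites in binary exponentiation. A generic schematic, not Julia's actual pow_body:)

# Power-of-2 exponents exercise only the squaring line; exponents with
# additional bits set also exercise the accumulator multiply, and each
# line needs its own error compensation.
function pow_by_squaring(x::Float64, n::Int)
    @assert n >= 0
    y = 1.0
    while n > 0
        isodd(n) && (y *= x)   # hit only for set bits of n
        x *= x                 # hit on every iteration (squaring chain)
        n >>= 1
    end
    return y
end
pow_by_squaring(2.0, 10)   # 1024.0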

KlausC (Contributor, Author) commented Mar 28, 2024

> that suggests something you were doing is wrong. From what I can tell, this appears to be an issue caused by …

That is possible. I primarily fixed the first issue; as a side effect, the exploding errors for powers of prevfloat(1.0) disappeared. Of course it is possible that further improvements are needed.

KlausC (Contributor, Author) commented Mar 28, 2024

BTW, I repeated the ULP counts after my fix:

julia> ulps(b, x) = abs(b^x - Float64(big(b)^x))/eps(b^x)
ulps (generic function with 1 method)

julia> [26:63 [ulps(prevfloat(1.0), -2^e) for e in 26:63]]
38×2 Matrix{Float64}:
 26.0      0.0
 27.0      0.0
 28.0      0.0
 29.0      0.0
 30.0      0.0
 31.0      0.0
 32.0      0.0
 33.0      0.0
 34.0      0.0
 35.0      0.0
 36.0      0.0
 37.0      0.0
 38.0      0.0
 39.0      0.0
 40.0      0.0
 41.0      0.0
 42.0      0.0
 43.0      0.0
 44.0      0.0
 45.0      0.0
 46.0      0.0
 47.0      0.0
 48.0      0.0
 49.0      0.0
 50.0      0.0
 51.0      0.0
 52.0      0.0
 53.0      1.0
 54.0      1.0
 55.0      3.0
 56.0     12.0
 57.0     34.0
 58.0    144.0
 59.0    646.0
 60.0   3255.0
 61.0  10322.0
 62.0  51893.0
 63.0    NaN

oscardssmith (Member) commented Mar 28, 2024

what about [26:63 [ulps(prevfloat(1.0), 1-2^e) for e in 26:63]]?

KlausC (Contributor, Author) commented Mar 28, 2024

Also fine:

julia> [26:63 [ulps(prevfloat(1.0), -2^e+1) for e in 26:63]]
38×2 Matrix{Float64}:
 26.0      0.0
 27.0      1.0
 28.0      0.0
 29.0      0.0
 30.0      0.0
 31.0      0.0
 32.0      0.0
 33.0      0.0
 34.0      0.0
 35.0      0.0
 36.0      0.0
 37.0      1.0
 38.0      0.0
 39.0      1.0
 40.0      0.0
 41.0      0.0
 42.0      0.0
 43.0      0.0
 44.0      0.0
 45.0      0.0
 46.0      1.0
 47.0      0.0
 48.0      1.0
 49.0      1.0
 50.0      1.0
 51.0      0.0
 52.0      1.0
 53.0      0.0
 54.0      0.0
 55.0      2.0
 56.0     11.0
 57.0     33.0
 58.0    144.0
 59.0    646.0
 60.0   3254.0
 61.0  10321.0
 62.0  51892.0
 63.0    NaN

oscardssmith pushed a commit that referenced this issue Apr 22, 2024
Improve performance of `^(::Float64, n::Integer)` in the case of `abs(n) > 2^13`.

While `pow_body` is unreliable for `abs(n) > 2^25`, this implementation provides errors of a few ULPs, while runtime is capped to that of the `Float64` implementation.

Fixes #53881
See also #53886.
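
Judging from the commit message, the strategy is to delegate very large |n| to the Float64-exponent path. A sketch of that idea (illustrative only; the function name and exact threshold are not taken from the merged code):

# Fall back to the Float64^Float64 implementation for huge exponents,
# whose error stays at a few ULPs and whose runtime is independent of n.
function pow_large_n(x::Float64, n::Integer)
    # Compare without abs(): abs(typemin(Int)) overflows and stays negative.
    if n > 2^25 || n < -2^25
        return x ^ Float64(n)          # exact conversion for |n| <= 2^53
    end
    return Base.Math.pow_body(x, n)    # compensated pow-by-squaring otherwise
end

pow_large_n(2.0, typemin(Int))   # 0.0, as expected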