
Error in max_double implementation #57119

Closed
eschnett opened this issue Jan 21, 2025 · 11 comments · Fixed by #57124
Labels: bug (Indicates an unexpected problem or unintended behavior), maths (Mathematical functions), regression 1.12 (Regression in the 1.12 release)
Milestone: 1.12

Comments

@eschnett
Contributor

eschnett commented Jan 21, 2025

I see this strange behaviour in Julia 1.12-dev (today's nightly build), first reported at MakieOrg/Makie.jl#4729:

julia> using IntervalSets

julia> f(x) = (iv=0.0..1.0; mod(x,iv))
f (generic function with 1 method)

julia> f(0.0)
NaN

The correct result is 0.0. This works fine in Julia 1.11, and also when mod is called directly rather than inside a function (mod(0.0, 0.0..1.0)).

I believe this is a bug in Julia's constant propagation mechanism. I have posted more details in the Makie.jl issue.

I see that Julia already determines at compile time that the call to mod will return NaN:

julia> @code_warntype f(0.0)
MethodInstance for f(::Float64)
  from f(x) @ Main REPL[1]:1
Arguments
  #self#::Core.Const(Main.f)
  x::Float64
Locals
  iv::ClosedInterval{Float64}
Body::Float64
1 ─ %1 = Main.:..::Core.Const(IntervalSets...)
│        (iv = (%1)(0.0, 1.0))
│   %3 = Main.mod::Core.Const(mod)
│   %4 = iv::Core.Const(0.0 .. 1.0)
│   %5 = (%3)(x, %4)::Core.Const(NaN)
└──      return %5

IntervalSets' mod function is

mod(x, i::TypedEndpointsInterval{:closed,:closed}) = mod(x - leftendpoint(i), width(i)) + leftendpoint(i)

and its width function is

function width(A::AbstractInterval)
    _width = rightendpoint(A) - leftendpoint(A)
    max(zero(_width), _width)   # this works when T is a Date
end

I find that removing the line calling max avoids the problem. It seems to me that this line somehow confuses Julia into assuming that the second argument to mod is 0.

@Keno Keno added the regression 1.12 Regression in the 1.12 release label Jan 21, 2025
@eschnett
Contributor Author

eschnett commented Jan 21, 2025

Here is a stand-alone test case:

struct IV
    a::Float64
    b::Float64
end
width(iv::IV) = (w=iv.b-iv.a; max(zero(w),w))
Base.mod(x,iv::IV) = mod(x-iv.a, width(iv)) + iv.a
f(x) = (iv=IV(0.0,1.0); mod(x,iv))
f(0.0)

The call to f(0.0) returns NaN. The correct result would be 0.0.

@giordano giordano added this to the 1.12 milestone Jan 21, 2025
@giordano
Contributor

Any reference for when this last worked? It'd help for a bisect.

@eschnett
Contributor Author

Sorry, no – 1.11 works for me.

@Seelengrab
Contributor

It works correctly for me on 195c4d2, which was from #56784, so quite recent.

@adienes
Contributor

adienes commented Jan 21, 2025

On my machine this bisects to a861a551b764062d47ab3c2a3748bcf540221bae, aka #56371. cc @gbaraldi

julia> versioninfo()
Julia Version 1.12.0-DEV.1885
Commit a861a551b7 (2025-01-14 04:14 UTC)
Platform Info:
  OS: macOS (arm64-apple-darwin24.2.0)
  CPU: 10 × Apple M1 Max
  WORD_SIZE: 64
  LLVM: libLLVM-18.1.7 (ORCJIT, apple-m1)
  GC: Built with stock GC
Threads: 1 default, 0 interactive, 1 GC (on 8 virtual cores)

@eschnett
Contributor Author

Look at these two functions from runtime_intrinsics.c:

float max_float(float x, float y) JL_NOTSAFEPOINT
{
    float diff = x - y;
    float argmin = signbit(diff) ? y : x;
    int is_nan = isnan(x) || isnan(y);
    return is_nan ? diff : argmin;
}

double max_double(double x, double y) JL_NOTSAFEPOINT
{
    double diff = x - y;
    double argmin = signbit(diff) ? x : y;
    int is_nan = isnan(x) || isnan(y);
    return is_nan ? diff : argmin;
}

Do you spot the difference? Only one of them is correct...

@eschnett
Contributor Author

Spoiler alert: the variable should be called argmax — renaming it makes it obvious that max_double's ternary arms are swapped relative to max_float's, so it returns the minimum.

@vtjnash vtjnash changed the title Error in constant propagation in Julia 1.12 Error in max_double implementation Jan 22, 2025
@vtjnash vtjnash added backport 1.10 Change should be backported to the 1.10 release backport 1.11 Change should be backported to release-1.11 and removed regression 1.12 Regression in the 1.12 release labels Jan 22, 2025
@gbaraldi
Member

This function didn't exist in 1.10/1.11, so no backports are needed.

@gbaraldi gbaraldi removed backport 1.10 Change should be backported to the 1.10 release backport 1.11 Change should be backported to release-1.11 labels Jan 22, 2025
@vtjnash
Member

vtjnash commented Jan 22, 2025

Ah okay, I didn't realize #56371 was new. All newly added intrinsics should have a test exercising them in test/intrinsics.jl.

@giordano giordano added bug Indicates an unexpected problem or unintended behavior maths Mathematical functions labels Jan 22, 2025
@eschnett
Contributor Author

@gbaraldi This is still a regression in 1.12...

@Seelengrab Seelengrab added the regression 1.12 Regression in the 1.12 release label Jan 22, 2025
@gbaraldi
Member

Too many tags 😕 yeh

7 participants