some reductions over dimensions depend too heavily on zero() #6672
Narrowing it down...
|
So
which seems like a failure of abs2 because |FloatingPoint|^2 should still be FloatingPoint |
Although it's a pretty tough type inference problem, I guess, because you could really have anything after you are done abs2'ing it. |
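To make the tension concrete, here is a small sketch (mine, not from the thread) on a Julia where `sum` folds the actual element values: the value-driven reduction succeeds, while an accumulator seeded from `zero(eltype(A))` cannot even be constructed.

```julia
# A heterogeneous array: its declared element type says nothing useful
# about what type the reduction should return.
A = Any[1, 2.5, 3//1]

# Folding the actual values works; the result type falls out of `+`.
s = sum(A)          # 6.5

# Seeding an accumulator from the eltype does not: Any has no zero.
has_zero = try
    zero(eltype(A))
    true
catch
    false
end                 # false
```

This is exactly why an `abs2`-then-reduce over an abstractly typed array cannot pick its accumulator from the eltype alone.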
Or simply:
|
For I'm increasingly a fan of the notion that the return type of all functions should be
but it doesn't (it would have, before the addition of the keyword). |
...and the solution that uses the first element doesn't really work either, because you could just write
It is really hard to support heterogeneous arrays for all the numerical stuff. The element type of the array is not really informative when it is abstract. Even if the type inference could return
it wouldn't help for a reduction. Should the return type be |
Is there really even a solution for this on the horizon? Since there's no Despite my |
I had been using Julia's matrix-vector multiplication to build up coefficient expressions for JuMP constraints. This used to work, returning an Array of JuMP.GenericAffExpr, but now it fails because there's no zero(Any).

```julia
using JuMP
m = Model()
@defVar(m, x[1:4])
vec = x[:]  # vec::Array{JuMP.Variable,1}
A = eye(4)
A * vec     # ERROR: no method zero(Type{Any}) in generic_matvecmul! at linalg/matmul.jl:321
```

I also fall foul of sum(X, dim) giving the same error when I try to work around it, though sum(X) works fine. |
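One way around the `zero(Any)` call, sketched here with a hypothetical helper `matvec_nozero` (not part of Base or JuMP), is to seed each row's fold from an actual product instead of a zero of the element type:

```julia
# Hypothetical zero()-free matrix-vector product: each row is folded
# with reduce over the real products, so zero(Any) is never requested.
function matvec_nozero(A::AbstractMatrix, v::AbstractVector)
    size(A, 2) == length(v) || throw(DimensionMismatch())
    [reduce(+, A[i, j] * v[j] for j in 1:length(v)) for i in 1:size(A, 1)]
end

matvec_nozero([1 2; 3 4], Any[10, 20])  # [50, 110]
```

This mirrors the first-element proposals discussed above, and inherits the same weakness: it has no answer for an empty fold.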
Marking as regression. |
@benmoran, in JuMP at least it would be more efficient to use |
@mlubin Yes, for this example, something like
is sufficient. The general case is difficult to handle with the tools we have right now. |
One solution would be a two-pass approach: first, look at all of the elements to figure out the correct type of the reduction, then perform the summation. The simplest implementation might be something like:

```julia
function sum{T}(A::AbstractArray{T}, region)
    z = method_exists(zero, (T,)) ? zero(T) + zero(T) : zero(sum(A))
    _sum!(reduction_init(A, region, z), A)
end
```

This still doesn't work where |
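The two-pass idea can also be sketched outside Base (`sum2pass` is a hypothetical name; it assumes the promoted element type actually has a zero, which a JuMP expression type would not):

```julia
# Hypothetical two-pass sum over one dimension of a matrix.
# Pass 1: promote the types of the actual elements to a concrete type.
# Pass 2: reduce with a concretely typed zero, so zero(Any) is avoided.
function sum2pass(A::AbstractMatrix, dim::Integer)
    T = mapreduce(typeof, promote_type, A)          # pass 1
    if dim == 1
        [reduce(+, (A[i, j] for i in 1:size(A, 1)); init = zero(T))
         for j in 1:size(A, 2)]                     # column sums
    else
        [reduce(+, (A[i, j] for j in 1:size(A, 2)); init = zero(T))
         for i in 1:size(A, 1)]                     # row sums
    end
end

sum2pass(Any[1 2.5; 3 4], 1)  # [4.0, 6.5]
```

Unlike sum(A, dim) this returns a flat vector, and it reads every element twice; it is only meant to show the shape of the approach, not a drop-in replacement.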
Code to reproduce: