Cannot convert a float to a fixed-point type with the same or greater number of bits #102
Comments
I just ran into the same error. The problem is in FixedPointNumbers.jl/src/normed.jl, lines 52 to 55 at cf9ebc4.

E.g., in `FixedPointNumbers._convert(N0f64, FixedPointNumbers.rawtype(N0f64), 1.0)` the intermediate value

```julia
julia> y = round(FixedPointNumbers.widen1(FixedPointNumbers.rawone(N0f64)) * 1.0)
1.8446744073709552e19
```

is larger than the maximum number that can be represented with `UInt64`:

```julia
julia> big(typemax(UInt64))
18446744073709551615
```

Maybe it's worth checking if the …
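To make the failure mode concrete, here is a minimal sketch (not the package's internal check; the variable names are just local illustrations) showing that the scaled value rounds up to 2^64 in `Float64` and therefore no longer fits in a `UInt64`:

```julia
# rawone(N0f64) is typemax(UInt64) = 2^64 - 1, which Float64 cannot hold exactly;
# multiplying in floating point rounds it up to 2^64.
rawone_n0f64 = typemax(UInt64)            # 0xffffffffffffffff

y = round(Float64(rawone_n0f64) * 1.0)    # 1.8446744073709552e19 == 2.0^64
y > typemax(UInt64)                       # true: out of range for UInt64
# UInt64(y)                               # would throw InexactError
```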
FYI, since the mantissa part of `Float32` is only 24 bits long, a similar failure already shows up for `Normed{UInt32,25}`:

```julia
julia> for f=1:32; Float32(exp2(32-f)) |> Normed{UInt32,f}; end
ERROR: ArgumentError: Normed{UInt32,25} is a 32-bit type representing 4294967296 values from 0.0 to 128.0; cannot represent 128.0
```

Similarly, since the mantissa part of `Float64` is only 53 bits long:

```julia
julia> for f=1:64; Float64(exp2(64-f)) |> Normed{UInt64,f}; end
ERROR: ArgumentError: Normed{UInt64,54} is a 64-bit type representing 0 values from 0.0 to 1024.0; cannot represent 1024.0
```

Fixing this problem would also be useful for round-trip tests of the conversions.
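For illustration, a hedged sketch of the `Normed{UInt32,25}` case (not the package's code): the raw one, 2^25 - 1, already loses precision in `Float32`, so the scaled product lands at 2^32 and trips the range check even though 128.0 is mathematically within the type's range:

```julia
rawone_uint32_f25 = UInt32(2)^25 - one(UInt32)   # 33554431

# Float32 rounds 2^25 - 1 up to 2^25, so the product becomes exactly 2^32 ...
y = round(Float32(rawone_uint32_f25) * 128.0f0)  # 4.2949673f9

# ... which exceeds typemax(UInt32) = 4294967295, so 128.0 is rejected.
y > typemax(UInt32)                              # true
```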
PR #131 does not change the …

Originally posted by @kimikage in #131 (comment)

When you are faced by a problem with …
This works in Julia 0.6.2:

This will not work:

It looks like the conversion fails when the target fixed-point type has the same number of bits as the float, or more.

Error message:

```
ArgumentError: FixedPointNumbers.Normed{UInt64,64} is a 64-bit type representing 0 values from 0.0 to 1.0; cannot represent 1.0
```
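The issue's original snippets were not preserved in this thread, so the following reproduction is only a guess at what was meant, assuming the failing case converts a `Float64` to `N0f64` (`Normed{UInt64,64}`) while a narrower target works:

```julia
using FixedPointNumbers

# Converting a 64-bit float to a narrower fixed-point type works:
convert(N0f8, 1.0)    # 1.0N0f8  (Normed{UInt8,8})

# Converting to a fixed-point type with as many bits as the float fails
# on the affected versions:
# convert(N0f64, 1.0)
# ERROR: ArgumentError: FixedPointNumbers.Normed{UInt64,64} is a 64-bit type
# representing 0 values from 0.0 to 1.0; cannot represent 1.0
```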