Document the motivation for no mixed type operations #115
We've discussed modifications to the casting rules a few times at TC39, and the committee has so far come down pretty consistently on the side of maintaining the current rigidity, out of a sense of regularity, predictability and simplicity, even if it will make some code a little more verbose. In this use case, is there a downside to the current BigInt semantics beyond the verbosity of the explicit casts?
If you want fewer
Are you sure that reading byte by byte is faster than using `readUInt32LE`?
#40 is still open, so I assumed changing casting rules is still up for discussion.
No (aside from a possible performance cost), but can't that be said about all semantics that require explicit casts? On the other hand, is there a practical downside to allowing shifts of Number by BigInt?
Unfortunately this code has different behavior: it returns a negative number if the highest bit is set in
Good point. So you have several options, all of which should have acceptable performance:
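(A sketch of the kinds of options available under the current same-type rule, assuming unsigned 32-bit `high` and `low` halves; these are illustrative and not necessarily the exact options originally listed in the thread.)

```js
// Three illustrative ways to build a 64-bit BigInt from two unsigned
// 32-bit Numbers (assumes 0 <= high, low < 2**32).

function combineShift(high, low) {
  // Cast both halves, then shift and OR entirely in BigInt space.
  return BigInt(high) << 32n | BigInt(low);
}

function combineArithmetic(high, low) {
  // Same result using BigInt arithmetic instead of bitwise operators.
  return BigInt(high) * 0x100000000n + BigInt(low);
}

function combineHex(high, low) {
  // A single BigInt conversion from a hex string.
  return BigInt('0x' + high.toString(16).padStart(8, '0')
                     + low.toString(16).padStart(8, '0'));
}

console.log(combineShift(0xdeadbeef, 0xcafebabe).toString(16));      // "deadbeefcafebabe"
console.log(combineArithmetic(0xdeadbeef, 0xcafebabe).toString(16)); // "deadbeefcafebabe"
console.log(combineHex(0xdeadbeef, 0xcafebabe).toString(16));        // "deadbeefcafebabe"
```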
Readability aside, it looks like the performance is inversely proportional to the number of `BigInt` conversions.
But that's beside the point. The main point is that every time someone needs to compose a BigInt from multiple Numbers using bitwise operations, they will have to choose between the straightforward but verbose (and possibly slow) approach and one of multiple alternative options that are less verbose but somewhat less readable. If mixing types were allowed for bitwise operations, there would be one obvious choice.
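For concreteness, a small demonstration (not from the original thread) of what the same-type rule means in practice, assuming unsigned 32-bit `high` and `low`:

```js
const high = 0xdeadbeef;
const low = 0xcafebabe;

// Mixing a Number with a BigInt in a shift or bitwise expression throws
// a TypeError under the current semantics.
try {
  high << 32n;
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// The explicit-cast form is therefore required today.
console.log((BigInt(high) << 32n | BigInt(low)).toString(16)); // "deadbeefcafebabe"
```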
I don't think current benchmarks on V8 are a good reason to change the design, given #26 (comment). I guess bitwise operations differ from arithmetic operations in that there is always a correct answer in the domain of BigInts. However, it seems like this would not actually add expressivity, just a quick abbreviation. I don't think this justifies complicating the model of requiring the same type (except for comparisons), which is hoped to extend better to operator overloading.
Could you expand on this (or link somewhere where I can read about it)? Specifically, what would be the disadvantage of replacing step 7 in 4.7.1 with the following (or something to that effect)?
@seishun Looks like we're getting to the point where we need a document describing the operator overloading design that BigInt was designed for. I've discussed this design with @BrendanEich, @tschneidereit and @keithamus in some detail, in addition to alluding to it in committee presentations. I think any of us could write such a document; we just have to get around to it. We discussed operator overloading in the most recent TC39 meeting. When the notes come out, you'll be able to see some more of the discussion there.
64-bit integers are often stored as two 32-bit values; see for example https://github.com/dcodeIO/long.js. If one wants to construct a BigInt from such a pair, they will have to perform a left shift by 32 bits, followed by a bitwise OR. However, bitwise operations currently require both operands to be of the same type, which results in somewhat verbose code:

`BigInt(high) << 32n | BigInt(low)`
A more specific use case is Node.js's Buffer class - I would like to add methods for reading and writing 64-bit integers once BigInt is standardized. For performance reasons, the `readUInt64LE` method would likely read individual bytes rather than call `readUInt32LE`, so it would look like this (and analogously for the other methods):
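(A minimal sketch, assuming an unsigned little-endian read from a Node.js Buffer; the function shape here is illustrative, not the actual Node.js implementation.)

```js
// Read an unsigned 64-bit little-endian integer from a Buffer as a BigInt,
// byte by byte. Bounds checks and error handling are omitted.
function readUInt64LE(buf, offset = 0) {
  return BigInt(buf[offset]) |
         BigInt(buf[offset + 1]) << 8n |
         BigInt(buf[offset + 2]) << 16n |
         BigInt(buf[offset + 3]) << 24n |
         BigInt(buf[offset + 4]) << 32n |
         BigInt(buf[offset + 5]) << 40n |
         BigInt(buf[offset + 6]) << 48n |
         BigInt(buf[offset + 7]) << 56n;
}

const buf = Buffer.from([0xbe, 0xba, 0xfe, 0xca, 0xef, 0xbe, 0xad, 0xde]);
console.log(readUInt64LE(buf).toString(16)); // "deadbeefcafebabe"
```

Note the eight separate `BigInt()` conversions forced by the same-type rule.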
The README states that mixed operands are disallowed because there is no type that would encompass both arbitrary integers and double-precision floats. But bitwise operations on Numbers convert their operands to 32-bit integers, so in this case there is a "more general" type, and that is BigInt.
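To make that last point concrete (a small demonstration, not part of the original issue): Number bitwise operators coerce their operands with ToInt32, so anything beyond 32 bits is lost, while BigInt bitwise operators keep full precision.

```js
// Number bitwise operators truncate to 32-bit signed integers.
console.log(2 ** 32 | 0); // 0
console.log(2 ** 31 | 0); // -2147483648

// BigInt bitwise operators operate on arbitrary-precision integers.
console.log(2n ** 32n | 0n); // 4294967296n
console.log(2n ** 31n | 0n); // 2147483648n
```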