This method is intentionally generic; it does not require that its *this* value be an Array. Therefore it can be transferred to other kinds of objects for use as a method.
@@ -38227,25 +39581,15 @@
1. Let _index_ be 0.
1. Repeat,
1. If _array_ has a [[TypedArrayName]] internal slot, then
- 1. If IsDetachedBuffer(_array_.[[ViewedArrayBuffer]]) is *true*, throw a *TypeError* exception.
- 1. Let _len_ be _array_.[[ArrayLength]].
+ 1. Let _taRecord_ be MakeTypedArrayWithBufferWitnessRecord(_array_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_taRecord_) is *true*, throw a *TypeError* exception.
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. Else,
1. Let _len_ be ? LengthOfArrayLike(_array_).
1. If _index_ ≥ _len_, return NormalCompletion(*undefined*).
- 1. If _kind_ is ~key~, perform ? GeneratorYield(CreateIterResultObject(𝔽(_index_), *false*)).
+ 1. Let _indexNumber_ be 𝔽(_index_).
+ 1. If _kind_ is ~key~, then
+ 1. Let _result_ be _indexNumber_.
1. Else,
- 1. Let _elementKey_ be ! ToString(𝔽(_index_)).
+ 1. Let _elementKey_ be ! ToString(_indexNumber_).
1. Let _elementValue_ be ? Get(_array_, _elementKey_).
- 1. If _kind_ is ~value~, perform ? GeneratorYield(CreateIterResultObject(_elementValue_, *false*)).
+ 1. If _kind_ is ~value~, then
+ 1. Let _result_ be _elementValue_.
1. Else,
1. Assert: _kind_ is ~key+value~.
- 1. Let _result_ be CreateArrayFromList(« 𝔽(_index_), _elementValue_ »).
- 1. Perform ? GeneratorYield(CreateIterResultObject(_result_, *false*)).
+ 1. Let _result_ be CreateArrayFromList(« _indexNumber_, _elementValue_ »).
+ 1. Perform ? GeneratorYield(CreateIterResultObject(_result_, *false*)).
1. Set _index_ to _index_ + 1.
1. Return CreateIteratorFromClosure(_closure_, *"%ArrayIteratorPrototype%"*, %ArrayIteratorPrototype%).
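The ~key~, ~value~, and ~key+value~ iteration kinds in the closure above are observable through the `keys`, `values`, and `entries` methods. A minimal sketch of the observable behaviour (standard semantics, runnable in any modern engine):

```javascript
// Each iteration kind yields a different _result_ per index:
const arr = ["a", "b"];

const keys = [...arr.keys()];       // ~key~: the indices
const values = [...arr.values()];   // ~value~: the elements
const entries = [...arr.entries()]; // ~key+value~: [index, element] pairs
```

Here `entries` produces the two-element arrays built by CreateArrayFromList in the ~key+value~ branch.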
@@ -39251,7 +40743,7 @@ TypedArray Objects
%Int8Array%
- ~Int8~
+ ~int8~
|
1
@@ -39269,7 +40761,7 @@ TypedArray Objects
%Uint8Array%
|
- ~Uint8~
+ ~uint8~
|
1
@@ -39287,7 +40779,7 @@ TypedArray Objects
%Uint8ClampedArray%
|
- ~Uint8C~
+ ~uint8clamped~
|
1
@@ -39305,7 +40797,7 @@ TypedArray Objects
%Int16Array%
|
- ~Int16~
+ ~int16~
|
2
@@ -39323,7 +40815,7 @@ TypedArray Objects
%Uint16Array%
|
- ~Uint16~
+ ~uint16~
|
2
@@ -39341,7 +40833,7 @@ TypedArray Objects
%Int32Array%
|
- ~Int32~
+ ~int32~
|
4
@@ -39359,7 +40851,7 @@ TypedArray Objects
%Uint32Array%
|
- ~Uint32~
+ ~uint32~
|
4
@@ -39377,7 +40869,7 @@ TypedArray Objects
%BigInt64Array%
|
- ~BigInt64~
+ ~bigint64~
|
8
@@ -39395,7 +40887,7 @@ TypedArray Objects
%BigUint64Array%
|
- ~BigUint64~
+ ~biguint64~
|
8
@@ -39413,7 +40905,7 @@ TypedArray Objects
%Float32Array%
|
- ~Float32~
+ ~float32~
|
4
@@ -39430,7 +40922,7 @@ TypedArray Objects
%Float64Array%
|
- ~Float64~
+ ~float64~
|
8
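The element sizes in the table above are exposed on each constructor (and its instances) as `BYTES_PER_ELEMENT`; a quick check:

```javascript
// Element sizes from the TypedArray table, observable at runtime:
const sizes = [
  Int8Array.BYTES_PER_ELEMENT,     // 1
  Uint16Array.BYTES_PER_ELEMENT,   // 2
  Float32Array.BYTES_PER_ELEMENT,  // 4
  BigInt64Array.BYTES_PER_ELEMENT, // 8
];
```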
@@ -39481,15 +40973,16 @@ %TypedArray%.from ( _source_ [ , _mapfn_ [ , _thisArg_ ] ] )
1. Let _C_ be the *this* value.
1. If IsConstructor(_C_) is *false*, throw a *TypeError* exception.
- 1. If _mapfn_ is *undefined*, let _mapping_ be *false*.
+ 1. If _mapfn_ is *undefined*, then
+ 1. Let _mapping_ be *false*.
1. Else,
1. If IsCallable(_mapfn_) is *false*, throw a *TypeError* exception.
1. Let _mapping_ be *true*.
1. Let _usingIterator_ be ? GetMethod(_source_, @@iterator).
1. If _usingIterator_ is not *undefined*, then
- 1. Let _values_ be ? IterableToList(_source_, _usingIterator_).
+ 1. Let _values_ be ? IteratorToList(? GetIteratorFromMethod(_source_, _usingIterator_)).
1. Let _len_ be the number of elements in _values_.
- 1. Let _targetObj_ be ? TypedArrayCreate(_C_, « 𝔽(_len_) »).
+ 1. Let _targetObj_ be ? TypedArrayCreateFromConstructor(_C_, « 𝔽(_len_) »).
1. Let _k_ be 0.
1. Repeat, while _k_ < _len_,
1. Let _Pk_ be ! ToString(𝔽(_k_)).
@@ -39497,7 +40990,8 @@ %TypedArray%.from ( _source_ [ , _mapfn_ [ , _thisArg_ ] ] )
1. Remove the first element from _values_.
1. If _mapping_ is *true*, then
1. Let _mappedValue_ be ? Call(_mapfn_, _thisArg_, « _kValue_, 𝔽(_k_) »).
- 1. Else, let _mappedValue_ be _kValue_.
+ 1. Else,
+ 1. Let _mappedValue_ be _kValue_.
1. Perform ? Set(_targetObj_, _Pk_, _mappedValue_, *true*).
1. Set _k_ to _k_ + 1.
1. Assert: _values_ is now an empty List.
@@ -39505,14 +40999,15 @@ %TypedArray%.from ( _source_ [ , _mapfn_ [ , _thisArg_ ] ] )
1. NOTE: _source_ is not an Iterable so assume it is already an array-like object.
1. Let _arrayLike_ be ! ToObject(_source_).
1. Let _len_ be ? LengthOfArrayLike(_arrayLike_).
- 1. Let _targetObj_ be ? TypedArrayCreate(_C_, « 𝔽(_len_) »).
+ 1. Let _targetObj_ be ? TypedArrayCreateFromConstructor(_C_, « 𝔽(_len_) »).
1. Let _k_ be 0.
1. Repeat, while _k_ < _len_,
1. Let _Pk_ be ! ToString(𝔽(_k_)).
1. Let _kValue_ be ? Get(_arrayLike_, _Pk_).
1. If _mapping_ is *true*, then
1. Let _mappedValue_ be ? Call(_mapfn_, _thisArg_, « _kValue_, 𝔽(_k_) »).
- 1. Else, let _mappedValue_ be _kValue_.
+ 1. Else,
+ 1. Let _mappedValue_ be _kValue_.
1. Perform ? Set(_targetObj_, _Pk_, _mappedValue_, *true*).
1. Set _k_ to _k_ + 1.
1. Return _targetObj_.
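Both paths of the algorithm above — the iterator path and the array-like path — are easy to exercise directly, including the optional `mapfn`:

```javascript
// Iterator path: the source has an @@iterator method.
const fromIterable = Float64Array.from([1, 2, 3], x => x * 2);
// fromIterable is Float64Array [2, 4, 6]

// Array-like path: no @@iterator, only a length property.
const fromArrayLike = Uint8Array.from({ length: 3 }, (_, i) => i);
// fromArrayLike is Uint8Array [0, 1, 2]
```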
@@ -39526,7 +41021,7 @@ %TypedArray%.of ( ..._items_ )
1. Let _len_ be the number of elements in _items_.
1. Let _C_ be the *this* value.
1. If IsConstructor(_C_) is *false*, throw a *TypeError* exception.
- 1. Let _newObj_ be ? TypedArrayCreate(_C_, « 𝔽(_len_) »).
+ 1. Let _newObj_ be ? TypedArrayCreateFromConstructor(_C_, « 𝔽(_len_) »).
1. Let _k_ be 0.
1. Repeat, while _k_ < _len_,
1. Let _kValue_ be _items_[_k_].
@@ -39570,8 +41065,8 @@ Properties of the %TypedArray% Prototype Object
%TypedArray%.prototype.at ( _index_ )
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. Let _relativeIndex_ be ? ToIntegerOrInfinity(_index_).
1. If _relativeIndex_ ≥ 0, then
1. Let _k_ be _relativeIndex_.
@@ -39601,9 +41096,8 @@ get %TypedArray%.prototype.byteLength
1. Let _O_ be the *this* value.
1. Perform ? RequireInternalSlot(_O_, [[TypedArrayName]]).
1. Assert: _O_ has a [[ViewedArrayBuffer]] internal slot.
- 1. Let _buffer_ be _O_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_buffer_) is *true*, return *+0*𝔽.
- 1. Let _size_ be _O_.[[ByteLength]].
+ 1. Let _taRecord_ be MakeTypedArrayWithBufferWitnessRecord(_O_, ~seq-cst~).
+ 1. Let _size_ be TypedArrayByteLength(_taRecord_).
1. Return 𝔽(_size_).
@@ -39615,8 +41109,8 @@ get %TypedArray%.prototype.byteOffset
1. Let _O_ be the *this* value.
1. Perform ? RequireInternalSlot(_O_, [[TypedArrayName]]).
1. Assert: _O_ has a [[ViewedArrayBuffer]] internal slot.
- 1. Let _buffer_ be _O_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_buffer_) is *true*, return *+0*𝔽.
+ 1. Let _taRecord_ be MakeTypedArrayWithBufferWitnessRecord(_O_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_taRecord_) is *true*, return *+0*𝔽.
1. Let _offset_ be _O_.[[ByteOffset]].
1. Return 𝔽(_offset_).
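The `byteLength` and `byteOffset` accessors describe the view, not the underlying buffer, as a short example shows:

```javascript
// A 3-element Int16 view starting at byte 4 of a 16-byte buffer:
const buf = new ArrayBuffer(16);
const view = new Int16Array(buf, 4, 3);

const byteOffset = view.byteOffset; // 4 (the [[ByteOffset]] slot)
const byteLength = view.byteLength; // 6 (3 elements × 2 bytes each)
```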
@@ -39633,29 +41127,32 @@ %TypedArray%.prototype.copyWithin ( _target_, _start_ [ , _end_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. Let _relativeTarget_ be ? ToIntegerOrInfinity(_target_).
- 1. If _relativeTarget_ is -∞, let _to_ be 0.
- 1. Else if _relativeTarget_ < 0, let _to_ be max(_len_ + _relativeTarget_, 0).
- 1. Else, let _to_ be min(_relativeTarget_, _len_).
+ 1. If _relativeTarget_ = -∞, let _targetIndex_ be 0.
+ 1. Else if _relativeTarget_ < 0, let _targetIndex_ be max(_len_ + _relativeTarget_, 0).
+ 1. Else, let _targetIndex_ be min(_relativeTarget_, _len_).
1. Let _relativeStart_ be ? ToIntegerOrInfinity(_start_).
- 1. If _relativeStart_ is -∞, let _from_ be 0.
- 1. Else if _relativeStart_ < 0, let _from_ be max(_len_ + _relativeStart_, 0).
- 1. Else, let _from_ be min(_relativeStart_, _len_).
+ 1. If _relativeStart_ = -∞, let _startIndex_ be 0.
+ 1. Else if _relativeStart_ < 0, let _startIndex_ be max(_len_ + _relativeStart_, 0).
+ 1. Else, let _startIndex_ be min(_relativeStart_, _len_).
1. If _end_ is *undefined*, let _relativeEnd_ be _len_; else let _relativeEnd_ be ? ToIntegerOrInfinity(_end_).
- 1. If _relativeEnd_ is -∞, let _final_ be 0.
- 1. Else if _relativeEnd_ < 0, let _final_ be max(_len_ + _relativeEnd_, 0).
- 1. Else, let _final_ be min(_relativeEnd_, _len_).
- 1. Let _count_ be min(_final_ - _from_, _len_ - _to_).
+ 1. If _relativeEnd_ = -∞, let _endIndex_ be 0.
+ 1. Else if _relativeEnd_ < 0, let _endIndex_ be max(_len_ + _relativeEnd_, 0).
+ 1. Else, let _endIndex_ be min(_relativeEnd_, _len_).
+ 1. Let _count_ be min(_endIndex_ - _startIndex_, _len_ - _targetIndex_).
1. If _count_ > 0, then
1. NOTE: The copying must be performed in a manner that preserves the bit-level encoding of the source data.
1. Let _buffer_ be _O_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
+ 1. Set _taRecord_ to MakeTypedArrayWithBufferWitnessRecord(_O_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_taRecord_) is *true*, throw a *TypeError* exception.
+ 1. Set _len_ to TypedArrayLength(_taRecord_).
1. Let _elementSize_ be TypedArrayElementSize(_O_).
1. Let _byteOffset_ be _O_.[[ByteOffset]].
- 1. Let _toByteIndex_ be _to_ × _elementSize_ + _byteOffset_.
- 1. Let _fromByteIndex_ be _from_ × _elementSize_ + _byteOffset_.
+ 1. Let _bufferByteLimit_ be (_len_ × _elementSize_) + _byteOffset_.
+ 1. Let _toByteIndex_ be (_targetIndex_ × _elementSize_) + _byteOffset_.
+ 1. Let _fromByteIndex_ be (_startIndex_ × _elementSize_) + _byteOffset_.
1. Let _countBytes_ be _count_ × _elementSize_.
1. If _fromByteIndex_ < _toByteIndex_ and _toByteIndex_ < _fromByteIndex_ + _countBytes_, then
1. Let _direction_ be -1.
@@ -39664,11 +41161,14 @@ %TypedArray%.prototype.copyWithin ( _target_, _start_ [ , _end_ ] )
1. Else,
1. Let _direction_ be 1.
1. Repeat, while _countBytes_ > 0,
- 1. Let _value_ be GetValueFromBuffer(_buffer_, _fromByteIndex_, ~Uint8~, *true*, ~Unordered~).
- 1. Perform SetValueInBuffer(_buffer_, _toByteIndex_, ~Uint8~, _value_, *true*, ~Unordered~).
- 1. Set _fromByteIndex_ to _fromByteIndex_ + _direction_.
- 1. Set _toByteIndex_ to _toByteIndex_ + _direction_.
- 1. Set _countBytes_ to _countBytes_ - 1.
+ 1. If _fromByteIndex_ < _bufferByteLimit_ and _toByteIndex_ < _bufferByteLimit_, then
+ 1. Let _value_ be GetValueFromBuffer(_buffer_, _fromByteIndex_, ~uint8~, *true*, ~unordered~).
+ 1. Perform SetValueInBuffer(_buffer_, _toByteIndex_, ~uint8~, _value_, *true*, ~unordered~).
+ 1. Set _fromByteIndex_ to _fromByteIndex_ + _direction_.
+ 1. Set _toByteIndex_ to _toByteIndex_ + _direction_.
+ 1. Set _countBytes_ to _countBytes_ - 1.
+ 1. Else,
+ 1. Set _countBytes_ to 0.
1. Return _O_.
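The byte-level copy performed by `copyWithin` (including the overlapping-range direction handling) is observable as element movement within the same view:

```javascript
// Copy elements [3, 5) to index 0, within the same buffer:
const ta = new Uint8Array([1, 2, 3, 4, 5]);
ta.copyWithin(0, 3);
// ta is now Uint8Array [4, 5, 3, 4, 5]
```

The `_bufferByteLimit_` clamp in the steps above only matters when the backing buffer shrinks during argument coercion (a length-tracking view over a resizable buffer); for a fixed-length view it is never hit.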
@@ -39678,7 +41178,7 @@ %TypedArray%.prototype.entries ( )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
+ 1. Perform ? ValidateTypedArray(_O_, ~seq-cst~).
1. Return CreateArrayIterator(_O_, ~key+value~).
@@ -39689,8 +41189,8 @@ %TypedArray%.prototype.every ( _callbackfn_ [ , _thisArg_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. If IsCallable(_callbackfn_) is *false*, throw a *TypeError* exception.
1. Let _k_ be 0.
1. Repeat, while _k_ < _len_,
@@ -39710,20 +41210,24 @@ %TypedArray%.prototype.fill ( _value_ [ , _start_ [ , _end_ ] ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
- 1. If _O_.[[ContentType]] is ~BigInt~, set _value_ to ? ToBigInt(_value_).
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. If _O_.[[ContentType]] is ~bigint~, set _value_ to ? ToBigInt(_value_).
1. Otherwise, set _value_ to ? ToNumber(_value_).
1. Let _relativeStart_ be ? ToIntegerOrInfinity(_start_).
- 1. If _relativeStart_ is -∞, let _k_ be 0.
- 1. Else if _relativeStart_ < 0, let _k_ be max(_len_ + _relativeStart_, 0).
- 1. Else, let _k_ be min(_relativeStart_, _len_).
+ 1. If _relativeStart_ = -∞, let _startIndex_ be 0.
+ 1. Else if _relativeStart_ < 0, let _startIndex_ be max(_len_ + _relativeStart_, 0).
+ 1. Else, let _startIndex_ be min(_relativeStart_, _len_).
1. If _end_ is *undefined*, let _relativeEnd_ be _len_; else let _relativeEnd_ be ? ToIntegerOrInfinity(_end_).
- 1. If _relativeEnd_ is -∞, let _final_ be 0.
- 1. Else if _relativeEnd_ < 0, let _final_ be max(_len_ + _relativeEnd_, 0).
- 1. Else, let _final_ be min(_relativeEnd_, _len_).
- 1. If IsDetachedBuffer(_O_.[[ViewedArrayBuffer]]) is *true*, throw a *TypeError* exception.
- 1. Repeat, while _k_ < _final_,
+ 1. If _relativeEnd_ = -∞, let _endIndex_ be 0.
+ 1. Else if _relativeEnd_ < 0, let _endIndex_ be max(_len_ + _relativeEnd_, 0).
+ 1. Else, let _endIndex_ be min(_relativeEnd_, _len_).
+ 1. Set _taRecord_ to MakeTypedArrayWithBufferWitnessRecord(_O_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_taRecord_) is *true*, throw a *TypeError* exception.
+ 1. Set _len_ to TypedArrayLength(_taRecord_).
+ 1. Set _endIndex_ to min(_endIndex_, _len_).
+ 1. Let _k_ be _startIndex_.
+ 1. Repeat, while _k_ < _endIndex_,
1. Let _Pk_ be ! ToString(𝔽(_k_)).
1. Perform ! Set(_O_, _Pk_, _value_, *true*).
1. Set _k_ to _k_ + 1.
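The start/end clamping in the `fill` steps above can be illustrated briefly:

```javascript
// fill writes the coerced value to indices [start, end):
const filled = new Uint8Array(5);
filled.fill(7, 1, 3); // writes indices 1 and 2 only
// filled is Uint8Array [0, 7, 7, 0, 0]
```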
@@ -39737,8 +41241,8 @@ %TypedArray%.prototype.filter ( _callbackfn_ [ , _thisArg_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. If IsCallable(_callbackfn_) is *false*, throw a *TypeError* exception.
1. Let _kept_ be a new empty List.
1. Let _captured_ be 0.
@@ -39767,17 +41271,10 @@ %TypedArray%.prototype.find ( _predicate_ [ , _thisArg_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
- 1. If IsCallable(_predicate_) is *false*, throw a *TypeError* exception.
- 1. Let _k_ be 0.
- 1. Repeat, while _k_ < _len_,
- 1. Let _Pk_ be ! ToString(𝔽(_k_)).
- 1. Let _kValue_ be ! Get(_O_, _Pk_).
- 1. Let _testResult_ be ToBoolean(? Call(_predicate_, _thisArg_, « _kValue_, 𝔽(_k_), _O_ »)).
- 1. If _testResult_ is *true*, return _kValue_.
- 1. Set _k_ to _k_ + 1.
- 1. Return *undefined*.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. Let _findRec_ be ? FindViaPredicate(_O_, _len_, ~ascending~, _predicate_, _thisArg_).
+ 1. Return _findRec_.[[Value]].
This method is not generic. The *this* value must be an object with a [[TypedArrayName]] internal slot.
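`find` returns the [[Value]] field of the FindViaPredicate record and `findIndex` (below) the [[Index]] field; both search in ~ascending~ order:

```javascript
// First element matching the predicate, scanning from index 0:
const ta = new Int8Array([5, -3, 9, -1]);
const firstNegative = ta.find(x => x < 0);           // -3
const firstNegativeIndex = ta.findIndex(x => x < 0); // 1
```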
@@ -39788,17 +41285,10 @@ %TypedArray%.prototype.findIndex ( _predicate_ [ , _thisArg_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
- 1. If IsCallable(_predicate_) is *false*, throw a *TypeError* exception.
- 1. Let _k_ be 0.
- 1. Repeat, while _k_ < _len_,
- 1. Let _Pk_ be ! ToString(𝔽(_k_)).
- 1. Let _kValue_ be ! Get(_O_, _Pk_).
- 1. Let _testResult_ be ToBoolean(? Call(_predicate_, _thisArg_, « _kValue_, 𝔽(_k_), _O_ »)).
- 1. If _testResult_ is *true*, return 𝔽(_k_).
- 1. Set _k_ to _k_ + 1.
- 1. Return *-1*𝔽.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. Let _findRec_ be ? FindViaPredicate(_O_, _len_, ~ascending~, _predicate_, _thisArg_).
+ 1. Return _findRec_.[[Index]].
This method is not generic. The *this* value must be an object with a [[TypedArrayName]] internal slot.
@@ -39809,17 +41299,10 @@ %TypedArray%.prototype.findLast ( _predicate_ [ , _thisArg_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
- 1. If IsCallable(_predicate_) is *false*, throw a *TypeError* exception.
- 1. Let _k_ be _len_ - 1.
- 1. Repeat, while _k_ ≥ 0,
- 1. Let _Pk_ be ! ToString(𝔽(_k_)).
- 1. Let _kValue_ be ! Get(_O_, _Pk_).
- 1. Let _testResult_ be ToBoolean(? Call(_predicate_, _thisArg_, « _kValue_, 𝔽(_k_), _O_ »)).
- 1. If _testResult_ is *true*, return _kValue_.
- 1. Set _k_ to _k_ - 1.
- 1. Return *undefined*.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. Let _findRec_ be ? FindViaPredicate(_O_, _len_, ~descending~, _predicate_, _thisArg_).
+ 1. Return _findRec_.[[Value]].
This method is not generic. The *this* value must be an object with a [[TypedArrayName]] internal slot.
@@ -39830,17 +41313,10 @@ %TypedArray%.prototype.findLastIndex ( _predicate_ [ , _thisArg_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
- 1. If IsCallable(_predicate_) is *false*, throw a *TypeError* exception.
- 1. Let _k_ be _len_ - 1.
- 1. Repeat, while _k_ ≥ 0,
- 1. Let _Pk_ be ! ToString(𝔽(_k_)).
- 1. Let _kValue_ be ! Get(_O_, _Pk_).
- 1. Let _testResult_ be ToBoolean(? Call(_predicate_, _thisArg_, « _kValue_, 𝔽(_k_), _O_ »)).
- 1. If _testResult_ is *true*, return 𝔽(_k_).
- 1. Set _k_ to _k_ - 1.
- 1. Return *-1*𝔽.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. Let _findRec_ be ? FindViaPredicate(_O_, _len_, ~descending~, _predicate_, _thisArg_).
+ 1. Return _findRec_.[[Index]].
This method is not generic. The *this* value must be an object with a [[TypedArrayName]] internal slot.
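`findLast` and `findLastIndex` reuse the same helper with ~descending~ order; in engines supporting ES2023:

```javascript
// Last element matching the predicate, scanning from the end:
const ta = new Int8Array([5, -3, 9, -1]);
const lastNegative = ta.findLast(x => x < 0);           // -1
const lastNegativeIndex = ta.findLastIndex(x => x < 0); // 3
```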
@@ -39851,8 +41327,8 @@ %TypedArray%.prototype.forEach ( _callbackfn_ [ , _thisArg_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. If IsCallable(_callbackfn_) is *false*, throw a *TypeError* exception.
1. Let _k_ be 0.
1. Repeat, while _k_ < _len_,
@@ -39871,13 +41347,13 @@ %TypedArray%.prototype.includes ( _searchElement_ [ , _fromIndex_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
- 1. If _len_ is 0, return *false*.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. If _len_ = 0, return *false*.
1. Let _n_ be ? ToIntegerOrInfinity(_fromIndex_).
1. Assert: If _fromIndex_ is *undefined*, then _n_ is 0.
- 1. If _n_ is +∞, return *false*.
- 1. Else if _n_ is -∞, set _n_ to 0.
+ 1. If _n_ = +∞, return *false*.
+ 1. Else if _n_ = -∞, set _n_ to 0.
1. If _n_ ≥ 0, then
1. Let _k_ be _n_.
1. Else,
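`includes` compares with SameValueZero, so unlike `indexOf` it can find NaN:

```javascript
// SameValueZero treats NaN as equal to NaN:
const floats = new Float64Array([1, NaN, 3]);
const hasNaN = floats.includes(NaN); // true
const hasTwo = floats.includes(2);   // false
```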
@@ -39898,13 +41374,13 @@ %TypedArray%.prototype.indexOf ( _searchElement_ [ , _fromIndex_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
- 1. If _len_ is 0, return *-1*𝔽.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. If _len_ = 0, return *-1*𝔽.
1. Let _n_ be ? ToIntegerOrInfinity(_fromIndex_).
1. Assert: If _fromIndex_ is *undefined*, then _n_ is 0.
- 1. If _n_ is +∞, return *-1*𝔽.
- 1. Else if _n_ is -∞, set _n_ to 0.
+ 1. If _n_ = +∞, return *-1*𝔽.
+ 1. Else if _n_ = -∞, set _n_ to 0.
1. If _n_ ≥ 0, then
1. Let _k_ be _n_.
1. Else,
@@ -39914,8 +41390,7 @@ %TypedArray%.prototype.indexOf ( _searchElement_ [ , _fromIndex_ ] )
1. Let _kPresent_ be ! HasProperty(_O_, ! ToString(𝔽(_k_))).
1. If _kPresent_ is *true*, then
1. Let _elementK_ be ! Get(_O_, ! ToString(𝔽(_k_))).
- 1. Let _same_ be IsStrictlyEqual(_searchElement_, _elementK_).
- 1. If _same_ is *true*, return 𝔽(_k_).
+ 1. If IsStrictlyEqual(_searchElement_, _elementK_) is *true*, return 𝔽(_k_).
1. Set _k_ to _k_ + 1.
1. Return *-1*𝔽.
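Because `indexOf` uses IsStrictlyEqual, NaN never matches (in contrast with `includes`):

```javascript
// Strict equality: NaN !== NaN, so indexOf cannot find it.
const floats = new Float64Array([NaN, 2, 2]);
const nanIndex = floats.indexOf(NaN); // -1
const twoIndex = floats.indexOf(2);   // 1 (first match wins)
```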
@@ -39928,8 +41403,8 @@ %TypedArray%.prototype.join ( _separator_ )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. If _separator_ is *undefined*, let _sep_ be *","*.
1. Else, let _sep_ be ? ToString(_separator_).
1. Let _R_ be the empty String.
@@ -39937,8 +41412,9 @@ %TypedArray%.prototype.join ( _separator_ )
1. Repeat, while _k_ < _len_,
1. If _k_ > 0, set _R_ to the string-concatenation of _R_ and _sep_.
1. Let _element_ be ! Get(_O_, ! ToString(𝔽(_k_))).
- 1. If _element_ is *undefined*, let _next_ be the empty String; otherwise, let _next_ be ! ToString(_element_).
- 1. Set _R_ to the string-concatenation of _R_ and _next_.
+ 1. If _element_ is not *undefined*, then
+ 1. Let _S_ be ! ToString(_element_).
+ 1. Set _R_ to the string-concatenation of _R_ and _S_.
1. Set _k_ to _k_ + 1.
1. Return _R_.
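The separator handling in `join` defaults to a comma when _separator_ is *undefined*:

```javascript
// Custom separator versus the default ",":
const ta = new Uint8Array([1, 2, 3]);
const joined = ta.join("-"); // "1-2-3"
const defaultSep = ta.join(); // "1,2,3"
```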
@@ -39950,7 +41426,7 @@ %TypedArray%.prototype.keys ( )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
+ 1. Perform ? ValidateTypedArray(_O_, ~seq-cst~).
1. Return CreateArrayIterator(_O_, ~key~).
@@ -39961,11 +41437,11 @@ %TypedArray%.prototype.lastIndexOf ( _searchElement_ [ , _fromIndex_ ] )
This method performs the following steps when called:
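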
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
- 1. If _len_ is 0, return *-1*𝔽.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. If _len_ = 0, return *-1*𝔽.
1. If _fromIndex_ is present, let _n_ be ? ToIntegerOrInfinity(_fromIndex_); else let _n_ be _len_ - 1.
- 1. If _n_ is -∞, return *-1*𝔽.
+ 1. If _n_ = -∞, return *-1*𝔽.
1. If _n_ ≥ 0, then
1. Let _k_ be min(_n_, _len_ - 1).
1. Else,
@@ -39974,8 +41450,7 @@ %TypedArray%.prototype.lastIndexOf ( _searchElement_ [ , _fromIndex_ ] )
1. Repeat, while _k_ ≥ 0,
1. Let _kPresent_ be ! HasProperty(_O_, ! ToString(𝔽(_k_))).
1. If _kPresent_ is *true*, then
1. Let _elementK_ be ! Get(_O_, ! ToString(𝔽(_k_))).
- 1. Let _same_ be IsStrictlyEqual(_searchElement_, _elementK_).
- 1. If _same_ is *true*, return 𝔽(_k_).
+ 1. If IsStrictlyEqual(_searchElement_, _elementK_) is *true*, return 𝔽(_k_).
1. Set _k_ to _k_ - 1.
1. Return *-1*𝔽.
@@ -39989,9 +41464,9 @@ get %TypedArray%.prototype.length
1. Let _O_ be the *this* value.
1. Perform ? RequireInternalSlot(_O_, [[TypedArrayName]]).
1. Assert: _O_ has [[ViewedArrayBuffer]] and [[ArrayLength]] internal slots.
- 1. Let _buffer_ be _O_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_buffer_) is *true*, return *+0*𝔽.
- 1. Let _length_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be MakeTypedArrayWithBufferWitnessRecord(_O_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_taRecord_) is *true*, return *+0*𝔽.
+ 1. Let _length_ be TypedArrayLength(_taRecord_).
1. Return 𝔽(_length_).
This function is not generic. The *this* value must be an object with a [[TypedArrayName]] internal slot.
@@ -40003,8 +41478,8 @@ %TypedArray%.prototype.map ( _callbackfn_ [ , _thisArg_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. If IsCallable(_callbackfn_) is *false*, throw a *TypeError* exception.
1. Let _A_ be ? TypedArraySpeciesCreate(_O_, « 𝔽(_len_) »).
1. Let _k_ be 0.
@@ -40025,8 +41500,8 @@ %TypedArray%.prototype.reduce ( _callbackfn_ [ , _initialValue_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. If IsCallable(_callbackfn_) is *false*, throw a *TypeError* exception.
1. If _len_ = 0 and _initialValue_ is not present, throw a *TypeError* exception.
1. Let _k_ be 0.
@@ -40053,10 +41528,10 @@ %TypedArray%.prototype.reduceRight ( _callbackfn_ [ , _initialValue_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. If IsCallable(_callbackfn_) is *false*, throw a *TypeError* exception.
- 1. If _len_ is 0 and _initialValue_ is not present, throw a *TypeError* exception.
+ 1. If _len_ = 0 and _initialValue_ is not present, throw a *TypeError* exception.
1. Let _k_ be _len_ - 1.
1. Let _accumulator_ be *undefined*.
1. If _initialValue_ is present, then
@@ -40081,8 +41556,8 @@ %TypedArray%.prototype.reverse ( )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. Let _middle_ be floor(_len_ / 2).
1. Let _lower_ be 0.
1. Repeat, while _lower_ ≠ _middle_,
@@ -40131,41 +41606,42 @@
1. Let _targetBuffer_ be _target_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_targetBuffer_) is *true*, throw a *TypeError* exception.
- 1. Let _targetLength_ be _target_.[[ArrayLength]].
+ 1. Let _targetRecord_ be MakeTypedArrayWithBufferWitnessRecord(_target_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_targetRecord_) is *true*, throw a *TypeError* exception.
+ 1. Let _targetLength_ be TypedArrayLength(_targetRecord_).
1. Let _srcBuffer_ be _source_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_srcBuffer_) is *true*, throw a *TypeError* exception.
+ 1. Let _srcRecord_ be MakeTypedArrayWithBufferWitnessRecord(_source_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_srcRecord_) is *true*, throw a *TypeError* exception.
+ 1. Let _srcLength_ be TypedArrayLength(_srcRecord_).
1. Let _targetType_ be TypedArrayElementType(_target_).
1. Let _targetElementSize_ be TypedArrayElementSize(_target_).
1. Let _targetByteOffset_ be _target_.[[ByteOffset]].
1. Let _srcType_ be TypedArrayElementType(_source_).
1. Let _srcElementSize_ be TypedArrayElementSize(_source_).
- 1. Let _srcLength_ be _source_.[[ArrayLength]].
1. Let _srcByteOffset_ be _source_.[[ByteOffset]].
- 1. If _targetOffset_ is +∞, throw a *RangeError* exception.
+ 1. If _targetOffset_ = +∞, throw a *RangeError* exception.
1. If _srcLength_ + _targetOffset_ > _targetLength_, throw a *RangeError* exception.
- 1. If _target_.[[ContentType]] ≠ _source_.[[ContentType]], throw a *TypeError* exception.
- 1. If both IsSharedArrayBuffer(_srcBuffer_) and IsSharedArrayBuffer(_targetBuffer_) are *true*, then
- 1. If _srcBuffer_.[[ArrayBufferData]] and _targetBuffer_.[[ArrayBufferData]] are the same Shared Data Block values, let _same_ be *true*; else let _same_ be *false*.
- 1. Else, let _same_ be SameValue(_srcBuffer_, _targetBuffer_).
- 1. If _same_ is *true*, then
- 1. Let _srcByteLength_ be _source_.[[ByteLength]].
+ 1. If _target_.[[ContentType]] is not _source_.[[ContentType]], throw a *TypeError* exception.
+ 1. If IsSharedArrayBuffer(_srcBuffer_) is *true*, IsSharedArrayBuffer(_targetBuffer_) is *true*, and _srcBuffer_.[[ArrayBufferData]] is _targetBuffer_.[[ArrayBufferData]], let _sameSharedArrayBuffer_ be *true*; otherwise, let _sameSharedArrayBuffer_ be *false*.
+ 1. If SameValue(_srcBuffer_, _targetBuffer_) is *true* or _sameSharedArrayBuffer_ is *true*, then
+ 1. Let _srcByteLength_ be TypedArrayByteLength(_srcRecord_).
1. Set _srcBuffer_ to ? CloneArrayBuffer(_srcBuffer_, _srcByteOffset_, _srcByteLength_).
1. Let _srcByteIndex_ be 0.
- 1. Else, let _srcByteIndex_ be _srcByteOffset_.
- 1. Let _targetByteIndex_ be _targetOffset_ × _targetElementSize_ + _targetByteOffset_.
- 1. Let _limit_ be _targetByteIndex_ + _targetElementSize_ × _srcLength_.
- 1. If _srcType_ is the same as _targetType_, then
- 1. NOTE: If _srcType_ and _targetType_ are the same, the transfer must be performed in a manner that preserves the bit-level encoding of the source data.
+ 1. Else,
+ 1. Let _srcByteIndex_ be _srcByteOffset_.
+ 1. Let _targetByteIndex_ be (_targetOffset_ × _targetElementSize_) + _targetByteOffset_.
+ 1. Let _limit_ be _targetByteIndex_ + (_targetElementSize_ × _srcLength_).
+ 1. If _srcType_ is _targetType_, then
+ 1. NOTE: The transfer must be performed in a manner that preserves the bit-level encoding of the source data.
1. Repeat, while _targetByteIndex_ < _limit_,
- 1. Let _value_ be GetValueFromBuffer(_srcBuffer_, _srcByteIndex_, ~Uint8~, *true*, ~Unordered~).
- 1. Perform SetValueInBuffer(_targetBuffer_, _targetByteIndex_, ~Uint8~, _value_, *true*, ~Unordered~).
+ 1. Let _value_ be GetValueFromBuffer(_srcBuffer_, _srcByteIndex_, ~uint8~, *true*, ~unordered~).
+ 1. Perform SetValueInBuffer(_targetBuffer_, _targetByteIndex_, ~uint8~, _value_, *true*, ~unordered~).
1. Set _srcByteIndex_ to _srcByteIndex_ + 1.
1. Set _targetByteIndex_ to _targetByteIndex_ + 1.
1. Else,
1. Repeat, while _targetByteIndex_ < _limit_,
- 1. Let _value_ be GetValueFromBuffer(_srcBuffer_, _srcByteIndex_, _srcType_, *true*, ~Unordered~).
- 1. Perform SetValueInBuffer(_targetBuffer_, _targetByteIndex_, _targetType_, _value_, *true*, ~Unordered~).
+ 1. Let _value_ be GetValueFromBuffer(_srcBuffer_, _srcByteIndex_, _srcType_, *true*, ~unordered~).
+ 1. Perform SetValueInBuffer(_targetBuffer_, _targetByteIndex_, _targetType_, _value_, *true*, ~unordered~).
1. Set _srcByteIndex_ to _srcByteIndex_ + _srcElementSize_.
1. Set _targetByteIndex_ to _targetByteIndex_ + _targetElementSize_.
1. Return ~unused~.
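SetTypedArrayFromTypedArray, as revised above, still copies byte-for-byte when the element types match; observably:

```javascript
// set with a typed-array source, writing at targetOffset 3:
const target = new Uint8Array(6);
target.set(new Uint8Array([9, 8]), 3);
// target is Uint8Array [0, 0, 0, 9, 8, 0]
```

When the element types differ, the second loop instead converts each element value through GetValueFromBuffer/SetValueInBuffer with the respective types.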
@@ -40185,19 +41661,19 @@
It sets multiple values in _target_, starting at index _targetOffset_, reading the values from _source_.
- 1. Let _targetBuffer_ be _target_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_targetBuffer_) is *true*, throw a *TypeError* exception.
- 1. Let _targetLength_ be _target_.[[ArrayLength]].
+ 1. Let _targetRecord_ be MakeTypedArrayWithBufferWitnessRecord(_target_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_targetRecord_) is *true*, throw a *TypeError* exception.
+ 1. Let _targetLength_ be TypedArrayLength(_targetRecord_).
1. Let _src_ be ? ToObject(_source_).
1. Let _srcLength_ be ? LengthOfArrayLike(_src_).
- 1. If _targetOffset_ is +∞, throw a *RangeError* exception.
+ 1. If _targetOffset_ = +∞, throw a *RangeError* exception.
1. If _srcLength_ + _targetOffset_ > _targetLength_, throw a *RangeError* exception.
1. Let _k_ be 0.
1. Repeat, while _k_ < _srcLength_,
1. Let _Pk_ be ! ToString(𝔽(_k_)).
1. Let _value_ be ? Get(_src_, _Pk_).
1. Let _targetIndex_ be 𝔽(_targetOffset_ + _k_).
- 1. Perform ? IntegerIndexedElementSet(_target_, _targetIndex_, _value_).
+ 1. Perform ? TypedArraySetElement(_target_, _targetIndex_, _value_).
1. Set _k_ to _k_ + 1.
1. Return ~unused~.
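The array-like path above coerces every source element through TypedArraySetElement, which applies the target's numeric conversion:

```javascript
// set with an array-like source: each value is converted to Int16.
const target = new Int16Array(4);
target.set([1.9, "2", 3], 1); // 1.9 truncates to 1, "2" coerces to 2
// target is Int16Array [0, 1, 2, 3]
```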
@@ -40210,44 +41686,48 @@ %TypedArray%.prototype.slice ( _start_, _end_ )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _srcArrayLength_ be TypedArrayLength(_taRecord_).
1. Let _relativeStart_ be ? ToIntegerOrInfinity(_start_).
- 1. If _relativeStart_ is -∞, let _k_ be 0.
- 1. Else if _relativeStart_ < 0, let _k_ be max(_len_ + _relativeStart_, 0).
- 1. Else, let _k_ be min(_relativeStart_, _len_).
- 1. If _end_ is *undefined*, let _relativeEnd_ be _len_; else let _relativeEnd_ be ? ToIntegerOrInfinity(_end_).
- 1. If _relativeEnd_ is -∞, let _final_ be 0.
- 1. Else if _relativeEnd_ < 0, let _final_ be max(_len_ + _relativeEnd_, 0).
- 1. Else, let _final_ be min(_relativeEnd_, _len_).
- 1. Let _count_ be max(_final_ - _k_, 0).
- 1. Let _A_ be ? TypedArraySpeciesCreate(_O_, « 𝔽(_count_) »).
- 1. If _count_ > 0, then
- 1. If IsDetachedBuffer(_O_.[[ViewedArrayBuffer]]) is *true*, throw a *TypeError* exception.
+ 1. If _relativeStart_ = -∞, let _startIndex_ be 0.
+ 1. Else if _relativeStart_ < 0, let _startIndex_ be max(_srcArrayLength_ + _relativeStart_, 0).
+ 1. Else, let _startIndex_ be min(_relativeStart_, _srcArrayLength_).
+ 1. If _end_ is *undefined*, let _relativeEnd_ be _srcArrayLength_; else let _relativeEnd_ be ? ToIntegerOrInfinity(_end_).
+ 1. If _relativeEnd_ = -∞, let _endIndex_ be 0.
+ 1. Else if _relativeEnd_ < 0, let _endIndex_ be max(_srcArrayLength_ + _relativeEnd_, 0).
+ 1. Else, let _endIndex_ be min(_relativeEnd_, _srcArrayLength_).
+ 1. Let _countBytes_ be max(_endIndex_ - _startIndex_, 0).
+ 1. Let _A_ be ? TypedArraySpeciesCreate(_O_, « 𝔽(_countBytes_) »).
+ 1. If _countBytes_ > 0, then
+ 1. Set _taRecord_ to MakeTypedArrayWithBufferWitnessRecord(_O_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_taRecord_) is *true*, throw a *TypeError* exception.
+ 1. Set _endIndex_ to min(_endIndex_, TypedArrayLength(_taRecord_)).
+ 1. Set _countBytes_ to max(_endIndex_ - _startIndex_, 0).
1. Let _srcType_ be TypedArrayElementType(_O_).
1. Let _targetType_ be TypedArrayElementType(_A_).
- 1. If _srcType_ is different from _targetType_, then
- 1. Let _n_ be 0.
- 1. Repeat, while _k_ < _final_,
- 1. Let _Pk_ be ! ToString(𝔽(_k_)).
- 1. Let _kValue_ be ! Get(_O_, _Pk_).
- 1. Perform ! Set(_A_, ! ToString(𝔽(_n_)), _kValue_, *true*).
- 1. Set _k_ to _k_ + 1.
- 1. Set _n_ to _n_ + 1.
- 1. Else,
+ 1. If _srcType_ is _targetType_, then
+ 1. NOTE: The transfer must be performed in a manner that preserves the bit-level encoding of the source data.
1. Let _srcBuffer_ be _O_.[[ViewedArrayBuffer]].
1. Let _targetBuffer_ be _A_.[[ViewedArrayBuffer]].
1. Let _elementSize_ be TypedArrayElementSize(_O_).
- 1. NOTE: If _srcType_ and _targetType_ are the same, the transfer must be performed in a manner that preserves the bit-level encoding of the source data.
1. Let _srcByteOffset_ be _O_.[[ByteOffset]].
- 1. Let _srcByteIndex_ be (_k_ × _elementSize_) + _srcByteOffset_.
+ 1. Let _srcByteIndex_ be (_startIndex_ × _elementSize_) + _srcByteOffset_.
1. Let _targetByteIndex_ be _A_.[[ByteOffset]].
- 1. Let _limit_ be _targetByteIndex_ + _count_ × _elementSize_.
- 1. Repeat, while _targetByteIndex_ < _limit_,
- 1. Let _value_ be GetValueFromBuffer(_srcBuffer_, _srcByteIndex_, ~Uint8~, *true*, ~Unordered~).
- 1. Perform SetValueInBuffer(_targetBuffer_, _targetByteIndex_, ~Uint8~, _value_, *true*, ~Unordered~).
+ 1. Let _endByteIndex_ be _targetByteIndex_ + (_countBytes_ × _elementSize_).
+ 1. Repeat, while _targetByteIndex_ < _endByteIndex_,
+ 1. Let _value_ be GetValueFromBuffer(_srcBuffer_, _srcByteIndex_, ~uint8~, *true*, ~unordered~).
+ 1. Perform SetValueInBuffer(_targetBuffer_, _targetByteIndex_, ~uint8~, _value_, *true*, ~unordered~).
1. Set _srcByteIndex_ to _srcByteIndex_ + 1.
1. Set _targetByteIndex_ to _targetByteIndex_ + 1.
+ 1. Else,
+ 1. Let _n_ be 0.
+ 1. Let _k_ be _startIndex_.
+ 1. Repeat, while _k_ < _endIndex_,
+ 1. Let _Pk_ be ! ToString(𝔽(_k_)).
+ 1. Let _kValue_ be ! Get(_O_, _Pk_).
+ 1. Perform ! Set(_A_, ! ToString(𝔽(_n_)), _kValue_, *true*).
+ 1. Set _k_ to _k_ + 1.
+ 1. Set _n_ to _n_ + 1.
1. Return _A_.
This method is not generic. The *this* value must be an object with a [[TypedArrayName]] internal slot.
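As a JavaScript-level illustration of the algorithm above (not the spec steps themselves): `slice` copies the selected range into a newly created typed array with its own backing storage, and negative indices count from the end as in `Array.prototype.slice`.

```javascript
const src = new Uint8Array([1, 2, 3, 4, 5]);
const copy = src.slice(1, -1); // startIndex 1, endIndex 4
console.log(Array.from(copy)); // [2, 3, 4]

// The result is a copy, not a view: mutating it leaves the source intact.
copy[0] = 99;
console.log(src[1]); // 2
```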
@@ -40259,8 +41739,8 @@ %TypedArray%.prototype.some ( _callbackfn_ [ , _thisArg_ ] )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
- 1. Let _len_ be _O_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. If IsCallable(_callbackfn_) is *false*, throw a *TypeError* exception.
1. Let _k_ be 0.
1. Repeat, while _k_ < _len_,
@@ -40276,59 +41756,59 @@ %TypedArray%.prototype.some ( _callbackfn_ [ , _thisArg_ ] )
%TypedArray%.prototype.sort ( _comparefn_ )
- This is a distinct method that, except as described below, implements the same requirements as those of `Array.prototype.sort` as defined in . The implementation of this method may be optimized with the knowledge that the *this* value is an object that has a fixed length and whose integer-indexed properties are not sparse.
+ This is a distinct method that, except as described below, implements the same requirements as those of `Array.prototype.sort` as defined in . The implementation of this method may be optimized with the knowledge that the *this* value is an object that has a fixed length and whose integer-indexed properties are not sparse.
This method is not generic. The *this* value must be an object with a [[TypedArrayName]] internal slot.
It performs the following steps when called:
1. If _comparefn_ is not *undefined* and IsCallable(_comparefn_) is *false*, throw a *TypeError* exception.
1. Let _obj_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_obj_).
- 1. Let _len_ be _obj_.[[ArrayLength]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_obj_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
1. NOTE: The following closure performs a numeric comparison rather than the string comparison used in .
1. Let _SortCompare_ be a new Abstract Closure with parameters (_x_, _y_) that captures _comparefn_ and performs the following steps when called:
- 1. Assert: _x_ is a Number and _y_ is a Number, or _x_ is a BigInt and _y_ is a BigInt.
- 1. If _comparefn_ is not *undefined*, then
- 1. Let _v_ be ? ToNumber(? Call(_comparefn_, *undefined*, « _x_, _y_ »)).
- 1. If _v_ is *NaN*, return *+0*𝔽.
- 1. Return _v_.
- 1. If _x_ and _y_ are both *NaN*, return *+0*𝔽.
- 1. If _x_ is *NaN*, return *1*𝔽.
- 1. If _y_ is *NaN*, return *-1*𝔽.
- 1. If _x_ < _y_, return *-1*𝔽.
- 1. If _x_ > _y_, return *1*𝔽.
- 1. If _x_ is *-0*𝔽 and _y_ is *+0*𝔽, return *-1*𝔽.
- 1. If _x_ is *+0*𝔽 and _y_ is *-0*𝔽, return *1*𝔽.
- 1. Return *+0*𝔽.
- 1. Return ? SortIndexedProperties(_obj_, _len_, _SortCompare_).
+ 1. Return ? CompareTypedArrayElements(_x_, _y_, _comparefn_).
+ 1. Let _sortedList_ be ? SortIndexedProperties(_obj_, _len_, _SortCompare_, ~read-through-holes~).
+ 1. Let _j_ be 0.
+ 1. Repeat, while _j_ < _len_,
+ 1. Perform ! Set(_obj_, ! ToString(𝔽(_j_)), _sortedList_[_j_], *true*).
+ 1. Set _j_ to _j_ + 1.
+ 1. Return _obj_.
- Because *NaN* always compares greater than any other value, *NaN* property values always sort to the end of the result when _comparefn_ is not provided.
+ Because *NaN* always compares greater than any other value (see CompareTypedArrayElements), *NaN* property values always sort to the end of the result when _comparefn_ is not provided.
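The NaN-sorts-last behaviour described in the note is observable directly (illustrative sketch; `sort` mutates the receiver in place and returns it):

```javascript
// Default comparator is numeric (CompareTypedArrayElements), not the
// string comparison of Array.prototype.sort, and NaN compares greater
// than every other value.
const a = new Float64Array([3, NaN, 1, 2]);
a.sort();
console.log(Array.from(a)); // [1, 2, 3, NaN]
```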
- %TypedArray%.prototype.subarray ( _begin_, _end_ )
- This method returns a new _TypedArray_ whose element type is the same as this _TypedArray_ and whose ArrayBuffer is the same as the ArrayBuffer of this _TypedArray_, referencing the elements in the interval from _begin_ (inclusive) to _end_ (exclusive). If either _begin_ or _end_ is negative, it refers to an index from the end of the array, as opposed to from the beginning.
+ %TypedArray%.prototype.subarray ( _start_, _end_ )
+ This method returns a new _TypedArray_ whose element type is the element type of this _TypedArray_ and whose ArrayBuffer is the ArrayBuffer of this _TypedArray_, referencing the elements in the interval from _start_ (inclusive) to _end_ (exclusive). If either _start_ or _end_ is negative, it refers to an index from the end of the array, as opposed to from the beginning.
It performs the following steps when called:
1. Let _O_ be the *this* value.
1. Perform ? RequireInternalSlot(_O_, [[TypedArrayName]]).
1. Assert: _O_ has a [[ViewedArrayBuffer]] internal slot.
1. Let _buffer_ be _O_.[[ViewedArrayBuffer]].
- 1. Let _srcLength_ be _O_.[[ArrayLength]].
- 1. Let _relativeBegin_ be ? ToIntegerOrInfinity(_begin_).
- 1. If _relativeBegin_ is -∞, let _beginIndex_ be 0.
- 1. Else if _relativeBegin_ < 0, let _beginIndex_ be max(_srcLength_ + _relativeBegin_, 0).
- 1. Else, let _beginIndex_ be min(_relativeBegin_, _srcLength_).
- 1. If _end_ is *undefined*, let _relativeEnd_ be _srcLength_; else let _relativeEnd_ be ? ToIntegerOrInfinity(_end_).
- 1. If _relativeEnd_ is -∞, let _endIndex_ be 0.
- 1. Else if _relativeEnd_ < 0, let _endIndex_ be max(_srcLength_ + _relativeEnd_, 0).
- 1. Else, let _endIndex_ be min(_relativeEnd_, _srcLength_).
- 1. Let _newLength_ be max(_endIndex_ - _beginIndex_, 0).
+ 1. Let _srcRecord_ be MakeTypedArrayWithBufferWitnessRecord(_O_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_srcRecord_) is *true*, then
+ 1. Let _srcLength_ be 0.
+ 1. Else,
+ 1. Let _srcLength_ be TypedArrayLength(_srcRecord_).
+ 1. Let _relativeStart_ be ? ToIntegerOrInfinity(_start_).
+ 1. If _relativeStart_ = -∞, let _startIndex_ be 0.
+ 1. Else if _relativeStart_ < 0, let _startIndex_ be max(_srcLength_ + _relativeStart_, 0).
+ 1. Else, let _startIndex_ be min(_relativeStart_, _srcLength_).
1. Let _elementSize_ be TypedArrayElementSize(_O_).
1. Let _srcByteOffset_ be _O_.[[ByteOffset]].
- 1. Let _beginByteOffset_ be _srcByteOffset_ + _beginIndex_ × _elementSize_.
- 1. Let _argumentsList_ be « _buffer_, 𝔽(_beginByteOffset_), 𝔽(_newLength_) ».
+ 1. Let _beginByteOffset_ be _srcByteOffset_ + (_startIndex_ × _elementSize_).
+ 1. If _O_.[[ArrayLength]] is ~auto~ and _end_ is *undefined*, then
+ 1. Let _argumentsList_ be « _buffer_, 𝔽(_beginByteOffset_) ».
+ 1. Else,
+ 1. If _end_ is *undefined*, let _relativeEnd_ be _srcLength_; else let _relativeEnd_ be ? ToIntegerOrInfinity(_end_).
+ 1. If _relativeEnd_ = -∞, let _endIndex_ be 0.
+ 1. Else if _relativeEnd_ < 0, let _endIndex_ be max(_srcLength_ + _relativeEnd_, 0).
+ 1. Else, let _endIndex_ be min(_relativeEnd_, _srcLength_).
+ 1. Let _newLength_ be max(_endIndex_ - _startIndex_, 0).
+ 1. Let _argumentsList_ be « _buffer_, 𝔽(_beginByteOffset_), 𝔽(_newLength_) ».
1. Return ? TypedArraySpeciesCreate(_O_, _argumentsList_).
This method is not generic. The *this* value must be an object with a [[TypedArrayName]] internal slot.
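A short JavaScript sketch of what the algorithm above produces: unlike `slice`, `subarray` returns a view over the *same* ArrayBuffer, so writes through the view are visible in the original, and the new view's [[ByteOffset]] reflects the element offset times the element size.

```javascript
const base = new Int16Array([1, 2, 3, 4]);
const view = base.subarray(1, 3); // startIndex 1, endIndex 3

// Shared buffer: a write through the view is seen by the original.
view[0] = 42;
console.log(base[1]); // 42

// beginByteOffset = srcByteOffset + startIndex × elementSize = 0 + 1 × 2
console.log(view.byteOffset); // 2
console.log(view.length);     // 2
```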
@@ -40336,13 +41816,53 @@ %TypedArray%.prototype.subarray ( _begin_, _end_ )
%TypedArray%.prototype.toLocaleString ( [ _reserved1_ [ , _reserved2_ ] ] )
- This is a distinct method that implements the same algorithm as `Array.prototype.toLocaleString` as defined in except that the *this* value's [[ArrayLength]] internal slot is accessed in place of performing a [[Get]] of *"length"*. The implementation of the algorithm may be optimized with the knowledge that the *this* value is an object that has a fixed length and whose integer-indexed properties are not sparse. However, such optimization must not introduce any observable changes in the specified behaviour of the algorithm.
- This method is not generic. ValidateTypedArray is applied to the *this* value prior to evaluating the algorithm. If its result is an abrupt completion that exception is thrown instead of evaluating the algorithm.
+ This is a distinct method that implements the same algorithm as `Array.prototype.toLocaleString` as defined in except that TypedArrayLength is called in place of performing a [[Get]] of *"length"*. The implementation of the algorithm may be optimized with the knowledge that the *this* value is an object that has a fixed length when the underlying buffer is not resizable and whose integer-indexed properties are not sparse. However, such optimization must not introduce any observable changes in the specified behaviour of the algorithm.
+ This method is not generic. ValidateTypedArray is called with the *this* value and ~seq-cst~ as arguments prior to evaluating the algorithm. If its result is an abrupt completion that exception is thrown instead of evaluating the algorithm.
If the ECMAScript implementation includes the ECMA-402 Internationalization API this method is based upon the algorithm for `Array.prototype.toLocaleString` that is in the ECMA-402 specification.
+
+ %TypedArray%.prototype.toReversed ( )
+ This method performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _length_ be TypedArrayLength(_taRecord_).
+ 1. Let _A_ be ? TypedArrayCreateSameType(_O_, « 𝔽(_length_) »).
+ 1. Let _k_ be 0.
+ 1. Repeat, while _k_ < _length_,
+ 1. Let _from_ be ! ToString(𝔽(_length_ - _k_ - 1)).
+ 1. Let _Pk_ be ! ToString(𝔽(_k_)).
+ 1. Let _fromValue_ be ! Get(_O_, _from_).
+ 1. Perform ! Set(_A_, _Pk_, _fromValue_, *true*).
+ 1. Set _k_ to _k_ + 1.
+ 1. Return _A_.
+
+
+
+
+ %TypedArray%.prototype.toSorted ( _comparefn_ )
+ This method performs the following steps when called:
+
+ 1. If _comparefn_ is not *undefined* and IsCallable(_comparefn_) is *false*, throw a *TypeError* exception.
+ 1. Let _O_ be the *this* value.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. Let _A_ be ? TypedArrayCreateSameType(_O_, « 𝔽(_len_) »).
+ 1. NOTE: The following closure performs a numeric comparison rather than the string comparison used in .
+ 1. Let _SortCompare_ be a new Abstract Closure with parameters (_x_, _y_) that captures _comparefn_ and performs the following steps when called:
+ 1. Return ? CompareTypedArrayElements(_x_, _y_, _comparefn_).
+ 1. Let _sortedList_ be ? SortIndexedProperties(_O_, _len_, _SortCompare_, ~read-through-holes~).
+ 1. Let _j_ be 0.
+ 1. Repeat, while _j_ < _len_,
+ 1. Perform ! Set(_A_, ! ToString(𝔽(_j_)), _sortedList_[_j_], *true*).
+ 1. Set _j_ to _j_ + 1.
+ 1. Return _A_.
+
+
+
%TypedArray%.prototype.toString ( )
The initial value of the *"toString"* property is %Array.prototype.toString%, defined in .
@@ -40353,11 +41873,36 @@ %TypedArray%.prototype.values ( )
This method performs the following steps when called:
1. Let _O_ be the *this* value.
- 1. Perform ? ValidateTypedArray(_O_).
+ 1. Perform ? ValidateTypedArray(_O_, ~seq-cst~).
1. Return CreateArrayIterator(_O_, ~value~).
+
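The method above simply validates the receiver and hands back an Array Iterator in ~value~ kind — illustrated here from JavaScript:

```javascript
// ValidateTypedArray runs when values() is called, so an invalid receiver
// throws immediately, before the first next().
const it = new Uint8Array([7, 8]).values();
console.log(it.next().value); // 7
console.log(it.next().value); // 8
console.log(it.next().done);  // true
```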
+ %TypedArray%.prototype.with ( _index_, _value_ )
+ This method performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_O_, ~seq-cst~).
+ 1. Let _len_ be TypedArrayLength(_taRecord_).
+ 1. Let _relativeIndex_ be ? ToIntegerOrInfinity(_index_).
+ 1. If _relativeIndex_ ≥ 0, let _actualIndex_ be _relativeIndex_.
+ 1. Else, let _actualIndex_ be _len_ + _relativeIndex_.
+ 1. If _O_.[[ContentType]] is ~bigint~, let _numericValue_ be ? ToBigInt(_value_).
+ 1. Else, let _numericValue_ be ? ToNumber(_value_).
+ 1. If IsValidIntegerIndex(_O_, 𝔽(_actualIndex_)) is *false*, throw a *RangeError* exception.
+ 1. Let _A_ be ? TypedArrayCreateSameType(_O_, « 𝔽(_len_) »).
+ 1. Let _k_ be 0.
+ 1. Repeat, while _k_ < _len_,
+ 1. Let _Pk_ be ! ToString(𝔽(_k_)).
+ 1. If _k_ = _actualIndex_, let _fromValue_ be _numericValue_.
+ 1. Else, let _fromValue_ be ! Get(_O_, _Pk_).
+ 1. Perform ! Set(_A_, _Pk_, _fromValue_, *true*).
+ 1. Set _k_ to _k_ + 1.
+ 1. Return _A_.
+
+
+
%TypedArray%.prototype [ @@iterator ] ( )
The initial value of the @@iterator property is %TypedArray.prototype.values%, defined in .
@@ -40394,18 +41939,18 @@
It is used to specify the creation of a new TypedArray using a constructor function that is derived from _exemplar_. Unlike ArraySpeciesCreate, which can create non-Array objects through the use of @@species, this operation enforces that the constructor function creates an actual TypedArray.
- 1. Let _defaultConstructor_ be the intrinsic object listed in column one of for _exemplar_.[[TypedArrayName]].
+ 1. Let _defaultConstructor_ be the intrinsic object associated with the constructor name _exemplar_.[[TypedArrayName]] in .
1. Let _constructor_ be ? SpeciesConstructor(_exemplar_, _defaultConstructor_).
- 1. Let _result_ be ? TypedArrayCreate(_constructor_, _argumentList_).
+ 1. Let _result_ be ? TypedArrayCreateFromConstructor(_constructor_, _argumentList_).
1. Assert: _result_ has [[TypedArrayName]] and [[ContentType]] internal slots.
- 1. If _result_.[[ContentType]] ≠ _exemplar_.[[ContentType]], throw a *TypeError* exception.
+ 1. If _result_.[[ContentType]] is not _exemplar_.[[ContentType]], throw a *TypeError* exception.
1. Return _result_.
-
+
- TypedArrayCreate (
+ TypedArrayCreateFromConstructor (
_constructor_: a constructor,
_argumentList_: a List of ECMAScript language values,
): either a normal completion containing a TypedArray or a throw completion
@@ -40416,27 +41961,50 @@
1. Let _newTypedArray_ be ? Construct(_constructor_, _argumentList_).
- 1. Perform ? ValidateTypedArray(_newTypedArray_).
- 1. If _argumentList_ is a List of a single Number, then
- 1. If _newTypedArray_.[[ArrayLength]] < ℝ(_argumentList_[0]), throw a *TypeError* exception.
+ 1. Let _taRecord_ be ? ValidateTypedArray(_newTypedArray_, ~seq-cst~).
+ 1. If the number of elements in _argumentList_ is 1 and _argumentList_[0] is a Number, then
+ 1. If IsTypedArrayOutOfBounds(_taRecord_) is *true*, throw a *TypeError* exception.
+ 1. Let _length_ be TypedArrayLength(_taRecord_).
+ 1. If _length_ < ℝ(_argumentList_[0]), throw a *TypeError* exception.
1. Return _newTypedArray_.
+
+
+ TypedArrayCreateSameType (
+ _exemplar_: a TypedArray,
+ _argumentList_: a List of ECMAScript language values,
+ ): either a normal completion containing a TypedArray or a throw completion
+
+
+
+ 1. Let _constructor_ be the intrinsic object associated with the constructor name _exemplar_.[[TypedArrayName]] in .
+ 1. Let _result_ be ? TypedArrayCreateFromConstructor(_constructor_, _argumentList_).
+ 1. Assert: _result_ has [[TypedArrayName]] and [[ContentType]] internal slots.
+ 1. Assert: _result_.[[ContentType]] is _exemplar_.[[ContentType]].
+ 1. Return _result_.
+
+
+
ValidateTypedArray (
_O_: an ECMAScript language value,
- ): either a normal completion containing ~unused~ or a throw completion
+ _order_: ~seq-cst~ or ~unordered~,
+ ): either a normal completion containing a TypedArray With Buffer Witness Record or a throw completion
1. Perform ? RequireInternalSlot(_O_, [[TypedArrayName]]).
1. Assert: _O_ has a [[ViewedArrayBuffer]] internal slot.
- 1. Let _buffer_ be _O_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
- 1. Return ~unused~.
+ 1. Let _taRecord_ be MakeTypedArrayWithBufferWitnessRecord(_O_, _order_).
+ 1. If IsTypedArrayOutOfBounds(_taRecord_) is *true*, throw a *TypeError* exception.
+ 1. Return _taRecord_.
@@ -40465,6 +42033,36 @@
1. Return the Element Type value specified in for _O_.[[TypedArrayName]].
+
+
+
+ CompareTypedArrayElements (
+ _x_: a Number or a BigInt,
+ _y_: a Number or a BigInt,
+ _comparefn_: a function object or *undefined*,
+ ): either a normal completion containing a Number or an abrupt completion
+
+
+
+ 1. Assert: _x_ is a Number and _y_ is a Number, or _x_ is a BigInt and _y_ is a BigInt.
+ 1. If _comparefn_ is not *undefined*, then
+ 1. Let _v_ be ? ToNumber(? Call(_comparefn_, *undefined*, « _x_, _y_ »)).
+ 1. If _v_ is *NaN*, return *+0*𝔽.
+ 1. Return _v_.
+ 1. If _x_ and _y_ are both *NaN*, return *+0*𝔽.
+ 1. If _x_ is *NaN*, return *1*𝔽.
+ 1. If _y_ is *NaN*, return *-1*𝔽.
+ 1. If _x_ < _y_, return *-1*𝔽.
+ 1. If _x_ > _y_, return *1*𝔽.
+ 1. If _x_ is *-0*𝔽 and _y_ is *+0*𝔽, return *-1*𝔽.
+ 1. If _x_ is *+0*𝔽 and _y_ is *-0*𝔽, return *1*𝔽.
+ 1. Return *+0*𝔽.
+
+
+ This performs a numeric comparison rather than the string comparison used in .
+
+
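The ordering defined by CompareTypedArrayElements is observable through the default typed-array sort: numeric order, with *-0*𝔽 placed before *+0*𝔽 and *NaN* placed last. An illustrative sketch:

```javascript
const arr = new Float64Array([0, NaN, -0, -1]);
arr.sort(); // default comparator → [-1, -0, +0, NaN]

// Only position 1 holds -0 after sorting.
console.log(Array.from(arr, v => Object.is(v, -0))); // [false, true, false, false]
console.log(Number.isNaN(arr[3])); // true
```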
@@ -40475,7 +42073,6 @@ The _TypedArray_ Constructors
is a function whose behaviour differs based upon the number and types of its arguments. The actual behaviour of a call of _TypedArray_ depends upon the number and kind of arguments that are passed to it.
is not intended to be called as a function and will throw an exception when called in that manner.
may be used as the value of an `extends` clause of a class definition. Subclass constructors that intend to inherit the specified _TypedArray_ behaviour must include a `super` call to the _TypedArray_ constructor to create and initialize the subclass instance with the internal state necessary to support the %TypedArray%`.prototype` built-in methods.
- has a *"length"* property whose value is *3*𝔽.
@@ -40502,7 +42099,7 @@ _TypedArray_ ( ..._args_ )
1. Assert: _firstArgument_ is an Object and _firstArgument_ does not have either a [[TypedArrayName]] or an [[ArrayBufferData]] internal slot.
1. Let _usingIterator_ be ? GetMethod(_firstArgument_, @@iterator).
1. If _usingIterator_ is not *undefined*, then
- 1. Let _values_ be ? IterableToList(_firstArgument_, _usingIterator_).
+ 1. Let _values_ be ? IteratorToList(? GetIteratorFromMethod(_firstArgument_, _usingIterator_)).
1. Perform ? InitializeTypedArrayFromList(_O_, _values_).
1. Else,
1. NOTE: _firstArgument_ is not an Iterable so assume it is already an array-like object.
@@ -40529,11 +42126,11 @@
1. Let _proto_ be ? GetPrototypeFromConstructor(_newTarget_, _defaultProto_).
- 1. Let _obj_ be IntegerIndexedObjectCreate(_proto_).
+ 1. Let _obj_ be TypedArrayCreate(_proto_).
1. Assert: _obj_.[[ViewedArrayBuffer]] is *undefined*.
1. Set _obj_.[[TypedArrayName]] to _constructorName_.
- 1. If _constructorName_ is *"BigInt64Array"* or *"BigUint64Array"*, set _obj_.[[ContentType]] to ~BigInt~.
- 1. Otherwise, set _obj_.[[ContentType]] to ~Number~.
+ 1. If _constructorName_ is either *"BigInt64Array"* or *"BigUint64Array"*, set _obj_.[[ContentType]] to ~bigint~.
+ 1. Otherwise, set _obj_.[[ContentType]] to ~number~.
1. If _length_ is not present, then
1. Set _obj_.[[ByteLength]] to 0.
1. Set _obj_.[[ByteOffset]] to 0.
@@ -40555,25 +42152,26 @@
1. Let _srcData_ be _srcArray_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_srcData_) is *true*, throw a *TypeError* exception.
1. Let _elementType_ be TypedArrayElementType(_O_).
1. Let _elementSize_ be TypedArrayElementSize(_O_).
1. Let _srcType_ be TypedArrayElementType(_srcArray_).
1. Let _srcElementSize_ be TypedArrayElementSize(_srcArray_).
1. Let _srcByteOffset_ be _srcArray_.[[ByteOffset]].
- 1. Let _elementLength_ be _srcArray_.[[ArrayLength]].
+ 1. Let _srcRecord_ be MakeTypedArrayWithBufferWitnessRecord(_srcArray_, ~seq-cst~).
+ 1. If IsTypedArrayOutOfBounds(_srcRecord_) is *true*, throw a *TypeError* exception.
+ 1. Let _elementLength_ be TypedArrayLength(_srcRecord_).
1. Let _byteLength_ be _elementSize_ × _elementLength_.
- 1. If _elementType_ is the same as _srcType_, then
+ 1. If _elementType_ is _srcType_, then
1. Let _data_ be ? CloneArrayBuffer(_srcData_, _srcByteOffset_, _byteLength_).
1. Else,
1. Let _data_ be ? AllocateArrayBuffer(%ArrayBuffer%, _byteLength_).
- 1. If _srcArray_.[[ContentType]] ≠ _O_.[[ContentType]], throw a *TypeError* exception.
+ 1. If _srcArray_.[[ContentType]] is not _O_.[[ContentType]], throw a *TypeError* exception.
1. Let _srcByteIndex_ be _srcByteOffset_.
1. Let _targetByteIndex_ be 0.
1. Let _count_ be _elementLength_.
1. Repeat, while _count_ > 0,
- 1. Let _value_ be GetValueFromBuffer(_srcData_, _srcByteIndex_, _srcType_, *true*, ~Unordered~).
- 1. Perform SetValueInBuffer(_data_, _targetByteIndex_, _elementType_, _value_, *true*, ~Unordered~).
+ 1. Let _value_ be GetValueFromBuffer(_srcData_, _srcByteIndex_, _srcType_, *true*, ~unordered~).
+ 1. Perform SetValueInBuffer(_data_, _targetByteIndex_, _elementType_, _value_, *true*, ~unordered~).
1. Set _srcByteIndex_ to _srcByteIndex_ + _srcElementSize_.
1. Set _targetByteIndex_ to _targetByteIndex_ + _elementSize_.
1. Set _count_ to _count_ - 1.
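When the element types differ, the conversion loop above (GetValueFromBuffer followed by SetValueInBuffer with the target's element type) is observable from JavaScript — each source element is re-encoded, here via ToInt8's truncate-toward-zero, modulo-2^8 conversion:

```javascript
const f = new Float64Array([1.9, -2.5, 300]);
const i = new Int8Array(f); // element-by-element conversion, new buffer
console.log(Array.from(i)); // [1, -2, 44]  (300 mod 256 = 44)
```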
@@ -40600,21 +42198,27 @@
1. Let _elementSize_ be TypedArrayElementSize(_O_).
1. Let _offset_ be ? ToIndex(_byteOffset_).
1. If _offset_ modulo _elementSize_ ≠ 0, throw a *RangeError* exception.
+ 1. Let _bufferIsFixedLength_ be IsFixedLengthArrayBuffer(_buffer_).
1. If _length_ is not *undefined*, then
1. Let _newLength_ be ? ToIndex(_length_).
1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
- 1. Let _bufferByteLength_ be _buffer_.[[ArrayBufferByteLength]].
- 1. If _length_ is *undefined*, then
- 1. If _bufferByteLength_ modulo _elementSize_ ≠ 0, throw a *RangeError* exception.
- 1. Let _newByteLength_ be _bufferByteLength_ - _offset_.
- 1. If _newByteLength_ < 0, throw a *RangeError* exception.
+ 1. Let _bufferByteLength_ be ArrayBufferByteLength(_buffer_, ~seq-cst~).
+ 1. If _length_ is *undefined* and _bufferIsFixedLength_ is *false*, then
+ 1. If _offset_ > _bufferByteLength_, throw a *RangeError* exception.
+ 1. Set _O_.[[ByteLength]] to ~auto~.
+ 1. Set _O_.[[ArrayLength]] to ~auto~.
1. Else,
- 1. Let _newByteLength_ be _newLength_ × _elementSize_.
- 1. If _offset_ + _newByteLength_ > _bufferByteLength_, throw a *RangeError* exception.
+ 1. If _length_ is *undefined*, then
+ 1. If _bufferByteLength_ modulo _elementSize_ ≠ 0, throw a *RangeError* exception.
+ 1. Let _newByteLength_ be _bufferByteLength_ - _offset_.
+ 1. If _newByteLength_ < 0, throw a *RangeError* exception.
+ 1. Else,
+ 1. Let _newByteLength_ be _newLength_ × _elementSize_.
+ 1. If _offset_ + _newByteLength_ > _bufferByteLength_, throw a *RangeError* exception.
+ 1. Set _O_.[[ByteLength]] to _newByteLength_.
+ 1. Set _O_.[[ArrayLength]] to _newByteLength_ / _elementSize_.
1. Set _O_.[[ViewedArrayBuffer]] to _buffer_.
- 1. Set _O_.[[ByteLength]] to _newByteLength_.
1. Set _O_.[[ByteOffset]] to _offset_.
- 1. Set _O_.[[ArrayLength]] to _newByteLength_ / _elementSize_.
1. Return ~unused~.
@@ -40696,6 +42300,7 @@ Properties of the _TypedArray_ Constructors
Each _TypedArray_ constructor:
- has a [[Prototype]] internal slot whose value is %TypedArray%.
+ - has a *"length"* property whose value is *3*𝔽.
- has a *"name"* property whose value is the String value of the constructor name specified for it in .
- has the following properties:
@@ -40730,13 +42335,13 @@ _TypedArray_.prototype.BYTES_PER_ELEMENT
_TypedArray_.prototype.constructor
- The initial value of a _TypedArray_`.prototype.constructor` is the corresponding %TypedArray% intrinsic object.
+ The initial value of the *"constructor"* property of the prototype for a given _TypedArray_ constructor is the constructor itself.
Properties of _TypedArray_ Instances
- _TypedArray_ instances are Integer-Indexed exotic objects. Each _TypedArray_ instance inherits properties from the corresponding _TypedArray_ prototype object. Each _TypedArray_ instance has the following internal slots: [[TypedArrayName]], [[ViewedArrayBuffer]], [[ByteLength]], [[ByteOffset]], and [[ArrayLength]].
+ _TypedArray_ instances are TypedArrays. Each _TypedArray_ instance inherits properties from the corresponding _TypedArray_ prototype object. Each _TypedArray_ instance has the following internal slots: [[TypedArrayName]], [[ViewedArrayBuffer]], [[ByteLength]], [[ByteOffset]], and [[ArrayLength]].
@@ -40790,17 +42395,16 @@
_adder_ will be invoked, with _target_ as the receiver.
- 1. Let _iteratorRecord_ be ? GetIterator(_iterable_).
+ 1. Let _iteratorRecord_ be ? GetIterator(_iterable_, ~sync~).
1. Repeat,
- 1. Let _next_ be ? IteratorStep(_iteratorRecord_).
- 1. If _next_ is *false*, return _target_.
- 1. Let _nextItem_ be ? IteratorValue(_next_).
- 1. If _nextItem_ is not an Object, then
+ 1. Let _next_ be ? IteratorStepValue(_iteratorRecord_).
+ 1. If _next_ is ~done~, return _target_.
+ 1. If _next_ is not an Object, then
1. Let _error_ be ThrowCompletion(a newly created *TypeError* object).
1. Return ? IteratorClose(_iteratorRecord_, _error_).
- 1. Let _k_ be Completion(Get(_nextItem_, *"0"*)).
+ 1. Let _k_ be Completion(Get(_next_, *"0"*)).
1. IfAbruptCloseIterator(_k_, _iteratorRecord_).
- 1. Let _v_ be Completion(Get(_nextItem_, *"1"*)).
+ 1. Let _v_ be Completion(Get(_next_, *"1"*)).
1. IfAbruptCloseIterator(_v_, _iteratorRecord_).
1. Let _status_ be Completion(Call(_adder_, _target_, « _k_, _v_ »)).
1. IfAbruptCloseIterator(_status_, _iteratorRecord_).
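AddEntriesFromIterable is what `new Map(iterable)` drives: each iteration result must be an object whose *"0"* and *"1"* properties supply the key and value, and a non-object entry produces a TypeError (closing the iterator). A sketch of that observable behaviour:

```javascript
const m = new Map([["a", 1], ["b", 2]]);
console.log(m.get("b")); // 2

// A primitive entry is "not an Object" at the check above → TypeError.
let threw = false;
try {
  new Map(["not-an-entry"]);
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true
```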
@@ -40819,6 +42423,25 @@ Properties of the Map Constructor
has the following properties:
+
+ Map.groupBy ( _items_, _callbackfn_ )
+
+ _callbackfn_ should be a function that accepts two arguments. `groupBy` calls _callbackfn_ once for each element in _items_, in ascending order, and constructs a new Map. Each value returned by _callbackfn_ is used as a key in the Map. For each such key, the result Map has an entry whose key is that key and whose value is an array containing all the elements for which _callbackfn_ returned that key.
+ _callbackfn_ is called with two arguments: the value of the element and the index of the element.
+ The return value of `groupBy` is a Map.
+
+ This function performs the following steps when called:
+
+ 1. Let _groups_ be ? GroupBy(_items_, _callbackfn_, ~zero~).
+ 1. Let _map_ be ! Construct(%Map%).
+ 1. For each Record { [[Key]], [[Elements]] } _g_ of _groups_, do
+ 1. Let _elements_ be CreateArrayFromList(_g_.[[Elements]]).
+ 1. Let _entry_ be the Record { [[Key]]: _g_.[[Key]], [[Value]]: _elements_ }.
+ 1. Append _entry_ to _map_.[[MapData]].
+ 1. Return _map_.
+
+
+
Map.prototype
The initial value of `Map.prototype` is the Map prototype object.
@@ -40854,8 +42477,7 @@ Map.prototype.clear ( )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[MapData]]).
- 1. Let _entries_ be the List that is _M_.[[MapData]].
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[MapData]], do
1. Set _p_.[[Key]] to ~empty~.
1. Set _p_.[[Value]] to ~empty~.
1. Return *undefined*.
@@ -40876,8 +42498,7 @@ Map.prototype.delete ( _key_ )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[MapData]]).
- 1. Let _entries_ be the List that is _M_.[[MapData]].
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[MapData]], do
1. If _p_.[[Key]] is not ~empty~ and SameValueZero(_p_.[[Key]], _key_) is *true*, then
1. Set _p_.[[Key]] to ~empty~.
1. Set _p_.[[Value]] to ~empty~.
@@ -40905,11 +42526,11 @@ Map.prototype.forEach ( _callbackfn_ [ , _thisArg_ ] )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[MapData]]).
1. If IsCallable(_callbackfn_) is *false*, throw a *TypeError* exception.
- 1. Let _entries_ be the List that is _M_.[[MapData]].
+ 1. Let _entries_ be _M_.[[MapData]].
1. Let _numEntries_ be the number of elements in _entries_.
1. Let _index_ be 0.
1. Repeat, while _index_ < _numEntries_,
- 1. Let _e_ be the Record { [[Key]], [[Value]] } that is the value of _entries_[_index_].
+ 1. Let _e_ be _entries_[_index_].
1. Set _index_ to _index_ + 1.
1. If _e_.[[Key]] is not ~empty~, then
1. Perform ? Call(_callbackfn_, _thisArg_, « _e_.[[Value]], _e_.[[Key]], _M_ »).
@@ -40931,8 +42552,7 @@ Map.prototype.get ( _key_ )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[MapData]]).
- 1. Let _entries_ be the List that is _M_.[[MapData]].
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[MapData]], do
1. If _p_.[[Key]] is not ~empty~ and SameValueZero(_p_.[[Key]], _key_) is *true*, return _p_.[[Value]].
1. Return *undefined*.
@@ -40944,8 +42564,7 @@ Map.prototype.has ( _key_ )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[MapData]]).
- 1. Let _entries_ be the List that is _M_.[[MapData]].
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[MapData]], do
1. If _p_.[[Key]] is not ~empty~ and SameValueZero(_p_.[[Key]], _key_) is *true*, return *true*.
1. Return *false*.
@@ -40966,14 +42585,13 @@ Map.prototype.set ( _key_, _value_ )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[MapData]]).
- 1. Let _entries_ be the List that is _M_.[[MapData]].
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[MapData]], do
1. If _p_.[[Key]] is not ~empty~ and SameValueZero(_p_.[[Key]], _key_) is *true*, then
1. Set _p_.[[Value]] to _value_.
1. Return _M_.
1. If _key_ is *-0*𝔽, set _key_ to *+0*𝔽.
1. Let _p_ be the Record { [[Key]]: _key_, [[Value]]: _value_ }.
- 1. Append _p_ to _entries_.
+ 1. Append _p_ to _M_.[[MapData]].
1. Return _M_.
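The *-0*𝔽 normalization and SameValueZero key comparison in the steps above are observable from script; a brief sketch (any modern engine):

```javascript
const m = new Map();
m.set(-0, "zero");           // per the steps above, a -0 key is normalized to +0
console.log(m.has(0));       // true
console.log(Object.is([...m.keys()][0], 0)); // true — the stored key is +0, not -0
m.set(NaN, "nan");
console.log(m.get(NaN));     // "nan" — SameValueZero(NaN, NaN) is true
```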
@@ -40984,9 +42602,8 @@ get Map.prototype.size
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[MapData]]).
- 1. Let _entries_ be the List that is _M_.[[MapData]].
1. Let _count_ be 0.
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[MapData]], do
1. If _p_.[[Key]] is not ~empty~, set _count_ to _count_ + 1.
1. Return 𝔽(_count_).
@@ -41036,15 +42653,17 @@
1. Perform ? RequireInternalSlot(_map_, [[MapData]]).
1. Let _closure_ be a new Abstract Closure with no parameters that captures _map_ and _kind_ and performs the following steps when called:
- 1. Let _entries_ be the List that is _map_.[[MapData]].
+ 1. Let _entries_ be _map_.[[MapData]].
1. Let _index_ be 0.
1. Let _numEntries_ be the number of elements in _entries_.
1. Repeat, while _index_ < _numEntries_,
- 1. Let _e_ be the Record { [[Key]], [[Value]] } that is the value of _entries_[_index_].
+ 1. Let _e_ be _entries_[_index_].
1. Set _index_ to _index_ + 1.
1. If _e_.[[Key]] is not ~empty~, then
- 1. If _kind_ is ~key~, let _result_ be _e_.[[Key]].
- 1. Else if _kind_ is ~value~, let _result_ be _e_.[[Value]].
+ 1. If _kind_ is ~key~, then
+ 1. Let _result_ be _e_.[[Key]].
+ 1. Else if _kind_ is ~value~, then
+ 1. Let _result_ be _e_.[[Value]].
1. Else,
1. Assert: _kind_ is ~key+value~.
1. Let _result_ be CreateArrayFromList(« _e_.[[Key]], _e_.[[Value]] »).
@@ -41108,12 +42727,11 @@ Set ( [ _iterable_ ] )
1. If _iterable_ is either *undefined* or *null*, return _set_.
1. Let _adder_ be ? Get(_set_, *"add"*).
1. If IsCallable(_adder_) is *false*, throw a *TypeError* exception.
- 1. Let _iteratorRecord_ be ? GetIterator(_iterable_).
+ 1. Let _iteratorRecord_ be ? GetIterator(_iterable_, ~sync~).
1. Repeat,
- 1. Let _next_ be ? IteratorStep(_iteratorRecord_).
- 1. If _next_ is *false*, return _set_.
- 1. Let _nextValue_ be ? IteratorValue(_next_).
- 1. Let _status_ be Completion(Call(_adder_, _set_, « _nextValue_ »)).
+ 1. Let _next_ be ? IteratorStepValue(_iteratorRecord_).
+ 1. If _next_ is ~done~, return _set_.
+ 1. Let _status_ be Completion(Call(_adder_, _set_, « _next_ »)).
1. IfAbruptCloseIterator(_status_, _iteratorRecord_).
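The constructor's iteration over _iterable_ and the SameValueZero semantics of the underlying `add` calls are observable together; a minimal sketch:

```javascript
// The Set constructor drains any sync iterable through its iterator record,
// calling the "add" method once per yielded value.
const s = new Set([1, 2, 2, NaN, NaN, -0]);
console.log(s.size);                  // 4 — duplicates collapse under SameValueZero
console.log(s.has(NaN));              // true — SameValueZero(NaN, NaN) is true
console.log(Object.is([...s][3], 0)); // true — -0 was normalized to +0 by add()
```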
@@ -41162,12 +42780,11 @@ Set.prototype.add ( _value_ )
1. Let _S_ be the *this* value.
1. Perform ? RequireInternalSlot(_S_, [[SetData]]).
- 1. Let _entries_ be the List that is _S_.[[SetData]].
- 1. For each element _e_ of _entries_, do
+ 1. For each element _e_ of _S_.[[SetData]], do
1. If _e_ is not ~empty~ and SameValueZero(_e_, _value_) is *true*, then
1. Return _S_.
1. If _value_ is *-0*𝔽, set _value_ to *+0*𝔽.
- 1. Append _value_ to _entries_.
+ 1. Append _value_ to _S_.[[SetData]].
1. Return _S_.
@@ -41178,9 +42795,8 @@ Set.prototype.clear ( )
1. Let _S_ be the *this* value.
1. Perform ? RequireInternalSlot(_S_, [[SetData]]).
- 1. Let _entries_ be the List that is _S_.[[SetData]].
- 1. For each element _e_ of _entries_, do
- 1. Replace the element of _entries_ whose value is _e_ with an element whose value is ~empty~.
+ 1. For each element _e_ of _S_.[[SetData]], do
+ 1. Replace the element of _S_.[[SetData]] whose value is _e_ with an element whose value is ~empty~.
1. Return *undefined*.
@@ -41199,10 +42815,9 @@ Set.prototype.delete ( _value_ )
1. Let _S_ be the *this* value.
1. Perform ? RequireInternalSlot(_S_, [[SetData]]).
- 1. Let _entries_ be the List that is _S_.[[SetData]].
- 1. For each element _e_ of _entries_, do
+ 1. For each element _e_ of _S_.[[SetData]], do
1. If _e_ is not ~empty~ and SameValueZero(_e_, _value_) is *true*, then
- 1. Replace the element of _entries_ whose value is _e_ with an element whose value is ~empty~.
+ 1. Replace the element of _S_.[[SetData]] whose value is _e_ with an element whose value is ~empty~.
1. Return *true*.
1. Return *false*.
@@ -41230,7 +42845,7 @@ Set.prototype.forEach ( _callbackfn_ [ , _thisArg_ ] )
1. Let _S_ be the *this* value.
1. Perform ? RequireInternalSlot(_S_, [[SetData]]).
1. If IsCallable(_callbackfn_) is *false*, throw a *TypeError* exception.
- 1. Let _entries_ be the List that is _S_.[[SetData]].
+ 1. Let _entries_ be _S_.[[SetData]].
1. Let _numEntries_ be the number of elements in _entries_.
1. Let _index_ be 0.
1. Repeat, while _index_ < _numEntries_,
@@ -41258,8 +42873,7 @@ Set.prototype.has ( _value_ )
1. Let _S_ be the *this* value.
1. Perform ? RequireInternalSlot(_S_, [[SetData]]).
- 1. Let _entries_ be the List that is _S_.[[SetData]].
- 1. For each element _e_ of _entries_, do
+ 1. For each element _e_ of _S_.[[SetData]], do
1. If _e_ is not ~empty~ and SameValueZero(_e_, _value_) is *true*, return *true*.
1. Return *false*.
@@ -41279,9 +42893,8 @@ get Set.prototype.size
1. Let _S_ be the *this* value.
1. Perform ? RequireInternalSlot(_S_, [[SetData]]).
- 1. Let _entries_ be the List that is _S_.[[SetData]].
1. Let _count_ be 0.
- 1. For each element _e_ of _entries_, do
+ 1. For each element _e_ of _S_.[[SetData]], do
1. If _e_ is not ~empty~, set _count_ to _count_ + 1.
1. Return 𝔽(_count_).
@@ -41332,7 +42945,7 @@
1. Perform ? RequireInternalSlot(_set_, [[SetData]]).
1. Let _closure_ be a new Abstract Closure with no parameters that captures _set_ and _kind_ and performs the following steps when called:
1. Let _index_ be 0.
- 1. Let _entries_ be the List that is _set_.[[SetData]].
+ 1. Let _entries_ be _set_.[[SetData]].
1. Let _numEntries_ be the number of elements in _entries_.
1. Repeat, while _index_ < _numEntries_,
1. Let _e_ be _entries_[_index_].
@@ -41379,11 +42992,11 @@ %SetIteratorPrototype% [ @@toStringTag ]
WeakMap Objects
- WeakMaps are collections of key/value pairs where the keys are objects and values may be arbitrary ECMAScript language values. A WeakMap may be queried to see if it contains a key/value pair with a specific key, but no mechanism is provided for enumerating the objects it holds as keys. In certain conditions, objects which are not live are removed as WeakMap keys, as described in .
+ WeakMaps are collections of key/value pairs where the keys are objects and/or symbols and values may be arbitrary ECMAScript language values. A WeakMap may be queried to see if it contains a key/value pair with a specific key, but no mechanism is provided for enumerating the values it holds as keys. In certain conditions, values which are not live are removed as WeakMap keys, as described in .
An implementation may impose an arbitrarily determined latency between the time a key/value pair of a WeakMap becomes inaccessible and the time when the key/value pair is removed from the WeakMap. If this latency were observable to an ECMAScript program, it would be a source of indeterminacy that could impact program execution. For that reason, an ECMAScript implementation must not provide any means to observe a key of a WeakMap that does not require the observer to present the observed key.
WeakMaps must be implemented using either hash tables or other mechanisms that, on average, provide access times that are sublinear on the number of key/value pairs in the collection. The data structure used in this specification is only intended to describe the required observable semantics of WeakMaps. It is not intended to be a viable implementation model.
- WeakMap and WeakSets are intended to provide mechanisms for dynamically associating state with an object in a manner that does not “leak” memory resources if, in the absence of the WeakMap or WeakSet, the object otherwise became inaccessible and subject to resource reclamation by the implementation's garbage collection mechanisms. This characteristic can be achieved by using an inverted per-object mapping of weak map instances to keys. Alternatively each weak map may internally store its key to value mappings but this approach requires coordination between the WeakMap or WeakSet implementation and the garbage collector. The following references describe mechanism that may be useful to implementations of WeakMap and WeakSets:
+ WeakMap and WeakSet are intended to provide mechanisms for dynamically associating state with an object or symbol in a manner that does not “leak” memory resources if, in the absence of the WeakMap or WeakSet instance, the object or symbol otherwise became inaccessible and subject to resource reclamation by the implementation's garbage collection mechanisms. This characteristic can be achieved by using an inverted per-object/symbol mapping of WeakMap or WeakSet instances to keys. Alternatively, each WeakMap or WeakSet instance may internally store its key and value data, but this approach requires coordination between the WeakMap or WeakSet implementation and the garbage collector. The following references describe mechanisms that may be useful to implementations of WeakMap and WeakSet:
Barry Hayes. 1997. Ephemerons: a new finalization mechanism. In Proceedings of the 12th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications (OOPSLA '97), A. Michael Berman (Ed.). ACM, New York, NY, USA, 176-183, http://doi.acm.org/10.1145/263698.263733.
Alexandra Barros, Roberto Ierusalimschy, Eliminating Cycles in Weak Tables. Journal of Universal Computer Science - J.UCS, vol. 14, no. 21, pp. 3481-3497, 2008, http://www.jucs.org/jucs_14_21/eliminating_cycles_in_weak
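The non-leaking association described above is commonly used for per-object private state; a hypothetical sketch:

```javascript
// Hypothetical per-object metadata store: each entry becomes collectable
// together with its key, so the map itself never pins the tagged objects.
const metadata = new WeakMap();
function tag(obj, info) {
  metadata.set(obj, info);
  return obj;
}
const user = tag({ name: "Ada" }, { created: "2024-01-01" });
console.log(metadata.get(user).created); // "2024-01-01"
// Once `user` is unreachable, the pair is eligible for reclamation;
// no API exists to enumerate or count the surviving entries.
```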
@@ -41453,9 +43066,8 @@ WeakMap.prototype.delete ( _key_ )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[WeakMapData]]).
- 1. Let _entries_ be the List that is _M_.[[WeakMapData]].
- 1. If _key_ is not an Object, return *false*.
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. If CanBeHeldWeakly(_key_) is *false*, return *false*.
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[WeakMapData]], do
1. If _p_.[[Key]] is not ~empty~ and SameValue(_p_.[[Key]], _key_) is *true*, then
1. Set _p_.[[Key]] to ~empty~.
1. Set _p_.[[Value]] to ~empty~.
@@ -41473,9 +43085,8 @@ WeakMap.prototype.get ( _key_ )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[WeakMapData]]).
- 1. Let _entries_ be the List that is _M_.[[WeakMapData]].
- 1. If _key_ is not an Object, return *undefined*.
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. If CanBeHeldWeakly(_key_) is *false*, return *undefined*.
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[WeakMapData]], do
1. If _p_.[[Key]] is not ~empty~ and SameValue(_p_.[[Key]], _key_) is *true*, return _p_.[[Value]].
1. Return *undefined*.
@@ -41487,9 +43098,8 @@ WeakMap.prototype.has ( _key_ )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[WeakMapData]]).
- 1. Let _entries_ be the List that is _M_.[[WeakMapData]].
- 1. If _key_ is not an Object, return *false*.
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. If CanBeHeldWeakly(_key_) is *false*, return *false*.
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[WeakMapData]], do
1. If _p_.[[Key]] is not ~empty~ and SameValue(_p_.[[Key]], _key_) is *true*, return *true*.
1. Return *false*.
@@ -41501,14 +43111,13 @@ WeakMap.prototype.set ( _key_, _value_ )
1. Let _M_ be the *this* value.
1. Perform ? RequireInternalSlot(_M_, [[WeakMapData]]).
- 1. Let _entries_ be the List that is _M_.[[WeakMapData]].
- 1. If _key_ is not an Object, throw a *TypeError* exception.
- 1. For each Record { [[Key]], [[Value]] } _p_ of _entries_, do
+ 1. If CanBeHeldWeakly(_key_) is *false*, throw a *TypeError* exception.
+ 1. For each Record { [[Key]], [[Value]] } _p_ of _M_.[[WeakMapData]], do
1. If _p_.[[Key]] is not ~empty~ and SameValue(_p_.[[Key]], _key_) is *true*, then
1. Set _p_.[[Value]] to _value_.
1. Return _M_.
1. Let _p_ be the Record { [[Key]]: _key_, [[Value]]: _value_ }.
- 1. Append _p_ to _entries_.
+ 1. Append _p_ to _M_.[[WeakMapData]].
1. Return _M_.
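The CanBeHeldWeakly guard introduced above admits unregistered symbols and rejects everything else that cannot be held weakly; a sketch, assuming an engine with ES2023 symbols-as-WeakMap-keys support (e.g. Node 20+):

```javascript
const wm = new WeakMap();
const sym = Symbol("meta");
wm.set(sym, 1);               // unregistered symbols can be held weakly
console.log(wm.get(sym));     // 1
console.log(wm.has("str"));   // false — has() just misses for such keys
let threw = false;
try {
  wm.set(Symbol.for("x"), 2); // registered symbols cannot be held weakly
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw);           // true — set() throws a TypeError instead
```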
@@ -41528,8 +43137,8 @@ Properties of WeakMap Instances
WeakSet Objects
- WeakSets are collections of objects. A distinct object may only occur once as an element of a WeakSet's collection. A WeakSet may be queried to see if it contains a specific object, but no mechanism is provided for enumerating the objects it holds. In certain conditions, objects which are not live are removed as WeakSet elements, as described in .
- An implementation may impose an arbitrarily determined latency between the time an object contained in a WeakSet becomes inaccessible and the time when the object is removed from the WeakSet. If this latency was observable to ECMAScript program, it would be a source of indeterminacy that could impact program execution. For that reason, an ECMAScript implementation must not provide any means to determine if a WeakSet contains a particular object that does not require the observer to present the observed object.
+ WeakSets are collections of objects and/or symbols. A distinct object or symbol may only occur once as an element of a WeakSet's collection. A WeakSet may be queried to see if it contains a specific value, but no mechanism is provided for enumerating the values it holds. In certain conditions, values which are not live are removed as WeakSet elements, as described in .
+ An implementation may impose an arbitrarily determined latency between the time a value contained in a WeakSet becomes inaccessible and the time when the value is removed from the WeakSet. If this latency were observable to an ECMAScript program, it would be a source of indeterminacy that could impact program execution. For that reason, an ECMAScript implementation must not provide any means to determine if a WeakSet contains a particular value that does not require the observer to present the observed value.
WeakSets must be implemented using either hash tables or other mechanisms that, on average, provide access times that are sublinear on the number of elements in the collection. The data structure used in this specification is only intended to describe the required observable semantics of WeakSets. It is not intended to be a viable implementation model.
See the NOTE in .
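The same membership rules apply observably to WeakSet; a brief sketch:

```javascript
const ws = new WeakSet();
const token = {};
ws.add(token);
console.log(ws.has(token)); // true
console.log(ws.has({}));    // false — a distinct object is a distinct element
console.log(ws.has(42));    // false — has() quietly rejects non-weakly-holdable values
let threw = false;
try {
  ws.add(42);               // ...but add() throws for them
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw);         // true
```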
@@ -41556,12 +43165,11 @@ WeakSet ( [ _iterable_ ] )
1. If _iterable_ is either *undefined* or *null*, return _set_.
1. Let _adder_ be ? Get(_set_, *"add"*).
1. If IsCallable(_adder_) is *false*, throw a *TypeError* exception.
- 1. Let _iteratorRecord_ be ? GetIterator(_iterable_).
+ 1. Let _iteratorRecord_ be ? GetIterator(_iterable_, ~sync~).
1. Repeat,
- 1. Let _next_ be ? IteratorStep(_iteratorRecord_).
- 1. If _next_ is *false*, return _set_.
- 1. Let _nextValue_ be ? IteratorValue(_next_).
- 1. Let _status_ be Completion(Call(_adder_, _set_, « _nextValue_ »)).
+ 1. Let _next_ be ? IteratorStepValue(_iteratorRecord_).
+ 1. If _next_ is ~done~, return _set_.
+ 1. Let _status_ be Completion(Call(_adder_, _set_, « _next_ »)).
1. IfAbruptCloseIterator(_status_, _iteratorRecord_).
@@ -41598,12 +43206,11 @@ WeakSet.prototype.add ( _value_ )
1. Let _S_ be the *this* value.
1. Perform ? RequireInternalSlot(_S_, [[WeakSetData]]).
- 1. If _value_ is not an Object, throw a *TypeError* exception.
- 1. Let _entries_ be the List that is _S_.[[WeakSetData]].
- 1. For each element _e_ of _entries_, do
+ 1. If CanBeHeldWeakly(_value_) is *false*, throw a *TypeError* exception.
+ 1. For each element _e_ of _S_.[[WeakSetData]], do
1. If _e_ is not ~empty~ and SameValue(_e_, _value_) is *true*, then
1. Return _S_.
- 1. Append _value_ to _entries_.
+ 1. Append _value_ to _S_.[[WeakSetData]].
1. Return _S_.
@@ -41619,11 +43226,10 @@ WeakSet.prototype.delete ( _value_ )
1. Let _S_ be the *this* value.
1. Perform ? RequireInternalSlot(_S_, [[WeakSetData]]).
- 1. If _value_ is not an Object, return *false*.
- 1. Let _entries_ be the List that is _S_.[[WeakSetData]].
- 1. For each element _e_ of _entries_, do
+ 1. If CanBeHeldWeakly(_value_) is *false*, return *false*.
+ 1. For each element _e_ of _S_.[[WeakSetData]], do
1. If _e_ is not ~empty~ and SameValue(_e_, _value_) is *true*, then
- 1. Replace the element of _entries_ whose value is _e_ with an element whose value is ~empty~.
+ 1. Replace the element of _S_.[[WeakSetData]] whose value is _e_ with an element whose value is ~empty~.
1. Return *true*.
1. Return *false*.
@@ -41638,9 +43244,8 @@ WeakSet.prototype.has ( _value_ )
1. Let _S_ be the *this* value.
1. Perform ? RequireInternalSlot(_S_, [[WeakSetData]]).
- 1. Let _entries_ be the List that is _S_.[[WeakSetData]].
- 1. If _value_ is not an Object, return *false*.
- 1. For each element _e_ of _entries_, do
+ 1. If CanBeHeldWeakly(_value_) is *false*, return *false*.
+ 1. For each element _e_ of _S_.[[WeakSetData]], do
1. If _e_ is not ~empty~ and SameValue(_e_, _value_) is *true*, return *true*.
1. Return *false*.
@@ -41683,6 +43288,13 @@ Notation
+
+ Fixed-length and Resizable ArrayBuffer Objects
+ A fixed-length ArrayBuffer is an ArrayBuffer whose byte length cannot change after creation.
+ A resizable ArrayBuffer is an ArrayBuffer whose byte length may change after creation via calls to .
+ The kind of ArrayBuffer object that is created depends on the arguments passed to .
+
+
Abstract Operations For ArrayBuffer Objects
@@ -41691,6 +43303,7 @@
AllocateArrayBuffer (
_constructor_: a constructor,
_byteLength_: a non-negative integer,
+ optional _maxByteLength_: a non-negative integer or ~empty~,
): either a normal completion containing an ArrayBuffer or a throw completion
- 1. Let _obj_ be ? OrdinaryCreateFromConstructor(_constructor_, *"%ArrayBuffer.prototype%"*, « [[ArrayBufferData]], [[ArrayBufferByteLength]], [[ArrayBufferDetachKey]] »).
+ 1. Let _slots_ be « [[ArrayBufferData]], [[ArrayBufferByteLength]], [[ArrayBufferDetachKey]] ».
+ 1. If _maxByteLength_ is present and _maxByteLength_ is not ~empty~, let _allocatingResizableBuffer_ be *true*; otherwise let _allocatingResizableBuffer_ be *false*.
+ 1. If _allocatingResizableBuffer_ is *true*, then
+ 1. If _byteLength_ > _maxByteLength_, throw a *RangeError* exception.
+ 1. Append [[ArrayBufferMaxByteLength]] to _slots_.
+ 1. Let _obj_ be ? OrdinaryCreateFromConstructor(_constructor_, *"%ArrayBuffer.prototype%"*, _slots_).
1. Let _block_ be ? CreateByteDataBlock(_byteLength_).
1. Set _obj_.[[ArrayBufferData]] to _block_.
1. Set _obj_.[[ArrayBufferByteLength]] to _byteLength_.
+ 1. If _allocatingResizableBuffer_ is *true*, then
+ 1. If it is not possible to create a Data Block _block_ consisting of _maxByteLength_ bytes, throw a *RangeError* exception.
+ 1. NOTE: Resizable ArrayBuffers are designed to be implementable with in-place growth. Implementations may throw if, for example, virtual memory cannot be reserved up front.
+ 1. Set _obj_.[[ArrayBufferMaxByteLength]] to _maxByteLength_.
1. Return _obj_.
+
+
+ ArrayBufferByteLength (
+ _arrayBuffer_: an ArrayBuffer or SharedArrayBuffer,
+ _order_: ~seq-cst~ or ~unordered~,
+ ): a non-negative integer
+
+
+
+ 1. If IsSharedArrayBuffer(_arrayBuffer_) is *true* and _arrayBuffer_ has an [[ArrayBufferByteLengthData]] internal slot, then
+ 1. Let _bufferByteLengthBlock_ be _arrayBuffer_.[[ArrayBufferByteLengthData]].
+ 1. Let _rawLength_ be GetRawBytesFromSharedBlock(_bufferByteLengthBlock_, 0, ~biguint64~, *true*, _order_).
+ 1. Let _isLittleEndian_ be the value of the [[LittleEndian]] field of the surrounding agent's Agent Record.
+ 1. Return ℝ(RawBytesToNumeric(~biguint64~, _rawLength_, _isLittleEndian_)).
+ 1. Assert: IsDetachedBuffer(_arrayBuffer_) is *false*.
+ 1. Return _arrayBuffer_.[[ArrayBufferByteLength]].
+
+
+
+
+
+ ArrayBufferCopyAndDetach (
+ _arrayBuffer_: an ECMAScript language value,
+ _newLength_: an ECMAScript language value,
+ _preserveResizability_: ~preserve-resizability~ or ~fixed-length~,
+ ): either a normal completion containing an ArrayBuffer or a throw completion
+
+
+
+ 1. Perform ? RequireInternalSlot(_arrayBuffer_, [[ArrayBufferData]]).
+ 1. If IsSharedArrayBuffer(_arrayBuffer_) is *true*, throw a *TypeError* exception.
+ 1. If _newLength_ is *undefined*, then
+ 1. Let _newByteLength_ be _arrayBuffer_.[[ArrayBufferByteLength]].
+ 1. Else,
+ 1. Let _newByteLength_ be ? ToIndex(_newLength_).
+ 1. If IsDetachedBuffer(_arrayBuffer_) is *true*, throw a *TypeError* exception.
+ 1. If _preserveResizability_ is ~preserve-resizability~ and IsFixedLengthArrayBuffer(_arrayBuffer_) is *false*, then
+ 1. Let _newMaxByteLength_ be _arrayBuffer_.[[ArrayBufferMaxByteLength]].
+ 1. Else,
+ 1. Let _newMaxByteLength_ be ~empty~.
+ 1. If _arrayBuffer_.[[ArrayBufferDetachKey]] is not *undefined*, throw a *TypeError* exception.
+ 1. Let _newBuffer_ be ? AllocateArrayBuffer(%ArrayBuffer%, _newByteLength_, _newMaxByteLength_).
+ 1. Let _copyLength_ be min(_newByteLength_, _arrayBuffer_.[[ArrayBufferByteLength]]).
+ 1. Let _fromBlock_ be _arrayBuffer_.[[ArrayBufferData]].
+ 1. Let _toBlock_ be _newBuffer_.[[ArrayBufferData]].
+ 1. Perform CopyDataBlockBytes(_toBlock_, 0, _fromBlock_, 0, _copyLength_).
+ 1. NOTE: Neither creation of the new Data Block nor copying from the old Data Block are observable. Implementations may implement this method as a zero-copy move or a `realloc`.
+ 1. Perform ! DetachArrayBuffer(_arrayBuffer_).
+ 1. Return _newBuffer_.
+
+
+
IsDetachedBuffer (
@@ -41728,6 +43404,8 @@
): either a normal completion containing ~unused~ or a throw completion
1. Assert: IsSharedArrayBuffer(_arrayBuffer_) is *false*.
@@ -41738,7 +43416,7 @@
1. Return ~unused~.
- Detaching an ArrayBuffer instance disassociates the Data Block used as its backing store from the instance and sets the byte length of the buffer to 0. No operations defined by this specification use the DetachArrayBuffer abstract operation. However, an ECMAScript host or implementation may define such operations.
+ Detaching an ArrayBuffer instance disassociates the Data Block used as its backing store from the instance and sets the byte length of the buffer to 0.
@@ -41764,6 +43442,57 @@
+
+
+ GetArrayBufferMaxByteLengthOption (
+ _options_: an ECMAScript language value,
+ ): either a normal completion containing either a non-negative integer or ~empty~, or a throw completion
+
+
+
+ 1. If _options_ is not an Object, return ~empty~.
+ 1. Let _maxByteLength_ be ? Get(_options_, *"maxByteLength"*).
+ 1. If _maxByteLength_ is *undefined*, return ~empty~.
+ 1. Return ? ToIndex(_maxByteLength_).
+
+
+
+
+
+ HostResizeArrayBuffer (
+ _buffer_: an ArrayBuffer,
+ _newByteLength_: a non-negative integer,
+ ): either a normal completion containing either ~handled~ or ~unhandled~, or a throw completion
+
+
+
+ The implementation of HostResizeArrayBuffer must conform to the following requirements:
+
+ - The abstract operation does not detach _buffer_.
+ - If the abstract operation completes normally with ~handled~, _buffer_.[[ArrayBufferByteLength]] is _newByteLength_.
+
+
+ The default implementation of HostResizeArrayBuffer is to return NormalCompletion(~unhandled~).
+
+
+
+
+ IsFixedLengthArrayBuffer (
+ _arrayBuffer_: an ArrayBuffer or a SharedArrayBuffer,
+ ): a Boolean
+
+
+
+ 1. If _arrayBuffer_ has an [[ArrayBufferMaxByteLength]] internal slot, return *false*.
+ 1. Return *true*.
+
+
+
IsUnsignedElementType (
@@ -41775,7 +43504,7 @@
It verifies if the argument _type_ is an unsigned TypedArray element type.
- 1. If _type_ is ~Uint8~, ~Uint8C~, ~Uint16~, ~Uint32~, or ~BigUint64~, return *true*.
+ 1. If _type_ is one of ~uint8~, ~uint8clamped~, ~uint16~, ~uint32~, or ~biguint64~, return *true*.
1. Return *false*.
@@ -41788,10 +43517,10 @@
- 1. If _type_ is ~Int8~, ~Uint8~, ~Int16~, ~Uint16~, ~Int32~, or ~Uint32~, return *true*.
+ 1. If _type_ is one of ~int8~, ~uint8~, ~int16~, ~uint16~, ~int32~, or ~uint32~, return *true*.
1. Return *false*.
@@ -41807,7 +43536,7 @@
It verifies if the argument _type_ is a BigInt TypedArray element type.
- 1. If _type_ is ~BigUint64~ or ~BigInt64~, return *true*.
+ 1. If _type_ is either ~biguint64~ or ~bigint64~, return *true*.
1. Return *false*.
@@ -41816,14 +43545,14 @@
IsNoTearConfiguration (
_type_: a TypedArray element type,
- _order_: ~SeqCst~, ~Unordered~, or ~Init~,
+ _order_: ~seq-cst~, ~unordered~, or ~init~,
): a Boolean
1. If IsUnclampedIntegerElementType(_type_) is *true*, return *true*.
- 1. If IsBigIntElementType(_type_) is *true* and _order_ is not ~Init~ or ~Unordered~, return *true*.
+ 1. If IsBigIntElementType(_type_) is *true* and _order_ is neither ~init~ nor ~unordered~, return *true*.
1. Return *false*.
@@ -41841,11 +43570,11 @@
1. Let _elementSize_ be the Element Size value specified in for Element Type _type_.
1. If _isLittleEndian_ is *false*, reverse the order of the elements of _rawBytes_.
- 1. If _type_ is ~Float32~, then
+ 1. If _type_ is ~float32~, then
1. Let _value_ be the byte elements of _rawBytes_ concatenated and interpreted as a little-endian bit string encoding of an IEEE 754-2019 binary32 value.
1. If _value_ is an IEEE 754-2019 binary32 NaN value, return the *NaN* Number value.
1. Return the Number value that corresponds to _value_.
- 1. If _type_ is ~Float64~, then
+ 1. If _type_ is ~float64~, then
1. Let _value_ be the byte elements of _rawBytes_ concatenated and interpreted as a little-endian bit string encoding of an IEEE 754-2019 binary64 value.
1. If _value_ is an IEEE 754-2019 binary64 NaN value, return the *NaN* Number value.
1. Return the Number value that corresponds to _value_.
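The binary32 path above is why reading a ~float32~ element can differ from the Number that was originally written; a brief sketch:

```javascript
const f32 = new Float32Array([0.1]);
// 0.1 has no exact binary32 encoding, so the stored value is the nearest
// binary32 value; reading it back widens that value to a Number (binary64).
console.log(f32[0] === 0.1);              // false
console.log(f32[0] === Math.fround(0.1)); // true — Math.fround models the rounding
```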
@@ -41858,6 +43587,32 @@
+
+
+ GetRawBytesFromSharedBlock (
+ _block_: a Shared Data Block,
+ _byteIndex_: a non-negative integer,
+ _type_: a TypedArray element type,
+ _isTypedArray_: a Boolean,
+ _order_: ~seq-cst~ or ~unordered~,
+ ): a List of byte values
+
+
+
+ 1. Let _elementSize_ be the Element Size value specified in for Element Type _type_.
+ 1. Let _execution_ be the [[CandidateExecution]] field of the surrounding agent's Agent Record.
+ 1. Let _eventsRecord_ be the Agent Events Record of _execution_.[[EventsRecords]] whose [[AgentSignifier]] is AgentSignifier().
+ 1. If _isTypedArray_ is *true* and IsNoTearConfiguration(_type_, _order_) is *true*, let _noTear_ be *true*; otherwise let _noTear_ be *false*.
+ 1. Let _rawValue_ be a List of length _elementSize_ whose elements are nondeterministically chosen byte values.
+ 1. NOTE: In implementations, _rawValue_ is the result of a non-atomic or atomic read instruction on the underlying hardware. The nondeterminism is a semantic prescription of the memory model to describe observable behaviour of hardware with weak consistency.
+ 1. Let _readEvent_ be ReadSharedMemory { [[Order]]: _order_, [[NoTear]]: _noTear_, [[Block]]: _block_, [[ByteIndex]]: _byteIndex_, [[ElementSize]]: _elementSize_ }.
+ 1. Append _readEvent_ to _eventsRecord_.[[EventList]].
+ 1. Append Chosen Value Record { [[Event]]: _readEvent_, [[ChosenValue]]: _rawValue_ } to _execution_.[[ChosenValues]].
+ 1. Return _rawValue_.
+
+
+
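From script, the read events this operation appends are produced by ordinary TypedArray reads and by `Atomics.load` on shared memory; a minimal single-agent sketch, assuming SharedArrayBuffer is available:

```javascript
const sab = new SharedArrayBuffer(4);
const ta = new Int32Array(sab);
Atomics.store(ta, 0, 42);          // a seq-cst WriteSharedMemory event
console.log(Atomics.load(ta, 0));  // 42 — a seq-cst ReadSharedMemory event
console.log(ta[0]);                // 42 — a plain (unordered) read of the same block
```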
GetValueFromBuffer (
@@ -41865,7 +43620,7 @@
_byteIndex_: a non-negative integer,
_type_: a TypedArray element type,
_isTypedArray_: a Boolean,
- _order_: ~SeqCst~ or ~Unordered~,
+ _order_: ~seq-cst~ or ~unordered~,
optional _isLittleEndian_: a Boolean,
): a Number or a BigInt
@@ -41877,15 +43632,10 @@
1. Let _block_ be _arrayBuffer_.[[ArrayBufferData]].
1. Let _elementSize_ be the Element Size value specified in for Element Type _type_.
1. If IsSharedArrayBuffer(_arrayBuffer_) is *true*, then
- 1. Let _execution_ be the [[CandidateExecution]] field of the surrounding agent's Agent Record.
- 1. Let _eventList_ be the [[EventList]] field of the element of _execution_.[[EventsRecords]] whose [[AgentSignifier]] is AgentSignifier().
- 1. If _isTypedArray_ is *true* and IsNoTearConfiguration(_type_, _order_) is *true*, let _noTear_ be *true*; otherwise let _noTear_ be *false*.
- 1. Let _rawValue_ be a List of length _elementSize_ whose elements are nondeterministically chosen byte values.
- 1. NOTE: In implementations, _rawValue_ is the result of a non-atomic or atomic read instruction on the underlying hardware. The nondeterminism is a semantic prescription of the memory model to describe observable behaviour of hardware with weak consistency.
- 1. Let _readEvent_ be ReadSharedMemory { [[Order]]: _order_, [[NoTear]]: _noTear_, [[Block]]: _block_, [[ByteIndex]]: _byteIndex_, [[ElementSize]]: _elementSize_ }.
- 1. Append _readEvent_ to _eventList_.
- 1. Append Chosen Value Record { [[Event]]: _readEvent_, [[ChosenValue]]: _rawValue_ } to _execution_.[[ChosenValues]].
- 1. Else, let _rawValue_ be a List whose elements are bytes from _block_ at indices in the interval from _byteIndex_ (inclusive) to _byteIndex_ + _elementSize_ (exclusive).
+ 1. Assert: _block_ is a Shared Data Block.
+ 1. Let _rawValue_ be GetRawBytesFromSharedBlock(_block_, _byteIndex_, _type_, _isTypedArray_, _order_).
+ 1. Else,
+ 1. Let _rawValue_ be a List whose elements are bytes from _block_ at indices in the interval from _byteIndex_ (inclusive) to _byteIndex_ + _elementSize_ (exclusive).
1. Assert: The number of elements in _rawValue_ is _elementSize_.
1. If _isLittleEndian_ is not present, set _isLittleEndian_ to the value of the [[LittleEndian]] field of the surrounding agent's Agent Record.
1. Return RawBytesToNumeric(_type_, _rawValue_, _isLittleEndian_).
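The _isLittleEndian_ handling is directly observable through DataView, which passes an explicit flag rather than relying on the agent's [[LittleEndian]] default; a brief sketch:

```javascript
const buf = new ArrayBuffer(4);
const view = new DataView(buf);
view.setUint32(0, 0x12345678, true);                // write little-endian
console.log(view.getUint8(0).toString(16));         // "78" — least-significant byte first
console.log(view.getUint32(0, false).toString(16)); // "78563412" — re-read big-endian
```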
@@ -41903,9 +43653,9 @@
- 1. If _type_ is ~Float32~, then
+ 1. If _type_ is ~float32~, then
1. Let _rawBytes_ be a List whose elements are the 4 bytes that are the result of converting _value_ to IEEE 754-2019 binary32 format using roundTiesToEven mode. The bytes are arranged in little endian order. If _value_ is *NaN*, _rawBytes_ may be set to any implementation chosen IEEE 754-2019 binary32 format Not-a-Number encoding. An implementation must always choose the same encoding for each implementation distinguishable *NaN* value.
- 1. Else if _type_ is ~Float64~, then
+ 1. Else if _type_ is ~float64~, then
1. Let _rawBytes_ be a List whose elements are the 8 bytes that are the IEEE 754-2019 binary64 format encoding of _value_. The bytes are arranged in little endian order. If _value_ is *NaN*, _rawBytes_ may be set to any implementation chosen IEEE 754-2019 binary64 format Not-a-Number encoding. An implementation must always choose the same encoding for each implementation distinguishable *NaN* value.
1. Else,
1. Let _n_ be the Element Size value specified in for Element Type _type_.
@@ -41928,7 +43678,7 @@
_type_: a TypedArray element type,
_value_: a Number or a BigInt,
_isTypedArray_: a Boolean,
- _order_: ~SeqCst~, ~Unordered~, or ~Init~,
+ _order_: ~seq-cst~, ~unordered~, or ~init~,
optional _isLittleEndian_: a Boolean,
): ~unused~
@@ -41944,10 +43694,11 @@
1. Let _rawBytes_ be NumericToRawBytes(_type_, _value_, _isLittleEndian_).
1. If IsSharedArrayBuffer(_arrayBuffer_) is *true*, then
1. Let _execution_ be the [[CandidateExecution]] field of the surrounding agent's Agent Record.
- 1. Let _eventList_ be the [[EventList]] field of the element of _execution_.[[EventsRecords]] whose [[AgentSignifier]] is AgentSignifier().
+ 1. Let _eventsRecord_ be the Agent Events Record of _execution_.[[EventsRecords]] whose [[AgentSignifier]] is AgentSignifier().
1. If _isTypedArray_ is *true* and IsNoTearConfiguration(_type_, _order_) is *true*, let _noTear_ be *true*; otherwise let _noTear_ be *false*.
- 1. Append WriteSharedMemory { [[Order]]: _order_, [[NoTear]]: _noTear_, [[Block]]: _block_, [[ByteIndex]]: _byteIndex_, [[ElementSize]]: _elementSize_, [[Payload]]: _rawBytes_ } to _eventList_.
- 1. Else, store the individual bytes of _rawBytes_ into _block_, starting at _block_[_byteIndex_].
+ 1. Append WriteSharedMemory { [[Order]]: _order_, [[NoTear]]: _noTear_, [[Block]]: _block_, [[ByteIndex]]: _byteIndex_, [[ElementSize]]: _elementSize_, [[Payload]]: _rawBytes_ } to _eventsRecord_.[[EventList]].
+ 1. Else,
+ 1. Store the individual bytes of _rawBytes_ into _block_, starting at _block_[_byteIndex_].
1. Return ~unused~.
@@ -41960,7 +43711,6 @@
_type_: a TypedArray element type,
_value_: a Number or a BigInt,
_op_: a read-modify-write modification function,
- optional _isLittleEndian_: a Boolean,
): a Number or a BigInt
The value of the *"name"* property of this function is *"get [Symbol.species]"*.
- ArrayBuffer prototype methods normally use their *this* value's constructor to create a derived object. However, a subclass constructor may over-ride that default behaviour by redefining its @@species property.
+ normally uses its *this* value's constructor to create a derived object. However, a subclass constructor may over-ride that default behaviour for the method by redefining its @@species property.
@@ -42077,6 +43828,67 @@ ArrayBuffer.prototype.constructor
The initial value of `ArrayBuffer.prototype.constructor` is %ArrayBuffer%.
+
+ get ArrayBuffer.prototype.detached
+ `ArrayBuffer.prototype.detached` is an accessor property whose set accessor function is *undefined*. Its get accessor function performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Perform ? RequireInternalSlot(_O_, [[ArrayBufferData]]).
+ 1. If IsSharedArrayBuffer(_O_) is *true*, throw a *TypeError* exception.
+ 1. Return IsDetachedBuffer(_O_).
+
+
+
+
+ get ArrayBuffer.prototype.maxByteLength
+ `ArrayBuffer.prototype.maxByteLength` is an accessor property whose set accessor function is *undefined*. Its get accessor function performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Perform ? RequireInternalSlot(_O_, [[ArrayBufferData]]).
+ 1. If IsSharedArrayBuffer(_O_) is *true*, throw a *TypeError* exception.
+ 1. If IsDetachedBuffer(_O_) is *true*, return *+0*𝔽.
+ 1. If IsFixedLengthArrayBuffer(_O_) is *true*, then
+ 1. Let _length_ be _O_.[[ArrayBufferByteLength]].
+ 1. Else,
+ 1. Let _length_ be _O_.[[ArrayBufferMaxByteLength]].
+ 1. Return 𝔽(_length_).
+
+
+
+
+ get ArrayBuffer.prototype.resizable
+ `ArrayBuffer.prototype.resizable` is an accessor property whose set accessor function is *undefined*. Its get accessor function performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Perform ? RequireInternalSlot(_O_, [[ArrayBufferData]]).
+ 1. If IsSharedArrayBuffer(_O_) is *true*, throw a *TypeError* exception.
+ 1. If IsFixedLengthArrayBuffer(_O_) is *false*, return *true*; otherwise return *false*.
+
+
+
+
+ ArrayBuffer.prototype.resize ( _newLength_ )
+ This method performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Perform ? RequireInternalSlot(_O_, [[ArrayBufferMaxByteLength]]).
+ 1. If IsSharedArrayBuffer(_O_) is *true*, throw a *TypeError* exception.
+ 1. Let _newByteLength_ be ? ToIndex(_newLength_).
+ 1. If IsDetachedBuffer(_O_) is *true*, throw a *TypeError* exception.
+ 1. If _newByteLength_ > _O_.[[ArrayBufferMaxByteLength]], throw a *RangeError* exception.
+ 1. Let _hostHandled_ be ? HostResizeArrayBuffer(_O_, _newByteLength_).
+ 1. If _hostHandled_ is ~handled~, return *undefined*.
+ 1. Let _oldBlock_ be _O_.[[ArrayBufferData]].
+ 1. Let _newBlock_ be ? CreateByteDataBlock(_newByteLength_).
+ 1. Let _copyLength_ be min(_newByteLength_, _O_.[[ArrayBufferByteLength]]).
+ 1. Perform CopyDataBlockBytes(_newBlock_, 0, _oldBlock_, 0, _copyLength_).
+ 1. NOTE: Neither creation of the new Data Block nor copying from the old Data Block are observable. Implementations may implement this method as in-place growth or shrinkage.
+ 1. Set _O_.[[ArrayBufferData]] to _newBlock_.
+ 1. Set _O_.[[ArrayBufferByteLength]] to _newByteLength_.
+ 1. Return *undefined*.
+
+
+
ArrayBuffer.prototype.slice ( _start_, _end_ )
This method performs the following steps when called:
@@ -42087,11 +43899,11 @@ ArrayBuffer.prototype.slice ( _start_, _end_ )
1. If IsDetachedBuffer(_O_) is *true*, throw a *TypeError* exception.
1. Let _len_ be _O_.[[ArrayBufferByteLength]].
1. Let _relativeStart_ be ? ToIntegerOrInfinity(_start_).
- 1. If _relativeStart_ is -∞, let _first_ be 0.
+ 1. If _relativeStart_ = -∞, let _first_ be 0.
1. Else if _relativeStart_ < 0, let _first_ be max(_len_ + _relativeStart_, 0).
1. Else, let _first_ be min(_relativeStart_, _len_).
1. If _end_ is *undefined*, let _relativeEnd_ be _len_; else let _relativeEnd_ be ? ToIntegerOrInfinity(_end_).
- 1. If _relativeEnd_ is -∞, let _final_ be 0.
+ 1. If _relativeEnd_ = -∞, let _final_ be 0.
1. Else if _relativeEnd_ < 0, let _final_ be max(_len_ + _relativeEnd_, 0).
1. Else, let _final_ be min(_relativeEnd_, _len_).
1. Let _newLen_ be max(_final_ - _first_, 0).
@@ -42102,15 +43914,36 @@ ArrayBuffer.prototype.slice ( _start_, _end_ )
1. If IsDetachedBuffer(_new_) is *true*, throw a *TypeError* exception.
1. If SameValue(_new_, _O_) is *true*, throw a *TypeError* exception.
1. If _new_.[[ArrayBufferByteLength]] < _newLen_, throw a *TypeError* exception.
- 1. NOTE: Side-effects of the above steps may have detached _O_.
+ 1. NOTE: Side-effects of the above steps may have detached or resized _O_.
1. If IsDetachedBuffer(_O_) is *true*, throw a *TypeError* exception.
1. Let _fromBuf_ be _O_.[[ArrayBufferData]].
1. Let _toBuf_ be _new_.[[ArrayBufferData]].
- 1. Perform CopyDataBlockBytes(_toBuf_, 0, _fromBuf_, _first_, _newLen_).
+ 1. Let _currentLen_ be _O_.[[ArrayBufferByteLength]].
+ 1. If _first_ < _currentLen_, then
+ 1. Let _count_ be min(_newLen_, _currentLen_ - _first_).
+ 1. Perform CopyDataBlockBytes(_toBuf_, 0, _fromBuf_, _first_, _count_).
1. Return _new_.
+
+ ArrayBuffer.prototype.transfer ( [ _newLength_ ] )
+ This method performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Return ? ArrayBufferCopyAndDetach(_O_, _newLength_, ~preserve-resizability~).
+
+
+
+
+ ArrayBuffer.prototype.transferToFixedLength ( [ _newLength_ ] )
+ This method performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Return ? ArrayBufferCopyAndDetach(_O_, _newLength_, ~fixed-length~).
+
+
+
ArrayBuffer.prototype [ @@toStringTag ]
The initial value of the @@toStringTag property is the String value *"ArrayBuffer"*.
@@ -42120,15 +43953,39 @@ ArrayBuffer.prototype [ @@toStringTag ]
Properties of ArrayBuffer Instances
- ArrayBuffer instances inherit properties from the ArrayBuffer prototype object. ArrayBuffer instances each have an [[ArrayBufferData]] internal slot, an [[ArrayBufferByteLength]] internal slot, and an [[ArrayBufferDetachKey]] internal slot.
+ ArrayBuffer instances inherit properties from the ArrayBuffer prototype object. ArrayBuffer instances each have an [[ArrayBufferData]] internal slot, an [[ArrayBufferByteLength]] internal slot, and an [[ArrayBufferDetachKey]] internal slot. ArrayBuffer instances which are resizable each have an [[ArrayBufferMaxByteLength]] internal slot.
ArrayBuffer instances whose [[ArrayBufferData]] is *null* are considered to be detached and all operators to access or modify data contained in the ArrayBuffer instance will fail.
ArrayBuffer instances whose [[ArrayBufferDetachKey]] is set to a value other than *undefined* need to have all DetachArrayBuffer calls passing that same "detach key" as an argument, otherwise a TypeError will result. This internal slot is only ever set by certain embedding environments, not by algorithms in this specification.
+
+
+ Resizable ArrayBuffer Guidelines
+
+ The following are guidelines for ECMAScript programmers working with resizable ArrayBuffer.
+ We recommend that programs be tested in their deployment environments where possible. The amount of available physical memory differs greatly between hardware devices. Similarly, virtual memory subsystems also differ greatly between hardware devices as well as operating systems. An application that runs without out-of-memory errors on a 64-bit desktop web browser could run out of memory on a 32-bit mobile web browser.
+ When choosing a value for the *"maxByteLength"* option for resizable ArrayBuffer, we recommend that the smallest possible size for the application be chosen. We recommend that *"maxByteLength"* does not exceed 1,073,741,824 (2<sup>30</sup> bytes or 1GiB).
+ Please note that successfully constructing a resizable ArrayBuffer for a particular maximum size does not guarantee that future resizes will succeed.
+
+
+
+ The following are guidelines for ECMAScript implementers implementing resizable ArrayBuffer.
+ Resizable ArrayBuffer can be implemented as copying upon resize, as in-place growth via reserving virtual memory up front, or as a combination of both for different values of the constructor's *"maxByteLength"* option.
+ If a host is multi-tenanted (i.e. it runs many ECMAScript applications simultaneously), such as a web browser, and its implementations choose to implement in-place growth by reserving virtual memory, we recommend that both 32-bit and 64-bit implementations throw for values of *"maxByteLength"* ≥ 1GiB to 1.5GiB. This is to reduce the likelihood a single application can exhaust the virtual memory address space and to reduce interoperability risk.
+ If a host does not have virtual memory, such as those running on embedded devices without an MMU, or if a host only implements resizing by copying, it may accept any Number value for the *"maxByteLength"* option. However, we recommend a *RangeError* be thrown if a memory block of the requested size can never be allocated. For example, if the requested size is greater than the maximum amount of usable memory on the device.
+
+
SharedArrayBuffer Objects
+
+ Fixed-length and Growable SharedArrayBuffer Objects
+ A fixed-length SharedArrayBuffer is a SharedArrayBuffer whose byte length cannot change after creation.
+ A growable SharedArrayBuffer is a SharedArrayBuffer whose byte length may increase after creation via calls to .
+ The kind of SharedArrayBuffer object that is created depends on the arguments passed to .
+
+
Abstract Operations for SharedArrayBuffer Objects
@@ -42137,6 +43994,7 @@
AllocateSharedArrayBuffer (
_constructor_: a constructor,
_byteLength_: a non-negative integer,
+ optional _maxByteLength_: a non-negative integer or ~empty~,
): either a normal completion containing a SharedArrayBuffer or a throw completion
- 1. Let _obj_ be ? OrdinaryCreateFromConstructor(_constructor_, *"%SharedArrayBuffer.prototype%"*, « [[ArrayBufferData]], [[ArrayBufferByteLength]] »).
- 1. Let _block_ be ? CreateSharedByteDataBlock(_byteLength_).
+ 1. Let _slots_ be « [[ArrayBufferData]] ».
+ 1. If _maxByteLength_ is present and _maxByteLength_ is not ~empty~, let _allocatingGrowableBuffer_ be *true*; otherwise let _allocatingGrowableBuffer_ be *false*.
+ 1. If _allocatingGrowableBuffer_ is *true*, then
+ 1. If _byteLength_ > _maxByteLength_, throw a *RangeError* exception.
+ 1. Append [[ArrayBufferByteLengthData]] and [[ArrayBufferMaxByteLength]] to _slots_.
+ 1. Else,
+ 1. Append [[ArrayBufferByteLength]] to _slots_.
+ 1. Let _obj_ be ? OrdinaryCreateFromConstructor(_constructor_, *"%SharedArrayBuffer.prototype%"*, _slots_).
+ 1. If _allocatingGrowableBuffer_ is *true*, let _allocLength_ be _maxByteLength_; otherwise let _allocLength_ be _byteLength_.
+ 1. Let _block_ be ? CreateSharedByteDataBlock(_allocLength_).
1. Set _obj_.[[ArrayBufferData]] to _block_.
- 1. Set _obj_.[[ArrayBufferByteLength]] to _byteLength_.
+ 1. If _allocatingGrowableBuffer_ is *true*, then
+ 1. Assert: _byteLength_ ≤ _maxByteLength_.
+ 1. Let _byteLengthBlock_ be ? CreateSharedByteDataBlock(8).
+ 1. Perform SetValueInBuffer(_byteLengthBlock_, 0, ~biguint64~, ℤ(_byteLength_), *true*, ~seq-cst~).
+ 1. Set _obj_.[[ArrayBufferByteLengthData]] to _byteLengthBlock_.
+ 1. Set _obj_.[[ArrayBufferMaxByteLength]] to _maxByteLength_.
+ 1. Else,
+ 1. Set _obj_.[[ArrayBufferByteLength]] to _byteLength_.
1. Return _obj_.
@@ -42170,6 +44043,31 @@
1. Return *true*.
+
+
+
+ HostGrowSharedArrayBuffer (
+ _buffer_: a SharedArrayBuffer,
+ _newByteLength_: a non-negative integer,
+ ): either a normal completion containing either ~handled~ or ~unhandled~, or a throw completion
+
+
+ The implementation of HostGrowSharedArrayBuffer must conform to the following requirements:
+
+ - If the abstract operation does not complete normally with ~unhandled~, and _newByteLength_ < the current byte length of the _buffer_ or _newByteLength_ > _buffer_.[[ArrayBufferMaxByteLength]], throw a *RangeError* exception.
+ - Let _isLittleEndian_ be the value of the [[LittleEndian]] field of the surrounding agent's Agent Record. If the abstract operation completes normally with ~handled~, a WriteSharedMemory or ReadModifyWriteSharedMemory event whose [[Order]] is ~seq-cst~, [[Payload]] is NumericToRawBytes(~biguint64~, _newByteLength_, _isLittleEndian_), [[Block]] is _buffer_.[[ArrayBufferByteLengthData]], [[ByteIndex]] is 0, and [[ElementSize]] is 8 is added to the surrounding agent's candidate execution such that racing calls to `SharedArrayBuffer.prototype.grow` are not "lost", i.e. silently do nothing.
+
+
+
+ The second requirement above is intentionally vague about how or when the current byte length of _buffer_ is read. Because the byte length must be updated via an atomic read-modify-write operation on the underlying hardware, architectures that use load-link/store-conditional or load-exclusive/store-exclusive instruction pairs may wish to keep the paired instructions close in the instruction stream. As such, SharedArrayBuffer.prototype.grow itself does not perform bounds checking on _newByteLength_ before calling HostGrowSharedArrayBuffer, nor is there a requirement on when the current byte length is read.
+ This is in contrast with HostResizeArrayBuffer, which is guaranteed that the value of _newByteLength_ is ≥ 0 and ≤ _buffer_.[[ArrayBufferMaxByteLength]].
+
+
+ The default implementation of HostGrowSharedArrayBuffer is to return NormalCompletion(~unhandled~).
+
@@ -42190,12 +44088,13 @@ The SharedArrayBuffer Constructor
- SharedArrayBuffer ( _length_ )
+ SharedArrayBuffer ( _length_ [ , _options_ ] )
This function performs the following steps when called:
1. If NewTarget is *undefined*, throw a *TypeError* exception.
1. Let _byteLength_ be ? ToIndex(_length_).
- 1. Return ? AllocateSharedArrayBuffer(NewTarget, _byteLength_).
+ 1. Let _requestedMaxByteLength_ be ? GetArrayBufferMaxByteLengthOption(_options_).
+ 1. Return ? AllocateSharedArrayBuffer(NewTarget, _byteLength_, _requestedMaxByteLength_).
@@ -42241,7 +44140,7 @@ get SharedArrayBuffer.prototype.byteLength
1. Let _O_ be the *this* value.
1. Perform ? RequireInternalSlot(_O_, [[ArrayBufferData]]).
1. If IsSharedArrayBuffer(_O_) is *false*, throw a *TypeError* exception.
- 1. Let _length_ be _O_.[[ArrayBufferByteLength]].
+ 1. Let _length_ be ArrayBufferByteLength(_O_, ~seq-cst~).
1. Return 𝔽(_length_).
@@ -42251,6 +44150,64 @@ SharedArrayBuffer.prototype.constructor
The initial value of `SharedArrayBuffer.prototype.constructor` is %SharedArrayBuffer%.
+
+ SharedArrayBuffer.prototype.grow ( _newLength_ )
+ This method performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Perform ? RequireInternalSlot(_O_, [[ArrayBufferMaxByteLength]]).
+ 1. If IsSharedArrayBuffer(_O_) is *false*, throw a *TypeError* exception.
+ 1. Let _newByteLength_ be ? ToIndex(_newLength_).
+ 1. Let _hostHandled_ be ? HostGrowSharedArrayBuffer(_O_, _newByteLength_).
+ 1. If _hostHandled_ is ~handled~, return *undefined*.
+ 1. Let _isLittleEndian_ be the value of the [[LittleEndian]] field of the surrounding agent's Agent Record.
+ 1. Let _byteLengthBlock_ be _O_.[[ArrayBufferByteLengthData]].
+ 1. Let _currentByteLengthRawBytes_ be GetRawBytesFromSharedBlock(_byteLengthBlock_, 0, ~biguint64~, *true*, ~seq-cst~).
+ 1. Let _newByteLengthRawBytes_ be NumericToRawBytes(~biguint64~, ℤ(_newByteLength_), _isLittleEndian_).
+ 1. Repeat,
+ 1. NOTE: This is a compare-and-exchange loop to ensure that parallel, racing grows of the same buffer are totally ordered, are not lost, and do not silently do nothing. The loop exits if it was able to attempt to grow uncontended.
+ 1. Let _currentByteLength_ be ℝ(RawBytesToNumeric(~biguint64~, _currentByteLengthRawBytes_, _isLittleEndian_)).
+ 1. If _newByteLength_ = _currentByteLength_, return *undefined*.
+ 1. If _newByteLength_ < _currentByteLength_ or _newByteLength_ > _O_.[[ArrayBufferMaxByteLength]], throw a *RangeError* exception.
+ 1. Let _byteLengthDelta_ be _newByteLength_ - _currentByteLength_.
+ 1. If it is impossible to create a new Shared Data Block value consisting of _byteLengthDelta_ bytes, throw a *RangeError* exception.
+ 1. NOTE: No new Shared Data Block is constructed and used here. The observable behaviour of growable SharedArrayBuffers is specified by allocating a max-sized Shared Data Block at construction time, and this step captures the requirement that implementations that run out of memory must throw a *RangeError*.
+ 1. Let _readByteLengthRawBytes_ be AtomicCompareExchangeInSharedBlock(_byteLengthBlock_, 0, 8, _currentByteLengthRawBytes_, _newByteLengthRawBytes_).
+ 1. If ByteListEqual(_readByteLengthRawBytes_, _currentByteLengthRawBytes_) is *true*, return *undefined*.
+ 1. Set _currentByteLengthRawBytes_ to _readByteLengthRawBytes_.
+
+
+ Spurious failures of the compare-exchange to update the length are prohibited. If the bounds checking for the new length passes and the implementation is not out of memory, a ReadModifyWriteSharedMemory event (i.e. a successful compare-exchange) is always added into the candidate execution.
+ Parallel calls to SharedArrayBuffer.prototype.grow are totally ordered. For example, consider two racing calls: `sab.grow(10)` and `sab.grow(20)`. One of the two calls is guaranteed to win the race. The call to `sab.grow(10)` will never shrink `sab` even if `sab.grow(20)` happened first; in that case it will instead throw a RangeError.
+
+
+
+
+ get SharedArrayBuffer.prototype.growable
+ `SharedArrayBuffer.prototype.growable` is an accessor property whose set accessor function is *undefined*. Its get accessor function performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Perform ? RequireInternalSlot(_O_, [[ArrayBufferData]]).
+ 1. If IsSharedArrayBuffer(_O_) is *false*, throw a *TypeError* exception.
+ 1. If IsFixedLengthArrayBuffer(_O_) is *false*, return *true*; otherwise return *false*.
+
+
+
+
+ get SharedArrayBuffer.prototype.maxByteLength
+ `SharedArrayBuffer.prototype.maxByteLength` is an accessor property whose set accessor function is *undefined*. Its get accessor function performs the following steps when called:
+
+ 1. Let _O_ be the *this* value.
+ 1. Perform ? RequireInternalSlot(_O_, [[ArrayBufferData]]).
+ 1. If IsSharedArrayBuffer(_O_) is *false*, throw a *TypeError* exception.
+ 1. If IsFixedLengthArrayBuffer(_O_) is *true*, then
+ 1. Let _length_ be _O_.[[ArrayBufferByteLength]].
+ 1. Else,
+ 1. Let _length_ be _O_.[[ArrayBufferMaxByteLength]].
+ 1. Return 𝔽(_length_).
+
+
+
SharedArrayBuffer.prototype.slice ( _start_, _end_ )
This method performs the following steps when called:
@@ -42258,13 +44215,13 @@ SharedArrayBuffer.prototype.slice ( _start_, _end_ )
1. Let _O_ be the *this* value.
1. Perform ? RequireInternalSlot(_O_, [[ArrayBufferData]]).
1. If IsSharedArrayBuffer(_O_) is *false*, throw a *TypeError* exception.
- 1. Let _len_ be _O_.[[ArrayBufferByteLength]].
+ 1. Let _len_ be ArrayBufferByteLength(_O_, ~seq-cst~).
1. Let _relativeStart_ be ? ToIntegerOrInfinity(_start_).
- 1. If _relativeStart_ is -∞, let _first_ be 0.
+ 1. If _relativeStart_ = -∞, let _first_ be 0.
1. Else if _relativeStart_ < 0, let _first_ be max(_len_ + _relativeStart_, 0).
1. Else, let _first_ be min(_relativeStart_, _len_).
1. If _end_ is *undefined*, let _relativeEnd_ be _len_; else let _relativeEnd_ be ? ToIntegerOrInfinity(_end_).
- 1. If _relativeEnd_ is -∞, let _final_ be 0.
+ 1. If _relativeEnd_ = -∞, let _final_ be 0.
1. Else if _relativeEnd_ < 0, let _final_ be max(_len_ + _relativeEnd_, 0).
1. Else, let _final_ be min(_relativeEnd_, _len_).
1. Let _newLen_ be max(_final_ - _first_, 0).
@@ -42272,8 +44229,8 @@ SharedArrayBuffer.prototype.slice ( _start_, _end_ )
1. Let _new_ be ? Construct(_ctor_, « 𝔽(_newLen_) »).
1. Perform ? RequireInternalSlot(_new_, [[ArrayBufferData]]).
1. If IsSharedArrayBuffer(_new_) is *false*, throw a *TypeError* exception.
- 1. If _new_.[[ArrayBufferData]] and _O_.[[ArrayBufferData]] are the same Shared Data Block values, throw a *TypeError* exception.
- 1. If _new_.[[ArrayBufferByteLength]] < _newLen_, throw a *TypeError* exception.
+ 1. If _new_.[[ArrayBufferData]] is _O_.[[ArrayBufferData]], throw a *TypeError* exception.
+ 1. If ArrayBufferByteLength(_new_, ~seq-cst~) < _newLen_, throw a *TypeError* exception.
1. Let _fromBuf_ be _O_.[[ArrayBufferData]].
1. Let _toBuf_ be _new_.[[ArrayBufferData]].
1. Perform CopyDataBlockBytes(_toBuf_, 0, _fromBuf_, _first_, _newLen_).
@@ -42281,7 +44238,7 @@ SharedArrayBuffer.prototype.slice ( _start_, _end_ )
-
+
SharedArrayBuffer.prototype [ @@toStringTag ]
The initial value of the @@toStringTag property is the String value *"SharedArrayBuffer"*.
This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
@@ -42290,12 +44247,32 @@ SharedArrayBuffer.prototype [ @@toStringTag ]
Properties of SharedArrayBuffer Instances
- SharedArrayBuffer instances inherit properties from the SharedArrayBuffer prototype object. SharedArrayBuffer instances each have an [[ArrayBufferData]] internal slot and an [[ArrayBufferByteLength]] internal slot.
+ SharedArrayBuffer instances inherit properties from the SharedArrayBuffer prototype object. SharedArrayBuffer instances each have an [[ArrayBufferData]] internal slot. SharedArrayBuffer instances which are not growable each have an [[ArrayBufferByteLength]] internal slot. SharedArrayBuffer instances which are growable each have an [[ArrayBufferByteLengthData]] internal slot and an [[ArrayBufferMaxByteLength]] internal slot.
SharedArrayBuffer instances, unlike ArrayBuffer instances, are never detached.
+
+
+ Growable SharedArrayBuffer Guidelines
+
+ The following are guidelines for ECMAScript programmers working with growable SharedArrayBuffer.
+ We recommend that programs be tested in their deployment environments where possible. The amount of available physical memory differs greatly between hardware devices. Similarly, virtual memory subsystems also differ greatly between hardware devices as well as operating systems. An application that runs without out-of-memory errors on a 64-bit desktop web browser could run out of memory on a 32-bit mobile web browser.
+ When choosing a value for the *"maxByteLength"* option for growable SharedArrayBuffer, we recommend that the smallest possible size for the application be chosen. We recommend that *"maxByteLength"* does not exceed 1,073,741,824 (2<sup>30</sup> bytes or 1GiB).
+ Please note that successfully constructing a growable SharedArrayBuffer for a particular maximum size does not guarantee that future grows will succeed.
+ Not all loads of a growable SharedArrayBuffer's length are synchronizing ~seq-cst~ loads. Loads of the length that are for bounds-checking of an integer-indexed property access, e.g. `u8[idx]`, are not synchronizing. In general, in the absence of explicit synchronization, one property access being in-bound does not imply a subsequent property access in the same agent is also in-bound. In contrast, explicit loads of the length via the `length` and `byteLength` getters on SharedArrayBuffer, %TypedArray%.prototype, and DataView.prototype are synchronizing. Loads of the length that are performed by built-in methods to check if a TypedArray is entirely out-of-bounds are also synchronizing.
+
+
+
+ The following are guidelines for ECMAScript implementers implementing growable SharedArrayBuffer.
+ We recommend growable SharedArrayBuffer be implemented as in-place growth via reserving virtual memory up front.
+ Because grow operations can happen in parallel with memory accesses on a growable SharedArrayBuffer, the constraints of the memory model require that even unordered accesses do not "tear" (bits of their values will not be mixed). In practice, this means the underlying data block of a growable SharedArrayBuffer cannot be grown by being copied without stopping the world. We do not recommend stopping the world as an implementation strategy because it introduces a serialization point and is slow.
+ Grown memory must appear zeroed from the moment of its creation, including to any racy accesses in parallel. This can be accomplished via zero-filled-on-demand virtual memory pages, or careful synchronization if manually zeroing memory.
+ Integer-indexed property access on TypedArray views of growable SharedArrayBuffers is intended to be optimizable similarly to access on TypedArray views of non-growable SharedArrayBuffers, because integer-indexed property loads are not synchronizing on the underlying buffer's length (see programmer guidelines above). For example, bounds checks for property accesses may still be hoisted out of loops.
+ In practice it is difficult to implement growable SharedArrayBuffer by copying on hosts that do not have virtual memory, such as those running on embedded devices without an MMU. Memory usage behaviour of growable SharedArrayBuffers on such hosts may significantly differ from that of hosts with virtual memory. Such hosts should clearly communicate memory usage expectations to users.
+
+
@@ -42304,6 +44281,112 @@ DataView Objects
Abstract Operations For DataView Objects
+
+ DataView With Buffer Witness Records
+ A DataView With Buffer Witness Record is a Record value used to encapsulate a DataView along with a cached byte length of the viewed buffer. It is used to help ensure there is a single shared memory read event of the byte length data block when the viewed buffer is a growable SharedArrayBuffer.
+ DataView With Buffer Witness Records have the fields listed in .
+
+
+
+
+ Field Name
+ |
+
+ Value
+ |
+
+ Meaning
+ |
+
+
+
+ [[Object]]
+ |
+
+ a DataView
+ |
+
+ The DataView object whose buffer's byte length is loaded.
+ |
+
+
+
+ [[CachedBufferByteLength]]
+ |
+
+ a non-negative integer or ~detached~
+ |
+
+ The byte length of the object's [[ViewedArrayBuffer]] when the Record was created.
+ |
+
+
+
+
+
+
+
+ MakeDataViewWithBufferWitnessRecord (
+ _obj_: a DataView,
+ _order_: ~seq-cst~ or ~unordered~,
+ ): a DataView With Buffer Witness Record
+
+
+
+ 1. Let _buffer_ be _obj_.[[ViewedArrayBuffer]].
+ 1. If IsDetachedBuffer(_buffer_) is *true*, then
+ 1. Let _byteLength_ be ~detached~.
+ 1. Else,
+ 1. Let _byteLength_ be ArrayBufferByteLength(_buffer_, _order_).
+ 1. Return the DataView With Buffer Witness Record { [[Object]]: _obj_, [[CachedBufferByteLength]]: _byteLength_ }.
+
+
+
+
+
+ GetViewByteLength (
+ _viewRecord_: a DataView With Buffer Witness Record,
+ ): a non-negative integer
+
+
+
+ 1. Assert: IsViewOutOfBounds(_viewRecord_) is *false*.
+ 1. Let _view_ be _viewRecord_.[[Object]].
+ 1. If _view_.[[ByteLength]] is not ~auto~, return _view_.[[ByteLength]].
+ 1. Assert: IsFixedLengthArrayBuffer(_view_.[[ViewedArrayBuffer]]) is *false*.
+ 1. Let _byteOffset_ be _view_.[[ByteOffset]].
+ 1. Let _byteLength_ be _viewRecord_.[[CachedBufferByteLength]].
+ 1. Assert: _byteLength_ is not ~detached~.
+ 1. Return _byteLength_ - _byteOffset_.
+
+
+
+
+
+ IsViewOutOfBounds (
+ _viewRecord_: a DataView With Buffer Witness Record,
+ ): a Boolean
+
+
+
+ 1. Let _view_ be _viewRecord_.[[Object]].
+ 1. Let _bufferByteLength_ be _viewRecord_.[[CachedBufferByteLength]].
+ 1. Assert: IsDetachedBuffer(_view_.[[ViewedArrayBuffer]]) is *true* if and only if _bufferByteLength_ is ~detached~.
+ 1. If _bufferByteLength_ is ~detached~, return *true*.
+ 1. Let _byteOffsetStart_ be _view_.[[ByteOffset]].
+ 1. If _view_.[[ByteLength]] is ~auto~, then
+ 1. Let _byteOffsetEnd_ be _bufferByteLength_.
+ 1. Else,
+ 1. Let _byteOffsetEnd_ be _byteOffsetStart_ + _view_.[[ByteLength]].
+ 1. If _byteOffsetStart_ > _bufferByteLength_ or _byteOffsetEnd_ > _bufferByteLength_, return *true*.
+ 1. NOTE: 0-length DataViews are not considered out-of-bounds.
+ 1. Return *false*.
+
+
+
GetViewValue (
@@ -42322,14 +44405,15 @@
1. Assert: _view_ has a [[ViewedArrayBuffer]] internal slot.
1. Let _getIndex_ be ? ToIndex(_requestIndex_).
1. Set _isLittleEndian_ to ToBoolean(_isLittleEndian_).
- 1. Let _buffer_ be _view_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
1. Let _viewOffset_ be _view_.[[ByteOffset]].
- 1. Let _viewSize_ be _view_.[[ByteLength]].
+ 1. Let _viewRecord_ be MakeDataViewWithBufferWitnessRecord(_view_, ~unordered~).
+ 1. NOTE: Bounds checking is not a synchronizing operation when _view_'s backing buffer is a growable SharedArrayBuffer.
+ 1. If IsViewOutOfBounds(_viewRecord_) is *true*, throw a *TypeError* exception.
+ 1. Let _viewSize_ be GetViewByteLength(_viewRecord_).
1. Let _elementSize_ be the Element Size value specified in for Element Type _type_.
1. If _getIndex_ + _elementSize_ > _viewSize_, throw a *RangeError* exception.
1. Let _bufferIndex_ be _getIndex_ + _viewOffset_.
- 1. Return GetValueFromBuffer(_buffer_, _bufferIndex_, _type_, *false*, ~Unordered~, _isLittleEndian_).
+ 1. Return GetValueFromBuffer(_view_.[[ViewedArrayBuffer]], _bufferIndex_, _type_, *false*, ~unordered~, _isLittleEndian_).
@@ -42354,14 +44438,15 @@
1. If IsBigIntElementType(_type_) is *true*, let _numberValue_ be ? ToBigInt(_value_).
1. Otherwise, let _numberValue_ be ? ToNumber(_value_).
1. Set _isLittleEndian_ to ToBoolean(_isLittleEndian_).
- 1. Let _buffer_ be _view_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
1. Let _viewOffset_ be _view_.[[ByteOffset]].
- 1. Let _viewSize_ be _view_.[[ByteLength]].
+ 1. Let _viewRecord_ be MakeDataViewWithBufferWitnessRecord(_view_, ~unordered~).
+ 1. NOTE: Bounds checking is not a synchronizing operation when _view_'s backing buffer is a growable SharedArrayBuffer.
+ 1. If IsViewOutOfBounds(_viewRecord_) is *true*, throw a *TypeError* exception.
+ 1. Let _viewSize_ be GetViewByteLength(_viewRecord_).
1. Let _elementSize_ be the Element Size value specified in for Element Type _type_.
1. If _getIndex_ + _elementSize_ > _viewSize_, throw a *RangeError* exception.
1. Let _bufferIndex_ be _getIndex_ + _viewOffset_.
- 1. Perform SetValueInBuffer(_buffer_, _bufferIndex_, _type_, _numberValue_, *false*, ~Unordered~, _isLittleEndian_).
+ 1. Perform SetValueInBuffer(_view_.[[ViewedArrayBuffer]], _bufferIndex_, _type_, _numberValue_, *false*, ~unordered~, _isLittleEndian_).
1. Return *undefined*.
@@ -42386,15 +44471,23 @@ DataView ( _buffer_ [ , _byteOffset_ [ , _byteLength_ ] ] )
1. Perform ? RequireInternalSlot(_buffer_, [[ArrayBufferData]]).
1. Let _offset_ be ? ToIndex(_byteOffset_).
1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
- 1. Let _bufferByteLength_ be _buffer_.[[ArrayBufferByteLength]].
+ 1. Let _bufferByteLength_ be ArrayBufferByteLength(_buffer_, ~seq-cst~).
1. If _offset_ > _bufferByteLength_, throw a *RangeError* exception.
+ 1. Let _bufferIsFixedLength_ be IsFixedLengthArrayBuffer(_buffer_).
1. If _byteLength_ is *undefined*, then
- 1. Let _viewByteLength_ be _bufferByteLength_ - _offset_.
+ 1. If _bufferIsFixedLength_ is *true*, then
+ 1. Let _viewByteLength_ be _bufferByteLength_ - _offset_.
+ 1. Else,
+ 1. Let _viewByteLength_ be ~auto~.
1. Else,
1. Let _viewByteLength_ be ? ToIndex(_byteLength_).
1. If _offset_ + _viewByteLength_ > _bufferByteLength_, throw a *RangeError* exception.
1. Let _O_ be ? OrdinaryCreateFromConstructor(NewTarget, *"%DataView.prototype%"*, « [[DataView]], [[ViewedArrayBuffer]], [[ByteLength]], [[ByteOffset]] »).
1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
+ 1. Set _bufferByteLength_ to ArrayBufferByteLength(_buffer_, ~seq-cst~).
+ 1. If _offset_ > _bufferByteLength_, throw a *RangeError* exception.
+ 1. If _byteLength_ is not *undefined*, then
+ 1. If _offset_ + _viewByteLength_ > _bufferByteLength_, throw a *RangeError* exception.
1. Set _O_.[[ViewedArrayBuffer]] to _buffer_.
1. Set _O_.[[ByteLength]] to _viewByteLength_.
1. Set _O_.[[ByteOffset]] to _offset_.
@@ -42447,9 +44540,9 @@ get DataView.prototype.byteLength
1. Let _O_ be the *this* value.
1. Perform ? RequireInternalSlot(_O_, [[DataView]]).
1. Assert: _O_ has a [[ViewedArrayBuffer]] internal slot.
- 1. Let _buffer_ be _O_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
- 1. Let _size_ be _O_.[[ByteLength]].
+ 1. Let _viewRecord_ be MakeDataViewWithBufferWitnessRecord(_O_, ~seq-cst~).
+ 1. If IsViewOutOfBounds(_viewRecord_) is *true*, throw a *TypeError* exception.
+ 1. Let _size_ be GetViewByteLength(_viewRecord_).
1. Return 𝔽(_size_).
@@ -42461,8 +44554,8 @@ get DataView.prototype.byteOffset
1. Let _O_ be the *this* value.
1. Perform ? RequireInternalSlot(_O_, [[DataView]]).
1. Assert: _O_ has a [[ViewedArrayBuffer]] internal slot.
- 1. Let _buffer_ be _O_.[[ViewedArrayBuffer]].
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
+ 1. Let _viewRecord_ be MakeDataViewWithBufferWitnessRecord(_O_, ~seq-cst~).
+ 1. If IsViewOutOfBounds(_viewRecord_) is *true*, throw a *TypeError* exception.
1. Let _offset_ be _O_.[[ByteOffset]].
1. Return 𝔽(_offset_).
@@ -42478,7 +44571,7 @@ DataView.prototype.getBigInt64 ( _byteOffset_ [ , _littleEndian_ ] )
This method performs the following steps when called:
1. Let _v_ be the *this* value.
- 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~BigInt64~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~bigint64~).
@@ -42487,7 +44580,7 @@ DataView.prototype.getBigUint64 ( _byteOffset_ [ , _littleEndian_ ] )
This method performs the following steps when called:
1. Let _v_ be the *this* value.
- 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~BigUint64~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~biguint64~).
@@ -42497,7 +44590,7 @@ DataView.prototype.getFloat32 ( _byteOffset_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~Float32~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~float32~).
@@ -42507,7 +44600,7 @@ DataView.prototype.getFloat64 ( _byteOffset_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~Float64~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~float64~).
@@ -42516,7 +44609,7 @@ DataView.prototype.getInt8 ( _byteOffset_ )
This method performs the following steps when called:
1. Let _v_ be the *this* value.
- 1. Return ? GetViewValue(_v_, _byteOffset_, *true*, ~Int8~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, *true*, ~int8~).
@@ -42526,7 +44619,7 @@ DataView.prototype.getInt16 ( _byteOffset_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~Int16~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~int16~).
@@ -42536,7 +44629,7 @@ DataView.prototype.getInt32 ( _byteOffset_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~Int32~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~int32~).
@@ -42545,7 +44638,7 @@ DataView.prototype.getUint8 ( _byteOffset_ )
This method performs the following steps when called:
1. Let _v_ be the *this* value.
- 1. Return ? GetViewValue(_v_, _byteOffset_, *true*, ~Uint8~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, *true*, ~uint8~).
@@ -42555,7 +44648,7 @@ DataView.prototype.getUint16 ( _byteOffset_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~Uint16~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~uint16~).
@@ -42565,7 +44658,7 @@ DataView.prototype.getUint32 ( _byteOffset_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~Uint32~).
+ 1. Return ? GetViewValue(_v_, _byteOffset_, _littleEndian_, ~uint32~).
@@ -42574,7 +44667,7 @@ DataView.prototype.setBigInt64 ( _byteOffset_, _value_ [ , _littleEndian_ ]
This method performs the following steps when called:
1. Let _v_ be the *this* value.
- 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~BigInt64~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~bigint64~, _value_).
@@ -42583,7 +44676,7 @@ DataView.prototype.setBigUint64 ( _byteOffset_, _value_ [ , _littleEndian_ ]
This method performs the following steps when called:
1. Let _v_ be the *this* value.
- 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~BigUint64~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~biguint64~, _value_).
@@ -42593,7 +44686,7 @@ DataView.prototype.setFloat32 ( _byteOffset_, _value_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~Float32~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~float32~, _value_).
@@ -42603,7 +44696,7 @@ DataView.prototype.setFloat64 ( _byteOffset_, _value_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~Float64~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~float64~, _value_).
@@ -42612,7 +44705,7 @@ DataView.prototype.setInt8 ( _byteOffset_, _value_ )
This method performs the following steps when called:
1. Let _v_ be the *this* value.
- 1. Return ? SetViewValue(_v_, _byteOffset_, *true*, ~Int8~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, *true*, ~int8~, _value_).
@@ -42622,7 +44715,7 @@ DataView.prototype.setInt16 ( _byteOffset_, _value_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~Int16~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~int16~, _value_).
@@ -42632,7 +44725,7 @@ DataView.prototype.setInt32 ( _byteOffset_, _value_ [ , _littleEndian_ ] )
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~Int32~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~int32~, _value_).
@@ -42641,7 +44734,7 @@ DataView.prototype.setUint8 ( _byteOffset_, _value_ )
This method performs the following steps when called:
1. Let _v_ be the *this* value.
- 1. Return ? SetViewValue(_v_, _byteOffset_, *true*, ~Uint8~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, *true*, ~uint8~, _value_).
@@ -42651,7 +44744,7 @@ DataView.prototype.setUint16 ( _byteOffset_, _value_ [ , _littleEndian_ ] )<
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~Uint16~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~uint16~, _value_).
@@ -42661,7 +44754,7 @@ DataView.prototype.setUint32 ( _byteOffset_, _value_ [ , _littleEndian_ ] )<
1. Let _v_ be the *this* value.
1. If _littleEndian_ is not present, set _littleEndian_ to *false*.
- 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~Uint32~, _value_).
+ 1. Return ? SetViewValue(_v_, _byteOffset_, _littleEndian_, ~uint32~, _value_).
@@ -42697,12 +44790,115 @@ The Atomics Object
For informative guidelines for programming and implementing shared memory in ECMAScript, please see the notes at the end of the memory model section.
-
- WaiterList Objects
- A WaiterList is a semantic object that contains an ordered list of agent signifiers for those agents that are waiting on a location (_block_, _i_) in shared memory; _block_ is a Shared Data Block and _i_ a byte offset into the memory of _block_. A WaiterList object also optionally contains a Synchronize event denoting the previous leaving of its critical section.
- Initially a WaiterList object has an empty list and no Synchronize event.
- The agent cluster has a store of WaiterList objects; the store is indexed by (_block_, _i_). WaiterLists are agent-independent: a lookup in the store of WaiterLists by (_block_, _i_) will result in the same WaiterList object in any agent in the agent cluster.
- Each WaiterList has a critical section that controls exclusive access to that WaiterList during evaluation. Only a single agent may enter a WaiterList's critical section at one time. Entering and leaving a WaiterList's critical section is controlled by the abstract operations EnterCriticalSection and LeaveCriticalSection. Operations on a WaiterList—adding and removing waiting agents, traversing the list of agents, suspending and notifying agents on the list, setting and retrieving the Synchronize event—may only be performed by agents that have entered the WaiterList's critical section.
+
+ Waiter Record
+ A Waiter Record is a Record value used to denote a particular call to `Atomics.wait` or `Atomics.waitAsync`.
+ A Waiter Record has fields listed in .
+
+
+
+
+ Field Name
+ |
+
+ Value
+ |
+
+ Meaning
+ |
+
+
+
+ [[AgentSignifier]]
+ |
+
+ an agent signifier
+ |
+
+ The agent that called `Atomics.wait` or `Atomics.waitAsync`.
+ |
+
+
+
+ [[PromiseCapability]]
+ |
+
+ a PromiseCapability Record or ~blocking~
+ |
+
+ If denoting a call to `Atomics.waitAsync`, the resulting promise, otherwise ~blocking~.
+ |
+
+
+
+ [[TimeoutTime]]
+ |
+
+ a non-negative extended mathematical value
+ |
+
+ The earliest time by which timeout may be triggered; computed using time values.
+ |
+
+
+
+ [[Result]]
+ |
+
+ *"ok"* or *"timed-out"*
+ |
+
+ The return value of the call.
+ |
+
+
+
+
+
+
+ WaiterList Records
+ A WaiterList Record is used to explain waiting and notification of agents via `Atomics.wait`, `Atomics.waitAsync`, and `Atomics.notify`.
+ A WaiterList Record has fields listed in .
+
+
+
+
+ Field Name
+ |
+
+ Value
+ |
+
+ Meaning
+ |
+
+
+
+ [[Waiters]]
+ |
+
+ a List of Waiter Records
+ |
+
+ The calls to `Atomics.wait` or `Atomics.waitAsync` that are waiting on the location with which this WaiterList is associated.
+ |
+
+
+
+ [[MostRecentLeaveEvent]]
+ |
+
+ a Synchronize event or ~empty~
+ |
+
+ The event of the most recent leaving of its critical section, or ~empty~ if its critical section has never been entered.
+ |
+
+
+
+ There can be multiple Waiter Records in a WaiterList with the same agent signifier.
+ The agent cluster has a store of WaiterList Records; the store is indexed by (_block_, _i_), where _block_ is a Shared Data Block and _i_ a byte offset into the memory of _block_. WaiterList Records are agent-independent: a lookup in the store of WaiterList Records by (_block_, _i_) will result in the same WaiterList Record in any agent in the agent cluster.
+ Each WaiterList Record has a critical section that controls exclusive access to that WaiterList Record during evaluation. Only a single agent may enter a WaiterList Record's critical section at one time. Entering and leaving a WaiterList Record's critical section is controlled by the abstract operations EnterCriticalSection and LeaveCriticalSection. Operations on a WaiterList Record—adding and removing waiting agents, traversing the list of agents, suspending and notifying agents on the list, setting and retrieving the Synchronize event—may only be performed by agents that have entered the WaiterList Record's critical section.
@@ -42712,79 +44908,115 @@ Abstract Operations for Atomics
ValidateIntegerTypedArray (
_typedArray_: an ECMAScript language value,
- optional _waitable_: a Boolean,
- ): either a normal completion containing either an ArrayBuffer or a SharedArrayBuffer, or a throw completion
+ _waitable_: a Boolean,
+ ): either a normal completion containing a TypedArray With Buffer Witness Record, or a throw completion
- 1. If _waitable_ is not present, set _waitable_ to *false*.
- 1. Perform ? ValidateTypedArray(_typedArray_).
- 1. Let _buffer_ be _typedArray_.[[ViewedArrayBuffer]].
+ 1. Let _taRecord_ be ? ValidateTypedArray(_typedArray_, ~unordered~).
+ 1. NOTE: Bounds checking is not a synchronizing operation when _typedArray_'s backing buffer is a growable SharedArrayBuffer.
1. If _waitable_ is *true*, then
- 1. If _typedArray_.[[TypedArrayName]] is not *"Int32Array"* or *"BigInt64Array"*, throw a *TypeError* exception.
+ 1. If _typedArray_.[[TypedArrayName]] is neither *"Int32Array"* nor *"BigInt64Array"*, throw a *TypeError* exception.
1. Else,
1. Let _type_ be TypedArrayElementType(_typedArray_).
1. If IsUnclampedIntegerElementType(_type_) is *false* and IsBigIntElementType(_type_) is *false*, throw a *TypeError* exception.
- 1. Return _buffer_.
+ 1. Return _taRecord_.
ValidateAtomicAccess (
- _typedArray_: a TypedArray,
+ _taRecord_: a TypedArray With Buffer Witness Record,
_requestIndex_: an ECMAScript language value,
): either a normal completion containing an integer or a throw completion
- 1. Let _length_ be _typedArray_.[[ArrayLength]].
+ 1. Let _length_ be TypedArrayLength(_taRecord_).
1. Let _accessIndex_ be ? ToIndex(_requestIndex_).
1. Assert: _accessIndex_ ≥ 0.
1. If _accessIndex_ ≥ _length_, throw a *RangeError* exception.
+ 1. Let _typedArray_ be _taRecord_.[[Object]].
1. Let _elementSize_ be TypedArrayElementSize(_typedArray_).
1. Let _offset_ be _typedArray_.[[ByteOffset]].
1. Return (_accessIndex_ × _elementSize_) + _offset_.
+
+
+ ValidateAtomicAccessOnIntegerTypedArray (
+ _typedArray_: an ECMAScript language value,
+ _requestIndex_: an ECMAScript language value,
+ optional _waitable_: a Boolean,
+ ): either a normal completion containing an integer or a throw completion
+
+
+
+ 1. If _waitable_ is not present, set _waitable_ to *false*.
+ 1. Let _taRecord_ be ? ValidateIntegerTypedArray(_typedArray_, _waitable_).
+ 1. Return ? ValidateAtomicAccess(_taRecord_, _requestIndex_).
+
+
+
+
+
+ RevalidateAtomicAccess (
+ _typedArray_: a TypedArray,
+ _byteIndexInBuffer_: an integer,
+ ): either a normal completion containing ~unused~ or a throw completion
+
+
+
+ 1. Let _taRecord_ be MakeTypedArrayWithBufferWitnessRecord(_typedArray_, ~unordered~).
+ 1. NOTE: Bounds checking is not a synchronizing operation when _typedArray_'s backing buffer is a growable SharedArrayBuffer.
+ 1. If IsTypedArrayOutOfBounds(_taRecord_) is *true*, throw a *TypeError* exception.
+ 1. Assert: _byteIndexInBuffer_ ≥ _typedArray_.[[ByteOffset]].
+ 1. If _byteIndexInBuffer_ ≥ _taRecord_.[[CachedBufferByteLength]], throw a *RangeError* exception.
+ 1. Return ~unused~.
+
+
+
GetWaiterList (
_block_: a Shared Data Block,
_i_: a non-negative integer that is evenly divisible by 4,
- ): a WaiterList
+ ): a WaiterList Record
1. Assert: _i_ and _i_ + 3 are valid byte offsets within the memory of _block_.
- 1. Return the WaiterList that is referenced by the pair (_block_, _i_).
+ 1. Return the WaiterList Record that is referenced by the pair (_block_, _i_).
EnterCriticalSection (
- _WL_: a WaiterList,
+ _WL_: a WaiterList Record,
): ~unused~
- 1. Assert: The surrounding agent is not in the critical section for any WaiterList.
+ 1. Assert: The surrounding agent is not in the critical section for any WaiterList Record.
1. Wait until no agent is in the critical section for _WL_, then enter the critical section for _WL_ (without allowing any other agent to enter).
- 1. If _WL_ has a Synchronize event, then
+ 1. If _WL_.[[MostRecentLeaveEvent]] is not ~empty~, then
1. NOTE: A _WL_ whose critical section has been entered at least once has a Synchronize event set by LeaveCriticalSection.
1. Let _execution_ be the [[CandidateExecution]] field of the surrounding agent's Agent Record.
- 1. Let _eventsRecord_ be the Agent Events Record in _execution_.[[EventsRecords]] whose [[AgentSignifier]] is AgentSignifier().
- 1. Let _entererEventList_ be _eventsRecord_.[[EventList]].
+ 1. Let _eventsRecord_ be the Agent Events Record of _execution_.[[EventsRecords]] whose [[AgentSignifier]] is AgentSignifier().
1. Let _enterEvent_ be a new Synchronize event.
- 1. Append _enterEvent_ to _entererEventList_.
- 1. Let _leaveEvent_ be the Synchronize event in _WL_.
- 1. Append (_leaveEvent_, _enterEvent_) to _eventsRecord_.[[AgentSynchronizesWith]].
+ 1. Append _enterEvent_ to _eventsRecord_.[[EventList]].
+ 1. Append (_WL_.[[MostRecentLeaveEvent]], _enterEvent_) to _eventsRecord_.[[AgentSynchronizesWith]].
1. Return ~unused~.
EnterCriticalSection has contention when an agent attempting to enter the critical section must wait for another agent to leave it. When there is no contention, FIFO order of EnterCriticalSection calls is observable. When there is contention, an implementation may choose an arbitrary order but may not cause an agent to wait indefinitely.
@@ -42793,7 +45025,7 @@
LeaveCriticalSection (
- _WL_: a WaiterList,
+ _WL_: a WaiterList Record,
): ~unused~
- 1. Let _buffer_ be ? ValidateIntegerTypedArray(_typedArray_).
- 1. Let _indexedPosition_ be ? ValidateAtomicAccess(_typedArray_, _index_).
- 1. If _typedArray_.[[ContentType]] is ~BigInt~, let _v_ be ? ToBigInt(_value_).
+ 1. Let _byteIndexInBuffer_ be ? ValidateAtomicAccessOnIntegerTypedArray(_typedArray_, _index_).
+ 1. If _typedArray_.[[ContentType]] is ~bigint~, let _v_ be ? ToBigInt(_value_).
1. Otherwise, let _v_ be 𝔽(? ToIntegerOrInfinity(_value_)).
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
- 1. NOTE: The above check is not redundant with the check in ValidateIntegerTypedArray because the call to ToBigInt or ToIntegerOrInfinity on the preceding lines can have arbitrary side effects, which could cause the buffer to become detached.
+ 1. Perform ? RevalidateAtomicAccess(_typedArray_, _byteIndexInBuffer_).
+ 1. Let _buffer_ be _typedArray_.[[ViewedArrayBuffer]].
1. Let _elementType_ be TypedArrayElementType(_typedArray_).
- 1. Return GetModifySetValueInBuffer(_buffer_, _indexedPosition_, _elementType_, _v_, _op_).
+ 1. Return GetModifySetValueInBuffer(_buffer_, _byteIndexInBuffer_, _elementType_, _v_, _op_).
@@ -42955,9 +45334,13 @@
1. Let _i_ be 0.
1. For each element _xByte_ of _xBytes_, do
1. Let _yByte_ be _yBytes_[_i_].
- 1. If _op_ is `&`, let _resultByte_ be the result of applying the bitwise AND operation to _xByte_ and _yByte_.
- 1. Else if _op_ is `^`, let _resultByte_ be the result of applying the bitwise exclusive OR (XOR) operation to _xByte_ and _yByte_.
- 1. Else, _op_ is `|`. Let _resultByte_ be the result of applying the bitwise inclusive OR operation to _xByte_ and _yByte_.
+ 1. If _op_ is `&`, then
+ 1. Let _resultByte_ be the result of applying the bitwise AND operation to _xByte_ and _yByte_.
+ 1. Else if _op_ is `^`, then
+ 1. Let _resultByte_ be the result of applying the bitwise exclusive OR (XOR) operation to _xByte_ and _yByte_.
+ 1. Else,
+ 1. Assert: _op_ is `|`.
+ 1. Let _resultByte_ be the result of applying the bitwise inclusive OR operation to _xByte_ and _yByte_.
1. Set _i_ to _i_ + 1.
1. Append _resultByte_ to _result_.
1. Return _result_.
@@ -43020,40 +45403,27 @@ Atomics.and ( _typedArray_, _index_, _value_ )
Atomics.compareExchange ( _typedArray_, _index_, _expectedValue_, _replacementValue_ )
This function performs the following steps when called:
- 1. Let _buffer_ be ? ValidateIntegerTypedArray(_typedArray_).
+ 1. Let _byteIndexInBuffer_ be ? ValidateAtomicAccessOnIntegerTypedArray(_typedArray_, _index_).
+ 1. Let _buffer_ be _typedArray_.[[ViewedArrayBuffer]].
1. Let _block_ be _buffer_.[[ArrayBufferData]].
- 1. Let _indexedPosition_ be ? ValidateAtomicAccess(_typedArray_, _index_).
- 1. If _typedArray_.[[ContentType]] is ~BigInt~, then
+ 1. If _typedArray_.[[ContentType]] is ~bigint~, then
1. Let _expected_ be ? ToBigInt(_expectedValue_).
1. Let _replacement_ be ? ToBigInt(_replacementValue_).
1. Else,
1. Let _expected_ be 𝔽(? ToIntegerOrInfinity(_expectedValue_)).
1. Let _replacement_ be 𝔽(? ToIntegerOrInfinity(_replacementValue_)).
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
- 1. NOTE: The above check is not redundant with the check in ValidateIntegerTypedArray because the call to ToBigInt or ToIntegerOrInfinity on the preceding lines can have arbitrary side effects, which could cause the buffer to become detached.
+ 1. Perform ? RevalidateAtomicAccess(_typedArray_, _byteIndexInBuffer_).
1. Let _elementType_ be TypedArrayElementType(_typedArray_).
1. Let _elementSize_ be TypedArrayElementSize(_typedArray_).
1. Let _isLittleEndian_ be the value of the [[LittleEndian]] field of the surrounding agent's Agent Record.
1. Let _expectedBytes_ be NumericToRawBytes(_elementType_, _expected_, _isLittleEndian_).
1. Let _replacementBytes_ be NumericToRawBytes(_elementType_, _replacement_, _isLittleEndian_).
1. If IsSharedArrayBuffer(_buffer_) is *true*, then
- 1. Let _execution_ be the [[CandidateExecution]] field of the surrounding agent's Agent Record.
- 1. Let _eventList_ be the [[EventList]] field of the element of _execution_.[[EventsRecords]] whose [[AgentSignifier]] is AgentSignifier().
- 1. Let _rawBytesRead_ be a List of length _elementSize_ whose elements are nondeterministically chosen byte values.
- 1. NOTE: In implementations, _rawBytesRead_ is the result of a load-link, of a load-exclusive, or of an operand of a read-modify-write instruction on the underlying hardware. The nondeterminism is a semantic prescription of the memory model to describe observable behaviour of hardware with weak consistency.
- 1. NOTE: The comparison of the expected value and the read value is performed outside of the read-modify-write modification function to avoid needlessly strong synchronization when the expected value is not equal to the read value.
- 1. If ByteListEqual(_rawBytesRead_, _expectedBytes_) is *true*, then
- 1. Let _second_ be a new read-modify-write modification function with parameters (_oldBytes_, _newBytes_) that captures nothing and performs the following steps atomically when called:
- 1. Return _newBytes_.
- 1. Let _event_ be ReadModifyWriteSharedMemory { [[Order]]: ~SeqCst~, [[NoTear]]: *true*, [[Block]]: _block_, [[ByteIndex]]: _indexedPosition_, [[ElementSize]]: _elementSize_, [[Payload]]: _replacementBytes_, [[ModifyOp]]: _second_ }.
- 1. Else,
- 1. Let _event_ be ReadSharedMemory { [[Order]]: ~SeqCst~, [[NoTear]]: *true*, [[Block]]: _block_, [[ByteIndex]]: _indexedPosition_, [[ElementSize]]: _elementSize_ }.
- 1. Append _event_ to _eventList_.
- 1. Append Chosen Value Record { [[Event]]: _event_, [[ChosenValue]]: _rawBytesRead_ } to _execution_.[[ChosenValues]].
+ 1. Let _rawBytesRead_ be AtomicCompareExchangeInSharedBlock(_block_, _byteIndexInBuffer_, _elementSize_, _expectedBytes_, _replacementBytes_).
1. Else,
- 1. Let _rawBytesRead_ be a List of length _elementSize_ whose elements are the sequence of _elementSize_ bytes starting with _block_[_indexedPosition_].
+ 1. Let _rawBytesRead_ be a List of length _elementSize_ whose elements are the sequence of _elementSize_ bytes starting with _block_[_byteIndexInBuffer_].
1. If ByteListEqual(_rawBytesRead_, _expectedBytes_) is *true*, then
- 1. Store the individual bytes of _replacementBytes_ into _block_, starting at _block_[_indexedPosition_].
+ 1. Store the individual bytes of _replacementBytes_ into _block_, starting at _block_[_byteIndexInBuffer_].
1. Return RawBytesToNumeric(_elementType_, _rawBytesRead_, _isLittleEndian_).
@@ -43091,12 +45461,11 @@ Atomics.isLockFree ( _size_ )
Atomics.load ( _typedArray_, _index_ )
This function performs the following steps when called:
- 1. Let _buffer_ be ? ValidateIntegerTypedArray(_typedArray_).
- 1. Let _indexedPosition_ be ? ValidateAtomicAccess(_typedArray_, _index_).
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
- 1. NOTE: The above check is not redundant with the check in ValidateIntegerTypedArray because the call to ValidateAtomicAccess on the preceding line can have arbitrary side effects, which could cause the buffer to become detached.
+ 1. Let _byteIndexInBuffer_ be ? ValidateAtomicAccessOnIntegerTypedArray(_typedArray_, _index_).
+ 1. Perform ? RevalidateAtomicAccess(_typedArray_, _byteIndexInBuffer_).
+ 1. Let _buffer_ be _typedArray_.[[ViewedArrayBuffer]].
1. Let _elementType_ be TypedArrayElementType(_typedArray_).
- 1. Return GetValueFromBuffer(_buffer_, _indexedPosition_, _elementType_, *true*, ~SeqCst~).
+ 1. Return GetValueFromBuffer(_buffer_, _byteIndexInBuffer_, _elementType_, *true*, ~seq-cst~).
@@ -43114,14 +45483,13 @@ Atomics.or ( _typedArray_, _index_, _value_ )
Atomics.store ( _typedArray_, _index_, _value_ )
This function performs the following steps when called:
- 1. Let _buffer_ be ? ValidateIntegerTypedArray(_typedArray_).
- 1. Let _indexedPosition_ be ? ValidateAtomicAccess(_typedArray_, _index_).
- 1. If _typedArray_.[[ContentType]] is ~BigInt~, let _v_ be ? ToBigInt(_value_).
+ 1. Let _byteIndexInBuffer_ be ? ValidateAtomicAccessOnIntegerTypedArray(_typedArray_, _index_).
+ 1. If _typedArray_.[[ContentType]] is ~bigint~, let _v_ be ? ToBigInt(_value_).
1. Otherwise, let _v_ be 𝔽(? ToIntegerOrInfinity(_value_)).
- 1. If IsDetachedBuffer(_buffer_) is *true*, throw a *TypeError* exception.
- 1. NOTE: The above check is not redundant with the check in ValidateIntegerTypedArray because the call to ToBigInt or ToIntegerOrInfinity on the preceding lines can have arbitrary side effects, which could cause the buffer to become detached.
+ 1. Perform ? RevalidateAtomicAccess(_typedArray_, _byteIndexInBuffer_).
+ 1. Let _buffer_ be _typedArray_.[[ViewedArrayBuffer]].
1. Let _elementType_ be TypedArrayElementType(_typedArray_).
- 1. Perform SetValueInBuffer(_buffer_, _indexedPosition_, _elementType_, _v_, *true*, ~SeqCst~).
+ 1. Perform SetValueInBuffer(_buffer_, _byteIndexInBuffer_, _elementType_, _v_, *true*, ~seq-cst~).
1. Return _v_.
@@ -43152,33 +45520,16 @@ Atomics.wait ( _typedArray_, _index_, _value_, _timeout_ )
This function puts the surrounding agent in a wait queue and suspends it until notified or until the wait times out, returning a String differentiating those cases.
It performs the following steps when called:
- 1. Let _buffer_ be ? ValidateIntegerTypedArray(_typedArray_, *true*).
- 1. If IsSharedArrayBuffer(_buffer_) is *false*, throw a *TypeError* exception.
- 1. Let _indexedPosition_ be ? ValidateAtomicAccess(_typedArray_, _index_).
- 1. If _typedArray_.[[TypedArrayName]] is *"BigInt64Array"*, let _v_ be ? ToBigInt64(_value_).
- 1. Otherwise, let _v_ be ? ToInt32(_value_).
- 1. Let _q_ be ? ToNumber(_timeout_).
- 1. If _q_ is *NaN* or *+∞*𝔽, let _t_ be +∞; else if _q_ is *-∞*𝔽, let _t_ be 0; else let _t_ be max(ℝ(_q_), 0).
- 1. Let _B_ be AgentCanSuspend().
- 1. If _B_ is *false*, throw a *TypeError* exception.
- 1. Let _block_ be _buffer_.[[ArrayBufferData]].
- 1. Let _WL_ be GetWaiterList(_block_, _indexedPosition_).
- 1. Perform EnterCriticalSection(_WL_).
- 1. Let _elementType_ be TypedArrayElementType(_typedArray_).
- 1. Let _w_ be GetValueFromBuffer(_buffer_, _indexedPosition_, _elementType_, *true*, ~SeqCst~).
- 1. If _v_ ≠ _w_, then
- 1. Perform LeaveCriticalSection(_WL_).
- 1. Return *"not-equal"*.
- 1. Let _W_ be AgentSignifier().
- 1. Perform AddWaiter(_WL_, _W_).
- 1. Let _notified_ be SuspendAgent(_WL_, _W_, _t_).
- 1. If _notified_ is *true*, then
- 1. Assert: _W_ is not on the list of waiters in _WL_.
- 1. Else,
- 1. Perform RemoveWaiter(_WL_, _W_).
- 1. Perform LeaveCriticalSection(_WL_).
- 1. If _notified_ is *true*, return *"ok"*.
- 1. Return *"timed-out"*.
+ 1. Return ? DoWait(~sync~, _typedArray_, _index_, _value_, _timeout_).
+
+
+
+
+ Atomics.waitAsync ( _typedArray_, _index_, _value_, _timeout_ )
+ This function returns a Promise that is resolved when the calling agent is notified or the timeout is reached.
+ It performs the following steps when called:
+
+ 1. Return ? DoWait(~async~, _typedArray_, _index_, _value_, _timeout_).
@@ -43187,15 +45538,16 @@ Atomics.notify ( _typedArray_, _index_, _count_ )
This function notifies some agents that are sleeping in the wait queue.
It performs the following steps when called:
- 1. Let _buffer_ be ? ValidateIntegerTypedArray(_typedArray_, *true*).
- 1. Let _indexedPosition_ be ? ValidateAtomicAccess(_typedArray_, _index_).
- 1. If _count_ is *undefined*, let _c_ be +∞.
+ 1. Let _byteIndexInBuffer_ be ? ValidateAtomicAccessOnIntegerTypedArray(_typedArray_, _index_, *true*).
+ 1. If _count_ is *undefined*, then
+ 1. Let _c_ be +∞.
1. Else,
1. Let _intCount_ be ? ToIntegerOrInfinity(_count_).
1. Let _c_ be max(_intCount_, 0).
+ 1. Let _buffer_ be _typedArray_.[[ViewedArrayBuffer]].
1. Let _block_ be _buffer_.[[ArrayBufferData]].
1. If IsSharedArrayBuffer(_buffer_) is *false*, return *+0*𝔽.
- 1. Let _WL_ be GetWaiterList(_block_, _indexedPosition_).
+ 1. Let _WL_ be GetWaiterList(_block_, _byteIndexInBuffer_).
1. Perform EnterCriticalSection(_WL_).
1. Let _S_ be RemoveWaiters(_WL_, _c_).
1. For each element _W_ of _S_, do
@@ -43251,7 +45603,7 @@ JSON.parse ( _text_ [ , _reviver_ ] )
1. [id="step-json-parse-eval"] Let _completion_ be Completion(Evaluation of _script_).
1. NOTE: The PropertyDefinitionEvaluation semantics defined in have special handling for the above evaluation.
1. Let _unfiltered_ be _completion_.[[Value]].
- 1. [id="step-json-parse-assert-type"] Assert: _unfiltered_ is either a String, Number, Boolean, Null, or an Object that is defined by either an |ArrayLiteral| or an |ObjectLiteral|.
+ 1. [id="step-json-parse-assert-type"] Assert: _unfiltered_ is either a String, a Number, a Boolean, an Object that is defined by either an |ArrayLiteral| or an |ObjectLiteral|, or *null*.
1. If IsCallable(_reviver_) is *true*, then
1. Let _root_ be OrdinaryObjectCreate(%Object.prototype%).
1. Let _rootName_ be the empty String.
@@ -43334,11 +45686,13 @@ JSON.stringify ( _value_ [ , _replacer_ [ , _space_ ] ] )
1. Let _prop_ be ! ToString(𝔽(_k_)).
1. Let _v_ be ? Get(_replacer_, _prop_).
1. Let _item_ be *undefined*.
- 1. If _v_ is a String, set _item_ to _v_.
- 1. Else if _v_ is a Number, set _item_ to ! ToString(_v_).
+ 1. If _v_ is a String, then
+ 1. Set _item_ to _v_.
+ 1. Else if _v_ is a Number, then
+ 1. Set _item_ to ! ToString(_v_).
1. Else if _v_ is an Object, then
1. If _v_ has a [[StringData]] or [[NumberData]] internal slot, set _item_ to ? ToString(_v_).
- 1. If _item_ is not *undefined* and _item_ is not currently an element of _PropertyList_, then
+ 1. If _item_ is not *undefined* and _PropertyList_ does not contain _item_, then
1. Append _item_ to _PropertyList_.
1. Set _k_ to _k_ + 1.
1. If _space_ is an Object, then
@@ -43351,7 +45705,7 @@ JSON.stringify ( _value_ [ , _replacer_ [ , _space_ ] ] )
1. Set _spaceMV_ to min(10, _spaceMV_).
1. If _spaceMV_ < 1, let _gap_ be the empty String; otherwise let _gap_ be the String value containing _spaceMV_ occurrences of the code unit 0x0020 (SPACE).
1. Else if _space_ is a String, then
- 1. If the length of _space_ is 10 or less, let _gap_ be _space_; otherwise let _gap_ be the substring of _space_ from 0 to 10.
+ 1. If the length of _space_ ≤ 10, let _gap_ be _space_; otherwise let _gap_ be the substring of _space_ from 0 to 10.
1. Else,
1. Let _gap_ be the empty String.
1. Let _wrapper_ be OrdinaryObjectCreate(%Object.prototype%).
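The clamping of _space_ above is observable: a numeric _space_ yields at most ten spaces of indentation, and a string _space_ is cut to its first ten code units.

```javascript
// A numeric space is clamped via min(10, spaceMV); a string space is
// truncated to the substring from 0 to 10.
const n = JSON.stringify({ a: 1 }, null, 100);            // gap is 10 spaces
const s = JSON.stringify({ a: 1 }, null, "------------"); // gap is "----------"
```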
@@ -43444,7 +45798,7 @@
_state_: a JSON Serialization Record,
_key_: a String,
_holder_: an Object,
- ): either a normal completion containing either *undefined* or a String, or a throw completion
+ ): either a normal completion containing either a String or *undefined*, or a throw completion
@@ -43496,8 +45850,8 @@
1. For each code point _C_ of StringToCodePoints(_value_), do
1. If _C_ is listed in the “Code Point” column of , then
1. Set _product_ to the string-concatenation of _product_ and the escape sequence for _C_ as specified in the “Escape Sequence” column of the corresponding row.
- 1. Else if _C_ has a numeric value less than 0x0020 (SPACE), or if _C_ has the same numeric value as a or , then
- 1. Let _unit_ be the code unit whose numeric value is that of _C_.
+ 1. Else if _C_ has a numeric value less than 0x0020 (SPACE) or _C_ has the same numeric value as a leading surrogate or trailing surrogate, then
+ 1. Let _unit_ be the code unit whose numeric value is the numeric value of _C_.
1. Set _product_ to the string-concatenation of _product_ and UnicodeEscape(_unit_).
1. Else,
1. Set _product_ to the string-concatenation of _product_ and UTF16EncodeCodePoint(_C_).
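The surrogate handling added above is what makes `JSON.stringify` output well-formed: control characters and unpaired surrogates become lowercase `\uXXXX` escapes, while valid surrogate pairs pass through as-is.

```javascript
// Control characters below 0x0020 that have no short escape, and any lone
// surrogate, are emitted as lowercase \uXXXX escapes.
const ctrl = JSON.stringify("\u0007");       // '"\\u0007"'
const lone = JSON.stringify("x\uD800y");     // '"x\\ud800y"'
const pair = JSON.stringify("\uD83D\uDE00"); // a valid pair is passed through
```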
@@ -43612,7 +45966,7 @@
1. Let _n_ be the numeric value of _C_.
1. Assert: _n_ ≤ 0xFFFF.
1. Let _hex_ be the String representation of _n_, formatted as a lowercase hexadecimal number.
- 1. Return the string-concatenation of the code unit 0x005C (REVERSE SOLIDUS), *"u"*, and ! StringPad(_hex_, *4*𝔽, *"0"*, ~start~).
+ 1. Return the string-concatenation of the code unit 0x005C (REVERSE SOLIDUS), *"u"*, and StringPad(_hex_, 4, *"0"*, ~start~).
@@ -43721,7 +46075,7 @@ Managing Memory
WeakRef Objects
- A WeakRef is an object that is used to refer to a target object without preserving it from garbage collection. WeakRefs can be dereferenced to allow access to the target object, if the target object hasn't been reclaimed by garbage collection.
+ A WeakRef is an object that is used to refer to a target object or symbol without preserving it from garbage collection. WeakRefs can be dereferenced to allow access to the target value, if the target hasn't been reclaimed by garbage collection.
The WeakRef Constructor
@@ -43747,7 +46101,7 @@ WeakRef ( _target_ )
This function performs the following steps when called:
1. If NewTarget is *undefined*, throw a *TypeError* exception.
- 1. If _target_ is not an Object, throw a *TypeError* exception.
+ 1. If CanBeHeldWeakly(_target_) is *false*, throw a *TypeError* exception.
1. Let _weakRef_ be ? OrdinaryCreateFromConstructor(NewTarget, *"%WeakRef.prototype%"*, « [[WeakRefTarget]] »).
1. Perform AddToKeptObjects(_target_).
1. Set _weakRef_.[[WeakRefTarget]] to _target_.
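The CanBeHeldWeakly guard above rejects ordinary primitives, and AddToKeptObjects guarantees that within the same synchronous job `deref()` still returns the target:

```javascript
// Objects (and, in engines supporting ES2023, unregistered symbols) can be
// held weakly; primitives such as numbers throw a TypeError.
const target = { data: 42 };
const ref = new WeakRef(target);
const derefed = ref.deref(); // same job, so the target is still kept alive

let threw = false;
try { new WeakRef(42); } catch (e) { threw = e instanceof TypeError; }
```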
@@ -43803,7 +46157,7 @@ WeakRef.prototype.deref ( )
- If the WeakRef returns a _target_ Object that is not *undefined*, then this _target_ object should not be garbage collected until the current execution of ECMAScript code has completed. The AddToKeptObjects operation makes sure read consistency is maintained.
+ If the WeakRef returns a _target_ value that is not *undefined*, then this _target_ value should not be garbage collected until the current execution of ECMAScript code has completed. The AddToKeptObjects operation makes sure read consistency is maintained.
let target = { foo() {} };
@@ -43859,7 +46213,7 @@ Properties of WeakRef Instances
FinalizationRegistry Objects
- A FinalizationRegistry is an object that manages registration and unregistration of cleanup operations that are performed when target objects are garbage collected.
+ A FinalizationRegistry is an object that manages registration and unregistration of cleanup operations that are performed when target objects and symbols are garbage collected.
The FinalizationRegistry Constructor
@@ -43938,9 +46292,9 @@ FinalizationRegistry.prototype.register ( _target_, _heldValue_ [ , _unregis
1. Let _finalizationRegistry_ be the *this* value.
1. Perform ? RequireInternalSlot(_finalizationRegistry_, [[Cells]]).
- 1. If _target_ is not an Object, throw a *TypeError* exception.
+ 1. If CanBeHeldWeakly(_target_) is *false*, throw a *TypeError* exception.
1. If SameValue(_target_, _heldValue_) is *true*, throw a *TypeError* exception.
- 1. If _unregisterToken_ is not an Object, then
+ 1. If CanBeHeldWeakly(_unregisterToken_) is *false*, then
1. If _unregisterToken_ is not *undefined*, throw a *TypeError* exception.
1. Set _unregisterToken_ to ~empty~.
1. Let _cell_ be the Record { [[WeakRefTarget]]: _target_, [[HeldValue]]: _heldValue_, [[UnregisterToken]]: _unregisterToken_ }.
@@ -43949,7 +46303,7 @@ FinalizationRegistry.prototype.register ( _target_, _heldValue_ [ , _unregis
- Based on the algorithms and definitions in this specification, _cell_.[[HeldValue]] is live when _cell_ is in _finalizationRegistry_.[[Cells]]; however, this does not necessarily mean that _cell_.[[UnregisterToken]] or _cell_.[[Target]] are live. For example, registering an object with itself as its unregister token would not keep the object alive forever.
+ Based on the algorithms and definitions in this specification, _cell_.[[HeldValue]] is live when _finalizationRegistry_.[[Cells]] contains _cell_; however, this does not necessarily mean that _cell_.[[UnregisterToken]] or _cell_.[[Target]] are live. For example, registering an object with itself as its unregister token would not keep the object alive forever.
@@ -43959,7 +46313,7 @@ FinalizationRegistry.prototype.unregister ( _unregisterToken_ )
1. Let _finalizationRegistry_ be the *this* value.
1. Perform ? RequireInternalSlot(_finalizationRegistry_, [[Cells]]).
- 1. If _unregisterToken_ is not an Object, throw a *TypeError* exception.
+ 1. If CanBeHeldWeakly(_unregisterToken_) is *false*, throw a *TypeError* exception.
1. Let _removed_ be *false*.
1. For each Record { [[WeakRefTarget]], [[HeldValue]], [[UnregisterToken]] } _cell_ of _finalizationRegistry_.[[Cells]], do
1. If _cell_.[[UnregisterToken]] is not ~empty~ and SameValue(_cell_.[[UnregisterToken]], _unregisterToken_) is *true*, then
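The register/unregister steps above can be exercised synchronously: `register` rejects a held value identical to the target, and `unregister` reports via its boolean result whether any cell was removed.

```javascript
// register() requires a weakly-holdable target, forbids target === heldValue,
// and an unregister token lets entries be removed before cleanup ever runs.
const registry = new FinalizationRegistry((held) => { /* cleanup callback */ });
const target = {};
const token = {};
registry.register(target, "resource-id", token);
const removed = registry.unregister(token);      // true: a matching cell was removed
const removedAgain = registry.unregister(token); // false: nothing left to remove

let threw = false;
try { registry.register(target, target); } catch (e) { threw = e instanceof TypeError; }
```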
@@ -44450,8 +46804,8 @@
1. IfAbruptRejectPromise(_value_, _promiseCapability_).
1. Let _valueWrapper_ be Completion(PromiseResolve(%Promise%, _value_)).
1. IfAbruptRejectPromise(_valueWrapper_, _promiseCapability_).
- 1. Let _unwrap_ be a new Abstract Closure with parameters (_value_) that captures _done_ and performs the following steps when called:
- 1. Return CreateIterResultObject(_value_, _done_).
+ 1. Let _unwrap_ be a new Abstract Closure with parameters (_v_) that captures _done_ and performs the following steps when called:
+ 1. Return CreateIterResultObject(_v_, _done_).
1. Let _onFulfilled_ be CreateBuiltinFunction(_unwrap_, 1, *""*, « »).
1. NOTE: _onFulfilled_ is used when processing the *"value"* property of an IteratorResult object in order to wait for its value if it is a promise and re-package the result in a new "unwrapped" IteratorResult object.
1. Perform PerformPromiseThen(_valueWrapper_, _onFulfilled_, *undefined*, _promiseCapability_).
@@ -45001,15 +47355,16 @@ IfAbruptRejectPromise ( _value_, _capability_ )
1. If _value_ is an abrupt completion, then
1. Perform ? Call(_capability_.[[Reject]], *undefined*, « _value_.[[Value]] »).
1. Return _capability_.[[Promise]].
- 1. Else, set _value_ to _value_.[[Value]].
+ 1. Else,
+ 1. Set _value_ to ! _value_.
PromiseReaction Records
- The PromiseReaction is a Record value used to store information about how a promise should react when it becomes resolved or rejected with a given value. PromiseReaction records are created by the PerformPromiseThen abstract operation, and are used by the Abstract Closure returned by NewPromiseReactionJob.
- PromiseReaction records have the fields listed in .
+ A PromiseReaction Record is a Record value used to store information about how a promise should react when it becomes resolved or rejected with a given value. PromiseReaction Records are created by the PerformPromiseThen abstract operation, and are used by the Abstract Closure returned by NewPromiseReactionJob.
+ PromiseReaction Records have the fields listed in .
@@ -45039,7 +47394,7 @@ PromiseReaction Records
[[Type]]
- ~Fulfill~ or ~Reject~
+ ~fulfill~ or ~reject~
|
The [[Type]] is used when [[Handler]] is ~empty~ to allow for behaviour specific to the settlement type.
@@ -45256,10 +47611,6 @@
description
It allows host environments to track promise rejections.
- An implementation of HostPromiseRejectionTracker must conform to the following requirements:
-
- - It must complete normally (i.e. not return an abrupt completion).
-
The default implementation of HostPromiseRejectionTracker is to return ~unused~.
@@ -45299,11 +47650,13 @@
1. Let _type_ be _reaction_.[[Type]].
1. Let _handler_ be _reaction_.[[Handler]].
1. If _handler_ is ~empty~, then
- 1. If _type_ is ~Fulfill~, let _handlerResult_ be NormalCompletion(_argument_).
+ 1. If _type_ is ~fulfill~, then
+ 1. Let _handlerResult_ be NormalCompletion(_argument_).
1. Else,
- 1. Assert: _type_ is ~Reject~.
+ 1. Assert: _type_ is ~reject~.
1. Let _handlerResult_ be ThrowCompletion(_argument_).
- 1. Else, let _handlerResult_ be Completion(HostCallJobCallback(_handler_, *undefined*, « _argument_ »)).
+ 1. Else,
+ 1. Let _handlerResult_ be Completion(HostCallJobCallback(_handler_, *undefined*, « _argument_ »)).
1. If _promiseCapability_ is *undefined*, then
1. Assert: _handlerResult_ is not an abrupt completion.
1. Return ~empty~.
@@ -45404,7 +47757,7 @@ Promise.all ( _iterable_ )
1. Let _promiseCapability_ be ? NewPromiseCapability(_C_).
1. Let _promiseResolve_ be Completion(GetPromiseResolve(_C_)).
1. IfAbruptRejectPromise(_promiseResolve_, _promiseCapability_).
- 1. Let _iteratorRecord_ be Completion(GetIterator(_iterable_)).
+ 1. Let _iteratorRecord_ be Completion(GetIterator(_iterable_, ~sync~)).
1. IfAbruptRejectPromise(_iteratorRecord_, _promiseCapability_).
1. Let _result_ be Completion(PerformPromiseAll(_iteratorRecord_, _C_, _promiseCapability_, _promiseResolve_)).
1. If _result_ is an abrupt completion, then
@@ -45447,21 +47800,15 @@
1. Let _remainingElementsCount_ be the Record { [[Value]]: 1 }.
1. Let _index_ be 0.
1. Repeat,
- 1. Let _next_ be Completion(IteratorStep(_iteratorRecord_)).
- 1. If _next_ is an abrupt completion, set _iteratorRecord_.[[Done]] to *true*.
- 1. ReturnIfAbrupt(_next_).
- 1. If _next_ is *false*, then
- 1. Set _iteratorRecord_.[[Done]] to *true*.
+ 1. Let _next_ be ? IteratorStepValue(_iteratorRecord_).
+ 1. If _next_ is ~done~, then
1. Set _remainingElementsCount_.[[Value]] to _remainingElementsCount_.[[Value]] - 1.
- 1. If _remainingElementsCount_.[[Value]] is 0, then
+ 1. If _remainingElementsCount_.[[Value]] = 0, then
1. Let _valuesArray_ be CreateArrayFromList(_values_).
1. Perform ? Call(_resultCapability_.[[Resolve]], *undefined*, « _valuesArray_ »).
1. Return _resultCapability_.[[Promise]].
- 1. Let _nextValue_ be Completion(IteratorValue(_next_)).
- 1. If _nextValue_ is an abrupt completion, set _iteratorRecord_.[[Done]] to *true*.
- 1. ReturnIfAbrupt(_nextValue_).
1. Append *undefined* to _values_.
- 1. Let _nextPromise_ be ? Call(_promiseResolve_, _constructor_, « _nextValue_ »).
+ 1. Let _nextPromise_ be ? Call(_promiseResolve_, _constructor_, « _next_ »).
1. Let _steps_ be the algorithm steps defined in .
1. Let _length_ be the number of non-optional parameters of the function definition in .
1. Let _onFulfilled_ be CreateBuiltinFunction(_steps_, _length_, *""*, « [[AlreadyCalled]], [[Index]], [[Values]], [[Capability]], [[RemainingElements]] »).
@@ -45490,7 +47837,7 @@ `Promise.all` Resolve Element Functions
1. Let _remainingElementsCount_ be _F_.[[RemainingElements]].
1. Set _values_[_index_] to _x_.
1. Set _remainingElementsCount_.[[Value]] to _remainingElementsCount_.[[Value]] - 1.
- 1. If _remainingElementsCount_.[[Value]] is 0, then
+ 1. If _remainingElementsCount_.[[Value]] = 0, then
1. Let _valuesArray_ be CreateArrayFromList(_values_).
1. Return ? Call(_promiseCapability_.[[Resolve]], *undefined*, « _valuesArray_ »).
1. Return *undefined*.
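The shared _remainingElementsCount_ above starts at 1 and is decremented once iteration finishes, so an empty iterable resolves immediately and a non-empty one resolves only after every element settles, with results in iteration order:

```javascript
// The result array follows iteration order, not settlement order; non-promise
// elements are wrapped via promiseResolve.
const slow = new Promise((res) => setTimeout(() => res("slow"), 10));
const all = Promise.all([slow, Promise.resolve("fast"), "plain"]);
all.then((values) => {
  // values is ["slow", "fast", "plain"]
});
```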
@@ -45507,7 +47854,7 @@ Promise.allSettled ( _iterable_ )
1. Let _promiseCapability_ be ? NewPromiseCapability(_C_).
1. Let _promiseResolve_ be Completion(GetPromiseResolve(_C_)).
1. IfAbruptRejectPromise(_promiseResolve_, _promiseCapability_).
- 1. Let _iteratorRecord_ be Completion(GetIterator(_iterable_)).
+ 1. Let _iteratorRecord_ be Completion(GetIterator(_iterable_, ~sync~)).
1. IfAbruptRejectPromise(_iteratorRecord_, _promiseCapability_).
1. Let _result_ be Completion(PerformPromiseAllSettled(_iteratorRecord_, _C_, _promiseCapability_, _promiseResolve_)).
1. If _result_ is an abrupt completion, then
@@ -45535,21 +47882,15 @@
1. Let _remainingElementsCount_ be the Record { [[Value]]: 1 }.
1. Let _index_ be 0.
1. Repeat,
- 1. Let _next_ be Completion(IteratorStep(_iteratorRecord_)).
- 1. If _next_ is an abrupt completion, set _iteratorRecord_.[[Done]] to *true*.
- 1. ReturnIfAbrupt(_next_).
- 1. If _next_ is *false*, then
- 1. Set _iteratorRecord_.[[Done]] to *true*.
+ 1. Let _next_ be ? IteratorStepValue(_iteratorRecord_).
+ 1. If _next_ is ~done~, then
1. Set _remainingElementsCount_.[[Value]] to _remainingElementsCount_.[[Value]] - 1.
- 1. If _remainingElementsCount_.[[Value]] is 0, then
+ 1. If _remainingElementsCount_.[[Value]] = 0, then
1. Let _valuesArray_ be CreateArrayFromList(_values_).
1. Perform ? Call(_resultCapability_.[[Resolve]], *undefined*, « _valuesArray_ »).
1. Return _resultCapability_.[[Promise]].
- 1. Let _nextValue_ be Completion(IteratorValue(_next_)).
- 1. If _nextValue_ is an abrupt completion, set _iteratorRecord_.[[Done]] to *true*.
- 1. ReturnIfAbrupt(_nextValue_).
1. Append *undefined* to _values_.
- 1. Let _nextPromise_ be ? Call(_promiseResolve_, _constructor_, « _nextValue_ »).
+ 1. Let _nextPromise_ be ? Call(_promiseResolve_, _constructor_, « _next_ »).
1. Let _stepsFulfilled_ be the algorithm steps defined in .
1. Let _lengthFulfilled_ be the number of non-optional parameters of the function definition in .
1. Let _onFulfilled_ be CreateBuiltinFunction(_stepsFulfilled_, _lengthFulfilled_, *""*, « [[AlreadyCalled]], [[Index]], [[Values]], [[Capability]], [[RemainingElements]] »).
@@ -45591,7 +47932,7 @@ `Promise.allSettled` Resolve Element Functions
1. Perform ! CreateDataPropertyOrThrow(_obj_, *"value"*, _x_).
1. Set _values_[_index_] to _obj_.
1. Set _remainingElementsCount_.[[Value]] to _remainingElementsCount_.[[Value]] - 1.
- 1. If _remainingElementsCount_.[[Value]] is 0, then
+ 1. If _remainingElementsCount_.[[Value]] = 0, then
1. Let _valuesArray_ be CreateArrayFromList(_values_).
1. Return ? Call(_promiseCapability_.[[Resolve]], *undefined*, « _valuesArray_ »).
1. Return *undefined*.
@@ -45617,7 +47958,7 @@ `Promise.allSettled` Reject Element Functions
1. Perform ! CreateDataPropertyOrThrow(_obj_, *"reason"*, _x_).
1. Set _values_[_index_] to _obj_.
1. Set _remainingElementsCount_.[[Value]] to _remainingElementsCount_.[[Value]] - 1.
- 1. If _remainingElementsCount_.[[Value]] is 0, then
+ 1. If _remainingElementsCount_.[[Value]] = 0, then
1. Let _valuesArray_ be CreateArrayFromList(_values_).
1. Return ? Call(_promiseCapability_.[[Resolve]], *undefined*, « _valuesArray_ »).
1. Return *undefined*.
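Unlike `Promise.all`, the element functions above convert each outcome into a status record, so a rejection never short-circuits the combinator:

```javascript
// Every element settles into { status: "fulfilled", value } or
// { status: "rejected", reason }; the combined promise always fulfils.
const settled = Promise.allSettled([
  Promise.resolve(1),
  Promise.reject(new Error("nope")),
]);
settled.then((results) => {
  // results[0] is { status: "fulfilled", value: 1 }
  // results[1] is { status: "rejected", reason: Error("nope") }
});
```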
@@ -45634,7 +47975,7 @@ Promise.any ( _iterable_ )
1. Let _promiseCapability_ be ? NewPromiseCapability(_C_).
1. Let _promiseResolve_ be Completion(GetPromiseResolve(_C_)).
1. IfAbruptRejectPromise(_promiseResolve_, _promiseCapability_).
- 1. Let _iteratorRecord_ be Completion(GetIterator(_iterable_)).
+ 1. Let _iteratorRecord_ be Completion(GetIterator(_iterable_, ~sync~)).
1. IfAbruptRejectPromise(_iteratorRecord_, _promiseCapability_).
1. Let _result_ be Completion(PerformPromiseAny(_iteratorRecord_, _C_, _promiseCapability_, _promiseResolve_)).
1. If _result_ is an abrupt completion, then
@@ -45662,22 +48003,16 @@
1. Let _remainingElementsCount_ be the Record { [[Value]]: 1 }.
1. Let _index_ be 0.
1. Repeat,
- 1. Let _next_ be Completion(IteratorStep(_iteratorRecord_)).
- 1. If _next_ is an abrupt completion, set _iteratorRecord_.[[Done]] to *true*.
- 1. ReturnIfAbrupt(_next_).
- 1. If _next_ is *false*, then
- 1. Set _iteratorRecord_.[[Done]] to *true*.
+ 1. Let _next_ be ? IteratorStepValue(_iteratorRecord_).
+ 1. If _next_ is ~done~, then
1. Set _remainingElementsCount_.[[Value]] to _remainingElementsCount_.[[Value]] - 1.
- 1. If _remainingElementsCount_.[[Value]] is 0, then
+ 1. If _remainingElementsCount_.[[Value]] = 0, then
1. Let _error_ be a newly created *AggregateError* object.
1. Perform ! DefinePropertyOrThrow(_error_, *"errors"*, PropertyDescriptor { [[Configurable]]: *true*, [[Enumerable]]: *false*, [[Writable]]: *true*, [[Value]]: CreateArrayFromList(_errors_) }).
1. Return ThrowCompletion(_error_).
1. Return _resultCapability_.[[Promise]].
- 1. Let _nextValue_ be Completion(IteratorValue(_next_)).
- 1. If _nextValue_ is an abrupt completion, set _iteratorRecord_.[[Done]] to *true*.
- 1. ReturnIfAbrupt(_nextValue_).
1. Append *undefined* to _errors_.
- 1. Let _nextPromise_ be ? Call(_promiseResolve_, _constructor_, « _nextValue_ »).
+ 1. Let _nextPromise_ be ? Call(_promiseResolve_, _constructor_, « _next_ »).
1. Let _stepsRejected_ be the algorithm steps defined in .
1. Let _lengthRejected_ be the number of non-optional parameters of the function definition in .
1. Let _onRejected_ be CreateBuiltinFunction(_stepsRejected_, _lengthRejected_, *""*, « [[AlreadyCalled]], [[Index]], [[Errors]], [[Capability]], [[RemainingElements]] »).
@@ -45706,7 +48041,7 @@ `Promise.any` Reject Element Functions
1. Let _remainingElementsCount_ be _F_.[[RemainingElements]].
1. Set _errors_[_index_] to _x_.
1. Set _remainingElementsCount_.[[Value]] to _remainingElementsCount_.[[Value]] - 1.
- 1. If _remainingElementsCount_.[[Value]] is 0, then
+ 1. If _remainingElementsCount_.[[Value]] = 0, then
1. Let _error_ be a newly created *AggregateError* object.
1. Perform ! DefinePropertyOrThrow(_error_, *"errors"*, PropertyDescriptor { [[Configurable]]: *true*, [[Enumerable]]: *false*, [[Writable]]: *true*, [[Value]]: CreateArrayFromList(_errors_) }).
1. Return ? Call(_promiseCapability_.[[Reject]], *undefined*, « _error_ »).
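The _errors_ list above mirrors the _values_ list of `Promise.all`: indices follow iteration order, and only when the count reaches zero with no fulfilment is the *AggregateError* produced:

```javascript
// Fulfils with the first fulfilled element; rejects only when every element
// rejects, with an AggregateError whose .errors follows iteration order.
const firstWin = Promise.any([Promise.reject("a"), Promise.resolve("b")]);
const allFail = Promise.any([Promise.reject("a"), Promise.reject("b")]);
allFail.catch(() => { /* handled: the reasons land in err.errors */ });
```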
@@ -45730,7 +48065,7 @@ Promise.race ( _iterable_ )
1. Let _promiseCapability_ be ? NewPromiseCapability(_C_).
1. Let _promiseResolve_ be Completion(GetPromiseResolve(_C_)).
1. IfAbruptRejectPromise(_promiseResolve_, _promiseCapability_).
- 1. Let _iteratorRecord_ be Completion(GetIterator(_iterable_)).
+ 1. Let _iteratorRecord_ be Completion(GetIterator(_iterable_, ~sync~)).
1. IfAbruptRejectPromise(_iteratorRecord_, _promiseCapability_).
1. Let _result_ be Completion(PerformPromiseRace(_iteratorRecord_, _C_, _promiseCapability_, _promiseResolve_)).
1. If _result_ is an abrupt completion, then
@@ -45758,16 +48093,10 @@
1. Repeat,
- 1. Let _next_ be Completion(IteratorStep(_iteratorRecord_)).
- 1. If _next_ is an abrupt completion, set _iteratorRecord_.[[Done]] to *true*.
- 1. ReturnIfAbrupt(_next_).
- 1. If _next_ is *false*, then
- 1. Set _iteratorRecord_.[[Done]] to *true*.
+ 1. Let _next_ be ? IteratorStepValue(_iteratorRecord_).
+ 1. If _next_ is ~done~, then
1. Return _resultCapability_.[[Promise]].
- 1. Let _nextValue_ be Completion(IteratorValue(_next_)).
- 1. If _nextValue_ is an abrupt completion, set _iteratorRecord_.[[Done]] to *true*.
- 1. ReturnIfAbrupt(_nextValue_).
- 1. Let _nextPromise_ be ? Call(_promiseResolve_, _constructor_, « _nextValue_ »).
+ 1. Let _nextPromise_ be ? Call(_promiseResolve_, _constructor_, « _next_ »).
1. Perform ? Invoke(_nextPromise_, *"then"*, « _resultCapability_.[[Resolve]], _resultCapability_.[[Reject]] »).
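Because the loop above chains the same capability's resolve and reject onto every element, whichever element settles first, by fulfilment or rejection, decides the outcome:

```javascript
// The first settled element wins the race; later settlements are ignored
// because a promise can only be resolved once.
const winner = Promise.race([
  new Promise((res) => setTimeout(() => res("late"), 50)),
  Promise.resolve("early"),
]);
```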
@@ -45821,6 +48150,20 @@
+
+ Promise.withResolvers ( )
+ This function returns an object with three properties: a new promise together with the `resolve` and `reject` functions associated with it.
+
+ 1. Let _C_ be the *this* value.
+ 1. Let _promiseCapability_ be ? NewPromiseCapability(_C_).
+ 1. Let _obj_ be OrdinaryObjectCreate(%Object.prototype%).
+ 1. Perform ! CreateDataPropertyOrThrow(_obj_, *"promise"*, _promiseCapability_.[[Promise]]).
+ 1. Perform ! CreateDataPropertyOrThrow(_obj_, *"resolve"*, _promiseCapability_.[[Resolve]]).
+ 1. Perform ! CreateDataPropertyOrThrow(_obj_, *"reject"*, _promiseCapability_.[[Reject]]).
+ 1. Return _obj_.
+
+
+
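The steps above are what the `new Promise` executor pattern has long been used to emulate; NewPromiseCapability is the same machinery. Since native `Promise.withResolvers` may be absent in older engines, this sketch defines the equivalent shape locally:

```javascript
// Local equivalent of Promise.withResolvers(): capture the executor's
// resolve/reject and hand all three out together.
function withResolvers() {
  let resolve, reject;
  const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
  return { promise, resolve, reject };
}

const { promise, resolve, reject } = withResolvers();
resolve(7); // settle later, from outside the executor
```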
get Promise [ @@species ]
`Promise[@@species]` is an accessor property whose set accessor function is *undefined*. Its get accessor function performs the following steps when called:
@@ -45872,19 +48215,19 @@ Promise.prototype.finally ( _onFinally_ )
1. Else,
1. Let _thenFinallyClosure_ be a new Abstract Closure with parameters (_value_) that captures _onFinally_ and _C_ and performs the following steps when called:
1. Let _result_ be ? Call(_onFinally_, *undefined*).
- 1. Let _promise_ be ? PromiseResolve(_C_, _result_).
+ 1. Let _p_ be ? PromiseResolve(_C_, _result_).
1. Let _returnValue_ be a new Abstract Closure with no parameters that captures _value_ and performs the following steps when called:
1. Return _value_.
1. Let _valueThunk_ be CreateBuiltinFunction(_returnValue_, 0, *""*, « »).
- 1. Return ? Invoke(_promise_, *"then"*, « _valueThunk_ »).
+ 1. Return ? Invoke(_p_, *"then"*, « _valueThunk_ »).
1. Let _thenFinally_ be CreateBuiltinFunction(_thenFinallyClosure_, 1, *""*, « »).
1. Let _catchFinallyClosure_ be a new Abstract Closure with parameters (_reason_) that captures _onFinally_ and _C_ and performs the following steps when called:
1. Let _result_ be ? Call(_onFinally_, *undefined*).
- 1. Let _promise_ be ? PromiseResolve(_C_, _result_).
+ 1. Let _p_ be ? PromiseResolve(_C_, _result_).
1. Let _throwReason_ be a new Abstract Closure with no parameters that captures _reason_ and performs the following steps when called:
1. Return ThrowCompletion(_reason_).
1. Let _thrower_ be CreateBuiltinFunction(_throwReason_, 0, *""*, « »).
- 1. Return ? Invoke(_promise_, *"then"*, « _thrower_ »).
+ 1. Return ? Invoke(_p_, *"then"*, « _thrower_ »).
1. Let _catchFinally_ be CreateBuiltinFunction(_catchFinallyClosure_, 1, *""*, « »).
1. Return ? Invoke(_promise_, *"then"*, « _thenFinally_, _catchFinally_ »).
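The _valueThunk_ and _thrower_ closures above are why `finally` is transparent: _onFinally_ receives no argument and its return value is dropped, while the original value or reason flows through.

```javascript
// onFinally's return value ("ignored") is discarded; the settled value of
// the original promise passes through unchanged.
let ran = false;
const passThrough = Promise.resolve(42)
  .finally(() => { ran = true; return "ignored"; })
  .then((v) => v); // v is still 42
```

A thrown exception (or a rejected promise returned) from _onFinally_ is the one exception to this transparency, since `? Call` and the inner `then` propagate it.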
@@ -45926,8 +48269,8 @@
1. Let _onRejectedJobCallback_ be ~empty~.
1. Else,
1. Let _onRejectedJobCallback_ be HostMakeJobCallback(_onRejected_).
- 1. Let _fulfillReaction_ be the PromiseReaction { [[Capability]]: _resultCapability_, [[Type]]: ~Fulfill~, [[Handler]]: _onFulfilledJobCallback_ }.
- 1. Let _rejectReaction_ be the PromiseReaction { [[Capability]]: _resultCapability_, [[Type]]: ~Reject~, [[Handler]]: _onRejectedJobCallback_ }.
+ 1. Let _fulfillReaction_ be the PromiseReaction Record { [[Capability]]: _resultCapability_, [[Type]]: ~fulfill~, [[Handler]]: _onFulfilledJobCallback_ }.
+ 1. Let _rejectReaction_ be the PromiseReaction Record { [[Capability]]: _resultCapability_, [[Type]]: ~reject~, [[Handler]]: _onRejectedJobCallback_ }.
1. If _promise_.[[PromiseState]] is ~pending~, then
1. Append _fulfillReaction_ to _promise_.[[PromiseFulfillReactions]].
1. Append _rejectReaction_ to _promise_.[[PromiseRejectReactions]].
@@ -46037,7 +48380,7 @@ Properties of Promise Instances
GeneratorFunction Objects
GeneratorFunctions are functions that are usually created by evaluating |GeneratorDeclaration|s, |GeneratorExpression|s, and |GeneratorMethod|s. They may also be created by calling the %GeneratorFunction% intrinsic.
-
+
@@ -46071,15 +48414,11 @@ Properties of the GeneratorFunction Constructor
- is a standard built-in function object that inherits from the Function constructor.
- has a [[Prototype]] internal slot whose value is %Function%.
+ - has a *"length"* property whose value is *1*𝔽.
- has a *"name"* property whose value is *"GeneratorFunction"*.
- has the following properties:
-
- GeneratorFunction.length
- This is a data property with a value of 1. This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
-
-
GeneratorFunction.prototype
The initial value of `GeneratorFunction.prototype` is the GeneratorFunction prototype object.
@@ -46105,7 +48444,7 @@ GeneratorFunction.prototype.constructor
GeneratorFunction.prototype.prototype
- The initial value of `GeneratorFunction.prototype.prototype` is the Generator prototype object.
+ The initial value of `GeneratorFunction.prototype.prototype` is %GeneratorPrototype%.
This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
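%GeneratorFunction% has no global binding; it is reachable only through a generator function's prototype chain, and its `prototype.prototype` is %GeneratorPrototype%, from which generator instances inherit `next`, `return`, and `throw`:

```javascript
// Recovering the %GeneratorFunction% intrinsic and %GeneratorPrototype%
// from an ordinary generator declaration.
function* g() { yield 1; }
const GeneratorFunction = Object.getPrototypeOf(g).constructor;
const GeneratorPrototype = GeneratorFunction.prototype.prototype;
```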
@@ -46163,7 +48502,7 @@ AsyncGeneratorFunction ( ..._parameterArgs_, _bodyArg_ )
1. Let _C_ be the active function object.
1. If _bodyArg_ is not present, set _bodyArg_ to the empty String.
- 1. Return ? CreateDynamicFunction(_C_, NewTarget, ~asyncGenerator~, _parameterArgs_, _bodyArg_).
+ 1. Return ? CreateDynamicFunction(_C_, NewTarget, ~async-generator~, _parameterArgs_, _bodyArg_).
See NOTE for .
@@ -46177,15 +48516,11 @@ Properties of the AsyncGeneratorFunction Constructor
- is a standard built-in function object that inherits from the Function constructor.
- has a [[Prototype]] internal slot whose value is %Function%.
+ - has a *"length"* property whose value is *1*𝔽.
- has a *"name"* property whose value is *"AsyncGeneratorFunction"*.
- has the following properties:
-
- AsyncGeneratorFunction.length
- This is a data property with a value of 1. This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
-
-
AsyncGeneratorFunction.prototype
The initial value of `AsyncGeneratorFunction.prototype` is the AsyncGeneratorFunction prototype object.
@@ -46211,7 +48546,7 @@ AsyncGeneratorFunction.prototype.constructor
AsyncGeneratorFunction.prototype.prototype
- The initial value of `AsyncGeneratorFunction.prototype.prototype` is the AsyncGenerator prototype object.
+ The initial value of `AsyncGeneratorFunction.prototype.prototype` is %AsyncGeneratorPrototype%.
This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
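The async case mirrors the sync one: %AsyncGeneratorFunction% is reachable only through an async generator's prototype chain, with `prototype.prototype` being %AsyncGeneratorPrototype%:

```javascript
// Recovering the %AsyncGeneratorFunction% intrinsic from an async generator.
async function* ag() { yield 1; }
const AsyncGeneratorFunction = Object.getPrototypeOf(ag).constructor;
const AsyncGeneratorPrototype = AsyncGeneratorFunction.prototype.prototype;
```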
@@ -46251,12 +48586,12 @@ prototype
Generator Objects
- A Generator is an instance of a generator function and conforms to both the Iterator and Iterable interfaces.
- Generator instances directly inherit properties from the object that is the initial value of the *"prototype"* property of the Generator function that created the instance. Generator instances indirectly inherit properties from the Generator Prototype intrinsic, %GeneratorFunction.prototype.prototype%.
+ A Generator is created by calling a generator function and conforms to both the Iterator and Iterable interfaces.
+ Generator instances directly inherit properties from the initial value of the *"prototype"* property of the generator function that created the instance. Generator instances indirectly inherit properties from %GeneratorPrototype%.
- Properties of the Generator Prototype Object
- The Generator prototype object:
+ The %GeneratorPrototype% Object
+ The %GeneratorPrototype% object:
- is %GeneratorFunction.prototype.prototype%.
- is an ordinary object.
@@ -46266,20 +48601,20 @@ Properties of the Generator Prototype Object
- Generator.prototype.constructor
- The initial value of `Generator.prototype.constructor` is %GeneratorFunction.prototype%.
+ %GeneratorPrototype%.constructor
+ The initial value of %GeneratorPrototype%`.constructor` is %GeneratorFunction.prototype%.
This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
- Generator.prototype.next ( _value_ )
+ %GeneratorPrototype%.next ( _value_ )
1. Return ? GeneratorResume(*this* value, _value_, ~empty~).
- Generator.prototype.return ( _value_ )
+ %GeneratorPrototype%.return ( _value_ )
This method performs the following steps when called:
1. Let _g_ be the *this* value.
@@ -46289,7 +48624,7 @@ Generator.prototype.return ( _value_ )
- Generator.prototype.throw ( _exception_ )
+ %GeneratorPrototype%.throw ( _exception_ )
This method performs the following steps when called:
1. Let _g_ be the *this* value.
@@ -46299,7 +48634,7 @@ Generator.prototype.throw ( _exception_ )
- Generator.prototype [ @@toStringTag ]
+ %GeneratorPrototype% [ @@toStringTag ]
The initial value of the @@toStringTag property is the String value *"Generator"*.
This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
@@ -46326,7 +48661,7 @@ Properties of Generator Instances
[[GeneratorState]]
|
- *undefined*, ~suspendedStart~, ~suspendedYield~, ~executing~, or ~completed~
+ *undefined*, ~suspended-start~, ~suspended-yield~, ~executing~, or ~completed~
|
The current execution state of the generator.
@@ -46375,9 +48710,9 @@
1. Let _genContext_ be the running execution context.
1. Set the Generator component of _genContext_ to _generator_.
1. Let _closure_ be a new Abstract Closure with no parameters that captures _generatorBody_ and performs the following steps when called:
- 1. Let _genContext_ be the running execution context.
- 1. Let _env_ be _genContext_'s LexicalEnvironment.
- 1. Let _generator_ be the Generator component of _genContext_.
+ 1. Let _acGenContext_ be the running execution context.
+ 1. Let _env_ be _acGenContext_'s LexicalEnvironment.
+ 1. Let _acGenerator_ be the Generator component of _acGenContext_.
1. If _generatorBody_ is a Parse Node, then
1. Let _result_ be Completion(Evaluation of _generatorBody_).
1. Else,
@@ -46385,18 +48720,20 @@
1. Let _result_ be _generatorBody_().
1. Set _result_ to Completion(DisposeResources(_env_.[[DisposeCapability]], _result_)).
1. Assert: If we return here, the generator either threw an exception or performed either an implicit or explicit return.
- 1. Remove _genContext_ from the execution context stack and restore the execution context that is at the top of the execution context stack as the running execution context.
- 1. Set _generator_.[[GeneratorState]] to ~completed~.
- 1. Once a generator enters the ~completed~ state it never leaves it and its associated execution context is never resumed. Any execution state associated with _generator_ can be discarded at this point.
- 1. If _result_.[[Type]] is ~normal~, let _resultValue_ be *undefined*.
- 1. Else if _result_.[[Type]] is ~return~, let _resultValue_ be _result_.[[Value]].
+ 1. Remove _acGenContext_ from the execution context stack and restore the execution context that is at the top of the execution context stack as the running execution context.
+ 1. Set _acGenerator_.[[GeneratorState]] to ~completed~.
+ 1. NOTE: Once a generator enters the ~completed~ state it never leaves it and its associated execution context is never resumed. Any execution state associated with _acGenerator_ can be discarded at this point.
+ 1. If _result_ is a normal completion, then
+ 1. Let _resultValue_ be *undefined*.
+ 1. Else if _result_ is a return completion, then
+ 1. Let _resultValue_ be _result_.[[Value]].
1. Else,
- 1. Assert: _result_.[[Type]] is ~throw~.
+ 1. Assert: _result_ is a throw completion.
1. Return ? _result_.
1. Return CreateIterResultObject(_resultValue_, *true*).
1. Set the code evaluation state of _genContext_ such that when evaluation is resumed for that execution context, _closure_ will be called with no arguments.
1. Set _generator_.[[GeneratorContext]] to _genContext_.
- 1. Set _generator_.[[GeneratorState]] to ~suspendedStart~.
+ 1. Set _generator_.[[GeneratorState]] to ~suspended-start~.
1. Return ~unused~.
@@ -46406,14 +48743,14 @@
GeneratorValidate (
_generator_: an ECMAScript language value,
_generatorBrand_: a String or ~empty~,
- ): either a normal completion containing either ~suspendedStart~, ~suspendedYield~, or ~completed~, or a throw completion
+ ): either a normal completion containing either ~suspended-start~, ~suspended-yield~, or ~completed~, or a throw completion
1. Perform ? RequireInternalSlot(_generator_, [[GeneratorState]]).
1. Perform ? RequireInternalSlot(_generator_, [[GeneratorBrand]]).
- 1. If _generator_.[[GeneratorBrand]] is not the same value as _generatorBrand_, throw a *TypeError* exception.
+ 1. If _generator_.[[GeneratorBrand]] is not _generatorBrand_, throw a *TypeError* exception.
1. Assert: _generator_ also has a [[GeneratorContext]] internal slot.
1. Let _state_ be _generator_.[[GeneratorState]].
1. If _state_ is ~executing~, throw a *TypeError* exception.
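The ~executing~ check above is what makes re-entrant resumption fail: calling `next` on a generator from inside its own body throws a *TypeError*. A small sketch:

```javascript
let self;

function* reenter() {
  // The generator is in the executing state here, so this nested
  // resumption fails GeneratorValidate and throws a TypeError.
  self.next();
}

self = reenter();
try {
  self.next();
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

The inner `TypeError` is thrown inside the generator body, is not caught there, and so propagates out of the outer `next` call, leaving the generator ~completed~.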
@@ -46434,7 +48771,7 @@
1. Let _state_ be ? GeneratorValidate(_generator_, _generatorBrand_).
1. If _state_ is ~completed~, return CreateIterResultObject(*undefined*, *true*).
- 1. Assert: _state_ is either ~suspendedStart~ or ~suspendedYield~.
+ 1. Assert: _state_ is either ~suspended-start~ or ~suspended-yield~.
1. Let _genContext_ be _generator_.[[GeneratorContext]].
1. Let _methodContext_ be the running execution context.
1. Suspend _methodContext_.
@@ -46458,15 +48795,15 @@
1. Let _state_ be ? GeneratorValidate(_generator_, _generatorBrand_).
- 1. If _state_ is ~suspendedStart~, then
+ 1. If _state_ is ~suspended-start~, then
1. Set _generator_.[[GeneratorState]] to ~completed~.
- 1. Once a generator enters the ~completed~ state it never leaves it and its associated execution context is never resumed. Any execution state associated with _generator_ can be discarded at this point.
+ 1. NOTE: Once a generator enters the ~completed~ state it never leaves it and its associated execution context is never resumed. Any execution state associated with _generator_ can be discarded at this point.
1. Set _state_ to ~completed~.
1. If _state_ is ~completed~, then
- 1. If _abruptCompletion_.[[Type]] is ~return~, then
+ 1. If _abruptCompletion_ is a return completion, then
1. Return CreateIterResultObject(_abruptCompletion_.[[Value]], *true*).
1. Return ? _abruptCompletion_.
- 1. Assert: _state_ is ~suspendedYield~.
+ 1. Assert: _state_ is ~suspended-yield~.
1. Let _genContext_ be _generator_.[[GeneratorContext]].
1. Let _methodContext_ be the running execution context.
1. Suspend _methodContext_.
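Concretely, a ~return~ completion delivered to a generator still in ~suspended-start~ completes it without ever running its body, and the completion's value is surfaced in the result object:

```javascript
function* counter() {
  yield 1;
  yield 2;
}

const iter = counter(); // suspended-start; the body below never runs

// GeneratorResumeAbrupt: suspended-start goes straight to completed,
// and the return completion's value is reported in the result object.
const result = iter.return(42);
console.log(result); // { value: 42, done: true }

// Resuming a completed generator just reports done.
console.log(iter.next()); // { value: undefined, done: true }
```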
@@ -46504,7 +48841,7 @@
1. Assert: _genContext_ is the execution context of a generator.
1. Let _generator_ be the value of the Generator component of _genContext_.
1. Assert: GetGeneratorKind() is ~sync~.
- 1. Set _generator_.[[GeneratorState]] to ~suspendedYield~.
+ 1. Set _generator_.[[GeneratorState]] to ~suspended-yield~.
1. Remove _genContext_ from the execution context stack and restore the execution context that is at the top of the execution context stack as the running execution context.
1. Let _callerContext_ be the running execution context.
1. Resume _callerContext_ passing NormalCompletion(_iterNextObj_). If _genContext_ is ever resumed again, let _resumptionValue_ be the Completion Record with which it is resumed.
@@ -46561,13 +48898,13 @@
AsyncGenerator Objects
- An AsyncGenerator is an instance of an async generator function and conforms to both the AsyncIterator and AsyncIterable interfaces.
+ An AsyncGenerator is created by calling an async generator function and conforms to both the AsyncIterator and AsyncIterable interfaces.
- AsyncGenerator instances directly inherit properties from the object that is the initial value of the *"prototype"* property of the AsyncGenerator function that created the instance. AsyncGenerator instances indirectly inherit properties from the AsyncGenerator Prototype intrinsic, %AsyncGeneratorFunction.prototype.prototype%.
+ AsyncGenerator instances directly inherit properties from the initial value of the *"prototype"* property of the async generator function that created the instance. AsyncGenerator instances indirectly inherit properties from %AsyncGeneratorPrototype%.
- Properties of the AsyncGenerator Prototype Object
- The AsyncGenerator prototype object:
+ The %AsyncGeneratorPrototype% Object
+ The %AsyncGeneratorPrototype% object:
- is %AsyncGeneratorFunction.prototype.prototype%.
- is an ordinary object.
@@ -46577,13 +48914,13 @@ Properties of the AsyncGenerator Prototype Object
- AsyncGenerator.prototype.constructor
- The initial value of `AsyncGenerator.prototype.constructor` is %AsyncGeneratorFunction.prototype%.
+ %AsyncGeneratorPrototype%.constructor
+ The initial value of %AsyncGeneratorPrototype%`.constructor` is %AsyncGeneratorFunction.prototype%.
This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
- AsyncGenerator.prototype.next ( _value_ )
+ %AsyncGeneratorPrototype%.next ( _value_ )
1. Let _generator_ be the *this* value.
1. Let _promiseCapability_ be ! NewPromiseCapability(%Promise%).
@@ -46596,7 +48933,7 @@ AsyncGenerator.prototype.next ( _value_ )
1. Return _promiseCapability_.[[Promise]].
1. Let _completion_ be NormalCompletion(_value_).
1. Perform AsyncGeneratorEnqueue(_generator_, _completion_, _promiseCapability_).
- 1. If _state_ is either ~suspendedStart~ or ~suspendedYield~, then
+ 1. If _state_ is either ~suspended-start~ or ~suspended-yield~, then
1. Perform AsyncGeneratorResume(_generator_, _completion_).
1. Else,
1. Assert: _state_ is either ~executing~ or ~awaiting-return~.
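As the enqueue-then-maybe-resume structure above suggests, `next` always returns a promise immediately; requests made while the generator is ~executing~ or ~awaiting-return~ simply wait in the queue and are serviced in order. A sketch:

```javascript
async function* pair() {
  yield 1;
  yield 2;
}

const ag = pair();

// Both calls return promises right away; the second request is
// enqueued and serviced once the first yield completes.
const p1 = ag.next();
const p2 = ag.next();
console.log(p1 instanceof Promise); // true

p1.then(r => console.log(r)); // { value: 1, done: false }
p2.then(r => console.log(r)); // { value: 2, done: false }
```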
@@ -46605,7 +48942,7 @@ AsyncGenerator.prototype.next ( _value_ )
- AsyncGenerator.prototype.return ( _value_ )
+ %AsyncGeneratorPrototype%.return ( _value_ )
1. Let _generator_ be the *this* value.
1. Let _promiseCapability_ be ! NewPromiseCapability(%Promise%).
@@ -46614,10 +48951,10 @@ AsyncGenerator.prototype.return ( _value_ )
1. Let _completion_ be Completion Record { [[Type]]: ~return~, [[Value]]: _value_, [[Target]]: ~empty~ }.
1. Perform AsyncGeneratorEnqueue(_generator_, _completion_, _promiseCapability_).
1. Let _state_ be _generator_.[[AsyncGeneratorState]].
- 1. If _state_ is either ~suspendedStart~ or ~completed~, then
+ 1. If _state_ is either ~suspended-start~ or ~completed~, then
1. Set _generator_.[[AsyncGeneratorState]] to ~awaiting-return~.
1. Perform ! AsyncGeneratorAwaitReturn(_generator_).
- 1. Else if _state_ is ~suspendedYield~, then
+ 1. Else if _state_ is ~suspended-yield~, then
1. Perform AsyncGeneratorResume(_generator_, _completion_).
1. Else,
1. Assert: _state_ is either ~executing~ or ~awaiting-return~.
@@ -46626,14 +48963,14 @@ AsyncGenerator.prototype.return ( _value_ )
- AsyncGenerator.prototype.throw ( _exception_ )
+ %AsyncGeneratorPrototype%.throw ( _exception_ )
1. Let _generator_ be the *this* value.
1. Let _promiseCapability_ be ! NewPromiseCapability(%Promise%).
1. Let _result_ be Completion(AsyncGeneratorValidate(_generator_, ~empty~)).
1. IfAbruptRejectPromise(_result_, _promiseCapability_).
1. Let _state_ be _generator_.[[AsyncGeneratorState]].
- 1. If _state_ is ~suspendedStart~, then
+ 1. If _state_ is ~suspended-start~, then
1. Set _generator_.[[AsyncGeneratorState]] to ~completed~.
1. Set _state_ to ~completed~.
1. If _state_ is ~completed~, then
@@ -46641,7 +48978,7 @@ AsyncGenerator.prototype.throw ( _exception_ )
1. Return _promiseCapability_.[[Promise]].
1. Let _completion_ be ThrowCompletion(_exception_).
1. Perform AsyncGeneratorEnqueue(_generator_, _completion_, _promiseCapability_).
- 1. If _state_ is ~suspendedYield~, then
+ 1. If _state_ is ~suspended-yield~, then
1. Perform AsyncGeneratorResume(_generator_, _completion_).
1. Else,
1. Assert: _state_ is either ~executing~ or ~awaiting-return~.
@@ -46650,7 +48987,7 @@ AsyncGenerator.prototype.throw ( _exception_ )
- AsyncGenerator.prototype [ @@toStringTag ]
+ %AsyncGeneratorPrototype% [ @@toStringTag ]
The initial value of the @@toStringTag property is the String value *"AsyncGenerator"*.
This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
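Since @@toStringTag is an ordinary data property here, it is what `Object.prototype.toString` picks up for async generator objects:

```javascript
const asyncGen = (async function* () {})();

// Object.prototype.toString consults @@toStringTag along the prototype
// chain and finds "AsyncGenerator" on %AsyncGeneratorPrototype%.
console.log(Object.prototype.toString.call(asyncGen)); // [object AsyncGenerator]
```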
@@ -46668,7 +49005,7 @@ Properties of AsyncGenerator Instances
|
[[AsyncGeneratorState]] |
- *undefined*, ~suspendedStart~, ~suspendedYield~, ~executing~, ~awaiting-return~, or ~completed~ |
+ *undefined*, ~suspended-start~, ~suspended-yield~, ~executing~, ~awaiting-return~, or ~completed~ |
The current execution state of the async generator. |
@@ -46732,9 +49069,9 @@
1. Let _genContext_ be the running execution context.
1. Set the Generator component of _genContext_ to _generator_.
1. Let _closure_ be a new Abstract Closure with no parameters that captures _generatorBody_ and performs the following steps when called:
- 1. Let _genContext_ be the running execution context.
- 1. Let _env_ be _genContext_'s LexicalEnvironment.
- 1. Let _generator_ be the Generator component of _genContext_.
+ 1. Let _acGenContext_ be the running execution context.
+ 1. Let _env_ be _acGenContext_'s LexicalEnvironment.
+ 1. Let _acGenerator_ be the Generator component of _acGenContext_.
1. If _generatorBody_ is a Parse Node, then
1. Let _result_ be Completion(Evaluation of _generatorBody_).
1. Else,
@@ -46742,16 +49079,16 @@
1. Let _result_ be Completion(_generatorBody_()).
1. Set _result_ to Completion(DisposeResources(_env_.[[DisposeCapability]], _result_)).
1. Assert: If we return here, the async generator either threw an exception or performed either an implicit or explicit return.
- 1. Remove _genContext_ from the execution context stack and restore the execution context that is at the top of the execution context stack as the running execution context.
- 1. Set _generator_.[[AsyncGeneratorState]] to ~completed~.
- 1. If _result_.[[Type]] is ~normal~, set _result_ to NormalCompletion(*undefined*).
- 1. If _result_.[[Type]] is ~return~, set _result_ to NormalCompletion(_result_.[[Value]]).
- 1. Perform AsyncGeneratorCompleteStep(_generator_, _result_, *true*).
- 1. Perform AsyncGeneratorDrainQueue(_generator_).
+ 1. Remove _acGenContext_ from the execution context stack and restore the execution context that is at the top of the execution context stack as the running execution context.
+ 1. Set _acGenerator_.[[AsyncGeneratorState]] to ~completed~.
+ 1. If _result_ is a normal completion, set _result_ to NormalCompletion(*undefined*).
+ 1. If _result_ is a return completion, set _result_ to NormalCompletion(_result_.[[Value]]).
+ 1. Perform AsyncGeneratorCompleteStep(_acGenerator_, _result_, *true*).
+ 1. Perform AsyncGeneratorDrainQueue(_acGenerator_).
1. Return *undefined*.
1. Set the code evaluation state of _genContext_ such that when evaluation is resumed for that execution context, _closure_ will be called with no arguments.
1. Set _generator_.[[AsyncGeneratorContext]] to _genContext_.
- 1. Set _generator_.[[AsyncGeneratorState]] to ~suspendedStart~.
+ 1. Set _generator_.[[AsyncGeneratorState]] to ~suspended-start~.
1. Set _generator_.[[AsyncGeneratorQueue]] to a new empty List.
1. Return ~unused~.
@@ -46770,7 +49107,7 @@
1. Perform ? RequireInternalSlot(_generator_, [[AsyncGeneratorContext]]).
1. Perform ? RequireInternalSlot(_generator_, [[AsyncGeneratorState]]).
1. Perform ? RequireInternalSlot(_generator_, [[AsyncGeneratorQueue]]).
- 1. If _generator_.[[GeneratorBrand]] is not the same value as _generatorBrand_, throw a *TypeError* exception.
+ 1. If _generator_.[[GeneratorBrand]] is not _generatorBrand_, throw a *TypeError* exception.
1. Return ~unused~.
@@ -46804,16 +49141,15 @@
- 1. Let _queue_ be _generator_.[[AsyncGeneratorQueue]].
- 1. Assert: _queue_ is not empty.
- 1. Let _next_ be the first element of _queue_.
- 1. Remove the first element from _queue_.
+ 1. Assert: _generator_.[[AsyncGeneratorQueue]] is not empty.
+ 1. Let _next_ be the first element of _generator_.[[AsyncGeneratorQueue]].
+ 1. Remove the first element from _generator_.[[AsyncGeneratorQueue]].
1. Let _promiseCapability_ be _next_.[[Capability]].
1. Let _value_ be _completion_.[[Value]].
- 1. If _completion_.[[Type]] is ~throw~, then
+ 1. If _completion_ is a throw completion, then
1. Perform ! Call(_promiseCapability_.[[Reject]], *undefined*, « _value_ »).
1. Else,
- 1. Assert: _completion_.[[Type]] is ~normal~.
+ 1. Assert: _completion_ is a normal completion.
1. If _realm_ is present, then
1. Let _oldRealm_ be the running execution context's Realm.
1. Set the running execution context's Realm to _realm_.
@@ -46836,7 +49172,7 @@
- 1. Assert: _generator_.[[AsyncGeneratorState]] is either ~suspendedStart~ or ~suspendedYield~.
+ 1. Assert: _generator_.[[AsyncGeneratorState]] is either ~suspended-start~ or ~suspended-yield~.
1. Let _genContext_ be _generator_.[[AsyncGeneratorContext]].
1. Let _callerContext_ be the running execution context.
1. Suspend _callerContext_.
@@ -46858,10 +49194,10 @@
- 1. If _resumptionValue_.[[Type]] is not ~return~, return ? _resumptionValue_.
+ 1. If _resumptionValue_ is not a return completion, return ? _resumptionValue_.
1. Let _awaited_ be Completion(Await(_resumptionValue_.[[Value]])).
- 1. If _awaited_.[[Type]] is ~throw~, return ? _awaited_.
- 1. Assert: _awaited_.[[Type]] is ~normal~.
+ 1. If _awaited_ is a throw completion, return ? _awaited_.
+ 1. Assert: _awaited_ is a normal completion.
1. Return Completion Record { [[Type]]: ~return~, [[Value]]: _awaited_.[[Value]], [[Target]]: ~empty~ }.
@@ -46891,7 +49227,7 @@
1. Let _resumptionValue_ be Completion(_toYield_.[[Completion]]).
1. Return ? AsyncGeneratorUnwrapYieldResumption(_resumptionValue_).
1. Else,
- 1. Set _generator_.[[AsyncGeneratorState]] to ~suspendedYield~.
+ 1. Set _generator_.[[AsyncGeneratorState]] to ~suspended-yield~.
1. Remove _genContext_ from the execution context stack and restore the execution context that is at the top of the execution context stack as the running execution context.
1. Let _callerContext_ be the running execution context.
1. Resume _callerContext_ passing *undefined*. If _genContext_ is ever resumed again, let _resumptionValue_ be the Completion Record with which it is resumed.
@@ -46913,7 +49249,7 @@
1. Assert: _queue_ is not empty.
1. Let _next_ be the first element of _queue_.
1. Let _completion_ be Completion(_next_.[[Completion]]).
- 1. Assert: _completion_.[[Type]] is ~return~.
+ 1. Assert: _completion_ is a return completion.
1. Let _promise_ be ? PromiseResolve(%Promise%, _completion_.[[Value]]).
1. Let _fulfilledClosure_ be a new Abstract Closure with parameters (_value_) that captures _generator_ and performs the following steps when called:
1. Set _generator_.[[AsyncGeneratorState]] to ~completed~.
@@ -46952,12 +49288,12 @@
1. Repeat, while _done_ is *false*,
1. Let _next_ be the first element of _queue_.
1. Let _completion_ be Completion(_next_.[[Completion]]).
- 1. If _completion_.[[Type]] is ~return~, then
+ 1. If _completion_ is a return completion, then
1. Set _generator_.[[AsyncGeneratorState]] to ~awaiting-return~.
1. Perform ! AsyncGeneratorAwaitReturn(_generator_).
1. Set _done_ to *true*.
1. Else,
- 1. If _completion_.[[Type]] is ~normal~, then
+ 1. If _completion_ is a normal completion, then
1. Set _completion_ to NormalCompletion(*undefined*).
1. Perform AsyncGeneratorCompleteStep(_generator_, _completion_, *true*).
1. If _queue_ is empty, set _done_ to *true*.
@@ -47033,15 +49369,11 @@ Properties of the AsyncFunction Constructor
- is a standard built-in function object that inherits from the Function constructor.
- has a [[Prototype]] internal slot whose value is %Function%.
+ - has a *"length"* property whose value is *1*𝔽.
- has a *"name"* property whose value is *"AsyncFunction"*.
- has the following properties:
-
- AsyncFunction.length
- This is a data property with a value of 1. This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
-
-
AsyncFunction.prototype
The initial value of `AsyncFunction.prototype` is the AsyncFunction prototype object.
@@ -47063,7 +49395,7 @@ Properties of the AsyncFunction Prototype Object
AsyncFunction.prototype.constructor
- The initial value of `AsyncFunction.prototype.constructor` is %AsyncFunction%
+ The initial value of `AsyncFunction.prototype.constructor` is %AsyncFunction%.
This property has the attributes { [[Writable]]: *false*, [[Enumerable]]: *false*, [[Configurable]]: *true* }.
@@ -47126,21 +49458,20 @@
- 1. Assert: _promiseCapability_ is a PromiseCapability Record.
1. Let _runningContext_ be the running execution context.
1. Let _closure_ be a new Abstract Closure with no parameters that captures _promiseCapability_ and _asyncBody_ and performs the following steps when called:
- 1. Let _asyncContext_ be the running execution context.
- 1. Let _env_ be _asyncContext_'s LexicalEnvironment.
+ 1. Let _acAsyncContext_ be the running execution context.
+ 1. Let _env_ be _acAsyncContext_'s LexicalEnvironment.
1. Let _result_ be Completion(Evaluation of _asyncBody_).
1. Set _result_ to Completion(DisposeResources(_env_.[[DisposeCapability]], _result_)).
1. Assert: If we return here, the async function either threw an exception or performed an implicit or explicit return; all awaiting is done.
- 1. Remove _asyncContext_ from the execution context stack and restore the execution context that is at the top of the execution context stack as the running execution context.
- 1. If _result_.[[Type]] is ~normal~, then
+ 1. Remove _acAsyncContext_ from the execution context stack and restore the execution context that is at the top of the execution context stack as the running execution context.
+ 1. If _result_ is a normal completion, then
1. Perform ! Call(_promiseCapability_.[[Resolve]], *undefined*, « *undefined* »).
- 1. Else if _result_.[[Type]] is ~return~, then
+ 1. Else if _result_ is a return completion, then
1. Perform ! Call(_promiseCapability_.[[Resolve]], *undefined*, « _result_.[[Value]] »).
1. Else,
- 1. Assert: _result_.[[Type]] is ~throw~.
+ 1. Assert: _result_ is a throw completion.
1. Perform ! Call(_promiseCapability_.[[Reject]], *undefined*, « _result_.[[Value]] »).
1. [id="step-asyncblockstart-return-undefined"] Return ~unused~.
1. Set the code evaluation state of _asyncContext_ such that when evaluation is resumed for that execution context, _closure_ will be called with no arguments.
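Each branch of the closure above maps a body completion to a settlement of the async function's promise; a minimal illustration:

```javascript
async function finishesNormally() {}                 // normal completion → resolve(undefined)
async function returnsValue() { return 7; }          // return completion → resolve(7)
async function throws() { throw new RangeError(); }  // throw completion → reject

console.log(finishesNormally() instanceof Promise); // true
returnsValue().then(v => console.log(v)); // 7
throws().catch(e => console.log(e instanceof RangeError)); // true
```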
@@ -47163,11 +49494,11 @@
1. Let _asyncContext_ be the running execution context.
1. Let _promise_ be ? PromiseResolve(%Promise%, _value_).
- 1. Let _fulfilledClosure_ be a new Abstract Closure with parameters (_value_) that captures _asyncContext_ and performs the following steps when called:
+ 1. Let _fulfilledClosure_ be a new Abstract Closure with parameters (_v_) that captures _asyncContext_ and performs the following steps when called:
1. Let _prevContext_ be the running execution context.
1. Suspend _prevContext_.
1. Push _asyncContext_ onto the execution context stack; _asyncContext_ is now the running execution context.
- 1. Resume the suspended evaluation of _asyncContext_ using NormalCompletion(_value_) as the result of the operation that suspended it.
+ 1. Resume the suspended evaluation of _asyncContext_ using NormalCompletion(_v_) as the result of the operation that suspended it.
1. Assert: When we reach this step, _asyncContext_ has already been removed from the execution context stack and _prevContext_ is the currently running execution context.
1. Return *undefined*.
1. Let _onFulfilled_ be CreateBuiltinFunction(_fulfilledClosure_, 1, *""*, « »).
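The suspend/resume handshake above is observable in execution order: everything up to the first `await` runs synchronously as part of the call, and the continuation is resumed from a later microtask via the fulfilled closure. For example:

```javascript
const log = [];

(async () => {
  log.push("before await"); // runs synchronously, as part of the call
  await 0;                  // suspends; _onFulfilled_ resumes us later
  log.push("after await");  // runs in a later microtask
})();

log.push("caller continues");
console.log(log); // ["before await", "caller continues"]
```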
@@ -47387,20 +49718,20 @@ Proxy.revocable ( _target_, _handler_ )
This function creates a revocable Proxy object.
It performs the following steps when called:
- 1. Let _p_ be ? ProxyCreate(_target_, _handler_).
+ 1. Let _proxy_ be ? ProxyCreate(_target_, _handler_).
1. Let _revokerClosure_ be a new Abstract Closure with no parameters that captures nothing and performs the following steps when called:
1. Let _F_ be the active function object.
1. Let _p_ be _F_.[[RevocableProxy]].
1. If _p_ is *null*, return *undefined*.
1. Set _F_.[[RevocableProxy]] to *null*.
- 1. Assert: _p_ is a Proxy object.
+ 1. Assert: _p_ is a Proxy exotic object.
1. Set _p_.[[ProxyTarget]] to *null*.
1. Set _p_.[[ProxyHandler]] to *null*.
1. Return *undefined*.
1. Let _revoker_ be CreateBuiltinFunction(_revokerClosure_, 0, *""*, « [[RevocableProxy]] »).
- 1. Set _revoker_.[[RevocableProxy]] to _p_.
+ 1. Set _revoker_.[[RevocableProxy]] to _proxy_.
1. Let _result_ be OrdinaryObjectCreate(%Object.prototype%).
- 1. Perform ! CreateDataPropertyOrThrow(_result_, *"proxy"*, _p_).
+ 1. Perform ! CreateDataPropertyOrThrow(_result_, *"proxy"*, _proxy_).
1. Perform ! CreateDataPropertyOrThrow(_result_, *"revoke"*, _revoker_).
1. Return _result_.
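Putting the steps above together: the returned object carries the proxy and the revoker, revocation nulls out the proxy's internal slots so later operations throw, and revoking again is a no-op. A sketch:

```javascript
const target = { answer: 42 };
const { proxy, revoke } = Proxy.revocable(target, {});

console.log(proxy.answer); // 42 — behaves like the target before revocation

revoke(); // sets [[ProxyTarget]] and [[ProxyHandler]] to null

try {
  proxy.answer; // any operation on a revoked proxy throws
} catch (e) {
  console.log(e instanceof TypeError); // true
}

console.log(revoke()); // undefined — [[RevocableProxy]] is already null
```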
@@ -47446,7 +49777,7 @@ Memory Model Fundamentals
[[Order]] |
- ~SeqCst~ or ~Unordered~ |
+ ~seq-cst~ or ~unordered~ |
The weakest ordering guaranteed by the memory model for the event. |
@@ -47481,7 +49812,7 @@ Memory Model Fundamentals
[[Order]] |
- ~SeqCst~, ~Unordered~, or ~Init~ |
+ ~seq-cst~, ~unordered~, or ~init~ |
The weakest ordering guaranteed by the memory model for the event. |
@@ -47521,7 +49852,7 @@ Memory Model Fundamentals
[[Order]] |
- ~SeqCst~ |
+ ~seq-cst~ |
Read-modify-write events are always sequentially consistent. |
@@ -47787,7 +50118,7 @@ Relations of Candidate Executions
agent-order
For a candidate execution _execution_, _execution_.[[AgentOrder]] is a Relation on events that satisfies the following.
- - For each pair (_E_, _D_) in EventSet(_execution_), (_E_, _D_) is in _execution_.[[AgentOrder]] if there is some Agent Events Record _aer_ in _execution_.[[EventsRecords]] such that _E_ and _D_ are in _aer_.[[EventList]] and _E_ is before _D_ in List order of _aer_.[[EventList]].
+ - For each pair (_E_, _D_) in EventSet(_execution_), _execution_.[[AgentOrder]] contains (_E_, _D_) if there is some Agent Events Record _aer_ in _execution_.[[EventsRecords]] such that _E_ and _D_ are in _aer_.[[EventList]] and _E_ is before _D_ in List order of _aer_.[[EventList]].
@@ -47813,7 +50144,7 @@ reads-bytes-from
reads-from
For a candidate execution _execution_, _execution_.[[ReadsFrom]] is the least Relation on events that satisfies the following.
- - For each pair (_R_, _W_) in SharedDataBlockEventSet(_execution_), (_R_, _W_) is in _execution_.[[ReadsFrom]] if _W_ is in _execution_.[[ReadsBytesFrom]](_R_).
+ - For each pair (_R_, _W_) in SharedDataBlockEventSet(_execution_), _execution_.[[ReadsFrom]] contains (_R_, _W_) if _execution_.[[ReadsBytesFrom]](_R_) contains _W_.
@@ -47821,7 +50152,7 @@ reads-from
host-synchronizes-with
For a candidate execution _execution_, _execution_.[[HostSynchronizesWith]] is a host-provided strict partial order on host-specific events that satisfies at least the following.
- - If (_E_, _D_) is in _execution_.[[HostSynchronizesWith]], _E_ and _D_ are in HostEventSet(_execution_).
+ - If _execution_.[[HostSynchronizesWith]] contains (_E_, _D_), _E_ and _D_ are in HostEventSet(_execution_).
- There is no cycle in the union of _execution_.[[HostSynchronizesWith]] and _execution_.[[AgentOrder]].
@@ -47838,15 +50169,15 @@ synchronizes-with
For a candidate execution _execution_, _execution_.[[SynchronizesWith]] is the least Relation on events that satisfies the following.
-
- For each pair (_R_, _W_) in _execution_.[[ReadsFrom]], (_W_, _R_) is in _execution_.[[SynchronizesWith]] if _R_.[[Order]] is ~SeqCst~, _W_.[[Order]] is ~SeqCst~, and _R_ and _W_ have equal ranges.
+ For each pair (_R_, _W_) in _execution_.[[ReadsFrom]], _execution_.[[SynchronizesWith]] contains (_W_, _R_) if _R_.[[Order]] is ~seq-cst~, _W_.[[Order]] is ~seq-cst~, and _R_ and _W_ have equal ranges.
-
For each element _eventsRecord_ of _execution_.[[EventsRecords]], the following is true.
- - For each pair (_S_, _Sw_) in _eventsRecord_.[[AgentSynchronizesWith]], (_S_, _Sw_) is in _execution_.[[SynchronizesWith]].
+ - For each pair (_S_, _Sw_) in _eventsRecord_.[[AgentSynchronizesWith]], _execution_.[[SynchronizesWith]] contains (_S_, _Sw_).
- - For each pair (_E_, _D_) in _execution_.[[HostSynchronizesWith]], (_E_, _D_) is in _execution_.[[SynchronizesWith]].
+ - For each pair (_E_, _D_) in _execution_.[[HostSynchronizesWith]], _execution_.[[SynchronizesWith]] contains (_E_, _D_).
@@ -47854,11 +50185,11 @@ synchronizes-with
- ~Init~ events do not participate in synchronizes-with, and are instead constrained directly by happens-before.
+ ~init~ events do not participate in synchronizes-with, and are instead constrained directly by happens-before.
- Not all ~SeqCst~ events related by reads-from are related by synchronizes-with. Only events that also have equal ranges are related by synchronizes-with.
+ Not all ~seq-cst~ events related by reads-from are related by synchronizes-with. Only events that also have equal ranges are related by synchronizes-with.
@@ -47871,10 +50202,10 @@ happens-before
For a candidate execution _execution_, _execution_.[[HappensBefore]] is the least Relation on events that satisfies the following.
- - For each pair (_E_, _D_) in _execution_.[[AgentOrder]], (_E_, _D_) is in _execution_.[[HappensBefore]].
- - For each pair (_E_, _D_) in _execution_.[[SynchronizesWith]], (_E_, _D_) is in _execution_.[[HappensBefore]].
- - For each pair (_E_, _D_) in SharedDataBlockEventSet(_execution_), (_E_, _D_) is in _execution_.[[HappensBefore]] if _E_.[[Order]] is ~Init~ and _E_ and _D_ have overlapping ranges.
- - For each pair (_E_, _D_) in EventSet(_execution_), (_E_, _D_) is in _execution_.[[HappensBefore]] if there is an event _F_ such that the pairs (_E_, _F_) and (_F_, _D_) are in _execution_.[[HappensBefore]].
+ - For each pair (_E_, _D_) in _execution_.[[AgentOrder]], _execution_.[[HappensBefore]] contains (_E_, _D_).
+ - For each pair (_E_, _D_) in _execution_.[[SynchronizesWith]], _execution_.[[HappensBefore]] contains (_E_, _D_).
+ - For each pair (_E_, _D_) in SharedDataBlockEventSet(_execution_), _execution_.[[HappensBefore]] contains (_E_, _D_) if _E_.[[Order]] is ~init~ and _E_ and _D_ have overlapping ranges.
+ - For each pair (_E_, _D_) in EventSet(_execution_), _execution_.[[HappensBefore]] contains (_E_, _D_) if there is an event _F_ such that the pairs (_E_, _F_) and (_F_, _D_) are in _execution_.[[HappensBefore]].
@@ -47912,7 +50243,7 @@ Coherent Reads
1. Let _Ws_ be _execution_.[[ReadsBytesFrom]](_R_).
1. Let _byteLocation_ be _R_.[[ByteIndex]].
1. For each element _W_ of _Ws_, do
- 1. If (_R_, _W_) is in _execution_.[[HappensBefore]], then
+ 1. If _execution_.[[HappensBefore]] contains (_R_, _W_), then
1. Return *false*.
1. If there exists a WriteSharedMemory or ReadModifyWriteSharedMemory event _V_ that has _byteLocation_ in its range such that the pairs (_W_, _V_) and (_V_, _R_) are in _execution_.[[HappensBefore]], then
1. Return *false*.
@@ -47928,8 +50259,8 @@ Tear Free Reads
1. For each ReadSharedMemory or ReadModifyWriteSharedMemory event _R_ of SharedDataBlockEventSet(_execution_), do
1. If _R_.[[NoTear]] is *true*, then
1. Assert: The remainder of dividing _R_.[[ByteIndex]] by _R_.[[ElementSize]] is 0.
- 1. For each event _W_ such that (_R_, _W_) is in _execution_.[[ReadsFrom]] and _W_.[[NoTear]] is *true*, do
- 1. If _R_ and _W_ have equal ranges and there exists an event _V_ such that _V_ and _W_ have equal ranges, _V_.[[NoTear]] is *true*, _W_ is not _V_, and (_R_, _V_) is in _execution_.[[ReadsFrom]], then
+ 1. For each event _W_ such that _execution_.[[ReadsFrom]] contains (_R_, _W_) and _W_.[[NoTear]] is *true*, do
+ 1. If _R_ and _W_ have equal ranges and there exists an event _V_ such that _V_ and _W_ have equal ranges, _V_.[[NoTear]] is *true*, _W_ and _V_ are not the same Shared Data Block event, and _execution_.[[ReadsFrom]] contains (_R_, _V_), then
1. Return *false*.
1. Return *true*.
@@ -47946,20 +50277,20 @@ Sequentially Consistent Atomics
- For each pair (_E_, _D_) in _execution_.[[HappensBefore]], (_E_, _D_) is in memory-order.
-
-
- For each pair (_R_, _W_) in _execution_.[[ReadsFrom]], there is no WriteSharedMemory or ReadModifyWriteSharedMemory event _V_ in SharedDataBlockEventSet(_execution_) such that _V_.[[Order]] is ~SeqCst~, the pairs (_W_, _V_) and (_V_, _R_) are in memory-order, and any of the following conditions are true.
+ For each pair (_R_, _W_) in _execution_.[[ReadsFrom]], there is no WriteSharedMemory or ReadModifyWriteSharedMemory event _V_ in SharedDataBlockEventSet(_execution_) such that _V_.[[Order]] is ~seq-cst~, the pairs (_W_, _V_) and (_V_, _R_) are in memory-order, and any of the following conditions are true.
- - The pair (_W_, _R_) is in _execution_.[[SynchronizesWith]], and _V_ and _R_ have equal ranges.
- - The pairs (_W_, _R_) and (_V_, _R_) are in _execution_.[[HappensBefore]], _W_.[[Order]] is ~SeqCst~, and _W_ and _V_ have equal ranges.
- - The pairs (_W_, _R_) and (_W_, _V_) are in _execution_.[[HappensBefore]], _R_.[[Order]] is ~SeqCst~, and _V_ and _R_ have equal ranges.
+ - _execution_.[[SynchronizesWith]] contains the pair (_W_, _R_), and _V_ and _R_ have equal ranges.
+ - The pairs (_W_, _R_) and (_V_, _R_) are in _execution_.[[HappensBefore]], _W_.[[Order]] is ~seq-cst~, and _W_ and _V_ have equal ranges.
+ - The pairs (_W_, _R_) and (_W_, _V_) are in _execution_.[[HappensBefore]], _R_.[[Order]] is ~seq-cst~, and _V_ and _R_ have equal ranges.
- This clause additionally constrains ~SeqCst~ events on equal ranges.
+ This clause additionally constrains ~seq-cst~ events on equal ranges.
-
-
- For each WriteSharedMemory or ReadModifyWriteSharedMemory event _W_ in SharedDataBlockEventSet(_execution_), if _W_.[[Order]] is ~SeqCst~, then it is not the case that there is an infinite number of ReadSharedMemory or ReadModifyWriteSharedMemory events in SharedDataBlockEventSet(_execution_) with equal range that is memory-order before _W_.
+ For each WriteSharedMemory or ReadModifyWriteSharedMemory event _W_ in SharedDataBlockEventSet(_execution_), if _W_.[[Order]] is ~seq-cst~, then it is not the case that there is an infinite number of ReadSharedMemory or ReadModifyWriteSharedMemory events in SharedDataBlockEventSet(_execution_) with equal range that is memory-order before _W_.
- This clause together with the forward progress guarantee on agents ensure the liveness condition that ~SeqCst~ writes become visible to ~SeqCst~ reads with equal range in finite time.
+ This clause together with the forward progress guarantee on agents ensure the liveness condition that ~seq-cst~ writes become visible to ~seq-cst~ reads with equal range in finite time.
@@ -47989,11 +50320,11 @@ Valid Executions
Races
For an execution _execution_, two events _E_ and _D_ in SharedDataBlockEventSet(_execution_) are in a race if the following algorithm returns *true*.
- 1. If _E_ is not _D_, then
+ 1. If _E_ and _D_ are not the same Shared Data Block event, then
1. If the pairs (_E_, _D_) and (_D_, _E_) are not in _execution_.[[HappensBefore]], then
1. If _E_ and _D_ are both WriteSharedMemory or ReadModifyWriteSharedMemory events and _E_ and _D_ do not have disjoint ranges, then
1. Return *true*.
- 1. If either (_E_, _D_) or (_D_, _E_) is in _execution_.[[ReadsFrom]], then
+ 1. If _execution_.[[ReadsFrom]] contains either (_E_, _D_) or (_D_, _E_), then
1. Return *true*.
1. Return *false*.
@@ -48004,7 +50335,7 @@ Data Races
For an execution _execution_, two events _E_ and _D_ in SharedDataBlockEventSet(_execution_) are in a data race if the following algorithm returns *true*.
1. If _E_ and _D_ are in a race in _execution_, then
- 1. If _E_.[[Order]] is not ~SeqCst~ or _D_.[[Order]] is not ~SeqCst~, then
+ 1. If _E_.[[Order]] is not ~seq-cst~ or _D_.[[Order]] is not ~seq-cst~, then
1. Return *true*.
1. If _E_ and _D_ have overlapping ranges, then
1. Return *true*.
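In terms of the API, the ~seq-cst~ events above correspond (roughly) to `Atomics` operations on shared-memory-backed integer TypedArrays, while ordinary indexed reads and writes produce ~unordered~ events:

```javascript
const sab = new SharedArrayBuffer(8);
const ta = new Int32Array(sab);

Atomics.store(ta, 0, 42);           // seq-cst write event
console.log(Atomics.load(ta, 0));   // 42 — seq-cst read event

// Read-modify-write events are always seq-cst; the old value is returned.
console.log(Atomics.add(ta, 0, 1)); // 42
console.log(ta[0]);                 // 43 — plain indexed access: unordered event
```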
@@ -48035,8 +50366,8 @@ Shared Memory Guidelines
Any transformation of an agent-order slice that is valid in the absence of shared memory is valid in the presence of shared memory, with the following exceptions.
-
-
- Atomics are carved in stone: Program transformations must not cause the ~SeqCst~ events in an agent-order slice to be reordered with its ~Unordered~ operations, nor its ~SeqCst~ operations to be reordered with each other, nor may a program transformation remove a ~SeqCst~ operation from the agent-order.
- (In practice, the prohibition on reorderings forces a compiler to assume that every ~SeqCst~ operation is a synchronization and included in the final memory-order, which it would usually have to assume anyway in the absence of inter-agent program analysis. It also forces the compiler to assume that every call where the callee's effects on the memory-order are unknown may contain ~SeqCst~ operations.)
+ Atomics are carved in stone: Program transformations must not cause the ~seq-cst~ events in an agent-order slice to be reordered with its ~unordered~ operations, nor its ~seq-cst~ operations to be reordered with each other, nor may a program transformation remove a ~seq-cst~ operation from the agent-order.
+ (In practice, the prohibition on reorderings forces a compiler to assume that every ~seq-cst~ operation is a synchronization and included in the final memory-order, which it would usually have to assume anyway in the absence of inter-agent program analysis. It also forces the compiler to assume that every call where the callee's effects on the memory-order are unknown may contain ~seq-cst~ operations.)
-
Reads must be stable: Any given shared memory read must only observe a single value in an execution.
@@ -48048,7 +50379,7 @@ Shared Memory Guidelines
-
Possible read values must be non-empty: Program transformations cannot cause the possible read values of a shared memory read to become empty.
- (Counterintuitively, this rule in effect restricts transformations on writes, because writes have force in memory model insofar as to be read by read events. For example, writes may be moved and coalesced and sometimes reordered between two ~SeqCst~ operations, but the transformation may not remove every write that updates a location; some write must be preserved.)
+ (Counterintuitively, this rule in effect restricts transformations on writes, because writes have force in memory model insofar as to be read by read events. For example, writes may be moved and coalesced and sometimes reordered between two ~seq-cst~ operations, but the transformation may not remove every write that updates a location; some write must be preserved.)
Examples of transformations that remain valid are: merging multiple non-atomic reads from the same location, reordering non-atomic reads, introducing speculative non-atomic reads, merging multiple non-atomic writes to the same location, reordering non-atomic writes to different locations, and hoisting non-atomic reads out of loops even if that affects termination. Note in general that aliased TypedArrays make it hard to prove that locations are different.
@@ -48072,7 +50403,7 @@ Shared Memory Guidelines
Non-lock-free atomics compile to a spinlock acquire, a full fence, a series of non-atomic load and store instructions, a full fence, and a spinlock release.
That mapping is correct so long as an atomic operation on an address range does not race with a non-atomic write or with an atomic operation of different size. However, that is all we need: the memory model effectively demotes the atomic operations involved in a race to non-atomic status. On the other hand, the naive mapping is quite strong: it allows atomic operations to be used as sequentially consistent fences, which the memory model does not actually guarantee.
- A number of local improvements to those basic patterns are also intended to be legal:
+ Local improvements to those basic patterns are also allowed, subject to the constraints of the memory model. For example:
- There are obvious platform-dependent improvements that remove redundant fences. For example, on x86 the fences around lock-free atomic loads and stores can always be omitted except for the fence following a store, and no fence is needed for lock-free read-modify-write instructions, as these all use `LOCK`-prefixed instructions. On many platforms there are fences of several strengths, and weaker fences can be used in certain contexts without destroying sequential consistency.
- Most modern platforms support lock-free atomics for all the data sizes required by ECMAScript atomics. Should non-lock-free atomics be needed, the fences surrounding the body of the atomic operation can usually be folded into the lock and unlock steps. The simplest solution for non-lock-free atomics is to have a single lock word per SharedArrayBuffer.
@@ -48525,6 +50871,15 @@ HTML-like Comments
The syntax and semantics of is extended as follows except that this extension is not allowed when parsing source text using the goal symbol |Module|:
Syntax
+ InputElementHashbangOrRegExp ::
+ WhiteSpace
+ LineTerminator
+ Comment
+ CommonToken
+ HashbangComment
+ RegularExpressionLiteral
+ HTMLCloseComment
+
Comment ::
MultiLineComment
SingleLineComment
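As an illustrative sketch (assuming a host implementing this Annex, such as V8, and sloppy-mode Script code, since the extension is disallowed for the |Module| goal), both HTML-like comment forms behave as ordinary single-line comments:

```javascript
// "<!--" opens a single-line comment anywhere a token may begin; "-->"
// closes one, but only when it appears at the start of a line (optionally
// preceded by whitespace or comments).
const r = eval("1; <!-- everything after the arrow is a comment\n--> this whole line is a comment too\n2;");
console.log(r); // 2
```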
@@ -48579,84 +50934,84 @@ Regular Expressions Patterns
This alternative pattern grammar and semantics only changes the syntax and semantics of BMP patterns. The following grammar extensions include productions parameterized with the [UnicodeMode] parameter. However, none of these extensions change the syntax of Unicode patterns recognized when parsing with the [UnicodeMode] parameter present on the goal symbol.
Syntax
- Term[UnicodeMode, N] ::
- [+UnicodeMode] Assertion[+UnicodeMode, ?N]
- [+UnicodeMode] Atom[+UnicodeMode, ?N] Quantifier
- [+UnicodeMode] Atom[+UnicodeMode, ?N]
- [~UnicodeMode] QuantifiableAssertion[?N] Quantifier
- [~UnicodeMode] Assertion[~UnicodeMode, ?N]
- [~UnicodeMode] ExtendedAtom[?N] Quantifier
- [~UnicodeMode] ExtendedAtom[?N]
-
- Assertion[UnicodeMode, N] ::
+ Term[UnicodeMode, UnicodeSetsMode, NamedCaptureGroups] ::
+ [+UnicodeMode] Assertion[+UnicodeMode, ?UnicodeSetsMode, ?NamedCaptureGroups]
+ [+UnicodeMode] Atom[+UnicodeMode, ?UnicodeSetsMode, ?NamedCaptureGroups] Quantifier
+ [+UnicodeMode] Atom[+UnicodeMode, ?UnicodeSetsMode, ?NamedCaptureGroups]
+ [~UnicodeMode] QuantifiableAssertion[?NamedCaptureGroups] Quantifier
+ [~UnicodeMode] Assertion[~UnicodeMode, ~UnicodeSetsMode, ?NamedCaptureGroups]
+ [~UnicodeMode] ExtendedAtom[?NamedCaptureGroups] Quantifier
+ [~UnicodeMode] ExtendedAtom[?NamedCaptureGroups]
+
+ Assertion[UnicodeMode, UnicodeSetsMode, NamedCaptureGroups] ::
`^`
`$`
- `\` `b`
- `\` `B`
- [+UnicodeMode] `(` `?` `=` Disjunction[+UnicodeMode, ?N] `)`
- [+UnicodeMode] `(` `?` `!` Disjunction[+UnicodeMode, ?N] `)`
- [~UnicodeMode] QuantifiableAssertion[?N]
- `(` `?` `<=` Disjunction[?UnicodeMode, ?N] `)`
- `(` `?` `<!` Disjunction[?UnicodeMode, ?N] `)`
-
- QuantifiableAssertion[N] ::
- `(` `?` `=` Disjunction[~UnicodeMode, ?N] `)`
- `(` `?` `!` Disjunction[~UnicodeMode, ?N] `)`
-
- ExtendedAtom[N] ::
+ `\b`
+ `\B`
+ [+UnicodeMode] `(?=` Disjunction[+UnicodeMode, ?UnicodeSetsMode, ?NamedCaptureGroups] `)`
+ [+UnicodeMode] `(?!` Disjunction[+UnicodeMode, ?UnicodeSetsMode, ?NamedCaptureGroups] `)`
+ [~UnicodeMode] QuantifiableAssertion[?NamedCaptureGroups]
+ `(?<=` Disjunction[?UnicodeMode, ?UnicodeSetsMode, ?NamedCaptureGroups] `)`
+ `(?<!` Disjunction[?UnicodeMode, ?UnicodeSetsMode, ?NamedCaptureGroups] `)`
+
+ QuantifiableAssertion[NamedCaptureGroups] ::
+ `(?=` Disjunction[~UnicodeMode, ~UnicodeSetsMode, ?NamedCaptureGroups] `)`
+ `(?!` Disjunction[~UnicodeMode, ~UnicodeSetsMode, ?NamedCaptureGroups] `)`
+
+ ExtendedAtom[NamedCaptureGroups] ::
`.`
- `\` AtomEscape[~UnicodeMode, ?N]
+ `\` AtomEscape[~UnicodeMode, ?NamedCaptureGroups]
`\` [lookahead == `c`]
- CharacterClass[~UnicodeMode]
- `(` GroupSpecifier[~UnicodeMode]? Disjunction[~UnicodeMode, ?N] `)`
- `(` `?` `:` Disjunction[~UnicodeMode, ?N] `)`
+ CharacterClass[~UnicodeMode, ~UnicodeSetsMode]
+ `(` GroupSpecifier[~UnicodeMode]? Disjunction[~UnicodeMode, ~UnicodeSetsMode, ?NamedCaptureGroups] `)`
+ `(?:` Disjunction[~UnicodeMode, ~UnicodeSetsMode, ?NamedCaptureGroups] `)`
InvalidBracedQuantifier
ExtendedPatternCharacter
InvalidBracedQuantifier ::
`{` DecimalDigits[~Sep] `}`
- `{` DecimalDigits[~Sep] `,` `}`
+ `{` DecimalDigits[~Sep] `,}`
`{` DecimalDigits[~Sep] `,` DecimalDigits[~Sep] `}`
ExtendedPatternCharacter ::
SourceCharacter but not one of `^` `$` `\` `.` `*` `+` `?` `(` `)` `[` `|`
- AtomEscape[UnicodeMode, N] ::
+ AtomEscape[UnicodeMode, NamedCaptureGroups] ::
[+UnicodeMode] DecimalEscape
[~UnicodeMode] DecimalEscape [> but only if the CapturingGroupNumber of |DecimalEscape| is ≤ CountLeftCapturingParensWithin(the |Pattern| containing |DecimalEscape|)]
CharacterClassEscape[?UnicodeMode]
- CharacterEscape[?UnicodeMode, ?N]
- [+N] `k` GroupName[?UnicodeMode]
+ CharacterEscape[?UnicodeMode, ?NamedCaptureGroups]
+ [+NamedCaptureGroups] `k` GroupName[?UnicodeMode]
- CharacterEscape[UnicodeMode, N] ::
+ CharacterEscape[UnicodeMode, NamedCaptureGroups] ::
ControlEscape
`c` AsciiLetter
`0` [lookahead ∉ DecimalDigit]
HexEscapeSequence
RegExpUnicodeEscapeSequence[?UnicodeMode]
[~UnicodeMode] LegacyOctalEscapeSequence
- IdentityEscape[?UnicodeMode, ?N]
+ IdentityEscape[?UnicodeMode, ?NamedCaptureGroups]
- IdentityEscape[UnicodeMode, N] ::
+ IdentityEscape[UnicodeMode, NamedCaptureGroups] ::
[+UnicodeMode] SyntaxCharacter
[+UnicodeMode] `/`
- [~UnicodeMode] SourceCharacterIdentityEscape[?N]
+ [~UnicodeMode] SourceCharacterIdentityEscape[?NamedCaptureGroups]
- SourceCharacterIdentityEscape[N] ::
- [~N] SourceCharacter but not `c`
- [+N] SourceCharacter but not one of `c` or `k`
+ SourceCharacterIdentityEscape[NamedCaptureGroups] ::
+ [~NamedCaptureGroups] SourceCharacter but not `c`
+ [+NamedCaptureGroups] SourceCharacter but not one of `c` or `k`
- ClassAtomNoDash[UnicodeMode, N] ::
+ ClassAtomNoDash[UnicodeMode, NamedCaptureGroups] ::
SourceCharacter but not one of `\` or `]` or `-`
- `\` ClassEscape[?UnicodeMode, ?N]
+ `\` ClassEscape[?UnicodeMode, ?NamedCaptureGroups]
`\` [lookahead == `c`]
- ClassEscape[UnicodeMode, N] ::
+ ClassEscape[UnicodeMode, NamedCaptureGroups] ::
`b`
[+UnicodeMode] `-`
[~UnicodeMode] `c` ClassControlLetter
CharacterClassEscape[?UnicodeMode]
- CharacterEscape[?UnicodeMode, ?N]
+ CharacterEscape[?UnicodeMode, ?NamedCaptureGroups]
ClassControlLetter ::
DecimalDigit
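One observable consequence of the |QuantifiableAssertion| productions above, sketched in script (assuming a host implementing this Annex, e.g. V8): a lookahead may be directly quantified in a pattern compiled without the `u` flag, while the same pattern is a *SyntaxError* in Unicode mode.

```javascript
// Annex B: QuantifiableAssertion Quantifier is legal without [UnicodeMode].
const legacy = new RegExp("(?=a)*abc");
console.log(legacy.test("abc")); // true

// With the "u" flag, assertions are not quantifiable.
let threw = false;
try {
  new RegExp("(?=a)*abc", "u");
} catch (e) {
  threw = e instanceof SyntaxError;
}
console.log(threw); // true
```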
@@ -48676,22 +51031,22 @@ Static Semantics: Early Errors
Additionally, the rules for the following productions are modified with the addition of the highlighted text:
- NonemptyClassRanges :: ClassAtom `-` ClassAtom ClassRanges
+ NonemptyClassRanges :: ClassAtom `-` ClassAtom ClassContents
-
It is a Syntax Error if IsCharacterClass of the first |ClassAtom| is *true* or IsCharacterClass of the second |ClassAtom| is *true* and this production has a [UnicodeMode] parameter.
-
- It is a Syntax Error if IsCharacterClass of the first |ClassAtom| is *false* and IsCharacterClass of the second |ClassAtom| is *false* and the CharacterValue of the first |ClassAtom| is larger than the CharacterValue of the second |ClassAtom|.
+ It is a Syntax Error if IsCharacterClass of the first |ClassAtom| is *false*, IsCharacterClass of the second |ClassAtom| is *false*, and the CharacterValue of the first |ClassAtom| is strictly greater than the CharacterValue of the second |ClassAtom|.
- NonemptyClassRangesNoDash :: ClassAtomNoDash `-` ClassAtom ClassRanges
+ NonemptyClassRangesNoDash :: ClassAtomNoDash `-` ClassAtom ClassContents
-
It is a Syntax Error if IsCharacterClass of |ClassAtomNoDash| is *true* or IsCharacterClass of |ClassAtom| is *true* and this production has a [UnicodeMode] parameter.
-
- It is a Syntax Error if IsCharacterClass of |ClassAtomNoDash| is *false* and IsCharacterClass of |ClassAtom| is *false* and the CharacterValue of |ClassAtomNoDash| is larger than the CharacterValue of |ClassAtom|.
+ It is a Syntax Error if IsCharacterClass of |ClassAtomNoDash| is *false*, IsCharacterClass of |ClassAtom| is *false*, and the CharacterValue of |ClassAtomNoDash| is strictly greater than the CharacterValue of |ClassAtom|.
@@ -48744,7 +51099,7 @@ Runtime Semantics: CompileSubpattern
Runtime Semantics: CompileAssertion
- CompileAssertion rules for the Assertion :: `(` `?` `=` Disjunction `)` and Assertion :: `(` `?` `!` Disjunction `)` productions are also used for the |QuantifiableAssertion| productions, but with |QuantifiableAssertion| substituted for |Assertion|.
+ CompileAssertion rules for the Assertion :: `(?=` Disjunction `)` and Assertion :: `(?!` Disjunction `)` productions are also used for the |QuantifiableAssertion| productions, but with |QuantifiableAssertion| substituted for |Assertion|.
@@ -48768,19 +51123,19 @@ Runtime Semantics: CompileToCharSet
The semantics of is extended as follows:
The following two rules replace the corresponding rules of CompileToCharSet.
- NonemptyClassRanges :: ClassAtom `-` ClassAtom ClassRanges
+ NonemptyClassRanges :: ClassAtom `-` ClassAtom ClassContents
1. Let _A_ be CompileToCharSet of the first |ClassAtom| with argument _rer_.
1. Let _B_ be CompileToCharSet of the second |ClassAtom| with argument _rer_.
- 1. Let _C_ be CompileToCharSet of |ClassRanges| with argument _rer_.
+ 1. Let _C_ be CompileToCharSet of |ClassContents| with argument _rer_.
1. Let _D_ be CharacterRangeOrUnion(_rer_, _A_, _B_).
1. Return the union of _D_ and _C_.
- NonemptyClassRangesNoDash :: ClassAtomNoDash `-` ClassAtom ClassRanges
+ NonemptyClassRangesNoDash :: ClassAtomNoDash `-` ClassAtom ClassContents
1. Let _A_ be CompileToCharSet of |ClassAtomNoDash| with argument _rer_.
1. Let _B_ be CompileToCharSet of |ClassAtom| with argument _rer_.
- 1. Let _C_ be CompileToCharSet of |ClassRanges| with argument _rer_.
+ 1. Let _C_ be CompileToCharSet of |ClassContents| with argument _rer_.
1. Let _D_ be CharacterRangeOrUnion(_rer_, _A_, _B_).
1. Return the union of _D_ and _C_.
@@ -48810,7 +51165,7 @@
- 1. If _rer_.[[Unicode]] is *false*, then
+ 1. If HasEitherUnicodeFlag(_rer_) is *false*, then
1. If _A_ does not contain exactly one character or _B_ does not contain exactly one character, then
1. Let _C_ be the CharSet containing the single character `-` U+002D (HYPHEN-MINUS).
1. Return the union of CharSets _A_, _B_ and _C_.
@@ -48818,6 +51173,25 @@
+
+
+ Static Semantics: ParsePattern ( _patternText_, _u_, _v_ )
+ The semantics of is extended as follows:
+ The abstract operation ParsePattern takes arguments _patternText_ (a sequence of Unicode code points), _u_ (a Boolean), and _v_ (a Boolean). It performs the following steps when called:
+
+ 1. If _v_ is *true* and _u_ is *true*, then
+ 1. Let _parseResult_ be a List containing one or more *SyntaxError* objects.
+ 1. Else if _v_ is *true*, then
+ 1. Let _parseResult_ be ParseText(_patternText_, |Pattern[+UnicodeMode, +UnicodeSetsMode, +NamedCaptureGroups]|).
+ 1. Else if _u_ is *true*, then
+ 1. Let _parseResult_ be ParseText(_patternText_, |Pattern[+UnicodeMode, ~UnicodeSetsMode, +NamedCaptureGroups]|).
+ 1. Else,
+ 1. Let _parseResult_ be ParseText(_patternText_, |Pattern[~UnicodeMode, ~UnicodeSetsMode, ~NamedCaptureGroups]|).
+ 1. If _parseResult_ is a Parse Node and _parseResult_ contains a |GroupName|, then
+ 1. Set _parseResult_ to ParseText(_patternText_, |Pattern[~UnicodeMode, ~UnicodeSetsMode, +NamedCaptureGroups]|).
+ 1. Return _parseResult_.
+
+
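The reparse step above can be observed from script, as a sketch (any host implementing this Annex):

```javascript
// With no GroupName in the pattern, the first parse (with
// [~NamedCaptureGroups]) succeeds and "\k" is an identity escape
// matching the letter "k".
console.log(/\k/.test("k"));            // true

// Once a GroupName appears, the pattern is reparsed with
// [+NamedCaptureGroups], so "\k<a>" is a named backreference.
console.log(/(?<a>x)\k<a>/.test("xx")); // true
```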
@@ -48869,27 +51243,27 @@ Additional Properties of the Global Object
escape ( _string_ )
This function is a property of the global object. It computes a new version of a String value in which certain code units have been replaced by a hexadecimal escape sequence.
- When replacing a code unit of numeric value less than or equal to 0x00FF, a two-digit escape sequence of the form %xx is used. When replacing a code unit of numeric value greater than 0x00FF, a four-digit escape sequence of the form %uxxxx is used.
+ When replacing a code unit of numeric value less than or equal to 0x00FF, a two-digit escape sequence of the form %xx is used. When replacing a code unit of numeric value strictly greater than 0x00FF, a four-digit escape sequence of the form %uxxxx is used.
It is the %escape% intrinsic object.
It performs the following steps when called:
1. Set _string_ to ? ToString(_string_).
- 1. Let _length_ be the length of _string_.
+ 1. Let _len_ be the length of _string_.
1. Let _R_ be the empty String.
1. Let _unescapedSet_ be the string-concatenation of the ASCII word characters and *"@\*+-./"*.
1. Let _k_ be 0.
- 1. Repeat, while _k_ < _length_,
- 1. Let _char_ be the code unit at index _k_ within _string_.
- 1. If _char_ is in _unescapedSet_, then
- 1. Let _S_ be the String value containing the single code unit _char_.
+ 1. Repeat, while _k_ < _len_,
+ 1. Let _C_ be the code unit at index _k_ within _string_.
+ 1. If _unescapedSet_ contains _C_, then
+ 1. Let _S_ be _C_.
1. Else,
- 1. Let _n_ be the numeric value of _char_.
+ 1. Let _n_ be the numeric value of _C_.
1. If _n_ < 256, then
1. Let _hex_ be the String representation of _n_, formatted as an uppercase hexadecimal number.
- 1. Let _S_ be the string-concatenation of *"%"* and ! StringPad(_hex_, *2*𝔽, *"0"*, ~start~).
+ 1. Let _S_ be the string-concatenation of *"%"* and StringPad(_hex_, 2, *"0"*, ~start~).
1. Else,
1. Let _hex_ be the String representation of _n_, formatted as an uppercase hexadecimal number.
- 1. Let _S_ be the string-concatenation of *"%u"* and ! StringPad(_hex_, *4*𝔽, *"0"*, ~start~).
+ 1. Let _S_ be the string-concatenation of *"%u"* and StringPad(_hex_, 4, *"0"*, ~start~).
1. Set _R_ to the string-concatenation of _R_ and _S_.
1. Set _k_ to _k_ + 1.
1. Return _R_.
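A short sketch of the two escape forms the algorithm produces:

```javascript
// Code units <= 0x00FF become "%XX"; larger code units become "%uXXXX";
// ASCII word characters and "@*+-./" pass through unchanged.
console.log(escape("a b"));     // "a%20b"   (space -> %20)
console.log(escape("\u00E4"));  // "%E4"     (U+00E4 <= 0x00FF)
console.log(escape("\u20AC"));  // "%u20AC"  (U+20AC > 0x00FF)
console.log(escape("@*+-./"));  // "@*+-./"  (all in the unescaped set)
```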
@@ -48906,26 +51280,26 @@ unescape ( _string_ )
It performs the following steps when called:
1. Set _string_ to ? ToString(_string_).
- 1. Let _length_ be the length of _string_.
+ 1. Let _len_ be the length of _string_.
1. Let _R_ be the empty String.
1. Let _k_ be 0.
- 1. Repeat, while _k_ ≠ _length_,
- 1. Let _c_ be the code unit at index _k_ within _string_.
- 1. If _c_ is the code unit 0x0025 (PERCENT SIGN), then
- 1. Let _hexEscape_ be the empty String.
- 1. Let _skip_ be 0.
- 1. If _k_ ≤ _length_ - 6 and the code unit at index _k_ + 1 within _string_ is the code unit 0x0075 (LATIN SMALL LETTER U), then
- 1. Set _hexEscape_ to the substring of _string_ from _k_ + 2 to _k_ + 6.
- 1. Set _skip_ to 5.
- 1. Else if _k_ ≤ _length_ - 3, then
- 1. Set _hexEscape_ to the substring of _string_ from _k_ + 1 to _k_ + 3.
- 1. Set _skip_ to 2.
- 1. If _hexEscape_ can be interpreted as an expansion of |HexDigits[~Sep]|, then
- 1. Let _hexIntegerLiteral_ be the string-concatenation of *"0x"* and _hexEscape_.
- 1. Let _n_ be ! ToNumber(_hexIntegerLiteral_).
- 1. Set _c_ to the code unit whose value is ℝ(_n_).
- 1. Set _k_ to _k_ + _skip_.
- 1. Set _R_ to the string-concatenation of _R_ and _c_.
+ 1. Repeat, while _k_ < _len_,
+ 1. Let _C_ be the code unit at index _k_ within _string_.
+ 1. If _C_ is the code unit 0x0025 (PERCENT SIGN), then
+ 1. Let _hexDigits_ be the empty String.
+ 1. Let _optionalAdvance_ be 0.
+ 1. If _k_ + 5 < _len_ and the code unit at index _k_ + 1 within _string_ is the code unit 0x0075 (LATIN SMALL LETTER U), then
+ 1. Set _hexDigits_ to the substring of _string_ from _k_ + 2 to _k_ + 6.
+ 1. Set _optionalAdvance_ to 5.
+ 1. Else if _k_ + 3 ≤ _len_, then
+ 1. Set _hexDigits_ to the substring of _string_ from _k_ + 1 to _k_ + 3.
+ 1. Set _optionalAdvance_ to 2.
+ 1. Let _parseResult_ be ParseText(StringToCodePoints(_hexDigits_), |HexDigits[~Sep]|).
+ 1. If _parseResult_ is a Parse Node, then
+ 1. Let _n_ be the MV of _parseResult_.
+ 1. Set _C_ to the code unit whose numeric value is _n_.
+ 1. Set _k_ to _k_ + _optionalAdvance_.
+ 1. Set _R_ to the string-concatenation of _R_ and _C_.
1. Set _k_ to _k_ + 1.
1. Return _R_.
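A sketch of the inverse operation, including the case where the digits after `%` fail to parse as |HexDigits| and the percent sign is left untouched:

```javascript
console.log(unescape("a%20b"));   // "a b"
console.log(unescape("%E4"));     // "\u00E4"
console.log(unescape("%u20AC"));  // "\u20AC"
console.log(unescape("%zz"));     // "%zz"  (not hex digits: no replacement)
```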
@@ -48944,7 +51318,7 @@ String.prototype.substr ( _start_, _length_ )
1. Let _S_ be ? ToString(_O_).
1. Let _size_ be the length of _S_.
1. Let _intStart_ be ? ToIntegerOrInfinity(_start_).
- 1. If _intStart_ is -∞, set _intStart_ to 0.
+ 1. If _intStart_ = -∞, set _intStart_ to 0.
1. Else if _intStart_ < 0, set _intStart_ to max(_size_ + _intStart_, 0).
1. Else, set _intStart_ to min(_intStart_, _size_).
1. If _length_ is *undefined*, let _intLength_ be _size_; otherwise let _intLength_ be ? ToIntegerOrInfinity(_length_).
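The three branches for _intStart_ can be sketched as:

```javascript
// "ECMAScript" has length 10.
console.log("ECMAScript".substr(4));            // "Script"
console.log("ECMAScript".substr(4, 3));         // "Scr"
console.log("ECMAScript".substr(-6, 3));        // "Scr"  (start = 10 - 6 = 4)
console.log("ECMAScript".substr(-Infinity, 4)); // "ECMA" (-Infinity clamps to 0)
```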
@@ -49134,7 +51508,9 @@ Date.prototype.getYear ( )
This method performs the following steps when called:
- 1. Let _t_ be ? thisTimeValue(*this* value).
+ 1. Let _dateObject_ be the *this* value.
+ 1. Perform ? RequireInternalSlot(_dateObject_, [[DateValue]]).
+ 1. Let _t_ be _dateObject_.[[DateValue]].
1. If _t_ is *NaN*, return *NaN*.
1. Return YearFromTime(LocalTime(_t_)) - *1900*𝔽.
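The steps above can be sketched as:

```javascript
// getYear reports the local-time year minus 1900, so 2000 -> 100.
const d = new Date(2000, 0, 1);       // January 1, 2000, local time
console.log(d.getYear());             // 100
console.log(new Date(NaN).getYear()); // NaN
```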
@@ -49147,19 +51523,17 @@ Date.prototype.setYear ( _year_ )
This method performs the following steps when called:
- 1. Let _t_ be ? thisTimeValue(*this* value).
+ 1. Let _dateObject_ be the *this* value.
+ 1. Perform ? RequireInternalSlot(_dateObject_, [[DateValue]]).
+ 1. Let _t_ be _dateObject_.[[DateValue]].
1. Let _y_ be ? ToNumber(_year_).
1. If _t_ is *NaN*, set _t_ to *+0*𝔽; otherwise, set _t_ to LocalTime(_t_).
- 1. If _y_ is *NaN*, then
- 1. Set the [[DateValue]] internal slot of this Date object to *NaN*.
- 1. Return *NaN*.
- 1. Let _yi_ be ! ToIntegerOrInfinity(_y_).
- 1. If 0 ≤ _yi_ ≤ 99, let _yyyy_ be *1900*𝔽 + 𝔽(_yi_).
- 1. Else, let _yyyy_ be _y_.
+ 1. Let _yyyy_ be MakeFullYear(_y_).
1. Let _d_ be MakeDay(_yyyy_, MonthFromTime(_t_), DateFromTime(_t_)).
- 1. Let _date_ be UTC(MakeDate(_d_, TimeWithinDay(_t_))).
- 1. Set the [[DateValue]] internal slot of this Date object to TimeClip(_date_).
- 1. Return the value of the [[DateValue]] internal slot of this Date object.
+ 1. Let _date_ be MakeDate(_d_, TimeWithinDay(_t_)).
+ 1. Let _u_ be TimeClip(UTC(_date_)).
+ 1. Set _dateObject_.[[DateValue]] to _u_.
+ 1. Return _u_.
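A sketch of the MakeFullYear behaviour the revised steps rely on:

```javascript
// MakeFullYear maps integral years 0 through 99 to 1900 + year;
// other values are used as the year directly, and NaN propagates.
const d = new Date(2000, 5, 15);
d.setYear(95);
console.log(d.getFullYear()); // 1995  (95 -> 1900 + 95)
d.setYear(2020);
console.log(d.getFullYear()); // 2020  (outside 0-99: used as-is)
d.setYear(NaN);
console.log(d.getTime());     // NaN   ([[DateValue]] becomes NaN)
```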
@@ -49281,9 +51655,9 @@ Changes to FunctionDeclarationInstantiation
1. If _strict_ is *false*, then
1. For each |FunctionDeclaration| _f_ that is directly contained in the |StatementList| of a |Block|, |CaseClause|, or |DefaultClause|, do
1. Let _F_ be StringValue of the |BindingIdentifier| of _f_.
- 1. If replacing the |FunctionDeclaration| _f_ with a |VariableStatement| that has _F_ as a |BindingIdentifier| would not produce any Early Errors for _func_ and _F_ is not an element of _parameterNames_, then
+ 1. If replacing the |FunctionDeclaration| _f_ with a |VariableStatement| that has _F_ as a |BindingIdentifier| would not produce any Early Errors for _func_ and _parameterNames_ does not contain _F_, then
1. NOTE: A var binding for _F_ is only instantiated here if it is neither a VarDeclaredName, the name of a formal parameter, nor another |FunctionDeclaration|.
- 1. If _initializedBindings_ does not contain _F_ and _F_ is not *"arguments"*, then
+ 1. If _instantiatedVarNames_ does not contain _F_ and _F_ is not *"arguments"*, then
1. Perform ! _varEnv_.CreateMutableBinding(_F_, *false*).
1. Perform ! _varEnv_.InitializeBinding(_F_, *undefined*, ~normal~).
1. Append _F_ to _instantiatedVarNames_.
@@ -49335,7 +51709,7 @@ Changes to EvalDeclarationInstantiation
1. Let _bindingExists_ be *false*.
1. Let _thisEnv_ be _lexEnv_.
1. Assert: The following loop will terminate.
- 1. Repeat, while _thisEnv_ is not the same as _varEnv_,
+ 1. Repeat, while _thisEnv_ is not _varEnv_,
1. If _thisEnv_ is not an Object Environment Record, then
1. If ! _thisEnv_.HasBinding(_F_) is *true*, then
1. [id="step-evaldeclarationinstantiation-web-compat-bindingexists"] Let _bindingExists_ be *true*.
@@ -49514,7 +51888,7 @@ Initializers in ForIn Statement Heads
1. Let _value_ be ? GetValue(_rhs_).
1. Perform ? PutValue(_lhs_, _value_).
1. Let _keyResult_ be ? ForIn/OfHeadEvaluation(« », |Expression|, ~enumerate~).
- 1. Return ? ForIn/OfBodyEvaluation(|BindingIdentifier|, |Statement|, _keyResult_, ~enumerate~, ~varBinding~, _labelSet_).
+ 1. Return ? ForIn/OfBodyEvaluation(|BindingIdentifier|, |Statement|, _keyResult_, ~enumerate~, ~var-binding~, _labelSet_).
@@ -49538,8 +51912,8 @@ Changes to IsLooselyEqual
The following steps replace step of IsLooselyEqual:
1. Perform the following steps:
- 1. If _x_ is an Object, _x_ has an [[IsHTMLDDA]] internal slot, and _y_ is either *null* or *undefined*, return *true*.
- 1. If _x_ is either *null* or *undefined*, _y_ is an Object, and _y_ has an [[IsHTMLDDA]] internal slot, return *true*.
+ 1. If _x_ is an Object, _x_ has an [[IsHTMLDDA]] internal slot, and _y_ is either *undefined* or *null*, return *true*.
+ 1. If _x_ is either *undefined* or *null*, _y_ is an Object, and _y_ has an [[IsHTMLDDA]] internal slot, return *true*.
@@ -49593,13 +51967,13 @@ The Strict Mode of ECMAScript
For strict functions, if an arguments object is created the binding of the local identifier `arguments` to the arguments object is immutable and hence may not be the target of an assignment expression. ().
- It is a *SyntaxError* if the StringValue of a |BindingIdentifier| is *"eval"* or *"arguments"* within strict mode code ().
+ It is a *SyntaxError* if the StringValue of a |BindingIdentifier| is either *"eval"* or *"arguments"* within strict mode code ().
Strict mode eval code cannot instantiate variables or functions in the variable environment of the caller to eval. Instead, a new variable environment is created and that environment is used for declaration binding instantiation for the eval code ().
- If *this* is evaluated within strict mode code, then the *this* value is not coerced to an object. A *this* value of *undefined* or *null* is not converted to the global object and primitive values are not converted to wrapper objects. The *this* value passed via a function call (including calls made using `Function.prototype.apply` and `Function.prototype.call`) does not coerce the passed *this* value to an object (, , ).
+ If *this* is evaluated within strict mode code, then the *this* value is not coerced to an object. A *this* value of either *undefined* or *null* is not converted to the global object and primitive values are not converted to wrapper objects. The *this* value passed via a function call (including calls made using `Function.prototype.apply` and `Function.prototype.call`) does not coerce the passed *this* value to an object (, , ).
When a `delete` operator occurs within strict mode code, a *SyntaxError* is thrown if its |UnaryExpression| is a direct reference to a variable, function argument, or function name ().
@@ -49630,14 +52004,18 @@ Host Layering Points
Host Hooks
HostCallJobCallback(...)
HostEnqueueFinalizationRegistryCleanupJob(...)
+ HostEnqueueGenericJob(...)
HostEnqueuePromiseJob(...)
+ HostEnqueueTimeoutJob(...)
HostEnsureCanCompileStrings(...)
HostFinalizeImportMeta(...)
HostGetImportMetaProperties(...)
+ HostGrowSharedArrayBuffer(...)
HostHasSourceTextAvailable(...)
HostLoadImportedModule(...)
HostMakeJobCallback(...)
HostPromiseRejectionTracker(...)
+ HostResizeArrayBuffer(...)
InitializeHostDefinedRealm(...)
@@ -49679,7 +52057,7 @@ Corrections and Clarifications in ECMAScript 2015 with Possible Compatibilit
: Previous editions permitted the TimeClip abstract operation to return either *+0*𝔽 or *-0*𝔽 as the representation of a 0 time value. ECMAScript 2015 specifies that *+0*𝔽 is always returned. This means that for ECMAScript 2015 the time value of a Date is never observably *-0*𝔽 and methods that return time values never return *-0*𝔽.
: If a UTC offset representation is not present, the local time zone is used. Edition 5.1 incorrectly stated that a missing time zone should be interpreted as *"z"*.
: If the year cannot be represented using the Date Time String Format specified in a RangeError exception is thrown. Previous editions did not specify the behaviour for that case.
- : Previous editions did not specify the value returned by `Date.prototype.toString` when this time value is *NaN*. ECMAScript 2015 specifies the result to be the String value *"Invalid Date"*.
+ : Previous editions did not specify the value returned by `Date.prototype.toString` when the time value is *NaN*. ECMAScript 2015 specifies the result to be the String value *"Invalid Date"*.
, : Any LineTerminator code points in the value of the *"source"* property of a RegExp instance must be expressed using an escape sequence. Edition 5.1 only required the escaping of `/`.
, : In previous editions, the specifications for `String.prototype.match` and `String.prototype.replace` were incorrect for cases where the pattern argument was a RegExp value whose `global` flag is set. The previous specifications stated that for each attempt to match the pattern, if `lastIndex` did not change, it should be incremented by 1. The correct behaviour is that `lastIndex` should be incremented by 1 only if the pattern matched the empty String.
: Previous editions did not specify how a *NaN* value returned by a _comparefn_ was interpreted by `Array.prototype.sort`. ECMAScript 2015 specifies that such a value is treated as if *+0*𝔽 was returned from the _comparefn_. ECMAScript 2015 also specifies that ToNumber is applied to the result returned by a _comparefn_. In previous editions, the effect of a _comparefn_ result that is not a Number value was implementation-defined. In practice, implementations call ToNumber.
diff --git a/table-binary-unicode-properties-of-strings.html b/table-binary-unicode-properties-of-strings.html
new file mode 100644
index 00000000000..3bafffaee3d
--- /dev/null
+++ b/table-binary-unicode-properties-of-strings.html
@@ -0,0 +1,31 @@
+ Binary Unicode properties of strings
+
+ Property name |
+
+ `Basic_Emoji` |
+ `Emoji_Keycap_Sequence` |
+ `RGI_Emoji_Modifier_Sequence` |
+ `RGI_Emoji_Flag_Sequence` |
+ `RGI_Emoji_Tag_Sequence` |
+ `RGI_Emoji_ZWJ_Sequence` |
+ `RGI_Emoji` |