
chore: v1.27.0 release #7461

Merged
philknows merged 21 commits into stable from rc/v1.27.0 on Feb 13, 2025

Conversation

philknows
Member

Motivation

Releasing rc.2 with two additional commits #7455 and #7443. Supersedes #7458.

nflaig and others added 21 commits February 3, 2025 10:15
…#7420)

**Motivation**

Based on observations from mainnet nodes, we sometimes reject builder
blocks either because they were not sent within the cutoff time or
because of a timeout on the API call, even though the response times
show they were actually timely. Those blocks were rejected either
because we started the block production race too late into the slot
(mostly because producing the common block body takes too much time) or
because the node handled the timeout with a delay. Both cases are
likely caused by event loop lag, due to GC or to processing something
else.

See
[discord](https://discord.com/channels/593655374469660673/1331991458152058991/1335576180815958088)
for details.

**Description**

Increase block production timeouts to account for event loop lag
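
As a rough illustration of the change (constant and helper names here are made up for the sketch, not Lodestar's actual identifiers), the timeout applied to the block production calls gets an extra margin so that event loop lag does not turn an otherwise timely response into a timeout:

```ts
// Illustrative only: names and values are assumptions, not the real configuration.
const BLOCK_PRODUCTION_TIMEOUT_MS = 2_000;
// Extra allowance for event loop lag (GC pauses, other processing on the thread).
const EVENT_LOOP_LAG_MARGIN_MS = 500;

async function withTimeout<T>(promise: Promise<T>, timeoutMs: number): Promise<T> {
  let timer: NodeJS.Timeout | undefined;
  const timeout = new Promise<never>((_resolve, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${timeoutMs} ms`)), timeoutMs);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}

// Both the builder and local block production calls would race against the padded timeout.
async function produceWithPaddedTimeout<T>(produce: () => Promise<T>): Promise<T> {
  return withTimeout(produce(), BLOCK_PRODUCTION_TIMEOUT_MS + EVENT_LOOP_LAG_MARGIN_MS);
}
```
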
…tation (#7419)

Since ChainSafe/ssz#456 it's possible to use `getAllReadonly()` with
uncommitted changes. This PR essentially reverts the changes done in
#7375, as they cause more memory allocation, which is not ideal.

Closes #6908. This flag is already widely used and people are aware of
the trade-offs, i.e. increased storage, but it works well enough for
their use cases.
The Eth1 data poll is no longer needed in Pectra once all eth1 deposits
are processed.

Introduce a mechanism to stop the polling.

Note that if Lodestar starts up in Electra, the Eth1 data poll may run
for one epoch, but no data will actually get polled, and then
`PrepareNextSlotScheduler.stopEth1Polling()` will stop it.

Part of #6341
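
A minimal sketch of what such a stop mechanism can look like (class and helper names other than `stopEth1Polling()` are hypothetical, not Lodestar's actual code):

```ts
// Hypothetical sketch of stopping the eth1 data poll once it is no longer needed.
class Eth1DataPoller {
  private timer?: NodeJS.Timeout;

  start(intervalMs: number, poll: () => Promise<void>): void {
    this.timer = setInterval(() => void poll(), intervalMs);
  }

  stop(): void {
    if (this.timer !== undefined) {
      clearInterval(this.timer);
      this.timer = undefined;
    }
  }
}

// Called from the slot scheduler (the PR does this via
// PrepareNextSlotScheduler.stopEth1Polling()) once the chain is in Electra
// and all pre-Electra eth1 deposits have been processed.
function maybeStopEth1Polling(poller: Eth1DataPoller, isElectra: boolean, allEth1DepositsProcessed: boolean): void {
  if (isElectra && allEth1DepositsProcessed) {
    poller.stop();
  }
}
```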

---------

Co-authored-by: Nico Flaig <[email protected]>
**Motivation**

Benchmarks show `snappy-wasm` is faster at compressing and uncompressing
than `snappyjs`.

**Description**

- Use `snappy-wasm` for compressing / uncompressing gossip payloads
- Add more `snappy` vs `snappyjs` vs `snappy-wasm` benchmarks

**TODO**
- [x] deploy this branch on our test fleet - deployed on feat3

```
  network / gossip / snappy
    compress
      ✔ 100 bytes - compress - snappyjs                                     335566.9 ops/s    2.980032 us/op        -        685 runs   2.54 s
      ✔ 100 bytes - compress - snappy                                       388610.3 ops/s    2.573272 us/op        -        870 runs   2.74 s
      ✔ 100 bytes - compress - snappy-wasm                                  583254.0 ops/s    1.714519 us/op        -        476 runs   1.32 s
      ✔ 100 bytes - compress - snappy-wasm - prealloc                        1586695 ops/s    630.2410 ns/op        -        481 runs  0.804 s
      ✔ 200 bytes - compress - snappyjs                                     298272.8 ops/s    3.352636 us/op        -        213 runs   1.22 s
      ✔ 200 bytes - compress - snappy                                       419528.0 ops/s    2.383631 us/op        -        926 runs   2.71 s
      ✔ 200 bytes - compress - snappy-wasm                                  472468.5 ops/s    2.116543 us/op        -        577 runs   1.72 s
      ✔ 200 bytes - compress - snappy-wasm - prealloc                        1430445 ops/s    699.0830 ns/op        -        868 runs   1.11 s
      ✔ 300 bytes - compress - snappyjs                                     265124.9 ops/s    3.771807 us/op        -        137 runs   1.02 s
      ✔ 300 bytes - compress - snappy                                       361683.9 ops/s    2.764845 us/op        -       1332 runs   4.18 s
      ✔ 300 bytes - compress - snappy-wasm                                  443688.4 ops/s    2.253834 us/op        -        859 runs   2.44 s
      ✔ 300 bytes - compress - snappy-wasm - prealloc                        1213825 ops/s    823.8420 ns/op        -        370 runs  0.807 s
      ✔ 400 bytes - compress - snappyjs                                     262168.5 ops/s    3.814341 us/op        -        358 runs   1.87 s
      ✔ 400 bytes - compress - snappy                                       382494.9 ops/s    2.614414 us/op        -       1562 runs   4.58 s
      ✔ 400 bytes - compress - snappy-wasm                                  406373.2 ops/s    2.460792 us/op        -        797 runs   2.46 s
      ✔ 400 bytes - compress - snappy-wasm - prealloc                        1111715 ops/s    899.5110 ns/op        -        450 runs  0.906 s
      ✔ 500 bytes - compress - snappyjs                                     229213.1 ops/s    4.362753 us/op        -        359 runs   2.07 s
      ✔ 500 bytes - compress - snappy                                       373695.8 ops/s    2.675973 us/op        -       2050 runs   5.99 s
      ✔ 500 bytes - compress - snappy-wasm                                  714917.4 ops/s    1.398763 us/op        -        960 runs   1.84 s
      ✔ 500 bytes - compress - snappy-wasm - prealloc                        1054619 ops/s    948.2100 ns/op        -        427 runs  0.907 s
      ✔ 1000 bytes - compress - snappyjs                                    148702.3 ops/s    6.724847 us/op        -        171 runs   1.65 s
      ✔ 1000 bytes - compress - snappy                                      423688.1 ops/s    2.360227 us/op        -        525 runs   1.74 s
      ✔ 1000 bytes - compress - snappy-wasm                                 524350.6 ops/s    1.907121 us/op        -        273 runs   1.03 s
      ✔ 1000 bytes - compress - snappy-wasm - prealloc                      685191.5 ops/s    1.459446 us/op        -        349 runs   1.01 s
      ✔ 10000 bytes - compress - snappyjs                                   21716.92 ops/s    46.04704 us/op        -         16 runs   1.24 s
      ✔ 10000 bytes - compress - snappy                                     98051.32 ops/s    10.19874 us/op        -        184 runs   2.39 s
      ✔ 10000 bytes - compress - snappy-wasm                                114681.8 ops/s    8.719783 us/op        -         49 runs  0.937 s
      ✔ 10000 bytes - compress - snappy-wasm - prealloc                     111203.6 ops/s    8.992518 us/op        -         49 runs  0.953 s
      ✔ 100000 bytes - compress - snappyjs                                  2947.313 ops/s    339.2921 us/op        -         12 runs   4.74 s
      ✔ 100000 bytes - compress - snappy                                    14963.78 ops/s    66.82801 us/op        -         70 runs   5.19 s
      ✔ 100000 bytes - compress - snappy-wasm                               19868.33 ops/s    50.33136 us/op        -         14 runs   1.21 s
      ✔ 100000 bytes - compress - snappy-wasm - prealloc                    24579.34 ops/s    40.68457 us/op        -         13 runs   1.06 s
    uncompress
      ✔ 100 bytes - uncompress - snappyjs                                   589201.6 ops/s    1.697212 us/op        -        242 runs  0.911 s
      ✔ 100 bytes - uncompress - snappy                                     537424.1 ops/s    1.860728 us/op        -        220 runs  0.910 s
      ✔ 100 bytes - uncompress - snappy-wasm                                634966.2 ops/s    1.574887 us/op        -        194 runs  0.808 s
      ✔ 100 bytes - uncompress - snappy-wasm - prealloc                      1846964 ops/s    541.4290 ns/op        -        559 runs  0.804 s
      ✔ 200 bytes - uncompress - snappyjs                                   395141.8 ops/s    2.530737 us/op        -        281 runs   1.22 s
      ✔ 200 bytes - uncompress - snappy                                     536862.6 ops/s    1.862674 us/op        -        274 runs   1.01 s
      ✔ 200 bytes - uncompress - snappy-wasm                                420251.6 ops/s    2.379527 us/op        -        129 runs  0.810 s
      ✔ 200 bytes - uncompress - snappy-wasm - prealloc                      1746167 ops/s    572.6830 ns/op        -        529 runs  0.804 s
      ✔ 300 bytes - uncompress - snappyjs                                   441676.2 ops/s    2.264102 us/op        -        898 runs   2.53 s
      ✔ 300 bytes - uncompress - snappy                                     551313.2 ops/s    1.813851 us/op        -        336 runs   1.11 s
      ✔ 300 bytes - uncompress - snappy-wasm                                494773.0 ops/s    2.021129 us/op        -        203 runs  0.912 s
      ✔ 300 bytes - uncompress - snappy-wasm - prealloc                      1528680 ops/s    654.1590 ns/op        -        465 runs  0.805 s
      ✔ 400 bytes - uncompress - snappyjs                                   383746.1 ops/s    2.605890 us/op        -        235 runs   1.11 s
      ✔ 400 bytes - uncompress - snappy                                     515986.6 ops/s    1.938035 us/op        -        158 runs  0.809 s
      ✔ 400 bytes - uncompress - snappy-wasm                                392947.8 ops/s    2.544867 us/op        -        322 runs   1.32 s
      ✔ 400 bytes - uncompress - snappy-wasm - prealloc                      1425978 ops/s    701.2730 ns/op        -        721 runs   1.01 s
      ✔ 500 bytes - uncompress - snappyjs                                   330727.5 ops/s    3.023637 us/op        -        173 runs   1.02 s
      ✔ 500 bytes - uncompress - snappy                                     513874.1 ops/s    1.946002 us/op        -        157 runs  0.806 s
      ✔ 500 bytes - uncompress - snappy-wasm                                389263.0 ops/s    2.568957 us/op        -        161 runs  0.914 s
      ✔ 500 bytes - uncompress - snappy-wasm - prealloc                      1330936 ops/s    751.3510 ns/op        -        672 runs   1.01 s
      ✔ 1000 bytes - uncompress - snappyjs                                  241393.9 ops/s    4.142606 us/op        -        126 runs   1.03 s
      ✔ 1000 bytes - uncompress - snappy                                    491119.6 ops/s    2.036164 us/op        -        201 runs  0.911 s
      ✔ 1000 bytes - uncompress - snappy-wasm                               361794.5 ops/s    2.764000 us/op        -        148 runs  0.910 s
      ✔ 1000 bytes - uncompress - snappy-wasm - prealloc                    959026.5 ops/s    1.042724 us/op        -        390 runs  0.909 s
      ✔ 10000 bytes - uncompress - snappyjs                                 40519.03 ops/s    24.67976 us/op        -         16 runs  0.913 s
      ✔ 10000 bytes - uncompress - snappy                                   202537.6 ops/s    4.937355 us/op        -        796 runs   4.43 s
      ✔ 10000 bytes - uncompress - snappy-wasm                              165017.6 ops/s    6.059960 us/op        -         52 runs  0.822 s
      ✔ 10000 bytes - uncompress - snappy-wasm - prealloc                   175061.5 ops/s    5.712277 us/op        -        130 runs   1.25 s
      ✔ 100000 bytes - uncompress - snappyjs                                4030.391 ops/s    248.1149 us/op        -         12 runs   3.71 s
      ✔ 100000 bytes - uncompress - snappy                                  35459.43 ops/s    28.20124 us/op        -         41 runs   1.67 s
      ✔ 100000 bytes - uncompress - snappy-wasm                             22449.16 ops/s    44.54509 us/op        -         13 runs   1.11 s
      ✔ 100000 bytes - uncompress - snappy-wasm - prealloc                  27169.50 ops/s    36.80598 us/op        -         13 runs  0.997 s
```
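
For context on the `prealloc` variants above: they reuse a caller-provided output buffer instead of allocating a new one per message. A sketch of that pattern is below; the `compressInto` signature is a stand-in, not the actual `snappy-wasm` API.

```ts
// Sketch only: `compressInto(src, dst)` stands in for a snappy-wasm style call that
// writes compressed bytes into a caller-provided buffer and returns the written length.
type CompressInto = (src: Uint8Array, dst: Uint8Array) => number;

// Preallocated scratch buffer, sized for the largest expected gossip payload.
const scratch = new Uint8Array(1024 * 1024);

function compressGossipPayload(compressInto: CompressInto, payload: Uint8Array): Uint8Array {
  // Reusing `scratch` avoids allocating a fresh output buffer per message,
  // which is what the "prealloc" benchmark rows measure.
  const length = compressInto(payload, scratch);
  // The returned view must be consumed (or copied) before the next call reuses `scratch`.
  return scratch.subarray(0, length);
}
```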

Closes #4170

---------

Co-authored-by: Nico Flaig <[email protected]>
- repeat of #7204 because we reverted it
- See the review in #6483

Co-authored-by: Matthew Keil <[email protected]>
**Motivation**

- #6869

**Description**
- add `MIN_EPOCHS_FOR_BLOCK_REQUESTS` config (PS: we're missing a lot of
the network config entries from the consensus specs)
- add `--chain.pruneHistory` flag, defaults to false
- when `chain.pruneHistory` is true, prune all historical blocks/states
on startup and then on every subsequent finalization
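
As a rough sketch of that behavior (the flag name is from this PR; everything else here is an assumption rather than the actual implementation):

```ts
// Illustrative only; real option wiring and DB pruning code differ.
interface ChainOptions {
  pruneHistory: boolean; // --chain.pruneHistory, defaults to false
}

// MIN_EPOCHS_FOR_BLOCK_REQUESTS (mainnet value from the consensus specs): the
// window of recent epochs a node is still expected to serve blocks for.
const MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024;

function pruneHistory(opts: ChainOptions, finalizedEpoch: number, deleteBlocksAndStatesBeforeEpoch: (epoch: number) => void): void {
  if (!opts.pruneHistory) return;
  // Run on startup and on every subsequent finalization, keeping at least the
  // window required by the spec and pruning everything older.
  const pruneBeforeEpoch = Math.max(0, finalizedEpoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS);
  deleteBlocksAndStatesBeforeEpoch(pruneBeforeEpoch);
}
```
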

---------

Co-authored-by: Nico Flaig <[email protected]>
As discussed, we should hide the flag introduced in
#7427 for now, until the
feature becomes more stable.

Also commented out the docs section for now.
Noticed the queries are not working because the `_bucket` suffix is missing.
Also made some cosmetic changes and moved the panels a bit further down
in the sync dashboard.


![image](https://github.com/user-attachments/assets/37489fd6-1062-46c2-9d9e-31f4c460a9db)


We might also want to reconsider the buckets, as fetching keys seems to take
>1 second in a few cases. I mentioned a [possible
solution](https://discord.com/channels/593655374469660673/1337188931489239072/1338499158566375464)
to improve the fetch time.
Bumps [nanoid](https://github.com/ai/nanoid) from 3.3.7 to 3.3.8.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/ai/nanoid/blob/main/CHANGELOG.md">nanoid's
changelog</a>.</em></p>
<blockquote>
<h2>3.3.8</h2>
<ul>
<li>Fixed a way to break Nano ID by passing non-integer size (by <a
href="https://github.com/myndzi"><code>@​myndzi</code></a>).</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/ai/nanoid/commit/3044cd5e73f4cf31795f61f6e6b961c8c0a5c744"><code>3044cd5</code></a>
Release 3.3.8 version</li>
<li><a
href="https://github.com/ai/nanoid/commit/4fe34959c34e5b3573889ed4f24fe91d1d3e7231"><code>4fe3495</code></a>
Update size limit</li>
<li><a
href="https://github.com/ai/nanoid/commit/d643045f40d6dc8afa000a644d857da1436ed08c"><code>d643045</code></a>
Fix pool pollution, infinite loop (<a
href="https://github.com/ai/nanoid/issues/510">#510</a>)</li>
<li>See full diff in <a
href="https://github.com/ai/nanoid/compare/3.3.7...3.3.8">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=nanoid&package-manager=npm_and_yarn&previous-version=3.3.7&new-version=3.3.8)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
**Motivation**

Start of the resolution to #7183, driven by the peerDAS branch. `isForkBlobs` is
no longer semantically correct in `fulu` because that fork does not
have blobs, it has columns. This is the first major feature to be removed,
so the type/type-guard naming semantics we were using broke.
Updated the names to reflect pre/post fork instead of post-"feature".

**Description**

Rename pre/post fork names and type guards.
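
A toy version of the pre/post-fork guard naming, using a self-contained enum rather than Lodestar's actual `ForkSeq`/type-guard definitions:

```ts
// Self-contained illustration; Lodestar's real fork sequence and guards differ in detail.
enum ForkSeq {
  phase0 = 0,
  altair = 1,
  bellatrix = 2,
  capella = 3,
  deneb = 4,
  electra = 5,
  fulu = 6,
}

// Instead of feature-based guards like `isForkBlobs` (which reads wrong once a
// later fork such as fulu replaces blobs with columns), guards are named after
// the fork boundary itself.
function isForkPostDeneb(fork: ForkSeq): boolean {
  return fork >= ForkSeq.deneb;
}

function isForkPostElectra(fork: ForkSeq): boolean {
  return fork >= ForkSeq.electra;
}
```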

---------

Co-authored-by: Nico Flaig <[email protected]>
**Motivation**

- since electra, `processSyncCommitteeUpdates()` could take >15s according
to the devnet

**Description**

- the main fix is in `computeShuffledIndex`, where we can cache the pivot and
source computations
- other optimizations besides that:
  - only compute the hash once every 16 iterations
  - compute the int manually instead of using `bytesToInt`, in order not to use BigInt
  - cache the shuffled index

I guess if we use `hashtree` we can improve this further, but the diff is
already large and the main optimization is in `computeShuffledIndex()`, not
the hash function. We can consider that in the future.

We could also improve pre-electra, but it has not been that bad for a
long time, so this PR only focuses on electra.
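
Below is a simplified sketch of the pivot/source caching idea, using Node's built-in sha256 and a BigInt read for brevity (the PR itself avoids BigInt with manual byte arithmetic); it is not the actual Lodestar code:

```ts
import {createHash} from "node:crypto";

const sha256 = (data: Uint8Array): Uint8Array => new Uint8Array(createHash("sha256").update(data).digest());

// Shuffle all indices at once so per-round pivots and per-(round, chunk) source
// hashes are computed a single time and reused, instead of re-hashed per index.
function shuffleAllIndices(indexCount: number, rounds: number, seed: Uint8Array): number[] {
  const pivotByRound = new Map<number, number>();
  const sourceByRoundChunk = new Map<string, Uint8Array>();

  const getPivot = (round: number): number => {
    let pivot = pivotByRound.get(round);
    if (pivot === undefined) {
      const h = sha256(Buffer.concat([seed, Buffer.from([round])]));
      // First 8 bytes, little-endian, mod indexCount (the PR computes this without BigInt).
      pivot = Number(new DataView(h.buffer, h.byteOffset, 8).getBigUint64(0, true) % BigInt(indexCount));
      pivotByRound.set(round, pivot);
    }
    return pivot;
  };

  const getSource = (round: number, chunk: number): Uint8Array => {
    const key = `${round}:${chunk}`;
    let source = sourceByRoundChunk.get(key);
    if (source === undefined) {
      const chunkBytes = Buffer.alloc(4);
      chunkBytes.writeUInt32LE(chunk, 0);
      source = sha256(Buffer.concat([seed, Buffer.from([round]), chunkBytes]));
      sourceByRoundChunk.set(key, source);
    }
    return source;
  };

  const shuffled: number[] = [];
  for (let index = 0; index < indexCount; index++) {
    let cur = index;
    for (let round = 0; round < rounds; round++) {
      const pivot = getPivot(round);
      const flip = (pivot + indexCount - cur) % indexCount;
      const position = Math.max(cur, flip);
      const source = getSource(round, Math.floor(position / 256));
      const byte = source[Math.floor((position % 256) / 8)];
      const bit = (byte >> (position % 8)) & 1;
      cur = bit === 1 ? flip : cur;
    }
    shuffled.push(cur);
  }
  return shuffled;
}
```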

Closes #7366

**Tests**

- added unit tests to compare the naive version vs the optimized version
- local benchmarks show a >1000x difference for the main function of concern,
`naiveGetNextSyncCommitteeIndices()`, while CI only shows a >20x
difference. These are my local results:

```
computeProposerIndex
    ✔ naive computeProposerIndex 100000 validators                        31.86491 ops/s    31.38248 ms/op        -         10 runs   34.5 s
    ✔ computeProposerIndex 100000 validators                              106.2267 ops/s    9.413833 ms/op        -         10 runs   10.4 s

  getNextSyncCommitteeIndices electra
    ✔ naiveGetNextSyncCommitteeIndices 1000 validators                   0.2121840 ops/s    4.712890  s/op        -         10 runs   51.7 s
    ✔ getNextSyncCommitteeIndices 1000 validators                         214.9251 ops/s    4.652783 ms/op        -         45 runs  0.714 s
    ✔ naiveGetNextSyncCommitteeIndices 10000 validators                  0.2122278 ops/s    4.711918  s/op        -         10 runs   51.8 s
    ✔ getNextSyncCommitteeIndices 10000 validators                        220.2337 ops/s    4.540632 ms/op        -         46 runs  0.710 s
    ✔ naiveGetNextSyncCommitteeIndices 100000 validators                 0.2117828 ops/s    4.721820  s/op        -         10 runs   52.2 s
    ✔ getNextSyncCommitteeIndices 100000 validators                       204.7383 ops/s    4.884283 ms/op        -         43 runs  0.714 s

  computeShuffledIndex
    ✔ naive computeShuffledIndex 100000 validators                      0.06638498 ops/s    15.06365  s/op        -          3 runs   60.3 s
    ✔ cached computeShuffledIndex 100000 validators                       1.932706 ops/s    517.4092 ms/op        -         10 runs   5.72 s
```

---------

Co-authored-by: Tuyen Nguyen <[email protected]>
Co-authored-by: Cayman <[email protected]>
### Background
Raised by the Teku team on Discord:
https://discord.com/channels/595666850260713488/1338793076491026486/1338793635386228756

Lodestar will generate duplicated attestations and include them in the
block body when proposing.

For example, https://dora.pectra-devnet-6.ethpandaops.io/slot/54933 has 6
copies of the same attestation with signature
`0xae6b928e4866d5a43ae6d4ced869e3aa53f38617d42da5862c76e9a928942783a108e4e281d208766b8a9b2adb286aff0e0af7c14f24d1b013f4ccb47c000a11c256112fab37945d2e5bb3671b997a23b1d57d67d3fe69835ecc06f1f57b1210`.

This is due to `getAttestationsForBlockElectra` putting multiple copies
of the same attestation in `consolidations` when attestations from
different committees share the same attestation data.

The above attestation has 6 copies because it spans 6 committees (0, 2, 3, 6, 7 and 12).

### Proposal

Remove `AttestationsConsolidation.score` and convert `consolidations` to
a `Map<AttestationsConsolidation, number>()` to track the score while
eliminating duplicates.
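
A minimal sketch of the dedup idea, with simplified stand-in types (not the actual Lodestar `AttestationsConsolidation` shape):

```ts
// Simplified stand-ins; only the dedup-by-attestation-data idea is shown.
interface AttestationsConsolidation {
  dataRootHex: string; // hashTreeRoot of the shared AttestationData, hex encoded
  committeeIndices: Set<number>;
}

interface CommitteeAttestation {
  committeeIndex: number;
  dataRootHex: string;
  score: number;
}

// One consolidation per attestation data; the Map value tracks the accumulated
// score (replacing AttestationsConsolidation.score), so attestations from
// different committees with the same data no longer produce duplicate entries.
function consolidate(attestations: CommitteeAttestation[]): Map<AttestationsConsolidation, number> {
  const scoreByConsolidation = new Map<AttestationsConsolidation, number>();
  const byDataRoot = new Map<string, AttestationsConsolidation>();

  for (const att of attestations) {
    let consolidation = byDataRoot.get(att.dataRootHex);
    if (consolidation === undefined) {
      consolidation = {dataRootHex: att.dataRootHex, committeeIndices: new Set()};
      byDataRoot.set(att.dataRootHex, consolidation);
      scoreByConsolidation.set(consolidation, 0);
    }
    consolidation.committeeIndices.add(att.committeeIndex);
    scoreByConsolidation.set(consolidation, (scoreByConsolidation.get(consolidation) ?? 0) + att.score);
  }
  return scoreByConsolidation;
}
```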

---------

Co-authored-by: Nico Flaig <[email protected]>
@philknows philknows marked this pull request as ready for review February 13, 2025 14:57
@philknows philknows requested a review from a team as a code owner February 13, 2025 14:57

codecov bot commented Feb 13, 2025

Codecov Report

Attention: Patch coverage is 73.81616% with 94 lines in your changes missing coverage. Please review.

Project coverage is 50.43%. Comparing base (54229fd) to head (2ddcd20).
Report is 22 commits behind head on stable.

Additional details and impacted files
```
@@            Coverage Diff             @@
##           stable    #7461      +/-   ##
==========================================
+ Coverage   50.26%   50.43%   +0.17%
==========================================
  Files         602      602
  Lines       40376    40583     +207
  Branches     2205     2224      +19
==========================================
+ Hits        20294    20468     +174
- Misses      20042    20075      +33
  Partials       40       40
```

Contributor

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: bcff3a7 Previous: null Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 1.0352 ms/op
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 38.527 us/op
BLS verify - blst 912.31 us/op
BLS verifyMultipleSignatures 3 - blst 1.2744 ms/op
BLS verifyMultipleSignatures 8 - blst 1.9822 ms/op
BLS verifyMultipleSignatures 32 - blst 5.7259 ms/op
BLS verifyMultipleSignatures 64 - blst 11.005 ms/op
BLS verifyMultipleSignatures 128 - blst 17.671 ms/op
BLS deserializing 10000 signatures 767.75 ms/op
BLS deserializing 100000 signatures 7.2924 s/op
BLS verifyMultipleSignatures - same message - 3 - blst 1.2185 ms/op
BLS verifyMultipleSignatures - same message - 8 - blst 1.3415 ms/op
BLS verifyMultipleSignatures - same message - 32 - blst 1.9106 ms/op
BLS verifyMultipleSignatures - same message - 64 - blst 2.8836 ms/op
BLS verifyMultipleSignatures - same message - 128 - blst 4.9076 ms/op
BLS aggregatePubkeys 32 - blst 20.983 us/op
BLS aggregatePubkeys 128 - blst 75.405 us/op
notSeenSlots=1 numMissedVotes=1 numBadVotes=10 69.191 ms/op
notSeenSlots=1 numMissedVotes=0 numBadVotes=4 55.711 ms/op
notSeenSlots=2 numMissedVotes=1 numBadVotes=10 46.556 ms/op
getSlashingsAndExits - default max 89.074 us/op
getSlashingsAndExits - 2k 414.14 us/op
proposeBlockBody type=full, size=empty 5.7449 ms/op
isKnown best case - 1 super set check 210.00 ns/op
isKnown normal case - 2 super set checks 210.00 ns/op
isKnown worse case - 16 super set checks 210.00 ns/op
InMemoryCheckpointStateCache - add get delete 2.5420 us/op
validate api signedAggregateAndProof - struct 2.0305 ms/op
validate gossip signedAggregateAndProof - struct 1.8213 ms/op
batch validate gossip attestation - vc 640000 - chunk 32 156.60 us/op
batch validate gossip attestation - vc 640000 - chunk 64 146.33 us/op
batch validate gossip attestation - vc 640000 - chunk 128 140.01 us/op
batch validate gossip attestation - vc 640000 - chunk 256 140.45 us/op
pickEth1Vote - no votes 1.1010 ms/op
pickEth1Vote - max votes 11.251 ms/op
pickEth1Vote - Eth1Data hashTreeRoot value x2048 19.421 ms/op
pickEth1Vote - Eth1Data hashTreeRoot tree x2048 27.488 ms/op
pickEth1Vote - Eth1Data fastSerialize value x2048 515.89 us/op
pickEth1Vote - Eth1Data fastSerialize tree x2048 5.1924 ms/op
bytes32 toHexString 405.00 ns/op
bytes32 Buffer.toString(hex) 263.00 ns/op
bytes32 Buffer.toString(hex) from Uint8Array 506.00 ns/op
bytes32 Buffer.toString(hex) + 0x 271.00 ns/op
Object access 1 prop 0.13600 ns/op
Map access 1 prop 0.15700 ns/op
Object get x1000 6.7130 ns/op
Map get x1000 7.2900 ns/op
Object set x1000 36.557 ns/op
Map set x1000 24.958 ns/op
Return object 10000 times 0.31930 ns/op
Throw Error 10000 times 5.1530 us/op
toHex 154.04 ns/op
Buffer.from 140.58 ns/op
shared Buffer 91.303 ns/op
fastMsgIdFn sha256 / 200 bytes 2.3900 us/op
fastMsgIdFn h32 xxhash / 200 bytes 230.00 ns/op
fastMsgIdFn h64 xxhash / 200 bytes 322.00 ns/op
fastMsgIdFn sha256 / 1000 bytes 8.2900 us/op
fastMsgIdFn h32 xxhash / 1000 bytes 378.00 ns/op
fastMsgIdFn h64 xxhash / 1000 bytes 440.00 ns/op
fastMsgIdFn sha256 / 10000 bytes 70.010 us/op
fastMsgIdFn h32 xxhash / 10000 bytes 2.2090 us/op
fastMsgIdFn h64 xxhash / 10000 bytes 1.3900 us/op
send data - 1000 256B messages 20.671 ms/op
send data - 1000 512B messages 22.187 ms/op
send data - 1000 1024B messages 29.235 ms/op
send data - 1000 1200B messages 37.600 ms/op
send data - 1000 2048B messages 29.201 ms/op
send data - 1000 4096B messages 35.772 ms/op
send data - 1000 16384B messages 85.916 ms/op
send data - 1000 65536B messages 540.86 ms/op
enrSubnets - fastDeserialize 64 bits 1.4400 us/op
enrSubnets - ssz BitVector 64 bits 358.00 ns/op
enrSubnets - fastDeserialize 4 bits 147.00 ns/op
enrSubnets - ssz BitVector 4 bits 381.00 ns/op
prioritizePeers score -10:0 att 32-0.1 sync 2-0 139.91 us/op
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 174.70 us/op
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 240.36 us/op
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 435.90 us/op
prioritizePeers score 0:0 att 64-1 sync 4-1 527.69 us/op
array of 16000 items push then shift 1.8442 us/op
LinkedList of 16000 items push then shift 8.9280 ns/op
array of 16000 items push then pop 95.890 ns/op
LinkedList of 16000 items push then pop 8.7990 ns/op
array of 24000 items push then shift 2.7239 us/op
LinkedList of 24000 items push then shift 10.715 ns/op
array of 24000 items push then pop 124.75 ns/op
LinkedList of 24000 items push then pop 8.2110 ns/op
intersect bitArray bitLen 8 7.0130 ns/op
intersect array and set length 8 44.826 ns/op
intersect bitArray bitLen 128 33.070 ns/op
intersect array and set length 128 698.66 ns/op
bitArray.getTrueBitIndexes() bitLen 128 1.1780 us/op
bitArray.getTrueBitIndexes() bitLen 248 2.2840 us/op
bitArray.getTrueBitIndexes() bitLen 512 4.2030 us/op
Buffer.concat 32 items 871.00 ns/op
Uint8Array.set 32 items 2.1020 us/op
Buffer.copy 3.5110 us/op
Uint8Array.set - with subarray 3.5310 us/op
Uint8Array.set - without subarray 2.1600 us/op
getUint32 - dataview 219.00 ns/op
getUint32 - manual 135.00 ns/op
Set add up to 64 items then delete first 2.3125 us/op
OrderedSet add up to 64 items then delete first 5.5132 us/op
Set add up to 64 items then delete last 4.2616 us/op
OrderedSet add up to 64 items then delete last 7.6928 us/op
Set add up to 64 items then delete middle 3.9541 us/op
OrderedSet add up to 64 items then delete middle 7.2921 us/op
Set add up to 128 items then delete first 6.8035 us/op
OrderedSet add up to 128 items then delete first 10.914 us/op
Set add up to 128 items then delete last 6.9031 us/op
OrderedSet add up to 128 items then delete last 14.055 us/op
Set add up to 128 items then delete middle 7.1709 us/op
OrderedSet add up to 128 items then delete middle 18.544 us/op
Set add up to 256 items then delete first 14.308 us/op
OrderedSet add up to 256 items then delete first 21.970 us/op
Set add up to 256 items then delete last 13.879 us/op
OrderedSet add up to 256 items then delete last 21.978 us/op
Set add up to 256 items then delete middle 13.772 us/op
OrderedSet add up to 256 items then delete middle 50.966 us/op
transfer serialized Status (84 B) 3.2570 us/op
copy serialized Status (84 B) 2.4010 us/op
transfer serialized SignedVoluntaryExit (112 B) 3.3870 us/op
copy serialized SignedVoluntaryExit (112 B) 1.8490 us/op
transfer serialized ProposerSlashing (416 B) 3.8610 us/op
copy serialized ProposerSlashing (416 B) 2.5400 us/op
transfer serialized Attestation (485 B) 4.3610 us/op
copy serialized Attestation (485 B) 2.8680 us/op
transfer serialized AttesterSlashing (33232 B) 5.4100 us/op
copy serialized AttesterSlashing (33232 B) 9.5730 us/op
transfer serialized Small SignedBeaconBlock (128000 B) 5.2250 us/op
copy serialized Small SignedBeaconBlock (128000 B) 23.004 us/op
transfer serialized Avg SignedBeaconBlock (200000 B) 5.8480 us/op
copy serialized Avg SignedBeaconBlock (200000 B) 29.935 us/op
transfer serialized BlobsSidecar (524380 B) 9.2380 us/op
copy serialized BlobsSidecar (524380 B) 111.70 us/op
transfer serialized Big SignedBeaconBlock (1000000 B) 10.816 us/op
copy serialized Big SignedBeaconBlock (1000000 B) 291.06 us/op
pass gossip attestations to forkchoice per slot 6.1629 ms/op
forkChoice updateHead vc 100000 bc 64 eq 0 558.30 us/op
forkChoice updateHead vc 600000 bc 64 eq 0 8.1504 ms/op
forkChoice updateHead vc 1000000 bc 64 eq 0 12.150 ms/op
forkChoice updateHead vc 600000 bc 320 eq 0 6.8690 ms/op
forkChoice updateHead vc 600000 bc 1200 eq 0 7.2243 ms/op
forkChoice updateHead vc 600000 bc 7200 eq 0 8.3027 ms/op
forkChoice updateHead vc 600000 bc 64 eq 1000 13.136 ms/op
forkChoice updateHead vc 600000 bc 64 eq 10000 23.618 ms/op
forkChoice updateHead vc 600000 bc 64 eq 300000 107.30 ms/op
computeDeltas 500000 validators 300 proto nodes 7.3139 ms/op
computeDeltas 500000 validators 1200 proto nodes 7.1147 ms/op
computeDeltas 500000 validators 7200 proto nodes 7.5630 ms/op
computeDeltas 750000 validators 300 proto nodes 11.509 ms/op
computeDeltas 750000 validators 1200 proto nodes 11.785 ms/op
computeDeltas 750000 validators 7200 proto nodes 13.027 ms/op
computeDeltas 1400000 validators 300 proto nodes 21.696 ms/op
computeDeltas 1400000 validators 1200 proto nodes 20.423 ms/op
computeDeltas 1400000 validators 7200 proto nodes 22.208 ms/op
computeDeltas 2100000 validators 300 proto nodes 34.285 ms/op
computeDeltas 2100000 validators 1200 proto nodes 29.886 ms/op
computeDeltas 2100000 validators 7200 proto nodes 25.783 ms/op
altair processAttestation - 250000 vs - 7PWei normalcase 4.7455 ms/op
altair processAttestation - 250000 vs - 7PWei worstcase 6.8622 ms/op
altair processAttestation - setStatus - 1/6 committees join 204.03 us/op
altair processAttestation - setStatus - 1/3 committees join 474.81 us/op
altair processAttestation - setStatus - 1/2 committees join 491.00 us/op
altair processAttestation - setStatus - 2/3 committees join 648.20 us/op
altair processAttestation - setStatus - 4/5 committees join 875.65 us/op
altair processAttestation - setStatus - 100% committees join 1.0945 ms/op
altair processBlock - 250000 vs - 7PWei normalcase 17.777 ms/op
altair processBlock - 250000 vs - 7PWei normalcase hashState 68.579 ms/op
altair processBlock - 250000 vs - 7PWei worstcase 68.221 ms/op
altair processBlock - 250000 vs - 7PWei worstcase hashState 164.69 ms/op
phase0 processBlock - 250000 vs - 7PWei normalcase 4.1697 ms/op
phase0 processBlock - 250000 vs - 7PWei worstcase 39.489 ms/op
altair processEth1Data - 250000 vs - 7PWei normalcase 741.39 us/op
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:15 13.450 us/op
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:219 48.521 us/op
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:42 20.353 us/op
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:18 14.902 us/op
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1020 154.66 us/op
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11777 1.9450 ms/op
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 2.7878 ms/op
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 2.7671 ms/op
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 10.283 ms/op
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.7321 ms/op
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 5.9975 ms/op
Tree 40 250000 create 959.53 ms/op
Tree 40 250000 get(125000) 193.36 ns/op
Tree 40 250000 set(125000) 2.7966 us/op
Tree 40 250000 toArray() 37.398 ms/op
Tree 40 250000 iterate all - toArray() + loop 37.650 ms/op
Tree 40 250000 iterate all - get(i) 78.215 ms/op
Array 250000 create 6.4205 ms/op
Array 250000 clone - spread 2.3331 ms/op
Array 250000 get(125000) 0.46800 ns/op
Array 250000 set(125000) 0.63100 ns/op
Array 250000 iterate all - loop 235.27 us/op
phase0 afterProcessEpoch - 250000 vs - 7PWei 64.581 ms/op
Array.fill - length 1000000 10.500 ms/op
Array push - length 1000000 57.214 ms/op
Array.get 0.55213 ns/op
Uint8Array.get 0.49464 ns/op
phase0 beforeProcessEpoch - 250000 vs - 7PWei 53.599 ms/op
altair processEpoch - mainnet_e81889 560.83 ms/op
mainnet_e81889 - altair beforeProcessEpoch 38.314 ms/op
mainnet_e81889 - altair processJustificationAndFinalization 13.919 us/op
mainnet_e81889 - altair processInactivityUpdates 7.5616 ms/op
mainnet_e81889 - altair processRewardsAndPenalties 75.476 ms/op
mainnet_e81889 - altair processRegistryUpdates 1.5770 us/op
mainnet_e81889 - altair processSlashings 464.00 ns/op
mainnet_e81889 - altair processEth1DataReset 462.00 ns/op
mainnet_e81889 - altair processEffectiveBalanceUpdates 2.3784 ms/op
mainnet_e81889 - altair processSlashingsReset 2.1030 us/op
mainnet_e81889 - altair processRandaoMixesReset 3.0390 us/op
mainnet_e81889 - altair processHistoricalRootsUpdate 551.00 ns/op
mainnet_e81889 - altair processParticipationFlagUpdates 1.9100 us/op
mainnet_e81889 - altair processSyncCommitteeUpdates 411.00 ns/op
mainnet_e81889 - altair afterProcessEpoch 67.071 ms/op
capella processEpoch - mainnet_e217614 1.3129 s/op
mainnet_e217614 - capella beforeProcessEpoch 116.90 ms/op
mainnet_e217614 - capella processJustificationAndFinalization 7.1850 us/op
mainnet_e217614 - capella processInactivityUpdates 25.822 ms/op
mainnet_e217614 - capella processRewardsAndPenalties 270.55 ms/op
mainnet_e217614 - capella processRegistryUpdates 8.2710 us/op
mainnet_e217614 - capella processSlashings 286.00 ns/op
mainnet_e217614 - capella processEth1DataReset 286.00 ns/op
mainnet_e217614 - capella processEffectiveBalanceUpdates 22.581 ms/op
mainnet_e217614 - capella processSlashingsReset 1.2950 us/op
mainnet_e217614 - capella processRandaoMixesReset 1.6040 us/op
mainnet_e217614 - capella processHistoricalRootsUpdate 242.00 ns/op
mainnet_e217614 - capella processParticipationFlagUpdates 927.00 ns/op
mainnet_e217614 - capella afterProcessEpoch 129.51 ms/op
phase0 processEpoch - mainnet_e58758 399.65 ms/op
mainnet_e58758 - phase0 beforeProcessEpoch 130.93 ms/op
mainnet_e58758 - phase0 processJustificationAndFinalization 8.3570 us/op
mainnet_e58758 - phase0 processRewardsAndPenalties 49.099 ms/op
mainnet_e58758 - phase0 processRegistryUpdates 3.6580 us/op
mainnet_e58758 - phase0 processSlashings 209.00 ns/op
mainnet_e58758 - phase0 processEth1DataReset 210.00 ns/op
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.0487 ms/op
mainnet_e58758 - phase0 processSlashingsReset 1.5760 us/op
mainnet_e58758 - phase0 processRandaoMixesReset 1.9190 us/op
mainnet_e58758 - phase0 processHistoricalRootsUpdate 283.00 ns/op
mainnet_e58758 - phase0 processParticipationRecordUpdates 1.3230 us/op
mainnet_e58758 - phase0 afterProcessEpoch 44.733 ms/op
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.5870 ms/op
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 8.9920 ms/op
altair processInactivityUpdates - 250000 normalcase 24.767 ms/op
altair processInactivityUpdates - 250000 worstcase 25.151 ms/op
phase0 processRegistryUpdates - 250000 normalcase 6.1160 us/op
phase0 processRegistryUpdates - 250000 badcase_full_deposits 335.46 us/op
phase0 processRegistryUpdates - 250000 worstcase 0.5 145.19 ms/op
altair processRewardsAndPenalties - 250000 normalcase 59.031 ms/op
altair processRewardsAndPenalties - 250000 worstcase 49.760 ms/op
phase0 getAttestationDeltas - 250000 normalcase 13.336 ms/op
phase0 getAttestationDeltas - 250000 worstcase 7.0271 ms/op
phase0 processSlashings - 250000 worstcase 112.55 us/op
altair processSyncCommitteeUpdates - 250000 159.43 ms/op
BeaconState.hashTreeRoot - No change 352.00 ns/op
BeaconState.hashTreeRoot - 1 full validator 98.346 us/op
BeaconState.hashTreeRoot - 32 full validator 1.0277 ms/op
BeaconState.hashTreeRoot - 512 full validator 13.444 ms/op
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 116.36 us/op
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.6390 ms/op
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 34.250 ms/op
BeaconState.hashTreeRoot - 1 balances 95.312 us/op
BeaconState.hashTreeRoot - 32 balances 881.50 us/op
BeaconState.hashTreeRoot - 512 balances 10.524 ms/op
BeaconState.hashTreeRoot - 250000 balances 203.64 ms/op
aggregationBits - 2048 els - zipIndexesInBitList 23.257 us/op
byteArrayEquals 32 55.450 ns/op
Buffer.compare 32 18.149 ns/op
byteArrayEquals 1024 1.6478 us/op
Buffer.compare 1024 26.839 ns/op
byteArrayEquals 16384 26.147 us/op
Buffer.compare 16384 179.26 ns/op
byteArrayEquals 123687377 187.44 ms/op
Buffer.compare 123687377 7.3506 ms/op
byteArrayEquals 32 - diff last byte 51.218 ns/op
Buffer.compare 32 - diff last byte 16.700 ns/op
byteArrayEquals 1024 - diff last byte 1.5918 us/op
Buffer.compare 1024 - diff last byte 24.549 ns/op
byteArrayEquals 16384 - diff last byte 24.616 us/op
Buffer.compare 16384 - diff last byte 186.65 ns/op
byteArrayEquals 123687377 - diff last byte 195.19 ms/op
Buffer.compare 123687377 - diff last byte 7.5898 ms/op
byteArrayEquals 32 - random bytes 5.4660 ns/op
Buffer.compare 32 - random bytes 17.169 ns/op
byteArrayEquals 1024 - random bytes 5.1380 ns/op
Buffer.compare 1024 - random bytes 17.358 ns/op
byteArrayEquals 16384 - random bytes 5.0870 ns/op
Buffer.compare 16384 - random bytes 17.108 ns/op
byteArrayEquals 123687377 - random bytes 6.5500 ns/op
Buffer.compare 123687377 - random bytes 18.870 ns/op
regular array get 100000 times 45.781 us/op
wrappedArray get 100000 times 47.150 us/op
arrayWithProxy get 100000 times 15.359 ms/op
ssz.Root.equals 48.547 ns/op
byteArrayEquals 49.504 ns/op
Buffer.compare 10.535 ns/op
processSlot - 1 slots 10.614 us/op
processSlot - 32 slots 2.2430 ms/op
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 65.591 ms/op
getCommitteeAssignments - req 1 vs - 250000 vc 2.2182 ms/op
getCommitteeAssignments - req 100 vs - 250000 vc 4.3427 ms/op
getCommitteeAssignments - req 1000 vs - 250000 vc 4.5132 ms/op
findModifiedValidators - 10000 modified validators 838.61 ms/op
findModifiedValidators - 1000 modified validators 752.63 ms/op
findModifiedValidators - 100 modified validators 294.32 ms/op
findModifiedValidators - 10 modified validators 218.84 ms/op
findModifiedValidators - 1 modified validators 172.65 ms/op
findModifiedValidators - no difference 256.14 ms/op
compare ViewDUs 6.5221 s/op
compare each validator Uint8Array 1.7721 s/op
compare ViewDU to Uint8Array 1.3696 s/op
migrate state 1000000 validators, 24 modified, 0 new 1.0074 s/op
migrate state 1000000 validators, 1700 modified, 1000 new 1.4233 s/op
migrate state 1000000 validators, 3400 modified, 2000 new 1.7953 s/op
migrate state 1500000 validators, 24 modified, 0 new 1.1185 s/op
migrate state 1500000 validators, 1700 modified, 1000 new 1.3926 s/op
migrate state 1500000 validators, 3400 modified, 2000 new 1.6271 s/op
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 4.8300 ns/op
state getBlockRootAtSlot - 250000 vs - 7PWei 781.09 ns/op
naive computeProposerIndex 100000 validators 81.026 ms/op
computeProposerIndex 100000 validators 10.878 ms/op
naiveGetNextSyncCommitteeIndices 1000 validators 10.540 s/op
getNextSyncCommitteeIndices 1000 validators 271.42 ms/op
naiveGetNextSyncCommitteeIndices 10000 validators 9.6832 s/op
getNextSyncCommitteeIndices 10000 validators 379.52 ms/op
naiveGetNextSyncCommitteeIndices 100000 validators 8.6255 s/op
getNextSyncCommitteeIndices 100000 validators 256.21 ms/op
naive computeShuffledIndex 100000 validators 26.626 s/op
cached computeShuffledIndex 100000 validators 569.40 ms/op
naive computeShuffledIndex 2000000 validators 570.77 s/op
cached computeShuffledIndex 2000000 validators 39.502 s/op
computeProposers - vc 250000 11.030 ms/op
computeEpochShuffling - vc 250000 43.977 ms/op
getNextSyncCommittee - vc 250000 207.81 ms/op
computeSigningRoot for AttestationData 25.469 us/op
hash AttestationData serialized data then Buffer.toString(base64) 1.7234 us/op
toHexString serialized data 1.2497 us/op
Buffer.toString(base64) 164.13 ns/op
nodejs block root to RootHex using toHex 138.49 ns/op
nodejs block root to RootHex using toRootHex 94.055 ns/op
browser block root to RootHex using the deprecated toHexString 219.03 ns/op
browser block root to RootHex using toHex 176.30 ns/op
browser block root to RootHex using toRootHex 164.30 ns/op

by benchmarkbot/action

@philknows philknows merged commit 11be432 into stable Feb 13, 2025
31 checks passed
@philknows philknows deleted the rc/v1.27.0 branch February 13, 2025 15:27
@wemeetagain
Member

🎉 This PR is included in v1.27.0 🎉
