chore: v1.27.0 release #7461
Merged
Conversation
…#7420) **Motivation** Based on observations from mainnet nodes, we sometimes reject builder blocks either because they are not received within the cutoff time or because the API call times out, even though the recorded response times were timely. These rejections happen either because we start the block production race too late into the slot (mostly because producing the common block body takes too long) or because the node handles the timeout with a delay. Both cases are likely caused by event loop lag, due to GC or to processing something else. See [discord](https://discord.com/channels/593655374469660673/1331991458152058991/1335576180815958088) for details. **Description** Increase block production timeouts to account for event loop lag.
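A minimal sketch of the race-with-cutoff pattern this change is concerned with, assuming hypothetical `produceLocalBlock`/`produceBuilderBlock` helpers and an illustrative cutoff value; Lodestar's actual block production flow and timeout values differ.

```ts
// Hypothetical sketch: race builder block production against a cutoff and fall back
// to the locally produced block. Names and the timeout value are illustrative only.
const BUILDER_CUTOFF_MS = 3000; // a generous cutoff helps absorb event loop lag

async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}

async function produceBlock(slot: number): Promise<unknown> {
  // Start both paths as early in the slot as possible; starting late eats into the cutoff.
  const local = produceLocalBlock(slot);
  const builderBlock = await withTimeout(produceBuilderBlock(slot), BUILDER_CUTOFF_MS).catch(() => null);
  // Fall back to the local block if the builder did not respond within the cutoff.
  return builderBlock ?? (await local);
}

// Hypothetical producers, declared only so the sketch type-checks.
declare function produceLocalBlock(slot: number): Promise<unknown>;
declare function produceBuilderBlock(slot: number): Promise<unknown>;
```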
…tation (#7419) Since ChainSafe/ssz#456 it is possible to use `getAllReadonly()` with uncommitted changes. This PR essentially reverts the changes made in #7375, as they cause more memory allocation, which is not ideal.
Closes #6908. This flag is already widely used and people are aware of the trade-offs, i.e. increased storage, but it works well enough for their use cases.
The eth1 data poll is no longer needed in Pectra after all eth1 deposits are processed. Introduce a mechanism to stop the polling. Note that if Lodestar starts up in Electra, the eth1 data poll may run for one epoch, but no data will actually get polled, and then `PrepareNextSlotScheduler.stopEth1Polling()` will stop it. Part of #6341 --------- Co-authored-by: Nico Flaig <[email protected]>
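A sketch of the stop condition described above. Only `stopEth1Polling`-style behaviour is taken from the PR text; the interface, flag names, and the exact condition are assumptions, not Lodestar's code.

```ts
// Illustrative-only: stop eth1 data polling once it is no longer needed post-Electra.
interface Eth1DataPoller {
  stopPolling(): void;
}

function maybeStopEth1Polling(
  poller: Eth1DataPoller,
  isPostElectra: boolean,
  allLegacyDepositsProcessed: boolean // assumption: tracked elsewhere by the node
): void {
  // Once every pre-Electra eth1 deposit has been processed, the deposit contract
  // no longer needs to be polled, so the recurring poll can be cancelled.
  if (isPostElectra && allLegacyDepositsProcessed) {
    poller.stopPolling();
  }
}
```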
**Motivation**

Benchmarks show `snappy-wasm` is faster at compressing and uncompressing than `snappyjs`.

**Description**

- Use `snappy-wasm` for compressing / uncompressing gossip payloads
- Add more `snappy` vs `snappyjs` vs `snappy-wasm` benchmarks

**TODO**

- [x] deploy this branch on our test fleet - deployed on feat3

```
network / gossip / snappy
compress
✔ 100 bytes - compress - snappyjs 335566.9 ops/s 2.980032 us/op - 685 runs 2.54 s
✔ 100 bytes - compress - snappy 388610.3 ops/s 2.573272 us/op - 870 runs 2.74 s
✔ 100 bytes - compress - snappy-wasm 583254.0 ops/s 1.714519 us/op - 476 runs 1.32 s
✔ 100 bytes - compress - snappy-wasm - prealloc 1586695 ops/s 630.2410 ns/op - 481 runs 0.804 s
✔ 200 bytes - compress - snappyjs 298272.8 ops/s 3.352636 us/op - 213 runs 1.22 s
✔ 200 bytes - compress - snappy 419528.0 ops/s 2.383631 us/op - 926 runs 2.71 s
✔ 200 bytes - compress - snappy-wasm 472468.5 ops/s 2.116543 us/op - 577 runs 1.72 s
✔ 200 bytes - compress - snappy-wasm - prealloc 1430445 ops/s 699.0830 ns/op - 868 runs 1.11 s
✔ 300 bytes - compress - snappyjs 265124.9 ops/s 3.771807 us/op - 137 runs 1.02 s
✔ 300 bytes - compress - snappy 361683.9 ops/s 2.764845 us/op - 1332 runs 4.18 s
✔ 300 bytes - compress - snappy-wasm 443688.4 ops/s 2.253834 us/op - 859 runs 2.44 s
✔ 300 bytes - compress - snappy-wasm - prealloc 1213825 ops/s 823.8420 ns/op - 370 runs 0.807 s
✔ 400 bytes - compress - snappyjs 262168.5 ops/s 3.814341 us/op - 358 runs 1.87 s
✔ 400 bytes - compress - snappy 382494.9 ops/s 2.614414 us/op - 1562 runs 4.58 s
✔ 400 bytes - compress - snappy-wasm 406373.2 ops/s 2.460792 us/op - 797 runs 2.46 s
✔ 400 bytes - compress - snappy-wasm - prealloc 1111715 ops/s 899.5110 ns/op - 450 runs 0.906 s
✔ 500 bytes - compress - snappyjs 229213.1 ops/s 4.362753 us/op - 359 runs 2.07 s
✔ 500 bytes - compress - snappy 373695.8 ops/s 2.675973 us/op - 2050 runs 5.99 s
✔ 500 bytes - compress - snappy-wasm 714917.4 ops/s 1.398763 us/op - 960 runs 1.84 s
✔ 500 bytes - compress - snappy-wasm - prealloc 1054619 ops/s 948.2100 ns/op - 427 runs 0.907 s
✔ 1000 bytes - compress - snappyjs 148702.3 ops/s 6.724847 us/op - 171 runs 1.65 s
✔ 1000 bytes - compress - snappy 423688.1 ops/s 2.360227 us/op - 525 runs 1.74 s
✔ 1000 bytes - compress - snappy-wasm 524350.6 ops/s 1.907121 us/op - 273 runs 1.03 s
✔ 1000 bytes - compress - snappy-wasm - prealloc 685191.5 ops/s 1.459446 us/op - 349 runs 1.01 s
✔ 10000 bytes - compress - snappyjs 21716.92 ops/s 46.04704 us/op - 16 runs 1.24 s
✔ 10000 bytes - compress - snappy 98051.32 ops/s 10.19874 us/op - 184 runs 2.39 s
✔ 10000 bytes - compress - snappy-wasm 114681.8 ops/s 8.719783 us/op - 49 runs 0.937 s
✔ 10000 bytes - compress - snappy-wasm - prealloc 111203.6 ops/s 8.992518 us/op - 49 runs 0.953 s
✔ 100000 bytes - compress - snappyjs 2947.313 ops/s 339.2921 us/op - 12 runs 4.74 s
✔ 100000 bytes - compress - snappy 14963.78 ops/s 66.82801 us/op - 70 runs 5.19 s
✔ 100000 bytes - compress - snappy-wasm 19868.33 ops/s 50.33136 us/op - 14 runs 1.21 s
✔ 100000 bytes - compress - snappy-wasm - prealloc 24579.34 ops/s 40.68457 us/op - 13 runs 1.06 s
uncompress
✔ 100 bytes - uncompress - snappyjs 589201.6 ops/s 1.697212 us/op - 242 runs 0.911 s
✔ 100 bytes - uncompress - snappy 537424.1 ops/s 1.860728 us/op - 220 runs 0.910 s
✔ 100 bytes - uncompress - snappy-wasm 634966.2 ops/s 1.574887 us/op - 194 runs 0.808 s
✔ 100 bytes - uncompress - snappy-wasm - prealloc 1846964 ops/s 541.4290 ns/op - 559 runs 0.804 s
✔ 200 bytes - uncompress - snappyjs 395141.8 ops/s 2.530737 us/op - 281 runs 1.22 s
✔ 200 bytes - uncompress - snappy 536862.6 ops/s 1.862674 us/op - 274 runs 1.01 s
✔ 200 bytes - uncompress - snappy-wasm 420251.6 ops/s 2.379527 us/op - 129 runs 0.810 s
✔ 200 bytes - uncompress - snappy-wasm - prealloc 1746167 ops/s 572.6830 ns/op - 529 runs 0.804 s
✔ 300 bytes - uncompress - snappyjs 441676.2 ops/s 2.264102 us/op - 898 runs 2.53 s
✔ 300 bytes - uncompress - snappy 551313.2 ops/s 1.813851 us/op - 336 runs 1.11 s
✔ 300 bytes - uncompress - snappy-wasm 494773.0 ops/s 2.021129 us/op - 203 runs 0.912 s
✔ 300 bytes - uncompress - snappy-wasm - prealloc 1528680 ops/s 654.1590 ns/op - 465 runs 0.805 s
✔ 400 bytes - uncompress - snappyjs 383746.1 ops/s 2.605890 us/op - 235 runs 1.11 s
✔ 400 bytes - uncompress - snappy 515986.6 ops/s 1.938035 us/op - 158 runs 0.809 s
✔ 400 bytes - uncompress - snappy-wasm 392947.8 ops/s 2.544867 us/op - 322 runs 1.32 s
✔ 400 bytes - uncompress - snappy-wasm - prealloc 1425978 ops/s 701.2730 ns/op - 721 runs 1.01 s
✔ 500 bytes - uncompress - snappyjs 330727.5 ops/s 3.023637 us/op - 173 runs 1.02 s
✔ 500 bytes - uncompress - snappy 513874.1 ops/s 1.946002 us/op - 157 runs 0.806 s
✔ 500 bytes - uncompress - snappy-wasm 389263.0 ops/s 2.568957 us/op - 161 runs 0.914 s
✔ 500 bytes - uncompress - snappy-wasm - prealloc 1330936 ops/s 751.3510 ns/op - 672 runs 1.01 s
✔ 1000 bytes - uncompress - snappyjs 241393.9 ops/s 4.142606 us/op - 126 runs 1.03 s
✔ 1000 bytes - uncompress - snappy 491119.6 ops/s 2.036164 us/op - 201 runs 0.911 s
✔ 1000 bytes - uncompress - snappy-wasm 361794.5 ops/s 2.764000 us/op - 148 runs 0.910 s
✔ 1000 bytes - uncompress - snappy-wasm - prealloc 959026.5 ops/s 1.042724 us/op - 390 runs 0.909 s
✔ 10000 bytes - uncompress - snappyjs 40519.03 ops/s 24.67976 us/op - 16 runs 0.913 s
✔ 10000 bytes - uncompress - snappy 202537.6 ops/s 4.937355 us/op - 796 runs 4.43 s
✔ 10000 bytes - uncompress - snappy-wasm 165017.6 ops/s 6.059960 us/op - 52 runs 0.822 s
✔ 10000 bytes - uncompress - snappy-wasm - prealloc 175061.5 ops/s 5.712277 us/op - 130 runs 1.25 s
✔ 100000 bytes - uncompress - snappyjs 4030.391 ops/s 248.1149 us/op - 12 runs 3.71 s
✔ 100000 bytes - uncompress - snappy 35459.43 ops/s 28.20124 us/op - 41 runs 1.67 s
✔ 100000 bytes - uncompress - snappy-wasm 22449.16 ops/s 44.54509 us/op - 13 runs 1.11 s
✔ 100000 bytes - uncompress - snappy-wasm - prealloc 27169.50 ops/s 36.80598 us/op - 13 runs 0.997 s
```

Closes #4170

---------

Co-authored-by: Nico Flaig <[email protected]>
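For reference, a round-trip sketch of the compress/uncompress path being benchmarked, using `snappyjs`'s CommonJS `compress`/`uncompress` exports; swapping in `snappy-wasm` (or a preallocated output buffer, as in the "prealloc" rows) follows the same shape, but its exact API is not shown here and would need to be checked against the library.

```ts
// Round-trip sketch for gossip payload compression using snappyjs.
import snappyjs from "snappyjs";

function roundTrip(payload: Uint8Array): boolean {
  const compressed = snappyjs.compress(payload);
  const restored = snappyjs.uncompress(compressed);
  // The round trip must be lossless for gossip payloads.
  return Buffer.from(restored).equals(Buffer.from(payload));
}

console.log(roundTrip(new Uint8Array([1, 2, 3, 4, 5]))); // true
```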
- repeat of #7204 because we reverted it
- see the review in #6483

Co-authored-by: Matthew Keil <[email protected]>
**Motivation**

- #6869

**Description**

- Add `MIN_EPOCHS_FOR_BLOCK_REQUESTS` config (PS: we're missing a lot of the network config entries from the consensus specs)
- Add `--chain.pruneHistory` flag, defaulting to false
- When `chain.pruneHistory` is true, prune all historical blocks/states on startup and then on every subsequent finalization (see the sketch below)

---------

Co-authored-by: Nico Flaig <[email protected]>
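A rough sketch of the pruning rule described in the list above. The database interface, helper names, and call site are hypothetical; only the `MIN_EPOCHS_FOR_BLOCK_REQUESTS` retention idea and the `--chain.pruneHistory` gate come from the PR text.

```ts
// Illustrative pruning-on-finalization rule, not Lodestar's actual implementation.
const SLOTS_PER_EPOCH = 32;

interface BlockArchive {
  deleteRange(fromSlot: number, toSlotExclusive: number): Promise<void>;
}

async function pruneHistoricalBlocks(
  db: BlockArchive,
  finalizedEpoch: number,
  minEpochsForBlockRequests: number,
  pruneHistory: boolean // mirrors --chain.pruneHistory
): Promise<void> {
  if (!pruneHistory) return;
  // Keep at least MIN_EPOCHS_FOR_BLOCK_REQUESTS worth of history so the node can still
  // serve the blocks-by-range window required by the p2p spec.
  const oldestEpochToKeep = Math.max(0, finalizedEpoch - minEpochsForBlockRequests);
  await db.deleteRange(0, oldestEpochToKeep * SLOTS_PER_EPOCH);
}
```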
As discussed, we should hide the flag introduced in #7427 for now until the feature becomes more stable. The docs section is also commented out for now.
Noticed the queries are not working as the `_bucket` suffix is missing. Also made some cosmetic changes and moved the panels a bit further down in the sync dashboard. We might also want to reconsider the buckets, as fetching keys seems to take more than 1 second in a few cases. I mentioned a [possible solution](https://discord.com/channels/593655374469660673/1337188931489239072/1338499158566375464) to improve the fetch time.
Bumps [nanoid](https://github.com/ai/nanoid) from 3.3.7 to 3.3.8.

Changelog (sourced from [nanoid's changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md)):

- 3.3.8: Fixed a way to break Nano ID by passing non-integer size (by [@myndzi](https://github.com/myndzi)).

Commits:

- [`3044cd5`](https://github.com/ai/nanoid/commit/3044cd5e73f4cf31795f61f6e6b961c8c0a5c744) Release 3.3.8 version
- [`4fe3495`](https://github.com/ai/nanoid/commit/4fe34959c34e5b3573889ed4f24fe91d1d3e7231) Update size limit
- [`d643045`](https://github.com/ai/nanoid/commit/d643045f40d6dc8afa000a644d857da1436ed08c) Fix pool pollution, infinite loop ([#510](https://github.com/ai/nanoid/issues/510))
- See full diff in [compare view](https://github.com/ai/nanoid/compare/3.3.7...3.3.8)

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
**Motivation** Start of the resolution of #7183. Driven by the peerDAS branch. `isForkBlobs` is no longer semantically correct in `fulu` because that fork does not have blobs, it has columns. This is the first major feature that was removed, so the type/typeguard naming semantics we were using broke. Updated the naming to reflect pre/post fork instead of post-"feature". **Description** Rename pre/post fork names and type guards. --------- Co-authored-by: Nico Flaig <[email protected]>
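A simplified sketch of what a pre/post-fork type guard looks like when based on fork ordering rather than a feature name. The fork list and guard name below are illustrative, not Lodestar's actual exports.

```ts
// Fork ordering used by the sketch; a real implementation would take this from config.
const forkOrder = ["phase0", "altair", "bellatrix", "capella", "deneb", "electra", "fulu"] as const;
type ForkName = (typeof forkOrder)[number];

function isForkPostDeneb(fork: ForkName): boolean {
  // "Post-fork" naming stays correct even when a later fork (e.g. fulu) replaces a
  // feature, unlike guards named after the feature itself (e.g. isForkBlobs).
  return forkOrder.indexOf(fork) >= forkOrder.indexOf("deneb");
}

console.log(isForkPostDeneb("electra")); // true
console.log(isForkPostDeneb("altair")); // false
```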
**Motivation**

- From electra, `processSyncCommitteeUpdates()` could take >15s according to the devnet.

**Description**

- The main fix is in `computeShuffledIndex`, where we can cache the pivot and source computations (see the sketch below).
- Some other optimizations on top of that:
  - only compute the hash once every 16 iterations
  - compute the int manually instead of using `bytesToInt`, in order not to use BigInt
  - cache the shuffled index

I guess if we use `hashtree` we can improve more, but the diff is already large and the main optimization is in `computeShuffledIndex()`, not the hash function. We can consider that in the future. We could also improve pre-electra, but it has not been that bad for a long time, so this PR only focuses on electra.

Closes #7366

**Tests**

- Added unit tests to compare the naive version vs the optimized version.
- Benchmarks on local show a >1000x difference for the main function of concern, `naiveGetNextSyncCommitteeIndices()`, while CI only shows a >20x difference. This is my local:

```
computeProposerIndex
✔ naive computeProposerIndex 100000 validators 31.86491 ops/s 31.38248 ms/op - 10 runs 34.5 s
✔ computeProposerIndex 100000 validators 106.2267 ops/s 9.413833 ms/op - 10 runs 10.4 s
getNextSyncCommitteeIndices electra
✔ naiveGetNextSyncCommitteeIndices 1000 validators 0.2121840 ops/s 4.712890 s/op - 10 runs 51.7 s
✔ getNextSyncCommitteeIndices 1000 validators 214.9251 ops/s 4.652783 ms/op - 45 runs 0.714 s
✔ naiveGetNextSyncCommitteeIndices 10000 validators 0.2122278 ops/s 4.711918 s/op - 10 runs 51.8 s
✔ getNextSyncCommitteeIndices 10000 validators 220.2337 ops/s 4.540632 ms/op - 46 runs 0.710 s
✔ naiveGetNextSyncCommitteeIndices 100000 validators 0.2117828 ops/s 4.721820 s/op - 10 runs 52.2 s
✔ getNextSyncCommitteeIndices 100000 validators 204.7383 ops/s 4.884283 ms/op - 43 runs 0.714 s
computeShuffledIndex
✔ naive computeShuffledIndex 100000 validators 0.06638498 ops/s 15.06365 s/op - 3 runs 60.3 s
✔ cached computeShuffledIndex 100000 validators 1.932706 ops/s 517.4092 ms/op - 10 runs 5.72 s
```

---------

Co-authored-by: Tuyen Nguyen <[email protected]>
Co-authored-by: Cayman <[email protected]>
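The sketch below illustrates the caching idea behind the `computeShuffledIndex` optimization: in the consensus-spec swap-or-not shuffle, the pivot depends only on (seed, round) and each source hash only on (seed, round, position chunk), so both can be memoized and shared across many index lookups with the same seed. Constants follow the spec, but this is an illustration, not Lodestar's implementation (which also avoids BigInt, as noted above).

```ts
import {createHash} from "node:crypto";

const SHUFFLE_ROUND_COUNT = 90;

function sha256(data: Uint8Array): Buffer {
  return createHash("sha256").update(data).digest();
}

function createCachedShuffler(indexCount: number, seed: Uint8Array): (index: number) => number {
  const pivotByRound = new Map<number, number>(); // round -> pivot
  const sourceByRoundChunk = new Map<string, Buffer>(); // `${round}:${chunk}` -> hash

  return function computeShuffledIndex(index: number): number {
    for (let round = 0; round < SHUFFLE_ROUND_COUNT; round++) {
      let pivot = pivotByRound.get(round);
      if (pivot === undefined) {
        const h = sha256(Buffer.concat([Buffer.from(seed), Buffer.from([round])]));
        // First 8 bytes, little-endian, reduced mod indexCount (BigInt is fine for a sketch).
        pivot = Number(h.readBigUInt64LE(0) % BigInt(indexCount));
        pivotByRound.set(round, pivot);
      }

      const flip = (pivot + indexCount - index) % indexCount;
      const position = Math.max(index, flip);
      const chunk = Math.floor(position / 256);
      const cacheKey = `${round}:${chunk}`;

      let source = sourceByRoundChunk.get(cacheKey);
      if (source === undefined) {
        const chunkBuf = Buffer.alloc(4);
        chunkBuf.writeUInt32LE(chunk, 0);
        source = sha256(Buffer.concat([Buffer.from(seed), Buffer.from([round]), chunkBuf]));
        sourceByRoundChunk.set(cacheKey, source);
      }

      const byte = source[Math.floor((position % 256) / 8)];
      const bit = (byte >> (position % 8)) & 1;
      index = bit === 1 ? flip : index;
    }
    return index;
  };
}
```

A caller such as committee selection would create one shuffler per seed and call it repeatedly, paying the hash cost only once per (round, chunk) instead of once per candidate index.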
### Background

Raised by the Teku team on discord (https://discord.com/channels/595666850260713488/1338793076491026486/1338793635386228756): Lodestar generates duplicated attestations and includes them in the block body when proposing. For example, https://dora.pectra-devnet-6.ethpandaops.io/slot/54933 has 6 copies of the same attestation with signature `0xae6b928e4866d5a43ae6d4ced869e3aa53f38617d42da5862c76e9a928942783a108e4e281d208766b8a9b2adb286aff0e0af7c14f24d1b013f4ccb47c000a11c256112fab37945d2e5bb3671b997a23b1d57d67d3fe69835ecc06f1f57b1210`. This is due to `getAttestationsForBlockElectra` putting multiple copies of the same attestation in `consolidations` when attestations coming from different committees share the same attestation data. The above attestation has 6 copies because it spans 6 committees (0, 2, 3, 6, 7 and 12).

### Proposal

Remove `AttestationsConsolidation.score` and convert `consolidations` to a `Map<AttestationsConsolidation, number>()` to track the score while eliminating duplicates.

---------

Co-authored-by: Nico Flaig <[email protected]>
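A simplified sketch of the deduplication idea: key consolidations by the shared attestation data so attestations from different committees fold into one entry, with the score tracked in a `Map` rather than on the consolidation itself. The types and field names below are illustrative, not Lodestar's actual structures.

```ts
interface AttestationsConsolidation {
  dataRoot: string; // hex root of the shared AttestationData (assumed key)
  committeeBits: Set<number>;
}

function consolidate(
  attestations: {dataRoot: string; committeeIndex: number; score: number}[]
): Map<AttestationsConsolidation, number> {
  const byDataRoot = new Map<string, AttestationsConsolidation>();
  const scores = new Map<AttestationsConsolidation, number>();

  for (const att of attestations) {
    let consolidation = byDataRoot.get(att.dataRoot);
    if (consolidation === undefined) {
      consolidation = {dataRoot: att.dataRoot, committeeBits: new Set()};
      byDataRoot.set(att.dataRoot, consolidation);
    }
    // Attestations from different committees with the same data merge into one entry.
    consolidation.committeeBits.add(att.committeeIndex);
    // Track the score alongside the consolidation without creating duplicates.
    scores.set(consolidation, (scores.get(consolidation) ?? 0) + att.score);
  }
  return scores;
}
```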
nflaig approved these changes on Feb 13, 2025
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff            @@
##           stable    #7461     +/-   ##
=========================================
+ Coverage   50.26%   50.43%   +0.17%
=========================================
  Files         602      602
  Lines       40376    40583     +207
  Branches     2205     2224      +19
=========================================
+ Hits        20294    20468     +174
- Misses      20042    20075      +33
  Partials       40       40
```
Performance Report

✔️ no performance regression detected

Full benchmark results

🎉 This PR is included in v1.27.0 🎉
Motivation
Releasing rc.2 with two additional commits, #7455 and #7443. Supersedes #7458.