
Deploy Geth and set of Nimbus nodes for Ropsten testnet #97

Closed
jakubgs opened this issue May 20, 2022 · 38 comments

jakubgs commented May 20, 2022

There is a new testnet brewing intended to test The Merge on Ropsten:

This testnet is launching on the 30th of May, and each org will receive 10k validators.

The support for this testnet has already been merged into nimbus-eth2:

What we'll need is:

The sooner the Geth node is up, the sooner it syncs.


jakubgs commented May 20, 2022

Although, if we are in a hurry, we do have a snap-synced Geth instance on master-01.gc-us-central1-a.faucet.prod:

[email protected]:/docker/faucet-ropsten % ./rpc.sh eth_syncing
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": false
}

[email protected]:/docker/faucet-ropsten % sudo du -hsc node/data
85G	node/data
85G	total

So we can copy that in an emergency, but a fresh sync would be nice too.
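
If we do end up copying it, the transfer would look roughly like this (a sketch; the container name and destination path are assumptions based on the layout used elsewhere, and the new host is a placeholder):

# stop Geth on the source first so the database files are consistent
docker stop faucet-ropsten-node
rsync -a --info=progress2 /docker/faucet-ropsten/node/data/ <new-host>:/docker/nimbus-ropsten/node/data/
docker start faucet-ropsten-node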

jakubgs added a commit that referenced this issue May 20, 2022
jakubgs added a commit that referenced this issue May 20, 2022

jakubgs commented May 20, 2022

Deployed a host and Geth node on it:

  • 94ffee28 - add ropsten-01.aws-eu-central-1a.nimbus.geth host
  • dcc11e0c - nimbus-geth-ropsten: configure Geth instance

We have an instance syncing:

[email protected]:~ % d
CONTAINER ID   NAMES                     IMAGE                             CREATED         STATUS
86c4cf9fb772   nimbus-ropsten-exporter   statusteam/geth_exporter:latest   9 minutes ago   Up 9 minutes
686db83c3af5   nimbus-ropsten-node       ethereum/client-go:v1.10.17       9 minutes ago   Up 9 minutes
[email protected]:~ % /docker/nimbus-ropsten/rpc.sh eth_syncing
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "currentBlock": "0x1cf51",
    "healedBytecodeBytes": "0x0",
    "healedBytecodes": "0x0",
    "healedTrienodeBytes": "0x0",
    "healedTrienodes": "0x0",
    "healingBytecode": "0x0",
    "healingTrienodes": "0x0",
    "highestBlock": "0xbb6c2d",
    "startingBlock": "0x847",
    "syncedAccountBytes": "0x0",
    "syncedAccounts": "0x0",
    "syncedBytecodeBytes": "0x0",
    "syncedBytecodes": "0x0",
    "syncedStorage": "0x0",
    "syncedStorageBytes": "0x0"
  }
}


jakubgs commented May 21, 2022

We are getting there, about one third in 16 hours:

[email protected]:~ % alias eth_syncing='/docker/nimbus-ropsten/rpc.sh eth_syncing'
[email protected]:~ % current=$(eth_syncing | jq -r .result.currentBlock)                              
[email protected]:~ % highest=$(eth_syncing | jq -r .result.highestBlock)          
[email protected]:~ % echo "$((current)) / $((highest))"                           
3962202 / 12282925
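
The same check as a small helper that also prints a percentage (a sketch built on the eth_syncing alias above; the hex values are converted by shell arithmetic):

current=$(eth_syncing | jq -r .result.currentBlock)
highest=$(eth_syncing | jq -r .result.highestBlock)
cur=$((current)); hi=$((highest))
printf 'sync: %d / %d (~%d%%)\n' "$cur" "$hi" "$(( cur * 100 / hi ))"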


jakubgs commented May 23, 2022

Halfway there:

[email protected]:~ % echo "$((current)) / $((highest))"
5837589 / 12289864

But we might need a bigger data volume:

[email protected]:~ % df -h /data
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme1n1    147G   72G   68G  52% /data


jakubgs commented May 24, 2022

This sync progress doesn't look that great:

image

[email protected]:~ % echo "$((current)) / $((highest))"
6226415 / 12289864

We might need some help with this.


jakubgs commented May 24, 2022

I added some more peers via the JS console based on comments in this Gist:
https://gist.github.com/kf106/72d8e52106675798475fbd64bade4359

admin.addPeer("enode://2163f02d0e8945caf5503249e0ea23d1225e788b93c9f5280f852b1406dc8b9847204ae3a9e49cb0fa3b3e2279c95c6b16c4f2a383b4dad4b0e28e60a4ad052b@18.141.202.244:30303")
admin.addPeer("enode://3350f62dfbc2e814487e27c715a6490c88e73cf721e8d4010544e2947a66732ecd059fb7d94ba1cf499e07be733ead0873b4774d79e01c6525b1ac52d6344b52@3.38.136.38:30303")
admin.addPeer("enode://3b76ec5359e59048721de8b6ff97a064ea280233d37433222ce7efcdac700c987326734983c9b65f8f1914c40e1efd6b43999912a3bca208fcbb540a678db110@93.75.22.22:30308")
admin.addPeer("enode://481f0ff17c8b1920307bbbad60771465de36c675ec868b376e8c53073c67b1668a426d310c41a119b2343914f30e0c41e062d656349a3766e1afae07e19631e4@3.81.225.7:30303")
admin.addPeer("enode://5671b196b04692bd9beb2e274c6dfd3b4381cb015274daed9febc14fe8f51bf62eaf47ee41e21e73c0900751fbd476b497d6f72381ba023dedd3bd4366704c94@54.194.144.213:30303")
admin.addPeer("enode://6803f24379caf278da310152faa163c833f6345529d9073702d7f02e4557ae4e24c1be0383b80a6388e955fe2413f16bc24c6f1b78af18cbf1426286a4b60f2f@34.229.14.138:30303")
admin.addPeer("enode://76af9eaadfe87d2d5782750f7b6905f115f6b97f7e38571a15c663fa34b3220a611efd195fcdd3bfd6106769963fa076eae714e31af2627c1df1a5cafc47fc72@83.138.40.23:30304")
admin.addPeer("enode://8d34365b555cdcb833029f95073db983c16fb741862f36f36884a27b43c9cac3f6135a854a5183c910e64e294d6964c94be63db217d78554dd25caea098f54fd@54.169.33.131:30303")
admin.addPeer("enode://9911c3cf5fd5c6306a64b07907f6db078c401c925fef197f02f5af56687c7e87972cabc5de953233b182cabfcdebc231c445a74e87a420b08c009681294cb2ae@3.87.191.67:30303")
admin.addPeer("enode://a6ceaedf9fce9b1c3b8930d33d4a7973834305d4da7a395a8b75f6e35ca744caf059f3d1c00261c64fb59234929c8cae61ad193309e8be42504dc7e3ef538b88@18.234.248.237:30303")
admin.addPeer("enode://b3b30de9b662134f10a2d2386e48ccb017c86203a061438d0f117a2544f9eeaa9af2b3991df43b9d86bcf406654166865c50c304f2b8242e5a4c84edc1d29580@54.211.85.122:30303")
admin.addPeer("enode://eb5b47fdda73e1cddc882814ede336cfb8b33536b26c3544ad3e2b88745f48231740deb2730bdac805ad2ac0b529b01d5ae2357e11e62f6e840a1cc1cdab2e56@54.152.57.186:30303")
admin.addPeer("enode://f7b2a650a56fa4040f4f2de9e94c031d9aa1ff01322038e29263f5aa8ddddab3b003fc8f84732d24e36bb55e0a8049bb16ee117b05567c950cf51f62f547dcc6@3.15.132.76:30303")

I also increased memory limits slightly. Let's see if that does anything.


jakubgs commented May 24, 2022

The number of peers we have is decent:

[email protected]:~ % /docker/nimbus-ropsten/rpc.sh admin_peers | jq -c '.result[] | { name, caps }'
{"name":"Geth/v1.10.16-stable-20356e57/linux-amd64/go1.17.5","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.16-stable/linux-amd64/go1.17.5","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.3-stable-991384a7/linux-amd64/go1.16.3","caps":["eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"erigon/v2021.12.3-beta-47c3b9df/linux-amd64/go1.17.5","caps":["eth/66"]}
{"name":"Geth/v1.10.2-stable-97d11b01/linux-amd64/go1.16","caps":["eth/64","eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.3-stable-991384a7/linux-amd64/go1.16.3","caps":["eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.1-stable-c2d2f4ed/linux-amd64/go1.16","caps":["eth/64","eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.3-stable-991384a7/linux-amd64/go1.16.3","caps":["eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.1-stable-c2d2f4ed/linux-amd64/go1.16","caps":["eth/64","eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.13-stable-7a0c19f8/linux-amd64/go1.17.2","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.4-unstable-070dca0d-20210512/linux-amd64/go1.14.13","caps":["eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.16-stable/linux-amd64/go1.17.5","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.14-stable-11a3a350/linux-amd64/go1.17.5","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.1-stable-c2d2f4ed/linux-amd64/go1.16","caps":["eth/64","eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.2-stable-97d11b01/linux-amd64/go1.16.3","caps":["eth/64","eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"Geth/v1.10.17-stable-25c9b49f/linux-amd64/go1.18","caps":["eth/66","snap/1"]}
{"name":"erigon/v2022.04.2-beta-d139c750/linux-amd64/go1.17.9","caps":["eth/66"]}
{"name":"erigon/v2022.04.2-beta-d139c750/linux-amd64/go1.17.9","caps":["eth/66"]}
{"name":"Geth/v1.10.1-stable-c2d2f4ed/linux-amd64/go1.16","caps":["eth/64","eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.3-stable-991384a7/linux-amd64/go1.16.3","caps":["eth/65","eth/66","snap/1"]}
{"name":"Geth/v1.10.15-stable-8be800ff/linux-arm64/go1.17.5","caps":["eth/66","snap/1"]}


jakubgs commented May 24, 2022

One thing that worries me is that after the restart the highestBlock value dropped from 12289864 to 11210298:

[email protected]:/docker/nimbus-ropsten % echo "$((current)) / $((highest))"                           
6228725 / 11210298

Which isn't right based on the block explorer:
image

https://ropsten.etherscan.io/


jakubgs commented May 24, 2022

Looks like I made a bad call by copying the Goerli config, since Goerli uses full sync by default, and that's what the Ropsten node ended up using:

geth_network_name: 'goerli'
geth_sync_mode: 'full'

Updated it and will attempt to sync from scratch:

geth_network_name: 'ropsten'
geth_sync_mode: 'snap'
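
For reference, those two variables should roughly translate into a Geth invocation like the following (a sketch; the exact flags rendered by the Ansible role are an assumption):

geth --ropsten --syncmode snap --datadir /data \
  --http --http.addr 0.0.0.0 --http.api eth,net,web3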


jakubgs commented May 24, 2022

Quite the difference in syncing speed:

image

So far so good.


jakubgs commented May 24, 2022

Looks like we got synced in under 5 hours:

image

Now only the state sync is left, which is at 71%:

[email protected]:~ % tail -n1 /var/log/docker/nimbus-ropsten-node/docker.log
INFO [05-24|13:54:15.723] State sync in progress                   synced=71.70% state=47.13GiB  accounts=26,953,[email protected]   slots=152,921,[email protected] codes=1,727,[email protected]  eta=1h47m3.312s

Nice.

jakubgs added a commit that referenced this issue May 24, 2022

jakubgs commented May 24, 2022

Had to bump the data volume to 250 GB: 8c27671a

[email protected]:~ % /docker/nimbus-ropsten/rpc.sh eth_syncing | jq
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": false
}

And we're synced.


jakubgs commented May 25, 2022

I have ordered a Hetzner host for Ropsten testnet: Order B20220525-2169843

image

It should arrive shortly based on the processing times listed:
image

jakubgs added a commit that referenced this issue May 25, 2022
Host for Nimbus nodes for new merge testnet called Ropsten:
#97

Signed-off-by: Jakub Sokołowski <[email protected]>

jakubgs commented May 25, 2022

The host is here, I've bootstrapped it already: 05214dc2

metal-01.he-eu-hel1.nimbus.ropsten hostname=metal-01.he-eu-hel1.nimbus.ropsten ansible_host=135.181.57.169 env=nimbus stage=ropsten data_center=he-eu-hel1 region=eu-hel1 dns_entry=metal-01.he-eu-hel1.nimbus.ropsten.statusim.net
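
Bootstrapping itself is just our usual Ansible flow, roughly like this (the playbook and inventory paths are placeholders, not the actual repo layout):

ansible-playbook -i ansible/inventory bootstrap.yml --limit='metal-01.he-eu-hel1.nimbus.ropsten'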

Marudny pushed a commit that referenced this issue May 25, 2022
#97

- no validators have been deployed yet.

Signed-off-by: Artur Marud <[email protected]>

Marudny commented May 25, 2022

Four beacon node instances were created based on the Kiln configuration. All of them are up and running.

[email protected]:~ % for PORT in {9301..9304}; do c "0:$PORT/eth/v1/node/syncing" | jq -c; done
{"data":{"head_slot":"0","sync_distance":"0","is_syncing":false}}
{"data":{"head_slot":"0","sync_distance":"0","is_syncing":false}}
{"data":{"head_slot":"0","sync_distance":"0","is_syncing":false}}
{"data":{"head_slot":"0","sync_distance":"0","is_syncing":false}}


jakubgs commented May 25, 2022

As I mentioned, the Ropsten testnet config can be found here, based on the changes in status-im/nimbus-eth2#3648:
https://github.com/eth-clients/merge-testnets/blob/main/ropsten-beacon-chain/config.yaml

I would recommend some extra reading about the Consensus Layer and the Beacon Chain:


jakubgs commented May 25, 2022

The new 1.10.18 release of go-ethereum will be required for the Ropsten merge:

This release is ready for the Merge transition on the Ropsten testnet, and will activate the Merge on Ropsten when the testnet reaches a total difficulty of 43531756765713534. Please ensure you have a beacon chain node configured for the transition.

https://github.com/ethereum/go-ethereum/releases/tag/v1.10.18


jakubgs commented May 26, 2022

Zahary has generated the Ropsten validators: https://github.com/status-im/nimbus-private/tree/master/ropsten_deposits


jakubgs commented May 26, 2022

One more thing worth verifying is the genesis that the nodes have configured for the testnet:

[email protected]:~ % for PORT in {9301..9304}; do c "0:$PORT/eth/v1/beacon/genesis" | jq -c '.data | {genesis_time,genesis_fork_version}'; done 
{"genesis_time":"1653922800","genesis_fork_version":"0x80000069"}
{"genesis_time":"1653922800","genesis_fork_version":"0x80000069"}
{"genesis_time":"1653922800","genesis_fork_version":"0x80000069"}
{"genesis_time":"1653922800","genesis_fork_version":"0x80000069"}

Which indeed matches the config provided:

# Monday, May 30th, 2022 3:00:00 PM +UTC
MIN_GENESIS_TIME: 1653318000
GENESIS_FORK_VERSION: 0x80000069

https://github.com/eth-clients/merge-testnets/blob/0c0b3509395451b529ab655ec37a66dcc8b52ec8/ropsten-beacon-chain/config.yaml#L8-L10
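
Note that the reported genesis_time is MIN_GENESIS_TIME plus the one-week GENESIS_DELAY from the same config (assuming GENESIS_DELAY: 604800), which lands exactly on the May 30th launch date:

echo $(( 1653318000 + 604800 ))   # 1653922800
date -u -d @1653922800            # Mon May 30 15:00:00 UTC 2022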

You can read about genesis files here:


Marudny commented May 26, 2022

Zahary has generated the Ropsten validators: https://github.com/status-im/nimbus-private/tree/master/ropsten_deposits

Validators have been deployed in the requested fashion (2500 validators per instance).

b1545a6


jakubgs commented May 26, 2022

Looks good:

[email protected]:~ % for PORT in {9201..9204}; do c "0:$PORT/metrics" | grep '^validators '; done
validators 2500.0
validators 2500.0
validators 2500.0
validators 2500.0
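
A quick way to confirm the totals add up to the 10k validators we were assigned (same metrics endpoints, with curl assumed in place of the c wrapper):

for PORT in {9201..9204}; do
  curl -s "http://localhost:$PORT/metrics" | awk '/^validators /{print $2}'
done | paste -sd+ - | bc
# 10000.0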


jakubgs commented May 27, 2022

We will need a TTD fix that was added recently: eth-clients/merge-testnets@dd5e025

It has been included in the stable branch, but not yet in unstable, which is what the nodes run:

For now I just manually edited the systemd services to include this flag:

[email protected]:~ % s cat beacon-node-ropsten-unstable-01.service | grep terminal
    --terminal-total-difficulty-override=100000000000000000000000

And disabled Ansible changes with the toggle script.
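
Roughly what that manual step amounts to (a sketch; the unit file location is an assumption):

for i in 01 02 03 04; do
  grep terminal /etc/systemd/system/beacon-node-ropsten-unstable-$i.service
done
sudo systemctl daemon-reload
for i in 01 02 03 04; do
  sudo systemctl restart beacon-node-ropsten-unstable-$i.service
done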


jakubgs commented May 30, 2022

Looking good:

image

image

Though I don't know if there's any dashboard for Ropsten.


jakubgs commented Jun 1, 2022

Appears to be running fine:

image

I consider this done.

@jakubgs jakubgs closed this as completed Jun 1, 2022

jakubgs commented Jun 8, 2022

Turns out the original description of the task was incorrect. We need a dedicated Geth node for each Nimbus beacon node instance; otherwise we'd have multiple beacon nodes trying to control the same Geth instance, which won't work.

The alternatives are:

  • Get more storage, since a Ropsten snap sync is 150 GB, and create 3 more Geth instances on the nimbus.ropsten host.
    • The issue with this is that getting more storage on Hetzner might be fast or very slow, depending on their load.
  • Get rid of the other 3 beacon chain nodes and move all validators to just one instance.
    • Simple and quick to do, but I'm not sure what value we lose by not running multiple nodes.

@zah what do you think?

@jakubgs jakubgs reopened this Jun 8, 2022

jakubgs commented Jun 8, 2022

Right now we have 384 GB on the nimbus.ropsten Hetzner host:

[email protected]:~ % df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        436G   30G  384G   8% /
[email protected]:~ % sudo du -hsc /data/*
4.1G	/data/beacon-node-ropsten-unstable-01
4.1G	/data/beacon-node-ropsten-unstable-02
4.1G	/data/beacon-node-ropsten-unstable-03
4.1G	/data/beacon-node-ropsten-unstable-04
17G	total

Each beacon node takes up about 4 GB, so at most we could fit two Geth instances on this host, and even that wouldn't last long.


jakubgs commented Jun 8, 2022

@zah made a decision to temporarily use just one beacon chain node with all validators, and split it up after the merge.

jakubgs added a commit that referenced this issue Jun 8, 2022
We can't have multiple nodes controlling a single Geth instance.
This will lead to unpredictable behavior on the execution layer.

#97

Signed-off-by: Jakub Sokołowski <[email protected]>

jakubgs commented Jun 8, 2022

I've fixed the layout: fbc939ae

nodes_layout:
  'metal-01.he-eu-hel1.nimbus.ropsten':
    # FIXME: Temporary layout change due to one Geth instance.
    - { start: 0, end: 10000 }
    - { }
    - { }
    - { }

[email protected]:~ % for PORT in {9201..9204}; do c "0:$PORT/metrics" | grep '^validators '; done
validators 10000.0
validators 0.0
validators 0.0
validators 0.0

Once the merge happens we can try to get extra storage and spread out the validators again.


jakubgs commented Jun 8, 2022

There's not much time left until the merge:

 > c whendoesropstenmerge.goerli.net | grep remaining
remaining blocks: 2290
remaining time: 26632 seconds
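
That remaining time works out to about 7 hours 23 minutes:

echo "$(( 26632 / 3600 ))h $(( (26632 % 3600) / 60 ))m"   # 7h 23m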


jakubgs commented Jun 8, 2022

Apparently, according to the docs:

On the other hand, generic many-to-one Consensus Layer to Execution Layer configurations are not supported out-of-the-box. The Execution Layer, by default, only supports one chain head at a time and thus has undefined behavior when multiple Consensus Layers simultaneously control the head. The Engine API does work properly, if in such a many-to-one configuration, only one Consensus Layer instantiation is able to write to the Execution Layer's chain head and initiate the payload build process (i.e. call engine_forkchoiceUpdated ), while other Consensus Layers can only safely insert payloads (i.e. engine_newPayload) and read from the Execution Layer.

And according to @tersec:

In reality, I think the system won't explode right away, but some erratic behavior will be observed
in theory, if, and only if, they all stay on the same head, then it shouldn't be too bad
the problems will come more acutely if they disagree on the head and keep switching the EL client's head around
i.e. probably it will manifest as a kind of fragility -- if things are going well, should be mostly okay. If things aren't, they'll go worse than before

jakubgs added a commit to status-im/infra-role-geth that referenced this issue Jun 8, 2022

jakubgs commented Jun 8, 2022

Looks like the merge failed because we were missing the engine API module in the Geth configuration:

{
  "lvl": "DBG",
  "ts": "2022-06-08 16:08:59.069+00:00",
  "msg": "{\"code\":-32601,\"message\":\"the method engine_newPayloadV1 does not exist/is not available\"}"
}

I've enabled it explicitly for the Ropsten Geth instance, but I also enabled it by default in the Geth Ansible role:

  • 427ef2a3 - nimbus-geth-ropsten: add required engine API module
  • 4246694d - enable required for merge engine API module
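
A quick way to confirm the module is actually exposed after the change (assuming rpc_modules goes through the same wrapper used above):

/docker/nimbus-ropsten/rpc.sh rpc_modules
# the result should now list "engine" alongside eth, net, web3, ...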


jakubgs commented Jun 8, 2022

We really need some kind of validation process to check if the setup is correct before the next testnet merge.


tersec commented Jun 8, 2022

We have something which just needs minor adaptation: https://github.com/status-im/nimbus-eth2/blob/stable/scripts/test_merge_vectors.nim


jakubgs commented Jun 9, 2022

@tersec And that has to be built separately and called separately from the node?

Any reason why it can't be part of the node and the check be triggered via API?


tersec commented Jun 9, 2022

That would currently have to be built separately, yes. There's even an engine API call for exactly this (https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md#engine_exchangetransitionconfigurationv1) which kiln-dev-auth implements.

So one option is to switch to kiln-dev-auth and monitor for that message. Nimbus will call this once every minute as soon as it's in the Bellatrix fork (merge or not):
https://github.com/status-im/nimbus-eth2/blob/e688d0936baa37e91fe35e7be9ddf53a5518d16d/beacon_chain/nimbus_beacon_node.nim#L1319-L1329

proc onMinute(node: BeaconNode) {.async.} =
  ## This procedure will be called once per minute.
  # https://github.com/ethereum/execution-apis/blob/v1.0.0-alpha.8/src/engine/specification.md#engine_exchangetransitionconfigurationv1
  if  not node.eth1Monitor.isNil and
      node.currentSlot.epoch >= node.dag.cfg.BELLATRIX_FORK_EPOCH:
    try:
      # TODO could create a bool-succeed-or-not interface to this function
      await node.eth1Monitor.exchangeTransitionConfiguration()
    except CatchableError as exc:
      debug "onMinute: exchangeTransitionConfiguration failed",
        error = exc.msg

unstable should support the same API call soon.


tersec commented Jun 9, 2022

Also if there's a mismatch:

  let ecTransitionConfiguration =
    await p.dataProvider.web3.provider.engine_exchangeTransitionConfigurationV1(
      ccTransitionConfiguration)
  if ccTransitionConfiguration != ecTransitionConfiguration:
    # TODO is this the right place to log?
    info "exchangeTransitionConfiguration: transition configuration mismatch",
      ccTerminalTotalDifficulty =
        ccTransitionConfiguration.terminalTotalDifficulty,
      ccTerminalBlockHash =
        ccTransitionConfiguration.terminalBlockHash,
      ccTerminalBlockNumber =
        ccTransitionConfiguration.terminalBlockNumber.uint64,
      ecTerminalTotalDifficulty =
        ecTransitionConfiguration.terminalTotalDifficulty,
      ecTerminalBlockHash =
        ecTransitionConfiguration.terminalBlockHash,
      ecTerminalBlockNumber =
        ecTransitionConfiguration.terminalBlockNumber.uint64


jakubgs commented Jun 15, 2022

I think we can close this.

@jakubgs jakubgs closed this as completed Jun 15, 2022
@jakubgs jakubgs self-assigned this Nov 17, 2023