CBCification #701
Comments
Thank you for writing this! This is a really beautiful proposal.

(1) About

(2) About bitwise LMD GHOST under validator rotation, what is the rationale of the

(3) About slashing, I don't understand the problem here. The

Also, In

Thank you in advance 🙏
Yes.
Yes.
I think you mean LMD_GHOST_LOOKBACK slots, not 1/LMD_GHOST_LOOKBACK slots. But yes.
The rationale for having the slack is the same as it was before: to account for the possibility that the ancestor of a block is actually valid even if it does not look valid from the point of view of the current validator set, because the validator set changed in the meantime. The // 256 is there to account for the fact that the data structure above uses 256 items per block height.
Definitely an interesting idea! Though the "several" would be big; eg. if N = 1 day (probably the best we can do for usability reasons) and LMD_GHOST_LOOKBACK = 8 months that's still ~240 times.
The problem is that it's not guaranteed that the block at H2 is fully available. The body of that particular block could be missing, making the slashing unprovable, as the information of the
Yep! Typo
I think that actually does need to be [i]. The idea is that we are checking that the balance that forks off from the chain at exactly height i does not exceed the balance that stays in the chain at height i+1.
Thank you for replying!
Yeah, thanks.
Ah, so I assumed the second approach you described: a block is not valid at the moment unless the referenced ("justified") blocks are fully available. Is there any advantage to the challenge-response scheme? In the first place, I'm also assuming that every block includes attestations for off-chain blocks or points to off-chain blocks themselves as CBC's "justification" so that receivers can verify
Hmm, so I assumed that

Other minor corrections:
The challenge-response scheme allows us to keep the changes purely within the block validation rules, without changing the block acceptance rules at all. Not sure how valuable this is, though.
Even in the current ethereum, every block includes attestations for off-chain blocks.
Another way to look at the problem is, suppose
@vbuterin Source: in
Without modifying this, liveness will be lost in cases where some protocol-following validators end up in a situation where their previous attestations are not included in the main chain for one epoch (due to adaptive corruption of the block proposer or a temporary network failure), and hence they cannot justify their previous attestations. In the first place, the rationale for the limit on attestation inclusion is to set an upper bound on the cost of block verification, right?
Closing this issue because the details are becoming increasingly stale; the discussion can continue on ethresear.ch or hackmd. CBCification is still part of the 5-10 year roadmap, likely after we've nailed the execution model in phase 2.
This issue is intended as an illustration of the concrete spec changes that would be required to transition the beacon chain from its current FFG-based spec (see the mini-spec) to CBC (reading material here, here, here, and here).
This is still a work in progress, though it is much more concrete than previous versions of this doc.
Fork choice rule changes
The fork choice rule changes from the current "favor the highest justified checkpoint, then use LMD GHOST" to "use bitwise LMD GHOST", using the validator balances as of each epoch during that epoch.
A new helper:
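As an illustrative sketch only (the name get_ancestor, the store interface, and the recursion below are assumptions rather than necessarily the helper intended here), a fork choice of this kind needs a way to look up a block's ancestor at a given slot:

```python
def get_ancestor(store, block, slot):
    # Walk back through parents until reaching the requested slot.
    # Assumes store.get_parent(block) returns the parent block and that
    # blocks expose a .slot field; both are assumptions of this sketch.
    if block.slot == slot:
        return block
    elif block.slot < slot:
        return None
    else:
        return get_ancestor(store, store.get_parent(block), slot)
```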
Here is the new fork choice rule:
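A hedged sketch of what the bitwise variant can look like, loosely following the bitwise LMD GHOST algorithm described at https://ethresear.ch/t/bitwise-lmd-ghost/4749 (linked again further below); all helper names here (get_latest_messages, get_children, is_ancestor, get_effective_balance, get_bit, hash_tree_root) are assumptions:

```python
def bitwise_lmd_ghost(store, start_block, state):
    # Latest attested block per validator, weighted by that validator's
    # balance in the current epoch. Every helper used below is an
    # assumption of this sketch, not taken from the spec.
    latest_messages = get_latest_messages(store, state)
    head = start_block
    while True:
        children = get_children(store, head)
        if len(children) == 0:
            return head
        # For each latest message descending from `head`, record which child
        # it supports, identified by that child's hash
        supporting = {}  # validator_index -> child hash
        for validator_index, target in latest_messages.items():
            for child in children:
                if is_ancestor(store, child, target):
                    supporting[validator_index] = hash_tree_root(child)
        if len(supporting) == 0:
            return head
        # Choose the winning child hash one bit at a time: at each position,
        # keep only the messages whose supported hash carries the heavier bit
        live = dict(supporting)
        for bit in range(256):
            if len(set(live.values())) == 1:
                break
            ones = sum(get_effective_balance(state, v)
                       for v, h in live.items() if get_bit(h, bit) == 1)
            zeros = sum(get_effective_balance(state, v)
                        for v, h in live.items() if get_bit(h, bit) == 0)
            winner = 1 if ones >= zeros else 0
            filtered = {v: h for v, h in live.items() if get_bit(h, bit) == winner}
            if len(filtered) > 0:
                live = filtered
        winning_hash = next(iter(live.values()))
        head = next(child for child in children
                    if hash_tree_root(child) == winning_hash)
```

The point of the bitwise variant is that the winning child is selected hash-bit by hash-bit, weighted by balance, rather than child by child.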
But for CBC purposes, we need to have LMD GHOST not just be a suggestion, but rather an enforced rule; that is, a block is not valid unless its parent is the result of executing the LMD GHOST fork choice rule given all the evidence the block knows about. We need to keep track of some extra data to make it possible to verify this.
State changes
In the state, we make the following changes:
- Replace the validator_balances list with a validator_volatile_data list consisting of an object ValidatorVolatileData = {fractional_balance: uint32, last_agreement_height: uint32, last_agreement_bits: uint8, last_at_height: uint32}.
- Extend latest_block_roots to last one year (ie. ~2**22 entries).
- Add off_chain_block_hashes = List[HashRecord], where HashRecord = {"hash": "bytes32", "agreement_height": "uint32", "agreement_bits": "uint8"}.
- Add balance_agreeing_upto: List[List[int]], initialized as [[] for _ in range(LMD_GHOST_LOOKBACK)].
- Add balance_at: List[int], initialized as [0 for _ in range(LMD_GHOST_LOOKBACK)].
- In the ValidatorRecord, add: activation_init_epoch and exit_init_epoch (both initialized to FAR_FUTURE_EPOCH), and remove exit_initiated.

We add a new helper:
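One plausible form for such a helper, given that agreement_bits fields appear in both ValidatorVolatileData and HashRecord, is a counter of how many leading bits two hashes share; the name get_agreement_bits and its use are assumptions:

```python
def get_agreement_bits(hash1: bytes, hash2: bytes) -> int:
    # Number of leading bits (0..256) on which two 32-byte hashes agree.
    for i in range(256):
        bit1 = (hash1[i // 8] >> (7 - i % 8)) & 1
        bit2 = (hash2[i // 8] >> (7 - i % 8)) & 1
        if bit1 != bit2:
            return i
    return 256
```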
We add a new method for submitting an off-chain block header:
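One hedged sketch of what such a method might do, assuming the HashRecord fields defined above and hypothetical helpers hash_tree_root, get_block_root_at and get_agreement_bits; the exact definition of the recorded agreement is a guess:

```python
def submit_off_chain_block_header(state, header):
    # Hedged sketch: record the off-chain header's hash along with how far it
    # agrees with the chain known to this state. Treating the header's own
    # slot as the agreement height is an assumption of this sketch.
    header_hash = hash_tree_root(header)
    agreement_height = header.slot
    canonical_root = get_block_root_at(state, agreement_height)
    agreement_bits = get_agreement_bits(header_hash, canonical_root)
    state.off_chain_block_hashes.append(HashRecord(
        hash=header_hash,
        agreement_height=agreement_height,
        agreement_bits=agreement_bits,
    ))
```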
Entries in off_chain_block_hashes are removed if their agreement height // 256 is < current slot - 2**22 (ie. after ~1 year).
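For concreteness, a minimal sketch of the pruning rule just stated; the function name and where in slot processing it runs are assumptions, while the comparison itself is taken from the sentence above:

```python
def prune_off_chain_block_hashes(state, current_slot):
    # Keep only records whose agreement_height // 256 is still within the
    # ~1-year (2**22 slot) window; drop everything older.
    state.off_chain_block_hashes = [
        record for record in state.off_chain_block_hashes
        if record.agreement_height // 256 >= current_slot - 2**22
    ]
```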
We add three helpers:

Here is the function for processing a set of attestations:
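A heavily hedged outline of the bookkeeping this function presumably performs; compute_agreement, get_attesting_indices, subtract_from_buckets and add_to_buckets are hypothetical helpers standing in for whatever the full spec would define:

```python
def process_cbc_attestations(state, attestations):
    for attestation in attestations:
        # Where, and by how many bits, the attested chain agrees with ours
        # (hypothetical helper; see the agreement sketch above)
        agreement_height, agreement_bits = compute_agreement(state, attestation.data)
        for index in get_attesting_indices(state, attestation):
            data = state.validator_volatile_data[index]
            # Only a validator's latest message counts for LMD GHOST
            if attestation.data.slot <= data.last_at_height:
                continue
            # Move the validator's balance from its old agreement position
            # in balance_agreeing_upto / balance_at to the new one
            subtract_from_buckets(state, data)
            data.last_agreement_height = agreement_height
            data.last_agreement_bits = agreement_bits
            data.last_at_height = attestation.data.slot
            add_to_buckets(state, data)
```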
Here is the function to call every slot for every validator whose rounded balance gets adjusted:
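Again only a hedged sketch: the assumption is that when a validator's rounded balance changes, the totals recorded at the position of their latest message must change by the same amount; the indexing into balance_agreeing_upto and balance_at is a guess:

```python
def on_rounded_balance_change(state, validator_index, old_rounded, new_rounded):
    # Keep balance_agreeing_upto and balance_at consistent with the
    # validator's new rounded balance.
    data = state.validator_volatile_data[validator_index]
    delta = new_rounded - old_rounded
    # Indexing buckets by height modulo the lookback window is an assumption
    agree_bucket = state.balance_agreeing_upto[data.last_agreement_height % LMD_GHOST_LOOKBACK]
    # Extend the per-bit list if this bit position has not been used yet
    while len(agree_bucket) <= data.last_agreement_bits:
        agree_bucket.append(0)
    agree_bucket[data.last_agreement_bits] += delta
    state.balance_at[data.last_at_height % LMD_GHOST_LOOKBACK] += delta
```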
Here is the function to check if a block is valid under the CBC validity condition; this should be run after processing attestations:
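A sketch of this check reconstructed from the worked example that follows; the exclusion of the newest height and the way balance_at is counted are assumptions chosen to match that example (BITS_PER_HASH would be 256 in practice, 3 in the example):

```python
BITS_PER_HASH = 256  # 3 in the worked example below

def check_cbc_validity(balance_agreeing_upto, balance_at, offset):
    # offset = block.slot % LMD_GHOST_LOOKBACK: rotate so that the oldest
    # height in the lookback window comes first
    rotated = balance_agreeing_upto[offset:] + balance_agreeing_upto[:offset]
    rotated_at = balance_at[offset:] + balance_at[:offset]
    # Flatten, padding each height out to one entry per hash bit
    flattened, at_flat = [], []
    for height, row in enumerate(rotated):
        flattened.extend(row + [0] * (BITS_PER_HASH - len(row)))
        at_flat.extend([rotated_at[height]] * BITS_PER_HASH)
    # Partial sums from the end: sums[i] = balance agreeing up to position i or later
    sums = [0] * (len(flattened) + 1)
    for i in range(len(flattened) - 1, -1, -1):
        sums[i] = sums[i + 1] + flattened[i]
    # The balance forking off at exactly position i must not exceed the balance
    # staying with the chain past i (plus, on this reading, the balance
    # attesting at that same position). Skipping the newest height is an
    # assumption made to match the worked example.
    for i in range(len(flattened) - BITS_PER_HASH):
        if flattened[i] > sums[i + 1] + at_flat[i]:
            return False
    return True
```

Run on the example's numbers (offset 1, BITS_PER_HASH = 3, and taking balance_at as all zero), this reproduces the flattened list and partial sums given below and rejects the block at the 40, since 40 exceeds the remaining 30 unless at least 10 of balance sits at that same position.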
For example, suppose LMD_GHOST_LOOKBACK = 4 and block.slot % LMD_GHOST_LOOKBACK = 1 and agreement_height = [[10, 20], [30, 80, 50], [], [40]], where instead of 256 bits every hash has three bits. Then, agreement_height rotated would be [[30, 80, 50], [], [40], [10, 20]], the flattened version is [30, 80, 50, 0, 0, 0, 40, 0, 0, 10, 20, 0] and the partial sums are [230, 200, 120, 70, 70, 70, 70, 30, 30, 30, 20, 0] - which may be invalid unless there are >= 10 votes at the same position as the 40.

Note that as a matter of efficient implementation, the sums need not be recalculated each time; nodes can maintain a sum tree locally (see the description of sum trees in https://ethresear.ch/t/bitwise-lmd-ghost/4749) and use the binary search algorithm to verify correctness. One can compute a rotated sum tree from an unrotated sum tree in real time as follows: for a rotation by r of a list of length L, rotated_sum[i] = sum[i] - sum[r] if i <= r else sum[i] + sum[0] - sum[r]. One can also generally avoid storing most of the zeroes in the high-order bits of each set of 256, and one can avoid a linear pass to verify compliance by doing a repeated binary search to find each position where the remaining sum drops below half of the previous remaining sum.

Slashing condition enforcement
We only need two slashing conditions. The second covers a validator who signs a message at slot C in which their last_at_height is A, and who also signs a block with a slot number B, with A < B < C. For the first slashing condition, we can reuse code from the FFG beacon chain as is. For the second, we need to create a SlashingProof object which contains the parameters:
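Judging from the verification steps listed below, the SlashingProof container would hold roughly the following fields; the layout and the header type name are inferred, not quoted:

```python
SlashingProof = {
    # The two conflicting votes (see SlashableVoteData)
    "data1": SlashableVoteData,
    "data2": SlashableVoteData,
    # The validator being slashed
    "index": "uint64",
    # The two block headers the votes commit to
    "block_header_1": BeaconBlockHeader,
    "block_header_2": BeaconBlockHeader,
    # The ValidatorVolatileData leaf for `index` under block_header_1's
    # validator_volatile_data_root, plus its Merkle branch
    "merkle_branch_bottom": ValidatorVolatileData,
    "merkle_branch_in_volatile_data_tree": ["bytes32"],
}
```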
See here for the definition of SlashableVoteData. Verifying this would entail verifying:

- Both SlashableVoteData objects pass a verify_slashable_vote_data check
- index is part of intersection(union(data1.custody_bit_0_indices, data1.custody_bit_1_indices), union(data2.custody_bit_0_indices, data2.custody_bit_1_indices))
- verify_merkle_branch(leaf=proof.merkle_branch_bottom, branch=proof.merkle_branch_in_volatile_data_tree, index=proof.index, root=proof.block_header_1.validator_volatile_data_root)
- hash(block_header_1) == data1.data and hash(block_header_2) == data2.data
- block_header_1.slot // EPOCH_LENGTH > block_header_2.slot // EPOCH_LENGTH > merkle_branch_bottom.last_at_height // EPOCH_LENGTH
Note that this is almost but not quite sufficient. The reason is that an attacker could make a signature on the main chain at height H1, then sign on a fake off-chain block that only has a header at height H2, include that signature in the main chain and sign at height H3, then keep signing on the main chain, and then sign a message on another chain at a height between H2 and H3. The fact that the Merkle branch for height H2 is absent means that there is no way to catch and penalize the validator. We can solve this in two ways:
(1) could be extended into a general "proof of custody of beacon chain state" mechanism, which may also be useful for other reasons.
Changes to deposit/withdraw logic
We remove the exit queue and replace it with a (deposit+withdrawal) queue. That is, we set MAX_BALANCE_CHURN_QUOTIENT to equal LMD_GHOST_LOOKBACK, and instead of running through validators in order, we pre-sort the indices:
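A hedged sketch of the pre-sorting, assuming the new activation_init_epoch and exit_init_epoch fields serve as the queue key; both the key choice and the function name are assumptions:

```python
def get_presorted_indices(state):
    # Iterate validators in queue order instead of index order. The key here
    # (earliest of the two initiation epochs, so validators who have not
    # initiated anything sort last at FAR_FUTURE_EPOCH) is an assumption of
    # this sketch, not part of the original text.
    def queue_key(index):
        validator = state.validator_registry[index]
        return min(validator.activation_init_epoch, validator.exit_init_epoch)

    return sorted(range(len(state.validator_registry)), key=queue_key)
```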
Replace exit_initiated = True with exit_init_epoch = get_current_epoch(state).

Sections to remove
- process_exit_queue (simply runs prepare_validator_for_withdrawal on all validators that pass the eligible check)
check)The text was updated successfully, but these errors were encountered: