doc: Batch proving in Circuits and L1 #7

Merged: 4 commits into main on Aug 5, 2024

Conversation

@spalladino
Contributor

@alexghr left a comment


🔥

```
end_block_hash: Field, // Identifier of the last block in the range
start_global_variables: GlobalVariables, // Global variables for the first block in the range
end_global_variables: GlobalVariables, // Global variables for the last block in the range
fees: [{ recipient: Address, value: Field }; 32] // Concatenation of all coinbase and fees for the block range
```
Contributor

IIUC, this would be at most one entry per block in the tree. If two blocks were submitted by the same sequencer (same coinbase address) then presumably the values would get added together. So at the root level we can only have at most 32 fee recipients for 32 different blocks?

Contributor Author

Yep, that's correct!
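To make that concrete, here is a small Python model (illustrative only, not the actual circuit code) of how fee entries with the same recipient could be summed when two block ranges are merged, so a 32-block range never needs more than 32 `(recipient, value)` entries at the root level:

```python
# Model (not circuit code) of merging fee entries across a block range:
# values for the same recipient are summed, so 32 blocks yield at most
# 32 distinct (recipient, value) entries at the root level.
def merge_fees(left, right, max_entries=32):
    totals = {}
    for recipient, value in left + right:
        totals[recipient] = totals.get(recipient, 0) + value
    merged = sorted(totals.items())
    assert len(merged) <= max_entries  # at most one entry per block
    return merged
```

Two blocks submitted by the same sequencer collapse into a single entry, which is why the root can hold at most 32 recipients for 32 blocks.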

```
}
```

The `last*` fields are updated every time a new block is uploaded, while the `verified*` ones are updated when a proof is uploaded. In the event of a rollback due to failure on proof submission, the `last*` fields are overwritten with the contents of `verified*`.
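As a rough illustration of that bookkeeping (a Python model, not the actual `Rollup.sol`; collapsing the fields to a single archive value is a simplification):

```python
# Minimal model (not Rollup.sol) of the last*/verified* fields: block
# uploads advance last*, proof uploads advance verified*, and a failed
# proof submission rolls last* back to verified*.
class RollupState:
    def __init__(self, genesis_archive):
        self.last_archive = genesis_archive
        self.verified_archive = genesis_archive

    def upload_block(self, new_archive):
        self.last_archive = new_archive

    def upload_proof(self, proven_archive):
        # Simplified: assume the proof covers the whole unproven range.
        assert proven_archive == self.last_archive
        self.verified_archive = proven_archive

    def rollback(self):
        # Missing/failed proof: overwrite last* with verified*.
        self.last_archive = self.verified_archive
```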
Contributor

Maybe we need another set of variables to track in-flight proving, otherwise publishing the 33rd block would overwrite the last* variables in the rollup which would fail proof validation.

Probably something to think about for proving coordination; for now, having just a verified*/last* set of variables is likely enough.

Contributor Author

> otherwise publishing the 33rd block would overwrite the last* variables in the rollup which would fail proof validation

Agree! That's why I suggested moving the "prove that this block root is in the current archive tree" to Solidity instead of the root rollup circuit. The proof would have a "last block proven", and the submitter would include a merkle path from it to the current lastArchiveTreeRoot. If that root changes, then the submitter needs to generate a new merkle path instead of rebuilding the entire root proof.

To further mitigate against this race condition, we could keep the last N archive tree roots instead of just the latest.
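As a sketch of what that Solidity-side check might look like, here is a generic merkle membership verification in Python (sha256 and this path layout are stand-ins; the real archive tree uses the protocol's own hash function and shape):

```python
import hashlib

# Generic merkle membership check, as the L1 contract might perform it:
# the submitter provides a path from the last proven block root to the
# current archive tree root, and only the path needs regenerating if
# that root changes.
def hash_pair(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def verify_membership(leaf: bytes, index: int, path: list, root: bytes) -> bool:
    node = leaf
    for sibling in path:
        # Even index: node is a left child; odd index: right child.
        node = hash_pair(node, sibling) if index % 2 == 0 else hash_pair(sibling, node)
        index //= 2
    return node == root
```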

Contributor

A fairly simple approach, which I think I would take for the first version at least, would be to simply store a "list" of the archive values instead of just the tips.

This could be:

```
mapping(uint256 blockNumber => bytes32 archive) public archives;
```

Then you keep track of 2 indexes for how far we have added blocks, and how far we have proven.

This would also allow providing a proof that is not the "next", e.g., to avoid the issue where someone is delaying their proof to limit the time you have to interact. We are looking to add some of this in AztecProtocol/aztec-packages#7614.
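Modelled in Python for illustration (the real thing would be the Solidity mapping above), the two cursors could work like this:

```python
# Python model of the "list of archives" approach: every block's archive
# root is stored, plus two cursors for blocks added vs blocks proven.
class Archives:
    def __init__(self, genesis_archive):
        self.archives = {0: genesis_archive}  # block number -> archive root
        self.pending_block = 0                # highest block added
        self.proven_block = 0                 # highest block proven

    def add_block(self, archive):
        self.pending_block += 1
        self.archives[self.pending_block] = archive

    def submit_proof(self, up_to_block, archive):
        # The proof need not be "the next" one; it only has to match the
        # archive root recorded for the block it proves up to.
        assert self.archives[up_to_block] == archive
        self.proven_block = max(self.proven_block, up_to_block)
```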

Contributor Author

@LHerskind sounds good as a first approach, given that L1 gas is free!

Contributor

@iAmMichaelConnor left a comment

I haven't finished, but here's some early comments. :)
Also, I drew this picture. Would this match your understanding? https://miro.com/app/board/uXjVK4BC8Yg=/

Notice also the blobification circuit, which I think should be loosely considered as part of this proposal. If you don't want to think about actual blobs, then perhaps we could think in terms of sha256 hashes, and at least contemplate what data needs to be hashed and emitted when we reach the 'block root' rollup circuit?

Is there any data above the "block" level (i.e. at the supra-block level) that needs to be submitted to blobs? If so, I'm missing yet another blobification circuit to feed the new 'root rollup' circuit.

(I'll be doing some thinking around this too)

(Review comment on in-progress/7346-batch-proving-circuits-and-l1.md, outdated and resolved)

### Empty block root rollup

Since we no longer submit a proof per block, and thanks to @MirandaWood we now have wonky rollups, we no longer need to fill a block with "empty" txs. This means we can discard the empty private kernel circuit.
Contributor

Does this understanding align with what's written here: https://hackmd.io/@aztec-network/ByROkLKIC#Blocks-of-1-transaction ?

Contributor Author

That hackmd assumed we needed to create one proof per block, which is no longer the case, so we can replace "empty txs" with "empty blocks" in the superblock (name tbd!) proof.


### Binary vs fixed-size block merge rollup circuit

Assuming we implement a proving coordination mechanism where the block ranges proven are of fixed size (e.g. 32), then we could have a single circuit that directly consumes the 32 block root rollup proofs and outputs the public inputs expected by L1. This could be more efficient than constructing a tree of block root rollup proofs. On the other hand, it is not parallelizable, it is inflexible if we want a dynamic block range size, and it must wait until all block root rollup proofs are available to start proving.
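For intuition on the trade-off (a back-of-envelope model, not protocol code): a binary tree over n block root proofs needs about log2(n) sequential proving stages but parallelizes within each stage, while the fixed 32-ary circuit is a single stage that cannot start until all 32 input proofs exist:

```python
import math

# Sequential proving stages if block root proofs are merged pairwise
# in a binary tree: ceil(log2(n)) levels, each level parallelizable.
def binary_tree_stages(n: int) -> int:
    return math.ceil(math.log2(n))

# A single circuit consuming all 32 proofs at once is one stage,
# but it can only begin after every input proof is available.
print(binary_tree_stages(32))
```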
Contributor

I'd vote for maximum parallelisation :P

7. Insert L2-to-L1 messages into the Outbox using the `out_hash` and block number
8. Pay out `total_fees` to `coinbase` in the L1 gas token

The first five items can keep being carried out by the `process` method for each new block that gets submitted. Proof verification, L2-to-L1 messages, and fee payment messages are moved to `submitProof`.
Contributor

The process name is unclear to me. Would this be better described as submitBlockProposal?
If so:

- I think perhaps 2 (which will be calling the blob point eval precompile on some earlier-submitted blob) should be deferred until the proof is being verified.
- The L1-to-L2 messages shouldn't be removed from the L1 box until the proof has been verified (because otherwise the copying of those messages to L2 might be rolled back if the proof never comes, and therefore a future block will need to be able to copy those messages).
- Validate the new block header: I thought we didn't want to submit potentially malicious state roots at the time of proposing a block? (and hence shouldn't be submitting new block headers to L1 at this stage)

Contributor Author

> The process name is unclear to me. Would this be better described as submitBlockProposal?

Depends on what we define as a "block proposal". Naming strikes again! If you mean "a block that is part of the unproven chain", then yeah.

> I think perhaps 2 (which will be calling the blob point eval precompile on some earlier-submitted blob) should be deferred until the proof is being verified.

Blobification is not yet clear to me, but yeah, we can defer that until the proof is submitted and verified if it makes sense. FWIW there's a step missing here that is checking the attestations from the validator committee, but that'll come in later.

> The L1-to-L2 messages shouldn't be removed from the L1 box until the proof has been verified

Good point! Do you know if we'd need to soft-delete them at least, in order to prevent them from being consumed in two unproven blocks in a row? Or is there another mechanism to prevent this?

> Validate the new block header: I thought we didn't want to submit potentially malicious state roots at the time of proposing a block? (and hence shouldn't be submitting new block headers to L1 at this stage)

This proposal is assuming we're still uploading tx effects, but yeah, this may change if/when we switch to tx objects.


> The L1-to-L2 messages shouldn't be removed from the L1 box until the proof has been verified

Was wondering about the same thing! I think we need to restore toConsume (but not isProgress) in the inbox to the point where verifiedArchiveTreeRoot was set when a proof fails.

Was also considering deferring l1_to_l2_roots to be proven in the final root rollup and consuming the root in the inbox from submitProof. But this would result in a bigger delay before users can make use of an l1_to_l2_msg, so maybe not a good idea.

> This proposal is assuming we're still uploading tx effects, but yeah, this may change if/when we switch to tx objects.

What's tx objects?

Contributor Author

> What's tx objects?

I've also seen them referred to as "tx transcripts". It's the tx as submitted to the chain by the client, ie without the side effects from public execution, but with the public execution requests.

@spalladino
Contributor Author

@iAmMichaelConnor that picture makes sense, yep! At first glance, blobification makes sense, and yeah, it should happen at the "block root" level.

> Is there any data above the "block" level (i.e. at the supra-block level) that needs to be submitted to blobs? If so, I'm missing yet another blobification circuit to feed the new 'root rollup' circuit.

Not that I can think of. The only things that happen at the root-root level are fee payment and L2-to-L1 messages, neither of which need blobspace I believe.

Contributor

@PhilWindle left a comment

Looks sensible to me. Definitely needs input from the other teams however.

@LHerskind mentioned this pull request on Jul 23, 2024

The first four items can keep being carried out by the `process` method for each new block that gets submitted. Proof verification, L2-to-L1 messages, and fee payment messages are moved to `submitProof`. Data availability needs more clarity based on the interaction between the blob circuits and the point evaluation precompile, but it may be moved entirely to `submitProof`.

As for consuming L1-to-L2 messages, note that these cannot be fully deleted from the Inbox, since we need to be able to rollback unproven blocks in the event of a missing proof, which requires being able to consume those messages again on the new chain.
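A toy Python model of that idea: rather than deleting consumed messages, keep the message-tree roots indexed by block number and only advance a pointer, which can be rewound to just past the last verified block when a proof fails. Names here are illustrative, not the actual Inbox API:

```python
# Toy model of a rollback-safe Inbox: message tree roots are kept per
# block and never deleted; consumption just advances a pointer.
class Inbox:
    def __init__(self):
        self.trees = {}        # block number -> L1-to-L2 message tree root
        self.to_consume = 1    # next block whose messages will be consumed

    def consume(self):
        root = self.trees.get(self.to_consume)
        self.to_consume += 1
        return root

    def rewind(self, last_verified_block):
        # Unproven blocks are rolled back: their messages must be
        # consumable again by the new chain.
        self.to_consume = last_verified_block + 1
```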
Contributor

L1-to-L2 messages don't need to be deleted from the inbox; they are not really deleted there, we just progress which tree we are reading from. It still needs to perform some action at the "submitBlock", but we can get this fairly easily: AztecProtocol/aztec-packages#7592

@just-mitch
Collaborator

@spalladino @MirandaWood @LeilaWang Is this one being implemented? Shall we merge the PR?

@spalladino
Contributor Author

@just-mitch Miranda has already implemented most of it. I was waiting on approval from @LeilaWang and @LHerskind, but I believe we can merge anyway.

@LeilaWang

I'll look at the PR now!

@LeilaWang

Oh do you mean this doc pr? 😅

@just-mitch merged commit a48a6f1 into main on Aug 5, 2024
@just-mitch
Collaborator

> Oh do you mean this doc pr? 😅

yep!

Sounds good! Thanks all!

MirandaWood added a commit to AztecProtocol/aztec-packages that referenced this pull request Aug 23, 2024
First run at creating new rollup circuits for batch block proving (see
[this PR](AztecProtocol/engineering-designs#7)
for details).

~Please note the e2e tests will fail miserably as the circuits are not
yet linked up to the sequencer/prover/L1! Pushing for visibility.~

EDIT: Added support for verifying block-root proofs on L1. Though we
don't currently have an L1 verifier (so tests would pass whatever public
inputs we had), the method now accepts the new inputs until we have
batch rollups integrated.

---

Changes complete:

- Rename `root` to `block_root` and change outputs
- Add `block_merge` circuit and associated types/structs
- Add new `root` circuit and associated types/structs (NB Github doesn't
realise that old root -> block_root because of this new circuit, so the
comparison is hard to read!)
- Added new types ^ to `circuits.js` and useful methods to `bb-prover`,
`circuit-types`, and `noir-protocol-circuits-types`
- Made minor changes to `prover-client` (`orchestrator.ts` and
`block-building-helpers.ts`) to use the new `block_root` public outputs
- `Rollup.sol` now verifies a `block_root` proof and stores `blockHash`

---
TODOs:

- When adding fees in a `block_merge` or `root`, merge fees with the
same recipient - Miranda
- ~Edit publisher and L1 to accept a `block_root` proof with new public
inputs (for testing, so e2es will pass)~ Complete!
- Teach the prover/sequencer to prove many blocks and submit a `root`
proof - Miranda + Phil's team?
- ~Make final L1 changes to verify batch proofs~ - Complete! Currently
not tested with real solidity verifier, but bb verifier passes