
update to twenty-first v0.18 #184

Merged — merged 3 commits into master on Mar 3, 2023
Conversation

@jan-ferdinand (Member) commented Mar 2, 2023

  • Use a persistent Sponge state for the Fiat-Shamir transform.
  • Use Tip5 for building Merkle trees and other STARK hashing.
  • Add debug functionality for identifying failing constraints.
  • Improve code readability in various places.

@jan-ferdinand jan-ferdinand marked this pull request as ready for review March 3, 2023 14:05
})
}

fn encode_and_zero_pad_item(item: &Item) -> Vec<BFieldElement> {
Collaborator:

There is no 1 being appended before the 0s are appended. Shouldn't there be?

Member Author:

The idea is to simplify Fiat-Shamir for the recursive verifier, which always absorbs all currently available elements and doesn't keep “stray tails” around until the next RATE many elements are reached. If need be, the difference is filled with 0s.

Collaborator:

I think the 1 should be inserted there. Otherwise you might have two different items that generate the same encoded and zero-padded string, and thus the same Fiat-Shamir challenge.

Member Author:

This also simplifies the padding for the variable length hashing greatly, because that is always the vector [1, 0, …, 0]. However, that padding is not yet happening anywhere and must be added.

Member Author:

> I think the 1 should be inserted there. Otherwise you might have two different items that generate the same encoded and zero-padded string, and thus the same Fiat-Shamir challenge.

That also works; whenever any element is absorbed, we act as if it might be the last one, and are thus ready for sampling indices or scalars.

Member Author:

However, it might be more efficient to get an explicit call like “.fiat_shamir()” which appends the padding for variable length input, and only include the 1 then. This means that the recursive verifier does not have to spend cycles on figuring out what the correct amount of padding is.

Collaborator:

As I see it there are three options:

  1. Append 1 and 0s after enqueuing or dequeuing any ProofItem, thereby being prepared for Fiat-Shamir even if that happens not to be the next step.
  2. Store the elements in a separate queue and whenever the queue has rate-many elements or more, absorb them away. When Fiat-Shamir happens, append the 1 and 0s and flush the queue by absorbing all of it before squeezing.
  3. Pad with zeros to absorb all ProofItems as they are enqueued or dequeued, and just squeeze to get the Fiat-Shamir response. In this approach, it is immaterial whether a 1 was appended prior to squeezing out the Fiat-Shamir challenge. This approach can only be secure under certain assumptions on the serializer for ProofItems.

Imposing constraints for security on ProofItems' serializers seems sloppy and prone to error to me, so (3) is out. Between (1) and (2) I think (2) has fewer invocations of the hash function but at the cost of more bookkeeping. It's not obvious to me that (1) results in a worse recufier cycle count; I think that argument might go either way. For now, I would suggest the simpler option which in my eyes is (1).

Member Author:

Option 4: same as option 3, but with an explicit absorption of vector [1, 0, …, 0] to mark the end of en- or de-queueing.

> it is immaterial whether a 1 was appended prior to squeezing

I don't see why this holds.

Collaborator:

If I understand option 4 correctly:

  • The string [0] will be padded with zeros to [0,0,0,0,0,0,0,0,0,0], then absorbed setting the sponge state to X. Then calling Fiat-Shamir will absorb [1,0,0,0,0,0,0,0,0,0] followed by a squeeze resulting in output Y.
  • The string [0, 0] will be padded with zeros to [0,0,0,0,0,0,0,0,0,0], then absorbed setting the sponge state to X. Then calling Fiat-Shamir will absorb [1,0,0,0,0,0,0,0,0,0] followed by a squeeze resulting in output Y.

There are therefore two distinct strings that generate the same Fiat-Shamir challenge. This is either insecure, or else it is secure due to some analysis of the possible sequences of field elements that could be enqueued or dequeued.
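The collision can be made concrete with a small sketch. This is a toy model, not the repository's actual code: `RATE = 10` and plain `u64` stand-ins for `BFieldElement` are assumptions made for illustration.

```rust
const RATE: usize = 10;

// Pad with 0s only, up to the next multiple of RATE (at least one block).
// This models option 4's absorb-time padding, without the missing 1-marker.
fn zero_pad(input: &[u64]) -> Vec<u64> {
    let mut padded = input.to_vec();
    let blocks = (input.len() + RATE - 1) / RATE;
    padded.resize(blocks.max(1) * RATE, 0);
    padded
}

fn main() {
    // Two distinct input strings produce identical absorbed strings,
    // and therefore identical Fiat-Shamir challenges.
    assert_eq!(zero_pad(&[0]), zero_pad(&[0, 0]));
    println!("collision: {:?}", zero_pad(&[0]));
}
```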

Member Author:

Ah, true. Alright, let's roll with option 1 and see if it requires further optimization down the line.
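Option (1) can be sketched as follows. This is a hedged toy model, not the crate's real API: `RATE = 10` and `u64` stand-ins for `BFieldElement` are assumptions.

```rust
const RATE: usize = 10;

// Option (1): after encoding an item, append a 1 as an end-of-item marker,
// then 0s up to the next multiple of RATE, so the sponge is always ready
// for a squeeze. Distinct encodings now pad to distinct strings.
fn encode_and_pad(encoding: &[u64]) -> Vec<u64> {
    let mut padded = encoding.to_vec();
    padded.push(1); // domain separator marking the end of the item
    while padded.len() % RATE != 0 {
        padded.push(0);
    }
    padded
}

fn main() {
    // The collision from the discussion above is gone:
    assert_ne!(encode_and_pad(&[0]), encode_and_pad(&[0, 0]));
    // Output length is always a multiple of RATE.
    assert_eq!(encode_and_pad(&[0]).len(), 10);
}
```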

@aszepieniec (Collaborator):

I wonder if now is a good time to mark some ProofItems as not for hashing, for the purpose of Fiat-Shamir. This applies to all ProofItems that are computationally uniquely determined by another ProofItem. For example, a Merkle authentication path does not need to be hashed, whereas a Merkle root does.

@jan-ferdinand (Member Author) commented Mar 3, 2023

It might be a good time, yes. Have you already spent brain cycles on potential realizations of this idea or should I start from scratch?

@aszepieniec (Collaborator):

> Have you already spent brain cycles on potential realizations of this idea or should I start from scratch?

I would add a bool to enqueue and dequeue that activates the sponge update. If the sequence of true/false in the prover does not match that in the verifier, then prover and verifier cannot possibly agree on the Fiat-Shamir challenges. So the proof system's completeness enforces the consistent marking.
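The proposed flag could look roughly like this. All names and types here are assumptions for illustration; the sponge is modeled as a plain transcript vector rather than a real Tip5 state.

```rust
// Toy model of the proposed bool on enqueue: only items flagged for
// Fiat-Shamir update the sponge, modeled here as a running transcript.
#[derive(Default)]
struct ProofStream {
    items: Vec<Vec<u64>>,
    transcript: Vec<u64>, // stand-in for the sponge state
}

impl ProofStream {
    fn enqueue(&mut self, item: Vec<u64>, include_in_fiat_shamir: bool) {
        if include_in_fiat_shamir {
            self.transcript.extend(&item); // "absorb" the item
        }
        self.items.push(item);
    }
}

fn main() {
    let mut stream = ProofStream::default();
    stream.enqueue(vec![7, 7, 7], true);  // e.g. a Merkle root: hashed
    stream.enqueue(vec![1, 2, 3], false); // e.g. an authentication path: not hashed
    assert_eq!(stream.items.len(), 2);
    assert_eq!(stream.transcript, vec![7, 7, 7]);
}
```

If the prover's and verifier's true/false sequences disagree, their transcripts diverge and so do their challenges, which is exactly the completeness-enforced consistency described above.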

@jan-ferdinand jan-ferdinand merged commit 2a0800f into master Mar 3, 2023
@jan-ferdinand jan-ferdinand deleted the tf_v0.18 branch March 3, 2023 15:51
jan-ferdinand added a commit that referenced this pull request Mar 10, 2023
- change instructions `read_mem` and `write_mem` (#179) (022245b)
    - `read_mem` now pushes to stack instead of overwriting
    - `write_mem` now pops the written element from stack
- move to Tip5 hash function (#161) (#182) (d40f0b6)
    - this greatly boosts overall prover performance
- replace Instruction Table by Lookup Argument (#171) (543327a)
- rework U32 Table (#181) (09a4c27)
    - link to processor using Lookup Argument (2719030)
- improve clock jump differences check (#177) (04bb5c4 and 9c2f3c0)
- improve parsing of Triton assembly
    - remove old parser (bbe4aa8)
    - introduce `ParsedInstruction`, simplifying parsing (8602892)
    - fix bug in property test (9e4fcb7)
- improve on constant folding in multicircuits (#183) (c1be5bb)
- update to twenty-first v0.18 (#184) (ff972ff)
- improve specification and documentation
    - explain the various cross-table arguments (567efc0)
    - more and better links (a66b20d and 9386878)
- various simplifications and readability improvements