approval-voting: merge tranche 0 assignments #729

Open
sandreim opened this issue Jan 24, 2023 · 4 comments · May be fixed by paritytech/polkadot#6782
Labels
I9-optimisation An enhancement to provide better overall performance in terms of time-to-completion for a task.

Comments

@sandreim
Contributor

sandreim commented Jan 24, 2023

As pointed out in #731, we can save a lot of CPU and reduce gossip by producing only one RelayVrfModulo signature per authority and deriving multiple assignments from its output, which stacks with batch verification.
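As a toy illustration of the "one signature, many assignments" idea, a raw 64-byte buffer stands in below for the real schnorrkel VRF output; the function name, constant, and parameters are hypothetical, not the actual Polkadot code:

```rust
/// Upper bound on samples drawn from one VRF output (illustrative constant).
const MAX_MODULO_SAMPLES: usize = 32;

/// Toy sketch: stretch one VRF output into `num_samples` core indices,
/// so a single signature backs many tranche-zero assignments.
fn modulo_samples(
    vrf_output: &[u8; 2 * MAX_MODULO_SAMPLES],
    num_samples: usize,
    num_cores: u16,
) -> Vec<u16> {
    vrf_output
        .chunks_exact(2)
        .take(num_samples.min(MAX_MODULO_SAMPLES))
        .map(|pair| u16::from_le_bytes([pair[0], pair[1]]) % num_cores)
        .collect()
}
```

One announcement then carries all `num_samples` assignments instead of one signed message per assignment.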

What is not clear, at least to me, is whether we can also merge Delay assignments per tranche for the same signer by signing once for multiple candidates. Depending on the distribution, this could also save CPU during import and reduce network load.

Another question: why would we need authorities to refuse some assignments? Is this for being able to control the amount of work to be done?

CC @burdges @eskimor

@burdges

burdges commented Jan 24, 2023

Importantly, we'll only consolidate or merge the RelayVrfModulo checks this way. RelayVrfDelay checks should each remain independent, but RelayVrfDelay should be only maybe 15% of the checks under normal conditions.

> IIUC in our case we have a single signer, which would mean we could batch its own RelayVrfDelay assignments for the same tranche (different candidates). Is my understanding correct?

A batch verification does not require the same signer, but it only saves like 40%.

> Can you detail a bit the pros and cons of having RelayVrfModulo represent 85% of assignments in tranche 0?

No cons.

RelayVrfModulo has a nicer distribution than RelayVrfDelay because individual validators have exactly k assignments from RelayVrfModulo, which avoids some validators having too many tranche zero assignments.

We could've easily made RelayVrfModulo be 100% of the tranche zero checks, but never did so. RelayVrfModulo should already be like 95% of the tranche zero checks, assuming we picked parameters correctly, so maybe like 85% of the total checks under normal conditions.


We should probably have each validator send one RelayVrfModulo assignment per relay parent, which includes a bitfield of which assignments they'll take vs reject. Ain't clear if batch verifying these is worthwhile, likely batch verifying grandpa yields more.
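A hypothetical shape for such a per-relay-parent message (the names below are invented for illustration, not the real Polkadot types) might pair the sampled cores with a claimed bitfield:

```rust
/// Hypothetical sketch of a single per-relay-parent RelayVrfModulo
/// announcement carrying a bitfield of which sampled cores the validator
/// will actually check. Illustrative only, not the real protocol types.
struct ModuloAssignment {
    /// Core indices derived from the single VRF output (one per sample).
    sampled_cores: Vec<u16>,
    /// Bit i set => the validator takes the i-th sampled assignment.
    claimed: u32,
}

impl ModuloAssignment {
    /// Whether the validator takes the `sample`-th assignment.
    fn takes(&self, sample: usize) -> bool {
        sample < self.sampled_cores.len() && (self.claimed >> sample) & 1 == 1
    }

    /// Cores this validator actually commits to check.
    fn claimed_cores(&self) -> Vec<u16> {
        self.sampled_cores
            .iter()
            .enumerate()
            .filter(|(i, _)| (self.claimed >> i) & 1 == 1)
            .map(|(_, &core)| core)
            .collect()
    }
}
```

Peers would then know up front which of the sampled assignments a validator commits to, so rejected samples never count as no-shows.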

As RelayVrfDelay becomes prevalent if many validators go offline, we could optionally batch verify RelayVrfDelay as a premature optimization against our next chain stall, but I'm not sure this 40% really matters, or whether we're better off doing other things.

@burdges

burdges commented Jan 24, 2023

After verifying the VRFInOut in the signature, the output routine maybe looks vaguely like:

```rust
use schnorrkel::vrf::VRFInOut;

/// A hard upper bound on num_cores * target_checkers / num_validators
const MAX_MODULO_SAMPLES: usize = 32;

/// Iterates over all parachain cores which we check in tranche zero
fn relay_vrf_modulo(vrf_io: &VRFInOut, num_samples: usize, max_cores: u16) -> impl Iterator<Item = u16> {
    let bytes: [u8; 2 * MAX_MODULO_SAMPLES] = vrf_io.make_bytes(b"RelayVrfModuloIO");
    (0..num_samples.min(MAX_MODULO_SAMPLES))
        .map(move |i| u16::from_le_bytes([bytes[2 * i], bytes[2 * i + 1]]) % max_cores)
}
```

@rphmeier
Contributor

Isn't it a problem to derive all RelayVrfDelay assignments from a single VRF output, because then publishing one assignment leaks all the tranches for the other candidates you're assigned to? RelayVrfModulo doesn't have this issue because all of its assignments are tranche zero.

I also don't think that nodes should be allowed to silently "not take" assignments, at least at the moment. If they were, we'd need a way to update the network as to which assignments are being taken.
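For intuition on why the per-candidate RelayVrfDelay outputs must stay independent, here is a toy model in which std's `DefaultHasher` stands in for the real VRF; all names and parameters are illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy stand-in for a per-candidate RelayVrfDelay evaluation: each candidate
/// hash enters its own domain-separated "transcript", so revealing the
/// tranche for one candidate discloses nothing about the others.
/// DefaultHasher is NOT a VRF; it merely illustrates the structure.
fn delay_tranche(vrf_secret_seed: u64, candidate_hash: u64, num_delay_tranches: u64) -> u64 {
    let mut h = DefaultHasher::new();
    // Domain separation: label, secret, and candidate all feed the transcript.
    (b"RelayVrfDelay", vrf_secret_seed, candidate_hash).hash(&mut h);
    h.finish() % num_delay_tranches
}
```

Deriving every candidate's tranche from one shared output would instead let anyone who sees a single assignment recompute all the rest.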

@burdges

burdges commented Jan 25, 2023

Yes exactly, RelayVrfModulo could all be derived from one output, but the RelayVrfDelay in different tranches must stay independent.

> I also don't think that nodes should be allowed to silently "not take" assignments, at least at the moment.

It's better to reject, or not announce, an assignment than to announce it and then no-show. I've no idea if we'll ever need this, but I made the RelayVrfModulo samples independent purely to give us this option, which now looks like a mistake.

@sandreim sandreim changed the title approval-voting: merge assignments per tranche and batch verify vrf signatures approval-voting: merge tranche 0 assignments and batch verify vrf signatures Feb 6, 2023
@sandreim sandreim changed the title approval-voting: merge tranche 0 assignments and batch verify vrf signatures approval-voting: merge tranche 0 assignments Feb 24, 2023
@Sophia-Gold Sophia-Gold transferred this issue from paritytech/polkadot Aug 24, 2023
@the-right-joyce the-right-joyce added I9-optimisation An enhancement to provide better overall performance in terms of time-to-completion for a task. T8-parachains_engineering and removed I10-optimisation labels Aug 25, 2023
@the-right-joyce the-right-joyce moved this to In Progress in parachains team board Oct 23, 2023
@sandreim sandreim moved this from In Progress to Done in Road to 1k paravalidators and 200 parachains Nov 11, 2023
claravanstaden pushed a commit to Snowfork/polkadot-sdk that referenced this issue Dec 8, 2023
@sandreim sandreim moved this from In Progress to Completed in parachains team board May 29, 2024
Projects
Status: Completed
Development

Successfully merging a pull request may close this issue.

4 participants