feat: add a flag for stage1->stage2 validium migration #3562
base: main
Conversation
```rust
let mut conn = self.pool.connection_tagged("da_dispatcher").await?;
conn.data_availability_dal()
    .set_dummy_inclusion_data_for_old_batches(
        self.contracts_config.l2_da_validator_addr.unwrap(), // during the transition having the L2 DA validator address is mandatory
```
Dumb question: is it checked somewhere before this point that the address is not `None`? If not, it could make sense to use `.expect(_)` / `.context(_)` with a descriptive message, so that the error can be debugged. Also, I'm not entirely sure that panicking would be user-friendly; optimally, errors should be propagated as `anyhow::Error`s to the component task boundary (i.e., `DataAvailabilityDispatcher::run()` in this case).
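For illustration, a minimal sketch of the suggested `.context(_)` pattern applied to the snippet above (the trailing `.await?` on the DAL call and the error message are assumptions, not taken from the diff):

```rust
use anyhow::Context as _;

let mut conn = self.pool.connection_tagged("da_dispatcher").await?;
// A missing address becomes a debuggable `anyhow::Error` instead of a panic.
let l2_da_validator_addr = self
    .contracts_config
    .l2_da_validator_addr
    .context("L2 DA validator address is mandatory during the transition")?;
conn.data_availability_dal()
    .set_dummy_inclusion_data_for_old_batches(l2_da_validator_addr)
    .await?;
```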
Makes sense, added `.context(_)`.

On the second point, totally agree in general, but in this specific context `poll_for_inclusion`'s errors are only logged in the `run` function without being propagated further, so it would be quite a mess with all the `.map_err()` calls that would need to be added in `poll_for_inclusion` to make them propagatable.

We can also add a check that `l2_da_validator_addr` is present to `check_for_misconfiguration`, though it doesn't remove the need to use `unwrap`.
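A hedged sketch of what that check could look like (only `l2_da_validator_addr` and the method name `check_for_misconfiguration` come from the thread above; the struct layout and error message are illustrative):

```rust
// Fail fast on startup instead of panicking later in the dispatch loop.
fn check_for_misconfiguration(&self) -> anyhow::Result<()> {
    if self.contracts_config.l2_da_validator_addr.is_none() {
        anyhow::bail!(
            "`l2_da_validator_addr` must be set for the stage1 -> stage2 validium transition"
        );
    }
    Ok(())
}
```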
```rust
pub fn inclusion_verification_transition_enabled(&self) -> bool {
    self.inclusion_verification_transition_enabled
        .unwrap_or(DEFAULT_INCLUSION_VERIFICATION_TRANSITION_ENABLED)
}
```
Nit: is there a difference between defining `inclusion_verification_transition_enabled: bool` and slapping `#[serde(default)]` on it / slightly changing the Protobuf conversion logic? From the type perspective, it looks easier, and in the new config system (that I hope we will use eventually 🙃), a simpler type would mean better auto-generated docs / validations.
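A sketch of the suggested simplification, assuming the intended default is `false` (the struct name is illustrative):

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct DaDispatcherConfig {
    // Plain `bool` with a serde default instead of `Option<bool>` + a getter;
    // a missing field deserializes to `bool::default()`, i.e. `false`.
    #[serde(default)]
    pub inclusion_verification_transition_enabled: bool,
}
```

If the default were `true` instead, `#[serde(default = "...")]` pointing at a helper function returning `true` would be needed rather than the bare attribute.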
I mostly did it like that to keep in line with the other fields in this config; IMO it would be even worse if some fields were `Optional` with a getter and the others had `#[serde(default)]`.

I would even consider adding `#[serde(default)]` to all of them and removing `Optional`, but maybe it makes sense to do that as part of the migration to the new config system.
What ❔

Introduce an `inclusion_verification_transition_enabled` flag, which is necessary for enabling inclusion verification in Validiums. When this flag is used, `da_dispatcher` only populates the older batches with dummy proofs and stops polling for inclusion data for new batches (a rough sketch of this behavior follows the list below).

The migration would look like this:

- `use_dummy_inclusion_data: true`
- `inclusion_verification_transition_enabled: false`
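For illustration, a hedged sketch of the dispatcher behavior under the flag (`poll_for_inclusion`, `set_dummy_inclusion_data_for_old_batches`, and the flag getter appear in this PR; the loop structure and where the calls live are assumptions):

```rust
// Illustrative sketch of the dispatch loop under the new flag.
loop {
    if self.config.inclusion_verification_transition_enabled() {
        // Transition mode: backfill older batches with dummy inclusion data
        // and skip polling for inclusion data for new batches.
        self.set_dummy_inclusion_data_for_old_batches().await?;
    } else {
        self.poll_for_inclusion().await?;
    }
}
```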
As a side-quest, a check of whether dummy inclusion proofs are allowed was added.
Why ❔
To ensure that the migration can be performed without downtime.
Checklist

- Code has been formatted via `zkstack dev fmt` and `zkstack dev lint`.