Generalize block available time metric for PeerDAS #6850

Open · wants to merge 1 commit into base: unstable
13 changes: 11 additions & 2 deletions beacon_node/beacon_chain/src/data_availability_checker.rs
@@ -5,6 +5,7 @@ use crate::block_verification_types::{
use crate::data_availability_checker::overflow_lru_cache::{
DataAvailabilityCheckerInner, ReconstructColumnsDecision,
};
use crate::validator_monitor::timestamp_now;
use crate::{metrics, BeaconChain, BeaconChainTypes, BeaconStore};
use kzg::Kzg;
use slog::{debug, error, Logger};
@@ -230,6 +231,11 @@ impl<T: BeaconChainTypes> DataAvailabilityChecker<T> {
block_root: Hash256,
custody_columns: DataColumnSidecarList<T::EthSpec>,
) -> Result<Availability<T::EthSpec>, AvailabilityCheckError> {
let seen_timestamp = self
.slot_clock
.now_duration()
.ok_or(AvailabilityCheckError::SlotClockError)?;

// TODO(das): report which column is invalid for proper peer scoring
// TODO(das): batch KZG verification here, but fallback into checking each column
// individually to report which column(s) are invalid.
@@ -238,7 +244,7 @@ impl<T: BeaconChainTypes> DataAvailabilityChecker<T> {
.map(|column| {
let index = column.index;
Ok(KzgVerifiedCustodyDataColumn::from_asserted_custody(
KzgVerifiedDataColumn::new(column, &self.kzg)
KzgVerifiedDataColumn::new(column, &self.kzg, seen_timestamp)
.map_err(|e| AvailabilityCheckError::InvalidColumn(index, e))?,
))
})
@@ -565,6 +571,8 @@ impl<T: BeaconChainTypes> DataAvailabilityChecker<T> {
&self.kzg,
&pending_components.verified_data_columns,
&self.spec,
// TODO(das): what timestamp to use when reconstructing columns?
timestamp_now(),
Review comment (Member):
On one hand, I think it would be a bit more consistent if we put it after reconstruction, right before we add the columns via put_kzg_verified_data_columns, similar to elsewhere; on the other hand, we can argue that we have these columns once we see the last column that triggered reconstruction. I don't see an issue going either way.

One scenario I can think of:

  1. Receive column 64 and trigger reconstruction; the last-seen timestamp is now.
  2. Receive column 65 from gossip; the last-seen timestamp is now + t.
  3. The block becomes available when reconstruction completes, and the last-seen timestamp is actually now + t, not the timestamp of the reconstructed columns that made the block available.

Another scenario:

  1. Receive column 64 and trigger reconstruction; the last-seen timestamp is now.
  2. Receive columns 65-127 from gossip; the last-seen timestamp is now + t.
  3. The block is made available before reconstruction completes.
  4. Reconstruction completes, but we ignore the columns because they're already in the cache.
  5. Note that we trigger make_available again here (bug), but it doesn't change the last-seen timestamp, which is still now + t.

(bug issue: #6439)

Review comment (Member):
Not suggesting a change here, but if you think it makes sense to use the timestamp from before reconstruction (i.e. the last data column that triggered reconstruction and made this block available), we can remove this TODO.
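To make the scenarios above concrete, here is a minimal, self-contained sketch (not part of this diff) of the timestamp selection under the assumption that the availability metric takes the max of the per-column seen timestamps. `Column` and `availability_timestamp` are hypothetical stand-ins for `KzgVerifiedCustodyDataColumn::seen_timestamp` and the `max()` over `verified_data_columns` in `PendingComponents`.

```rust
use std::time::Duration;

/// Hypothetical stand-in for a KZG-verified custody column; only the fields
/// relevant to the timestamp discussion are modelled.
#[derive(Debug, Clone)]
struct Column {
    index: u64,
    seen_timestamp: Duration,
}

/// The availability timestamp is the latest `seen_timestamp` among the columns
/// held for the block (mirrors the `.max()` over the verified data columns).
fn availability_timestamp(columns: &[Column]) -> Option<Duration> {
    columns.iter().map(|c| c.seen_timestamp).max()
}

fn main() {
    let now = Duration::from_secs(100);
    let t = Duration::from_secs(2);

    let columns = vec![
        // Column 64 triggered reconstruction and was seen at `now`.
        Column { index: 64, seen_timestamp: now },
        // Column 65 arrived from gossip slightly later, at `now + t`.
        Column { index: 65, seen_timestamp: now + t },
        // A reconstructed column stamped with the pre-reconstruction timestamp.
        Column { index: 90, seen_timestamp: now },
    ];

    // The max picks the gossip column seen at `now + t`, even though the
    // reconstructed columns are what actually made the block available.
    assert_eq!(availability_timestamp(&columns), Some(now + t));
    let latest = columns.iter().max_by_key(|c| c.seen_timestamp).unwrap();
    println!(
        "metric uses column {} seen at {:?}",
        latest.index, latest.seen_timestamp
    );
}
```

Under this max rule, stamping reconstructed columns with the trigger timestamp (`now`) cannot move the metric past the latest gossip column, which is the `now + t` outcome described in both scenarios above.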

)
.map_err(|e| {
error!(
@@ -732,7 +740,8 @@ where
// If len > 1 at least one column MUST fail.
if data_columns.len() > 1 {
for data_column in data_columns {
if let Err(e) = verify_kzg_for_data_column(data_column.clone(), kzg) {
let seen_timestamp = Duration::ZERO; // not used
if let Err(e) = verify_kzg_for_data_column(data_column.clone(), kzg, seen_timestamp) {
return Err(AvailabilityCheckError::InvalidColumn(data_column.index, e));
}
}
@@ -283,17 +283,24 @@ impl<E: EthSpec> PendingComponents<E> {
..
} = self;

let blobs_available_timestamp = verified_blobs
.iter()
.flatten()
.map(|blob| blob.seen_timestamp())
.max();

let Some(diet_executed_block) = executed_block else {
return Err(AvailabilityCheckError::Unexpected);
};

let is_peer_das_enabled = spec.is_peer_das_enabled_for_epoch(diet_executed_block.epoch());
let blobs_available_timestamp = if is_peer_das_enabled {
verified_data_columns
.iter()
.map(|data_column| data_column.seen_timestamp())
.max()
} else {
verified_blobs
.iter()
.flatten()
.map(|blob| blob.seen_timestamp())
.max()
};

let (blobs, data_columns) = if is_peer_das_enabled {
let data_columns = verified_data_columns
.into_iter()
45 changes: 37 additions & 8 deletions beacon_node/beacon_chain/src/data_column_verification.rs
@@ -16,6 +16,7 @@ use ssz_derive::{Decode, Encode};
use std::iter;
use std::marker::PhantomData;
use std::sync::Arc;
use std::time::Duration;
use types::data_column_sidecar::{ColumnIndex, DataColumnIdentifier};
use types::{
BeaconStateError, ChainSpec, DataColumnSidecar, DataColumnSubnetId, EthSpec, Hash256,
@@ -233,11 +234,17 @@ impl<T: BeaconChainTypes, O: ObservationStrategy> GossipVerifiedDataColumn<T, O>
#[ssz(struct_behaviour = "transparent")]
pub struct KzgVerifiedDataColumn<E: EthSpec> {
data: Arc<DataColumnSidecar<E>>,
#[ssz(skip_serializing, skip_deserializing)]
Review comment (Member):
Do we still need Encode and Decode here? We might be able to remove them now that we don't store the overflow cache anymore.

seen_timestamp: Duration,
}

impl<E: EthSpec> KzgVerifiedDataColumn<E> {
pub fn new(data_column: Arc<DataColumnSidecar<E>>, kzg: &Kzg) -> Result<Self, KzgError> {
verify_kzg_for_data_column(data_column, kzg)
pub fn new(
data_column: Arc<DataColumnSidecar<E>>,
kzg: &Kzg,
seen_timestamp: Duration,
) -> Result<Self, KzgError> {
verify_kzg_for_data_column(data_column, kzg, seen_timestamp)
}
pub fn to_data_column(self) -> Arc<DataColumnSidecar<E>> {
self.data
@@ -293,29 +300,38 @@ impl<E: EthSpec> CustodyDataColumn<E> {
#[ssz(struct_behaviour = "transparent")]
pub struct KzgVerifiedCustodyDataColumn<E: EthSpec> {
data: Arc<DataColumnSidecar<E>>,
#[ssz(skip_serializing, skip_deserializing)]
Review comment (Member):
Same as above.

seen_timestamp: Duration,
}

impl<E: EthSpec> KzgVerifiedCustodyDataColumn<E> {
/// Mark a column as custody column. Caller must ensure that our current custody requirements
/// include this column
pub fn from_asserted_custody(kzg_verified: KzgVerifiedDataColumn<E>) -> Self {
Self {
seen_timestamp: kzg_verified.seen_timestamp,
data: kzg_verified.to_data_column(),
}
}

/// Verify a column already marked as custody column
pub fn new(data_column: CustodyDataColumn<E>, kzg: &Kzg) -> Result<Self, KzgError> {
verify_kzg_for_data_column(data_column.clone_arc(), kzg)?;
pub fn new(
data_column: CustodyDataColumn<E>,
kzg: &Kzg,
seen_timestamp: Duration,
) -> Result<Self, KzgError> {
verify_kzg_for_data_column(data_column.clone_arc(), kzg, seen_timestamp)?;
Ok(Self {
data: data_column.data,
seen_timestamp,
})
}

pub fn reconstruct_columns(
kzg: &Kzg,
partial_set_of_columns: &[Self],
spec: &ChainSpec,
seen_timestamp: Duration,
) -> Result<Vec<KzgVerifiedCustodyDataColumn<E>>, KzgError> {
let all_data_columns = reconstruct_data_columns(
kzg,
Expand All @@ -329,7 +345,10 @@ impl<E: EthSpec> KzgVerifiedCustodyDataColumn<E> {
Ok(all_data_columns
.into_iter()
.map(|data| {
KzgVerifiedCustodyDataColumn::from_asserted_custody(KzgVerifiedDataColumn { data })
KzgVerifiedCustodyDataColumn::from_asserted_custody(KzgVerifiedDataColumn {
data,
seen_timestamp,
})
})
.collect::<Vec<_>>())
}
@@ -347,6 +366,10 @@ impl<E: EthSpec> KzgVerifiedCustodyDataColumn<E> {
pub fn index(&self) -> ColumnIndex {
self.data.index
}

pub fn seen_timestamp(&self) -> Duration {
self.seen_timestamp
}
}

/// Complete kzg verification for a `DataColumnSidecar`.
@@ -355,10 +378,14 @@ impl<E: EthSpec> KzgVerifiedCustodyDataColumn<E> {
pub fn verify_kzg_for_data_column<E: EthSpec>(
data_column: Arc<DataColumnSidecar<E>>,
kzg: &Kzg,
seen_timestamp: Duration,
) -> Result<KzgVerifiedDataColumn<E>, KzgError> {
let _timer = metrics::start_timer(&metrics::KZG_VERIFICATION_DATA_COLUMN_SINGLE_TIMES);
validate_data_columns(kzg, iter::once(&data_column))?;
Ok(KzgVerifiedDataColumn { data: data_column })
Ok(KzgVerifiedDataColumn {
data: data_column,
seen_timestamp,
})
}

/// Complete kzg verification for a list of `DataColumnSidecar`s.
@@ -383,6 +410,7 @@ pub fn validate_data_column_sidecar_for_gossip<T: BeaconChainTypes, O: ObservationStrategy>(
subnet: u64,
chain: &BeaconChain<T>,
) -> Result<GossipVerifiedDataColumn<T, O>, GossipDataColumnError> {
let seen_timestamp = chain.slot_clock.now_duration().unwrap_or_default();
let column_slot = data_column.slot();
verify_data_column_sidecar(&data_column, &chain.spec)?;
verify_index_matches_subnet(&data_column, subnet, &chain.spec)?;
@@ -394,8 +422,9 @@ pub fn validate_data_column_sidecar_for_gossip<T: BeaconChainTypes, O: ObservationStrategy>(
verify_slot_higher_than_parent(&parent_block, column_slot)?;
verify_proposer_and_signature(&data_column, &parent_block, chain)?;
let kzg = &chain.kzg;
let kzg_verified_data_column = verify_kzg_for_data_column(data_column.clone(), kzg)
.map_err(GossipDataColumnError::InvalidKzgProof)?;
let kzg_verified_data_column =
verify_kzg_for_data_column(data_column.clone(), kzg, seen_timestamp)
.map_err(GossipDataColumnError::InvalidKzgProof)?;

chain
.observed_slashable
Expand Down
Loading