full light client impl #20
Conversation
Walkthrough

The pull request introduces a new constant, additional workspace dependencies, and new modules across the codebase. In the parser crate, several beacon-related modules now include updated and new structs with parsing methods and refined memory management. In the shire crate, new error variants expand the error-handling capabilities.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant LC as LightClient
    participant E1 as Endpoint 1
    participant E2 as Endpoint 2
    LC->>E1: Request finality update
    LC->>E2: Request finality update
    E1-->>LC: Return update data
    E2-->>LC: Return update data
    LC->>LC: Aggregate and sort responses
    LC-->>User: Return FinalityUpdate
```
Actionable comments posted: 4
🔭 Outside diff range comments (3)
crates/shire/src/concensus.rs (3)
214-222: Implement chain_id method.

Both implementations of `chain_id` are marked as `todo!()`. This is a critical method for consensus verification.

Would you like me to help implement the `chain_id` method? Here's a suggested implementation:

```diff
- pub async fn chain_id(&self) -> Result<u64, ConsensusError> {
-     // match self.rpc.get_chain_id().await {
-     //     Ok(id) => id
-     //         .as_u64()
-     //         .ok_or_else(|| ConsensusError::SyncError("Invalid chain ID".into())),
-     //     Err(e) => Err(ConsensusError::SyncError(e.to_string())),
-     // }
-     todo!()
- }
+ pub async fn chain_id(&self) -> Result<u64, ConsensusError> {
+     match self.rpc.get_chain_id().await {
+         Ok(id) => id
+             .as_u64()
+             .ok_or_else(|| ConsensusError::SyncError("Invalid chain ID".into())),
+         Err(e) => Err(ConsensusError::SyncError(e.to_string())),
+     }
+ }
```

Also applies to: 321-329
224-242: Implement update_state method.

The `update_state` method is crucial for maintaining consensus state but is currently marked as `todo!()`.

Would you like me to help implement the `update_state` method? Here's a suggested implementation:

```diff
- pub async fn update_state(&self) -> Result<(), ConsensusError> {
-     // let latest = self
-     //     .rpc
-     //     .get_block_number()
-     //     .await
-     //     .map_err(|e| ConsensusError::SyncError(e.to_string()))?;
-
-     // let mut state = self.state.write().unwrap();
-
-     // state.sync_status = if latest > state.current_block {
-     //     SyncStatus::Syncing {
-     //         target: latest,
-     //         current: state.current_block,
-     //     }
-     // } else {
-     //     SyncStatus::Synced
-     // };
-     todo!()
- }
+ pub async fn update_state(&self) -> Result<(), ConsensusError> {
+     let latest = self
+         .rpc
+         .get_block_number()
+         .await
+         .map_err(|e| ConsensusError::SyncError(e.to_string()))?;
+
+     let mut state = self
+         .state
+         .write()
+         .map_err(|_| ConsensusError::SyncError("Lock poisoned".into()))?;
+
+     state.sync_status = if latest > state.current_block {
+         SyncStatus::Syncing {
+             target: latest,
+             current: state.current_block,
+         }
+     } else {
+         SyncStatus::Synced
+     };
+
+     Ok(())
+ }
```
140-141: Fix incorrect finalization check.

The comment "ISSUE" suggests a known problem with the finalization check. The current implementation only checks whether the block number is less than or equal to the current block, which is not sufficient for finalization verification.

Apply this fix to properly verify finalization:

```diff
- //ISSUE
- Ok(block_number <= state.current_block)
+ Ok(block_number <= state.finalized_block_number)
```
🧹 Nitpick comments (4)
crates/parser/src/beacon/light_finality_update.rs (2)
1-2: Consider removing unused imports.

The `sync` import from `core` does not appear to be used. Tidying up unused imports helps maintain clarity.

45-103: Consider robust JSON parsing.

Parsing with string matching is prone to breakage if the JSON format changes (e.g., spacing or ordering). A dedicated JSON parser (e.g., serde_json) would be more robust for production use.

crates/shire/src/light_client.rs (1)

189-220: Consolidate duplicate concurrency logic for get_optimistic_update.

The concurrency + sort pattern is repeated in several methods. Consider abstracting it into a helper function for easier maintenance.

crates/parser/src/beacon/light_client_update.rs (1)

45-160: Recommend using a real JSON parser.

As with finality updates, direct string matching is fragile. A JSON parser can handle escaped characters and flexible formats more reliably.
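To make the fragility concrete, here is a small std-only sketch (the `find_exact` helper is a hypothetical stand-in for the crate's exact-substring lookup, not its actual code) showing how a byte-for-byte pattern match misses semantically identical JSON:

```rust
// Hypothetical stand-in for exact-substring field lookup, as used by the
// hand-rolled parsers under review. Returns the match offset, if any.
fn find_exact(json: &str, key: &str) -> Option<usize> {
    json.find(&format!("\"{}\":\"", key))
}

fn main() {
    let tight = r#"{"slot":"123"}"#;
    let spaced = r#"{"slot": "123"}"#; // same data, one extra space

    // Exact matching only works for the formatting it was written against.
    assert!(find_exact(tight, "slot").is_some());
    assert!(find_exact(spaced, "slot").is_none()); // silently fails to match

    println!("exact matching broke on reformatted JSON");
}
```

A real JSON parser treats both inputs identically, which is the substance of this recommendation.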
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- crates/mordor/src/lib.rs (1 hunks)
- crates/parser/Cargo.toml (1 hunks)
- crates/parser/src/beacon/light_client_bootstrap.rs (3 hunks)
- crates/parser/src/beacon/light_client_update.rs (1 hunks)
- crates/parser/src/beacon/light_finality_update.rs (1 hunks)
- crates/parser/src/beacon/light_optimistic_update.rs (1 hunks)
- crates/parser/src/beacon/mod.rs (1 hunks)
- crates/shire/Cargo.toml (1 hunks)
- crates/shire/src/concensus.rs (1 hunks)
- crates/shire/src/concensus_rpc.rs (0 hunks)
- crates/shire/src/lib.rs (1 hunks)
- crates/shire/src/light_client.rs (1 hunks)
💤 Files with no reviewable changes (1)
- crates/shire/src/concensus_rpc.rs
✅ Files skipped from review due to trivial changes (1)
- crates/mordor/src/lib.rs
🔇 Additional comments (13)
crates/parser/src/beacon/light_finality_update.rs (2)
34-43: Clarify partial parsing when "code" field is present.

When the method detects the "code" field, it returns a FinalityUpdate with only the code set and all other fields defaulted. This partial parse may indicate incomplete data handling. Confirm that this logic is intended and verify that returning a defaulted struct is valid in all scenarios.
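If the defaulted-struct behavior turns out to be unintended, one possible direction (all names here are hypothetical illustrations, not the crate's actual types) is to make the error response a distinct variant so downstream code cannot mistake it for a real update:

```rust
// Hypothetical sketch: keep the error payload out of the update struct
// entirely, so a "code" response can never masquerade as a default update.
#[derive(Debug, PartialEq)]
enum ParseOutcome {
    Update { slot: u64 }, // stand-in for the full FinalityUpdate fields
    ErrorCode(u32),       // the `"code"` field returned by the endpoint
}

fn parse(input: &str) -> Option<ParseOutcome> {
    if let Some(idx) = input.find("\"code\":") {
        let rest = &input[idx + "\"code\":".len()..];
        let end = rest
            .find(|c: char| !c.is_ascii_digit())
            .unwrap_or(rest.len());
        return Some(ParseOutcome::ErrorCode(rest[..end].parse().ok()?));
    }
    // Full parse path elided; a stand-in value keeps the sketch compilable.
    Some(ParseOutcome::Update { slot: 0 })
}

fn main() {
    assert_eq!(parse("{\"code\":404}"), Some(ParseOutcome::ErrorCode(404)));
    assert!(matches!(parse("{}"), Some(ParseOutcome::Update { .. })));
}
```

With a sum type like this, a match statement forces callers to handle the error case explicitly instead of receiving zeroed fields.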
105-128: Validate the logic in finality_branch parsing.

Nested loops increment `pos` until they encounter specific delimiters, but there is a risk of index overflow if the input is malformed. Ensure safe bounds checks exist or consider using safe iteration methods to avoid potential panics.

crates/shire/src/light_client.rs (2)
63-93: Validate sorting approach in get_latest_finality_update.

Responses are sorted by their sync_committee_signature (a B256). Sorting by signature alone might not guarantee consensus or correctness if multiple endpoints return different data. Ensure this approach is a valid tie-breaker or consider applying majority-voting logic.
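A majority-vote alternative can be sketched generically (this is an illustrative helper, not code from the PR); it only accepts a value that more than half of the endpoints agree on, comparing responses structurally rather than by signature ordering:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Illustrative sketch: require a strict majority of identical responses
// before treating one as the consensus value. `T` stands in for the
// parsed update type.
fn strict_majority<T: Eq + Hash + Clone>(responses: &[T]) -> Option<T> {
    let mut counts: HashMap<&T, usize> = HashMap::new();
    for r in responses {
        *counts.entry(r).or_insert(0) += 1;
    }
    // Pick the most common response, then demand it exceeds half the total.
    let (best, n) = counts.into_iter().max_by_key(|&(_, n)| n)?;
    (2 * n > responses.len()).then(|| best.clone())
}

fn main() {
    assert_eq!(strict_majority(&[1, 1, 2]), Some(1)); // 2 of 3 agree
    assert_eq!(strict_majority(&[1, 2]), None);       // no majority
}
```

Returning `None` when no strict majority exists would map naturally onto the `NoConsensus` error variant mentioned elsewhere in this review.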
222-224: Confirm division by SLOTS_PER_PERIOD.

The method returns slot / SLOTS_PER_PERIOD. Verify that integer division meets your needs (e.g., rounding down). If you require precise boundary checks, consider a modulus check or documenting that any remainder is discarded.
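The rounding behavior is easy to pin down with a couple of assertions. A value of 8192 for `SLOTS_PER_PERIOD` is assumed here purely for illustration; use the crate's actual constant:

```rust
// Assumed value for illustration; the real constant lives in the codebase.
const SLOTS_PER_PERIOD: u64 = 8192;

// Integer division on unsigned values rounds down, so any remainder is
// discarded: every slot inside a period maps to the same period index.
fn period_for(slot: u64) -> u64 {
    slot / SLOTS_PER_PERIOD
}

// A modulus check distinguishes exact period boundaries from interior slots.
fn is_period_boundary(slot: u64) -> bool {
    slot % SLOTS_PER_PERIOD == 0
}

fn main() {
    assert_eq!(period_for(8191), 0); // last slot of period 0
    assert_eq!(period_for(8192), 1); // first slot of period 1
    assert!(is_period_boundary(8192));
    assert!(!is_period_boundary(8193));
}
```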
34-43: Clarify partial parsing when "code" field is present.

Similar to FinalityUpdate, this function also sets only the `code` field, defaulting the rest. Confirm that this behavior is intentional and won't result in partially formed updates that break downstream logic.
187-210: Ensure safety in pubkeys() parsing.

As with finality_branch, be mindful of potential index-out-of-bounds in the while loops. Implement additional checks or consider using methods that guarantee safe iteration.

crates/parser/src/beacon/mod.rs (1)
5-8: LGTM! Module structure aligns with light client implementation.

The new modules follow consistent naming conventions and provide a clear separation of concerns for light client functionality.
crates/parser/src/beacon/light_client_bootstrap.rs (1)
5-12: LGTM! Memory management improvements.

The removal of lifetime parameters and switch to owned String simplifies memory management. The addition of a Clone derive is appropriate for these data structures.
crates/parser/src/beacon/light_optimistic_update.rs (1)
91-139: 🏁 Script executed:

```bash
#!/bin/bash
# Search for usage of these methods
rg "parse_pub_keys_array|parse_committee_branch_array" --type rust
```

Length of output: 859
crates/shire/src/concensus.rs (2)
79-84: LGTM! New error variants enhance error handling.

The new error variants (`Response`, `NoConsensus`, `Parse`) provide more granular error reporting capabilities, which is essential for a light client implementation.
279-284: 🏁 Script executed:

```bash
#!/bin/bash
# Search for similar verification methods in the codebase
rg -A 5 "verify_(sync_committee|signature|finality_proof|header)"
```

Length of output: 2088
crates/parser/Cargo.toml (1)
7-7: LGTM! Added mordor dependency.

The addition of the mordor dependency is appropriate for the light client implementation.
crates/shire/Cargo.toml (1)
18-20: LGTM! Added required async dependencies.

The addition of the `futures` and `reqwest` dependencies is appropriate for handling async operations in the light client implementation.
crates/shire/src/light_client.rs
Outdated
```rust
pub async fn get_latest_update(
    &self,
    period: u64,
    count: u64,
) -> Result<Updates, ConsensusError> {
    let mut responses = Vec::new();

    // Query all endpoints concurrently
    let results = futures::future::join_all(self.endpoints.iter().map(|endpoint| {
        self.client
            .get(format!(
                "{}/eth/v1/beacon/light_client/updates?period={}&count={}",
                endpoint, period, count
            ))
            .send()
    }))
    .await;

    // Collect responses with signatures
    for response in results {
        if let Ok(resp) = response {
            let input = resp
                .bytes()
                .await
                .map(|b| b.to_vec())
                .map_err(|_| ConsensusError::Parse)?;

            let update = Updates::parse(&input).ok_or(ConsensusError::Parse)?;

            responses.push((update.sync_aggregate.sync_committee_signature, update));
        }
    }

    responses.sort_by(|a, b| a.0.cmp(&b.0));
    Ok(responses[0].1.clone())
}
```
Check error handling when no valid updates are returned.

This function sorts and takes the first result. If no valid updates are returned, the code will panic when accessing responses[0]. Ensure you handle empty or malformed responses gracefully.

```diff
+ if responses.is_empty() {
+     return Err(ConsensusError::NoValidResponses);
+ }
  responses.sort_by(|a, b| a.0.cmp(&b.0));
  Ok(responses[0].1.clone())
```
```rust
pub mod concensus;
pub mod concensus_rpc;
pub mod libp2p;
pub mod light_client;
```
💡 Verification agent
🧩 Analysis chain
Verify RPC functionality after consensus_rpc removal.

The removal of the `consensus_rpc` module could impact RPC interactions. Please ensure that the new `light_client` module provides alternative RPC functionality, or confirm whether RPC functionality is intentionally removed.

🏁 Script executed:

```bash
#!/bin/bash
# Search for RPC-related functionality in the new light_client module
rg -A 5 "RpcClient|Transport|rpc" src/light_client.rs
```

Length of output: 115
I'll generate a new script to verify the location of the light_client module since the previous command failed to find "src/light_client.rs". This script will look in the expected crate directory for files or directories related to light_client, then we can examine further for RPC-related code if the module is found.
🏁 Script executed:

```bash
#!/bin/bash
echo "Searching for light_client module files in crates/shire/src..."
echo "Files named light_client.rs:"
find crates/shire/src -type f -name "light_client.rs"
echo "- - - - - - - - - - - - - - - -"
echo "Directories named light_client (likely with a mod.rs):"
find crates/shire/src -type d -name "light_client"
```

Length of output: 531
I'll generate another script to search within the detected "crates/shire/src/light_client.rs" for any RPC-specific keywords. This should help determine if the new module provides equivalent RPC functionality.
🏁 Script executed:

```bash
#!/bin/bash
echo "Searching for RPC-related keywords in crates/shire/src/light_client.rs..."
echo "Searching for 'RpcClient':"
rg -n "RpcClient" crates/shire/src/light_client.rs
echo "Searching for 'Transport':"
rg -n "Transport" crates/shire/src/light_client.rs
echo "Searching for 'rpc' (case-insensitive):"
rg -ni "rpc" crates/shire/src/light_client.rs
```

Length of output: 537
RPC Functionality Appears Missing After consensus_rpc Removal
Our search confirms that the new light_client module (crates/shire/src/light_client.rs) does not contain any RPC-related code, such as references to RpcClient, Transport, or similar keywords. This indicates that the consensus_rpc removal wasn’t accompanied by new RPC functionality, which could potentially impact RPC interactions unless it was an intentional change.
• Removed consensus_rpc module with no RPC replacement in light_client
• No alternative RPC-related code detected in the new module
Please verify if RPC functionality was intentionally removed or if additional changes are needed to provide the required RPC endpoints.
```rust
pub fn parse_pub_keys_array(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, b"\"pubkeys\":[")?;
    let mut pos = start + b"\"pubkeys\":[".len();
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}
```
🛠️ Refactor suggestion
Refactor duplicate parsing logic.
The `parse_pub_keys_array` and `parse_committee_branch_array` methods contain identical logic with different array identifiers. Consider extracting the common logic into a helper method.

```diff
+ fn parse_array(data: &[u8], array_identifier: &[u8]) -> Option<Vec<(usize, usize)>> {
+     let start = memchr::memmem::find(data, array_identifier)?;
+     let mut pos = start + array_identifier.len();
+     let mut result = Vec::new();
+
+     while data[pos] != b']' {
+         while data[pos] != b'"' && data[pos] != b']' {
+             pos += 1;
+         }
+         if data[pos] == b']' {
+             break;
+         }
+         pos += 1;
+         let committee_start = pos;
+
+         while data[pos] != b'"' {
+             pos += 1;
+         }
+         result.push((committee_start, pos));
+         pos += 1;
+     }
+
+     Some(result)
+ }
+
+ pub fn parse_pub_keys_array(data: &[u8]) -> Option<Vec<(usize, usize)>> {
+     Self::parse_array(data, b"\"pubkeys\":[")
+ }
+
+ pub fn parse_committee_branch_array(data: &[u8]) -> Option<Vec<(usize, usize)>> {
+     Self::parse_array(data, b"\"current_sync_committee_branch\":[")
+ }
```

Also applies to: 116-139
```rust
pub fn parse(input: &[u8]) -> Option<Self> {
    if memchr::memmem::find(input, b"\"code\":").is_some() {
        let code = find_field(input, b"\"code\":", b",")?;
        let code_str = std::str::from_utf8(&input[code.0..code.1]).ok()?;
        return Some(Self {
            code: Some(code_str.parse().ok()?),
            ..Default::default()
        });
    }

    let version = find_field(input, b"\"version\":\"", b"\"")?;
    let slot = find_field(input, b"\"slot\":\"", b"\"")?;
    let proposer_index = find_field(input, b"\"proposer_index\":\"", b"\"")?;
    let parent_root = find_field(input, b"\"parent_root\":\"", b"\"")?;
    let state_root = find_field(input, b"\"state_root\":\"", b"\"")?;
    let body_root = find_field(input, b"\"body_root\":\"", b"\"")?;
    let sync_committee_bits = find_field(input, b"\"sync_committee_bits\":\"", b"\"")?;
    let sync_committee_signatures =
        find_field(input, b"\"sync_committee_signatures\":\"", b"\"")?;

    let sync_aggregate = SyncAggregate {
        sync_committee_bits: hex_to_u64(&input[sync_committee_bits.0..sync_committee_bits.1]),
        sync_committee_signature: hex_to_b256(
            &input[sync_committee_signatures.0..sync_committee_signatures.1],
        ),
    };

    let signature_slot = find_field(&input, b"\"signature_slot\":\"", b"\"")?;

    let beacon = Beacon {
        slot: std::str::from_utf8(&input[slot.0..slot.1])
            .ok()?
            .parse()
            .ok()?,
        proposer_index: std::str::from_utf8(&input[proposer_index.0..proposer_index.1])
            .ok()?
            .parse()
            .ok()?,
        parent_root: hex_to_b256(&input[parent_root.0..parent_root.1]),
        state_root: hex_to_b256(&input[state_root.0..state_root.1]),
        body_root: hex_to_b256(&input[body_root.0..body_root.1]),
    };

    Some(LightOptimisticUpdate {
        version: std::str::from_utf8(&input[version.0..version.1])
            .ok()?
            .to_string(),
        attested_header: beacon,
        sync_aggregate,
        signature_slot: hex_to_u64(&input[signature_slot.0..signature_slot.1]),
        code: None,
    })
}
```
Add bounds checking and improve error handling.

The parse method needs additional bounds checking and error handling:

- Add input length validation before accessing array indices
- Handle malformed JSON input more gracefully

```diff
  pub fn parse(input: &[u8]) -> Option<Self> {
+     if input.is_empty() {
+         return None;
+     }
+
      if memchr::memmem::find(input, b"\"code\":").is_some() {
          let code = find_field(input, b"\"code\":", b",")?;
+         if code.0 >= input.len() || code.1 > input.len() || code.0 >= code.1 {
+             return None;
+         }
          let code_str = std::str::from_utf8(&input[code.0..code.1]).ok()?;
          return Some(Self {
              code: Some(code_str.parse().ok()?),
              ..Default::default()
          });
      }
```
Actionable comments posted: 3
♻️ Duplicate comments (2)

crates/parser/src/beacon/light_optimistic_update.rs (1)

100-123: Add bounds checking and improve error handling.

By design, the while loops here rely on the assumption that the input remains within valid bounds. If `pos` grows larger than the length of `data`, you risk an out-of-bounds read. Incorporate sanity checks on `pos` before indexing into `data`.

crates/shire/src/light_client.rs (1)

69-101: Handle empty responses array.

The method could panic when accessing `responses[0]` if no valid responses are received.

Apply this diff to handle empty responses:

```diff
  responses.sort_by(|a, b| a.0.cmp(&b.0));
+ if responses.is_empty() {
+     return Err(ConsensusError::NoValidResponses);
+ }
  Ok(responses[0].1.clone())
```
🧹 Nitpick comments (4)

crates/parser/src/beacon/light_optimistic_update.rs (1)

20-73: Consider more robust error handling in `parse`.

This function currently returns `Option<Self>`, which can obscure the reason for a parsing failure. Switching to a `Result<Self, ParseError>` design could help you surface and address specific parsing errors, including malformed or missing fields.

crates/parser/src/beacon/light_client_bootstrap.rs (1)

23-79: Consider returning a `Result` in `parse`.

Similar feedback applies here: replacing `Option<Self>` with `Result<Self, SomeError>` could make debugging parse errors easier.

crates/parser/src/beacon/light_client_update.rs (2)
28-157: Consider breaking down the parse method into smaller functions.

While the parsing logic is correct, the method is quite long and handles multiple responsibilities. Consider extracting the header parsing logic into separate methods to improve maintainability and readability.

Example refactor:

```diff
 impl<'a> Updates {
     pub fn parse(input: &'a [u8]) -> Option<Self> {
         if memchr::memmem::find(input, b"\"code\":").is_some() {
             let code = find_field(input, b"\"code\":", b",")?;
             let code_str = std::str::from_utf8(&input[code.0..code.1]).ok()?;
             return Some(Self {
                 code: Some(code_str.parse().ok()?),
                 ..Default::default()
             });
         }

         let version = find_field(input, b"\"version\":\"", b"\"")?;
         let signature_key = find_field(input, b"\"signature_slot\":\"", b"\"")?;
-        let finalized_header = find_field(input, b"\"finalized_header\":\"", b"}")?;
-        let attested_header = find_field(input, b"\"attested_header\":\"", b"}")?;
-        let sync_committee_bits = find_field(input, b"\"sync_committee_bits\":\"", b"\"")?;
-        let sync_committee_signatures =
-            find_field(input, b"\"sync_committee_signatures\":\"", b"\"")?;
+        let finalized_header = Self::parse_header(input, "finalized_header")?;
+        let attested_header = Self::parse_header(input, "attested_header")?;
+        let sync_aggregate = Self::parse_sync_aggregate(input)?;

         let finality_branch: Vec<B256> = Self::finality_branch(input)?
             .iter()
             .map(|&(start, end)| hex_to_b256(&input[start..end]))
             .collect();
         let next_sync_committee_branch: Vec<B256> = Self::next_sync_committee_branch(input)?
             .iter()
             .map(|&(start, end)| hex_to_b256(&input[start..end]))
             .collect();
         let pubkeys: Vec<B256> = Self::pubkeys(input)?
             .iter()
             .map(|&(start, end)| hex_to_b256(&input[start..end]))
             .collect();

-        let slot_f = find_field(
-            &input[finalized_header.0..finalized_header.1],
-            b"\"slot\":\"",
-            b"\"",
-        )?;
-        // ... rest of finalized header parsing
-        let beacon_f = Beacon {
-            slot: hex_to_u64(&input[slot_f.0..slot_f.1]),
-            // ... rest of beacon construction
-        };
-
-        let slot_a = find_field(
-            &input[attested_header.0..attested_header.1],
-            b"\"slot\":\"",
-            b"\"",
-        )?;
-        // ... rest of attested header parsing
-        let beacon_a = Beacon {
-            slot: hex_to_u64(&input[slot_a.0..slot_a.1]),
-            // ... rest of beacon construction
-        };

         let aggregate_pub_key = find_field(input, b"\"aggregate_pubkey\":\"", b"\"")?;
         let next_sync_committee = SyncCommittee {
             pub_keys: pubkeys,
             aggregate_pubkey: hex_to_b256(&input[aggregate_pub_key.0..aggregate_pub_key.1]),
         };

         Some(Updates {
             version: std::str::from_utf8(&input[version.0..version.1])
                 .ok()?
                 .to_string(),
-            attested_header: beacon_a,
-            finalized_header: beacon_f,
+            attested_header: attested_header,
+            finalized_header: finalized_header,
             finality_branch,
-            sync_aggregate,
+            sync_aggregate: sync_aggregate,
             signature_slot: hex_to_u64(&input[signature_key.0..signature_key.1]),
             next_sync_committee,
             next_sync_committee_branch,
             code: None,
         })
     }

+    fn parse_header(input: &[u8], header_type: &str) -> Option<Beacon> {
+        let header = find_field(input, format!("\"{}\":\"", header_type).as_bytes(), b"}")?;
+        let slot = find_field(&input[header.0..header.1], b"\"slot\":\"", b"\"")?;
+        let proposer_index =
+            find_field(&input[header.0..header.1], b"\"proposer_index\":\"", b"\"")?;
+        let parent_root = find_field(&input[header.0..header.1], b"\"parent_root\":\"", b"\"")?;
+        let state_root = find_field(&input[header.0..header.1], b"\"state_root\":\"", b"\"")?;
+        let body_root = find_field(&input[header.0..header.1], b"\"body_root\":\"", b"\"")?;
+
+        Some(Beacon {
+            slot: hex_to_u64(&input[slot.0..slot.1]),
+            proposer_index: hex_to_u64(&input[proposer_index.0..proposer_index.1]),
+            parent_root: hex_to_b256(&input[parent_root.0..parent_root.1]),
+            state_root: hex_to_b256(&input[state_root.0..state_root.1]),
+            body_root: hex_to_b256(&input[body_root.0..body_root.1]),
+        })
+    }
+
+    fn parse_sync_aggregate(input: &[u8]) -> Option<SyncAggregate> {
+        let sync_committee_bits = find_field(input, b"\"sync_committee_bits\":\"", b"\"")?;
+        let sync_committee_signatures =
+            find_field(input, b"\"sync_committee_signatures\":\"", b"\"")?;
+
+        Some(SyncAggregate {
+            sync_committee_bits: hex_to_u64(&input[sync_committee_bits.0..sync_committee_bits.1]),
+            sync_committee_signature: hex_to_b256(
+                &input[sync_committee_signatures.0..sync_committee_signatures.1],
+            ),
+        })
+    }
 }
```
159-182: Extract common parsing logic into a helper function.

The `finality_branch`, `pubkeys`, and `next_sync_committee_branch` methods contain duplicated code. Consider extracting the common logic into a helper function to improve maintainability and reduce code duplication.

Example refactor (each removed method body is the same scanning loop shown in the new helper, parameterized only by the array label):

```diff
 impl<'a> Updates {
+    fn parse_array(data: &[u8], field_name: &str) -> Option<Vec<(usize, usize)>> {
+        let start = memchr::memmem::find(data, format!("\"{}\":[", field_name).as_bytes())?;
+        let mut pos = start + field_name.len() + 4; // 4 for the surrounding `"`, `":[`
+        let mut result = Vec::new();
+
+        while data[pos] != b']' {
+            while data[pos] != b'"' && data[pos] != b']' {
+                pos += 1;
+            }
+            if data[pos] == b']' {
+                break;
+            }
+            pos += 1;
+            let start = pos;
+
+            while data[pos] != b'"' {
+                pos += 1;
+            }
+            result.push((start, pos));
+            pos += 1;
+        }
+
+        Some(result)
+    }
+
     pub fn finality_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
-        // ...duplicated scanning loop over b"\"finality_branch\":["...
+        Self::parse_array(data, "finality_branch")
     }

     pub fn pubkeys(data: &[u8]) -> Option<Vec<(usize, usize)>> {
-        // ...duplicated scanning loop over b"\"pubkeys\":["...
+        Self::parse_array(data, "pubkeys")
     }

     pub fn next_sync_committee_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
-        // ...duplicated scanning loop over b"\"next_sync_committee_branch\":["...
+        Self::parse_array(data, "next_sync_committee_branch")
     }
 }
```

Also applies to: 184-207, 209-232
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)

- crates/parser/src/beacon/beacon_finality_checkpoints.rs (1 hunks)
- crates/parser/src/beacon/beacon_genesis_parser.rs (0 hunks)
- crates/parser/src/beacon/light_client_bootstrap.rs (3 hunks)
- crates/parser/src/beacon/light_client_update.rs (1 hunks)
- crates/parser/src/beacon/light_finality_update.rs (1 hunks)
- crates/parser/src/beacon/light_optimistic_update.rs (1 hunks)
- crates/parser/src/beacon/mod.rs (1 hunks)
- crates/parser/src/lib.rs (1 hunks)
- crates/parser/src/types.rs (2 hunks)
- crates/shire/src/libp2p.rs (5 hunks)
- crates/shire/src/light_client.rs (1 hunks)
💤 Files with no reviewable changes (1)
- crates/parser/src/beacon/beacon_genesis_parser.rs
✅ Files skipped from review due to trivial changes (2)
- crates/parser/src/lib.rs
- crates/shire/src/libp2p.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- crates/parser/src/beacon/mod.rs
- crates/parser/src/beacon/light_finality_update.rs
🔇 Additional comments (16)
crates/parser/src/beacon/light_optimistic_update.rs (2)

1-19: Looks good at a glance.

The struct definitions and basic setup appear consistent with the rest of the codebase. No immediate correctness concerns.

75-98: Refactor duplicate parsing logic.

This logic to parse "pubkeys" is highly similar to parse_committee_branch_array below. Consider merging them into a more generic helper function that takes the array label as a parameter. Here is a snippet from your past review comment, repeated for clarity: the parse_pub_keys_array and parse_committee_branch_array methods contain identical logic with different array identifiers; consider extracting the common logic into a helper method.

crates/parser/src/beacon/beacon_finality_checkpoints.rs (1)
35-41: Logic for code retrieval looks correct.

The approach consistently sets code if the "data": field is missing, matching your fallback logic. This matches the rest of the code's behavior and appears valid.

crates/parser/src/beacon/light_client_bootstrap.rs (4)
3-6: New imports and references approved.

Inclusion of the SyncCommittee type and relevant helper imports looks consistent with the revised design.

8-9: Struct definitions for LightClientBootstrap are clear.

The move to a String for version and deriving Clone for easy duplication aligns well with typical Rust patterns.

55-61: Good approach for building the SyncCommittee.

The code properly extracts public keys and the aggregate public key, employing the same parse logic found in other modules, ensuring consistency.

81-104: Refactor repeated logic and add bounds checks.

Both parse_pub_keys_array and parse_committee_branch_array replicate near-identical loops. A single helper function could handle parsing of generic string arrays, reducing duplication. Also ensure pos never goes out of bounds when incrementing.

Also applies to: 106-129
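Both the deduplication and the bounds-check suggestions point at one fix; the following is a dependency-free sketch of what such a shared helper could look like. The name `parse_string_array` is illustrative, `slice::windows` stands in for the crate's `memchr::memmem::find`, and every index goes through `get`, so malformed or truncated input yields `None` instead of a panic.

```rust
/// Illustrative shared helper for the near-identical array scanners:
/// returns the byte range of every quoted element in `"<field>":[...]`.
fn parse_string_array(data: &[u8], field: &str) -> Option<Vec<(usize, usize)>> {
    let needle = format!("\"{}\":[", field);
    // `slice::windows` stands in for the `memchr::memmem::find` used in the PR.
    let start = data
        .windows(needle.len())
        .position(|w| w == needle.as_bytes())?;
    let mut pos = start + needle.len();
    let mut result = Vec::new();

    // Every access uses `get`, so running off the end returns `None`
    // instead of panicking on truncated input.
    while *data.get(pos)? != b']' {
        while *data.get(pos)? != b'"' && *data.get(pos)? != b']' {
            pos += 1;
        }
        if *data.get(pos)? == b']' {
            break;
        }
        pos += 1; // step past the opening quote
        let elem_start = pos;
        while *data.get(pos)? != b'"' {
            pos += 1;
        }
        result.push((elem_start, pos));
        pos += 1; // step past the closing quote
    }

    Some(result)
}
```

An empty array such as `"pubkeys":[]` returns `Some(vec![])`, while a buffer that ends mid-element returns `None`.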
crates/parser/src/beacon/light_client_update.rs (2)

8-19: LGTM!

The struct fields are well-defined with appropriate types, and the optional code field allows for proper error handling.

21-25: LGTM!

The struct is well-focused, with appropriate field types for sync committee data.

crates/parser/src/types.rs (2)

270-277: LGTM!

The addition of the Clone trait is appropriate, as the struct is used in multiple contexts where cloning is necessary.

279-283: LGTM!

The addition of the Clone trait is appropriate, as the struct is used in multiple contexts where cloning is necessary.

crates/shire/src/light_client.rs (5)
24-31: LGTM!

The struct is well-designed, with appropriate fields and traits for managing light client state.

35-39: LGTM!

The struct is well-documented and has appropriate fields for managing light client synchronization.

44-47: LGTM!

The struct is well-documented and has appropriate fields for handling beacon chain endpoint interactions.

156-191: Same issues as other endpoint querying methods.

This method has the same issues with empty responses and duplicated endpoint-querying logic.

199-231: Same issues as other endpoint querying methods.

This method has the same issues with empty responses and duplicated endpoint-querying logic.
pub fn new(endpoints: Vec<String>) -> Self {
    let client = Client::builder()
        .timeout(Duration::from_secs(5))
        .build()
        .unwrap();

    Self {
        client: LightClient { endpoints, client },
        slot_sync: SlotSynchronizer::default(),
        store: None,
    }
}
Add proper error handling for client builder.
The method currently unwraps the client builder result, which could panic if the client creation fails. Consider propagating the error instead.
Example fix:
- pub fn new(endpoints: Vec<String>) -> Self {
+ pub fn new(endpoints: Vec<String>) -> Result<Self, reqwest::Error> {
let client = Client::builder()
.timeout(Duration::from_secs(5))
- .build()
- .unwrap();
+ .build()?;
- Self {
+ Ok(Self {
client: LightClient { endpoints, client },
slot_sync: SlotSynchronizer::default(),
store: None,
- }
+ })
}
pub async fn sync(&mut self, trusted_block_root: B256) -> Result<(), ConsensusError> {
    // Initialize store if needed
    if self.store.is_none() {
        self.initialize_store(trusted_block_root).await?;
    }
    let mut store = self.store.clone().ok_or(ConsensusError::Parse)?;

    loop {
        // apply_light_client_update //
        let current_slot = self
            .slot_sync
            .current_slot()
            .map_err(|_| ConsensusError::Parse)?;
        let finalized_period = self.get_sync_committee_period(store.finalized_header.slot.to());
        let optimistic_period =
            self.get_sync_committee_period(store.optimistic_header.slot.to());
        let current_period = self.get_sync_committee_period(current_slot);

        if finalized_period == optimistic_period && !self.is_next_sync_committee_known() {
            let updates = self.get_latest_update(finalized_period, 1).await?;
            store.next_sync_committee = Some(updates.next_sync_committee);
        }

        if finalized_period + 1 < current_period {
            for period in (finalized_period + 1)..current_period {
                let updates = self.get_latest_update(period, 1).await?;

                // Apply the update to move forward
                store.finalized_header = updates.finalized_header;
                store.optimistic_header = updates.attested_header;
                store.current_sync_committee = updates.next_sync_committee;
                store.next_sync_committee = None;
            }
        }

        // Case 3: Stay up to date with latest updates
        if finalized_period + 1 >= current_period {
            // Process finality update
            if let Ok(finality_update) = self.get_latest_finality_update().await {
                store.finalized_header = finality_update.finalized_header;
                store.optimistic_header = finality_update.attested_header;
            }

            // Process optimistic update
            if let Ok(optimistic_update) = self.get_optimistic_update().await {
                store.optimistic_header = optimistic_update.attested_header;
            }
        }

        let wait_time = self
            .slot_sync
            .time_until_next_slot()
            .map_err(|_| ConsensusError::Parse)?;
        tokio::time::sleep(wait_time).await;
    }
}
🛠️ Refactor suggestion
Add a way to break out of the infinite sync loop.
The sync loop runs indefinitely without any way to gracefully stop it. Consider adding a shutdown mechanism.
Example fix:
- pub async fn sync(&mut self, trusted_block_root: B256) -> Result<(), ConsensusError> {
+ pub async fn sync(
+ &mut self,
+ trusted_block_root: B256,
+ mut shutdown: tokio::sync::watch::Receiver<bool>,
+ ) -> Result<(), ConsensusError> {
if self.store.is_none() {
self.initialize_store(trusted_block_root).await?;
}
let mut store = self.store.clone().ok_or(ConsensusError::Parse)?;
loop {
+ if *shutdown.borrow() {
+ break;
+ }
+
let current_slot = self
.slot_sync
.current_slot()
.map_err(|_| ConsensusError::Parse)?;
// ... rest of the sync logic ...
let wait_time = self
.slot_sync
.time_until_next_slot()
.map_err(|_| ConsensusError::Parse)?;
- tokio::time::sleep(wait_time).await;
+ tokio::select! {
+ _ = tokio::time::sleep(wait_time) => {},
+ _ = shutdown.changed() => {
+ if *shutdown.borrow() {
+ break;
+ }
+ }
+ }
}
+ Ok(())
}
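For intuition, the same graceful-shutdown idea can be reduced to std primitives: an `Arc<AtomicBool>` polled once per iteration in place of the tokio `watch` channel shown in the diff above. The names and the 1 ms stand-in for the slot wait are illustrative, not code from this PR.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Poll a shared shutdown flag once per iteration; return how many
/// iterations ran before the flag flipped.
fn run_until_shutdown(shutdown: Arc<AtomicBool>) -> u32 {
    let mut iterations = 0;
    while !shutdown.load(Ordering::Relaxed) {
        // Stand-in for one sync step followed by the slot wait.
        iterations += 1;
        thread::sleep(Duration::from_millis(1));
    }
    iterations
}
```

The `tokio::select!` version in the diff improves on this by also interrupting the sleep itself, rather than waiting out the current slot before noticing the flag.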
pub async fn get_latest_update(
    &self,
    period: u64,
    count: u64,
) -> Result<Updates, ConsensusError> {
    let mut responses = Vec::new();

    // Query all endpoints concurrently
    let results = futures::future::join_all(self.client.endpoints.iter().map(|endpoint| {
        self.client
            .client
            .get(format!(
                "{}/eth/v1/beacon/light_client/updates?period={}&count={}",
                endpoint, period, count
            ))
            .send()
    }))
    .await;

    // Collect responses with signatures
    for response in results {
        if let Ok(resp) = response {
            let input = resp
                .bytes()
                .await
                .map(|b| b.to_vec())
                .map_err(|_| ConsensusError::Parse)?;

            let update = Updates::parse(&input).ok_or(ConsensusError::Parse)?;

            responses.push((update.sync_aggregate.sync_committee_signature, update));
        }
    }

    responses.sort_by(|a, b| a.0.cmp(&b.0));
    Ok(responses[0].1.clone())
}
Handle empty responses array and extract common endpoint querying logic.
The method has two issues:

- It could panic when accessing responses[0] if no valid responses are received.
- The endpoint querying logic is duplicated across multiple methods.

Example fix:
+impl LightClient {
+ async fn query_endpoints<T, F>(&self, path: &str, parse_fn: F) -> Result<T, ConsensusError>
+ where
+ F: Fn(&[u8]) -> Option<T>,
+ {
+ let mut responses = Vec::new();
+
+ let results = join_all(self.endpoints.iter().map(|endpoint| {
+ self.client.get(format!("{}{}", endpoint, path)).send()
+ }))
+ .await;
+
+ for response in results {
+ if let Ok(resp) = response {
+ let input = resp
+ .bytes()
+ .await
+ .map(|b| b.to_vec())
+ .map_err(|_| ConsensusError::Parse)?;
+
+ if let Some(update) = parse_fn(&input) {
+ responses.push(update);
+ }
+ }
+ }
+
+ if responses.is_empty() {
+ return Err(ConsensusError::NoValidResponses);
+ }
+
+ Ok(responses[0].clone())
+ }
+}
impl LightClientSyncer {
pub async fn get_latest_update(
&self,
period: u64,
count: u64,
) -> Result<Updates, ConsensusError> {
- let mut responses = Vec::new();
-
- let results = join_all(self.client.endpoints.iter().map(|endpoint| {
- self.client
- .client
- .get(format!(
- "{}/eth/v1/beacon/light_client/updates?period={}&count={}",
- endpoint, period, count
- ))
- .send()
- }))
- .await;
-
- for response in results {
- if let Ok(resp) = response {
- let input = resp
- .bytes()
- .await
- .map(|b| b.to_vec())
- .map_err(|_| ConsensusError::Parse)?;
-
- let update = Updates::parse(&input).ok_or(ConsensusError::Parse)?;
-
- responses.push((update.sync_aggregate.sync_committee_signature, update));
- }
- }
-
- responses.sort_by(|a, b| a.0.cmp(&b.0));
- Ok(responses[0].1.clone())
+ self.client
+ .query_endpoints(
+ &format!("/eth/v1/beacon/light_client/updates?period={}&count={}", period, count),
+ Updates::parse,
+ )
+ .await
}
}
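The empty-response guard at the core of the refactor can also be seen in isolation: given whatever each endpoint produced, keep the successful parses and error out (rather than panic) when there are none. `QueryError::NoValidResponses` below is a stand-in for the proposed `ConsensusError::NoValidResponses` variant; the helper name is illustrative.

```rust
/// Stand-in for the proposed `ConsensusError::NoValidResponses` variant.
#[derive(Debug, PartialEq)]
enum QueryError {
    NoValidResponses,
}

/// Keep the successful parses and hand back the first one; error out
/// instead of indexing `responses[0]` on an empty vector.
fn first_valid<T>(parsed: Vec<Option<T>>) -> Result<T, QueryError> {
    parsed
        .into_iter()
        .flatten()
        .next()
        .ok_or(QueryError::NoValidResponses)
}
```

In the real method the `Vec<Option<T>>` would come from parsing each endpoint's bytes, and the caller would propagate the error with `?`.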
Actionable comments posted: 7
🔭 Outside diff range comments (1)
crates/parser/src/beacon/light_client_bootstrap.rs (1)
107-130
:⚠️ Potential issueAdd safety checks to prevent out-of-bounds indexing when parsing committee branch arrays.
As in other parsing methods, advancing
pos
can result in an out-of-range index for malformed or truncated JSON inputs. Transform panics into graceful error returns.while data[pos] != b']' { + if pos >= data.len() { + return None; + } while data[pos] != b'"' && data[pos] != b']' { pos += 1; + if pos >= data.len() { + return None; + } } ... }
♻️ Duplicate comments (4)
crates/parser/src/beacon/light_optimistic_update.rs (1)
76-99
: 🛠️ Refactor suggestionExtract common logic for parsing arrays.
These lines duplicate the logic found in
parse_committee_branch_array
. Refactor the shared loop into a helper method to avoid code duplication and improve readability.crates/shire/src/light_client.rs (3)
54-65
:⚠️ Potential issueAdd proper error handling for client builder.
The constructor still unwraps the client builder result, which could panic.
73-151
: 🛠️ Refactor suggestionExtract common endpoint querying logic.
The endpoint querying logic is duplicated across multiple methods.
269-324
:⚠️ Potential issueAdd a way to break out of the infinite sync loop.
The sync loop runs indefinitely without any way to gracefully stop it.
🧹 Nitpick comments (3)
crates/parser/src/beacon/light_client_bootstrap.rs (1)
82-105
: Refactor repeated array parsing logic.The code in
parse_pub_keys_array
resembles logic found in otherparse_*_array
methods throughout the codebase. Factor out a shared function to streamline maintenance and reduce duplication.crates/shire/src/light_client.rs (2)
24-51
: Enhance struct documentation.While the structs are well-defined, consider adding more detailed documentation:
- Document field purposes
- Add examples
- Document thread safety guarantees
Example documentation improvement:
 /// Stores the current state of a light client, including finalized and optimistic headers,
 /// and sync committee information
+///
+/// # Fields
+/// * `finalized_header` - The most recent finalized beacon block header
+/// * `optimistic_header` - The most recent optimistically accepted beacon block header
+/// * `attested_header` - The most recent attested beacon block header
+/// * `current_sync_committee` - The current sync committee
+/// * `next_sync_committee` - The next sync committee (if known)
+///
+/// # Thread Safety
+/// This struct implements `Clone` for shared ownership across threads.
 #[derive(Debug, Default, Clone)]
 pub struct LightClientStore {
251-266
: Optimize store initialization.The method unnecessarily clones the same header multiple times. Consider using a single clone or reference.
Apply this diff to optimize:
- finalized_header: bootstrap.header.beacon.clone(),
- optimistic_header: bootstrap.header.beacon.clone(),
- attested_header: bootstrap.header.beacon,
+ let header = bootstrap.header.beacon;
+ finalized_header: header.clone(),
+ optimistic_header: header.clone(),
+ attested_header: header,
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)

- crates/parser/src/beacon/light_client_bootstrap.rs (3 hunks)
- crates/parser/src/beacon/light_client_update.rs (1 hunks)
- crates/parser/src/beacon/light_finality_update.rs (1 hunks)
- crates/parser/src/beacon/light_optimistic_update.rs (1 hunks)
- crates/shire/src/checkpoint.rs (1 hunks)
- crates/shire/src/lib.rs (1 hunks)
- crates/shire/src/libp2p.rs (1 hunks)
- crates/shire/src/light_client.rs (1 hunks)
- rustc-ice-2025-02-11T12_59_39-34292.txt (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- crates/shire/src/checkpoint.rs
- crates/shire/src/libp2p.rs
🔇 Additional comments (10)
crates/shire/src/lib.rs (1)
3-6
: Verify architectural changes after RPC removal.The removal of
consensus_rpc
module and addition oflight_client
andcheckpoint
modules represents a significant architectural change. Please ensure:
- All RPC functionality previously handled by
consensus_rpc
is properly migrated or intentionally removed- The relationship between the new modules is well-documented
Let's verify the new module structure and dependencies:
#!/bin/bash
echo "Checking module structure..."
echo "1. Searching for imports between new modules:"
rg -A 5 "use (super::)?(checkpoint|light_client)" crates/shire/src/
echo "2. Checking for any remaining RPC references:"
rg -i "rpc" crates/shire/src/
echo "3. Checking module documentation:"
fd -e rs . crates/shire/src/ -x head -n 20 {}

crates/parser/src/beacon/light_optimistic_update.rs (1)
22-29
: Confirm early return logic when parsingcode
.By returning immediately after detecting
"code"
, the rest of the fields (e.g.,version
,attested_header
) will not be parsed. Verify that this is intentional and that partial parsing is acceptable under these conditions.crates/parser/src/beacon/light_finality_update.rs (2)
22-29
: Confirm partial parsing oncode
detection.As with
LightOptimisticUpdate
, returning immediately after parsingcode
may result in ignoring other fields likeversion
,attested_header
, etc. Double-check whether partial parsing is permissible and intentional here.
73-92
: Validate brace matching inparse_header
for malformed input.The method locates braces but may fail if input is malformed (e.g., too few braces, missing closing brace). Add defensive checks to ensure
depth
does not become negative or thatend
is not out of range.crates/parser/src/beacon/light_client_bootstrap.rs (3)
24-31
: Double-check early return upon parsingcode
.If
"code"
is detected, the function returns immediately, bypassing the rest of the parsing logic. Confirm that discarding other fields is an intended behavior.
72-74
: Good approach using.ok()?
and converting toString
.This pattern gracefully handles UTF-8 conversion errors by returning
None
rather than panicking. It improves robustness.
56-62
: Ensure validity ofparse_pub_keys_array
output before constructingSyncCommittee
.If
parse_pub_keys_array
returnsNone
, the function returns early. Confirm that partial presence of data does not cause any inconsistent internal state or panic.crates/parser/src/beacon/light_client_update.rs (2)
8-25
: LGTM! Well-structured data types.The structs are well-defined with appropriate fields and derive macros. The
Updates
struct effectively captures all necessary fields for light client updates, whileSyncAggregate
provides a clean representation of sync committee data.
27-94
:⚠️ Potential issueImprove error handling in parse methods.
The parse methods use unwrap which could cause panics. Consider using proper error handling:
- Replace unwrap with proper error handling
- Add input validation
- Propagate parsing errors
Apply this diff to improve error handling:
- slot: std::str::from_utf8(slot).unwrap().parse().unwrap(), - proposer_index: std::str::from_utf8(proposer_index) - .unwrap() - .parse() - .unwrap(), + slot: std::str::from_utf8(slot) + .ok() + .and_then(|s| s.parse().ok()) + .ok_or(ConsensusError::Parse)?, + proposer_index: std::str::from_utf8(proposer_index) + .ok() + .and_then(|s| s.parse().ok()) + .ok_or(ConsensusError::Parse)?,Likely an incorrect or invalid review comment.
rustc-ice-2025-02-11T12_59_39-34292.txt (1)
1-45
: Report internal compiler error (ICE) to Rust team.This is an internal compiler error (ICE) in the nightly Rust compiler. This indicates a bug in the compiler itself, not in your code.
Please:
- Check if this is a known issue by searching the Rust issue tracker
- If not found, report it to the Rust team
✅ Verification successful
🌐 Web query:
Search for existing ICE issues in Rust repository with error message "Box<dyn Any> panic in cstore_impl.rs"
💡 Result:
The search results reveal two relevant Internal Compiler Error (ICE) issues involving
Box<dyn Any>
panics in Rust'scstore_impl.rs
:1. Issue #113266
Error Context:
- Panic occurs during crate metadata decoding with message:
"thread 'rustc' panicked at 'Box', compiler/rustc_metadata/src/rmeta/decoder/cstore_impl.rs"- Triggered by incremental compilation scenarios involving complex type interactions
- Labeled with
I-ICE
(Internal Compiler Error) and under active investigation[2][9]2. Issue #112700
Error Context:
- Involves a panic at line 595 of
cstore_impl.rs
during span deserialization- Related to incorrect handling of
StableCrateId
identifiers- Demonstrates how internal compiler state inconsistencies can lead to ICEs[10]
Key Observations:
Both issues involve low-level metadata handling in the compiler
Workaround for #113266:
cargo clean && cargo build
Clears incremental compilation artifacts that might trigger the ICE[2][9]
Common patterns:
- Heavy use of generics/trait objects (
Box<dyn Any>
)[4][6][8]- Complex type hierarchies with associated types[7][8]
The Rust compiler team actively tracks these issues under the
I-ICE
label[1][5]. Users encountering similar errors are encouraged to:
- Include a minimal reproducible example
- Provide full backtraces using
RUST_BACKTRACE=1
[3][7]- Check if cleaning build artifacts resolves intermittent issues[2][9]
Citations:
- 1: https://github.com/rust-lang/glacier
- 2: uninterned StableCrateId: StableCrateId(10368175013544693233) thread 'rustc' panicked at 'Box<dyn Any>' rust-lang/rust#113266
- 3: https://users.rust-lang.org/t/trapped-under-ice/2394
- 4: https://users.rust-lang.org/t/function-which-takes-an-iterator-item-box-dyn-any/74010
- 5: https://users.rust-lang.org/t/glacier-a-big-ol-pile-of-ice/3380
- 6: https://users.rust-lang.org/t/how-to-impl-trait-box-dyn-any-for-a-generic-struct/81558
- 7: https://users.rust-lang.org/t/needs-help-in-debugging-a-rustc-ice/102418
- 8: https://quinedot.github.io/rust-learning/dyn-any.html
- 9: ICE report as requested; no ICE on stable rust-lang/rust#130272
- 10: ICE: compiler/rustc_metadata/src/rmeta/decoder/cstore_impl.rs:595:32: uninterned StableCrateId rust-lang/rust#112700
ICE already reported; refer to existing Rust issues
Based on our web search, the "Box<dyn Any>" ICE in
cstore_impl.rs
is a known compiler bug. Specifically:
- Issue #113266: Reports a similar panic during crate metadata decoding. A workaround mentioned is running
cargo clean && cargo build
to clear problematic incremental artifacts.- Issue #112700: Also describes a panic related to metadata handling, confirming that ICEs in this area have been observed.
Since these issues are already tracked by the Rust team, you do not need to file a new report. You may refer to these issues for updates and potential workarounds.
@@ -1,8 +1,9 @@
 #![feature(trivial_bounds)]
 
+pub mod checkpoint;
💡 Verification agent
🧩 Analysis chain
Empty module requires implementation.
The checkpoint module appears to be empty. Since this is a partial implementation of the light client, please:
- Document the intended purpose of this module
- Add TODO comments for pending implementations
- Consider adding module-level documentation
Let's verify the module state:
🏁 Script executed:
#!/bin/bash
# Check if checkpoint.rs exists and is empty
if [ -f "crates/shire/src/checkpoint.rs" ]; then
echo "checkpoint.rs exists. Content:"
cat "crates/shire/src/checkpoint.rs"
else
echo "checkpoint.rs not found. Checking for checkpoint/mod.rs:"
if [ -f "crates/shire/src/checkpoint/mod.rs" ]; then
cat "crates/shire/src/checkpoint/mod.rs"
fi
fi
Length of output: 152
Checkpoint Module is Empty – Implementation and Documentation Required
The file at crates/shire/src/checkpoint.rs is confirmed to be empty. To address the review comment, please add:
- Module-level documentation outlining the intended purpose of the checkpoint module.
- TODO comments indicating pending implementations for the light client functionality.
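As one possible starting point for the empty module, here is a minimal sketch. Every name in it is hypothetical — nothing below exists in the codebase yet — but it illustrates the kind of documented, TODO-annotated skeleton the review asks for: a trusted checkpoint the light client could bootstrap from.

```rust
/// Hypothetical shape for the `checkpoint` module; names are illustrative
/// and not part of the existing codebase.
#[derive(Debug, Clone, PartialEq)]
pub struct Checkpoint {
    /// Epoch the checkpoint finalizes.
    pub epoch: u64,
    /// Root of the finalized block.
    pub root: [u8; 32],
}

impl Checkpoint {
    /// First slot of the checkpoint's epoch (assuming the mainnet
    /// constant of 32 slots per epoch).
    pub fn start_slot(&self) -> u64 {
        self.epoch * 32
    }
}

// TODO: fetch checkpoints from multiple providers and cross-check roots.
```

A sync entry point could then accept a `Checkpoint` instead of a bare block root, making the trust assumption explicit at the API boundary.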
            .to_string(),
        attested_header: beacon,
        sync_aggregate,
        signature_slot: std::str::from_utf8(&input[signature_slot.0..signature_slot.1]).unwrap().parse().unwrap(),
Replace double unwrap() calls with safer error handling.
Using unwrap() twice in a row can cause a panic on malformed or invalid input. Prefer returning an error (e.g., None) or using a safe pattern.
-    signature_slot: std::str::from_utf8(&input[signature_slot.0..signature_slot.1]).unwrap().parse().unwrap(),
+    signature_slot: std::str::from_utf8(&input[signature_slot.0..signature_slot.1])
+        .ok()?
+        .parse()
+        .ok()?,
Committable suggestion skipped: line range outside the PR's diff.
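A minimal, self-contained sketch of the panic-free pattern this review asks for, assuming the caller returns Option; the function name, input, and byte offsets below are hypothetical, not from the PR.

```rust
// Hypothetical helper: parse a numeric field out of a byte buffer without
// panicking. Every failure path (bad range, bad UTF-8, bad number) maps to
// None via `?` / `.ok()` instead of `.unwrap()`.
fn parse_slot(input: &[u8], range: (usize, usize)) -> Option<u64> {
    // `get` bounds-checks the slice, so a bogus range cannot panic.
    let bytes = input.get(range.0..range.1)?;
    let text = std::str::from_utf8(bytes).ok()?;
    text.parse().ok()
}

fn main() {
    let input = br#"{"signature_slot":"123456"}"#;
    // Byte offsets of the digits inside the buffer (hand-computed for the demo).
    assert_eq!(parse_slot(input, (19, 25)), Some(123456));
    // Malformed input yields None instead of panicking.
    assert_eq!(parse_slot(b"not json", (0, 99)), None);
}
```

The caller then propagates the None with `?`, which is exactly the behavior the suggested diff relies on.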
pub fn parse_committee_branch_array(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, b"\"current_sync_committee_branch\":[")?;
    let mut pos = start + b"\"current_sync_committee_branch\":[".len();
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}
Ensure bounds checking and handle malformed input in array parsing.
When advancing pos, the code does not check whether pos might exceed data.len(). This can cause out-of-bounds panics on malformed input. Add checks or graceful error handling.
while data[pos] != b']' {
+ if pos >= data.len() {
+ return None;
+ }
while data[pos] != b'"' && data[pos] != b']' {
pos += 1;
+ if pos >= data.len() {
+ return None;
+ }
}
...
}
Committable suggestion skipped: line range outside the PR's diff.
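A dependency-free sketch of the bounds-check idiom, assuming nothing beyond the standard library: `slice::get` turns an out-of-range index into None, so `?` aborts the scan cleanly on truncated input. The function name is illustrative.

```rust
// Advance `pos` until a double quote is found, returning None if the
// buffer ends first. `data.get(pos)?` replaces the unchecked `data[pos]`
// from the review, so malformed input cannot cause an index panic.
fn skip_to_quote(data: &[u8], mut pos: usize) -> Option<usize> {
    while *data.get(pos)? != b'"' {
        pos += 1;
    }
    Some(pos)
}

fn main() {
    assert_eq!(skip_to_quote(b"abc\"def", 0), Some(3));
    // No quote before the buffer ends: returns None rather than panicking.
    assert_eq!(skip_to_quote(b"abcdef", 0), None);
}
```

Applying this pattern to every `data[pos]` access in the array parsers would remove the panic paths the review flags.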
            signature_slot: std::str::from_utf8(&input[signature_slot.0..signature_slot.1])
                .unwrap()
                .parse()
                .unwrap(),
Avoid using .unwrap() calls for parsing signature_slot.
Using .unwrap() can lead to runtime panics on unexpected input. Consider returning an error or using a safe pattern (e.g., ?) to handle invalid data.
fn finality_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, b"\"finality_branch\":[")?;
    let mut pos = start + b"\"finality_branch\":[".len();
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}
🛠️ Refactor suggestion
Unify branching array parsing or add error handling to prevent out-of-bounds indexing.
Similar logic is used for extracting items from arrays (sync_committee_branch, finality_branch, etc.). Consider moving this into a shared helper function and add bounds checks to avoid potential panics on malformed input.
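One way to combine both halves of this suggestion — a single helper parameterized over the JSON key, with bounds-checked indexing — might look like the sketch below. The `windows`-based needle search stands in for the memchr call so the example is dependency-free, and the byte offsets in the asserts are hand-computed for the demo input.

```rust
// Hedged sketch of a shared, bounds-checked array parser. Returns the byte
// spans of each quoted string in the JSON array keyed by `key`, or None on
// malformed or truncated input.
fn parse_string_array(data: &[u8], key: &str) -> Option<Vec<(usize, usize)>> {
    let needle = format!("\"{}\":[", key);
    let needle = needle.as_bytes();
    // Stand-in for memchr::memmem::find: locate the `"key":[` needle.
    let start = data.windows(needle.len()).position(|w| w == needle)?;
    let mut pos = start + needle.len();
    let mut result = Vec::new();
    loop {
        // `get` returns None once pos runs off the end, so a missing `]`
        // aborts the scan cleanly instead of panicking.
        match *data.get(pos)? {
            b']' => return Some(result),
            b'"' => {
                let s = pos + 1;
                pos = s;
                while *data.get(pos)? != b'"' {
                    pos += 1;
                }
                result.push((s, pos));
                pos += 1;
            }
            _ => pos += 1,
        }
    }
}

fn main() {
    let data = br#"{"finality_branch":["0x01","0x02"]}"#;
    let spans = parse_string_array(data, "finality_branch").unwrap();
    assert_eq!(&data[spans[0].0..spans[0].1], b"0x01".as_slice());
    assert_eq!(&data[spans[1].0..spans[1].1], b"0x02".as_slice());
    // Truncated array: no closing bracket, so the parse reports failure.
    assert_eq!(parse_string_array(br#"{"pubkeys":["0xaa"#, "pubkeys"), None);
}
```

Each per-key function then collapses to a one-line wrapper, and the bounds checks live in exactly one place.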
pub fn finality_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, b"\"finality_branch\":[")?;
    let mut pos = start + b"\"finality_branch\":[".len();
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}

pub fn pubkeys(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, b"\"pubkeys\":[")?;
    let mut pos = start + b"\"pubkeys\":[".len();
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}

pub fn next_sync_committee_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, b"\"next_sync_committee_branch\":[")?;
    let mut pos = start + b"\"next_sync_committee_branch\":[".len();
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}
🛠️ Refactor suggestion
Reduce code duplication in array parsing methods.
The three methods for parsing arrays (finality_branch, pubkeys, next_sync_committee_branch) share almost identical logic. Consider extracting the common functionality into a reusable method.
Apply this diff to reduce duplication:
+ fn parse_array(data: &[u8], key: &str) -> Option<Vec<(usize, usize)>> {
+ let start = memchr::memmem::find(data, format!("\"{}\":[", key).as_bytes())?;
+ let mut pos = start + key.len() + 4; // 4 for "\":["
+ let mut result = Vec::new();
+
+ while data[pos] != b']' {
+ while data[pos] != b'"' && data[pos] != b']' {
+ pos += 1;
+ }
+ if data[pos] == b']' {
+ break;
+ }
+ pos += 1;
+ let committee_start = pos;
+
+ while data[pos] != b'"' {
+ pos += 1;
+ }
+ result.push((committee_start, pos));
+ pos += 1;
+ }
+
+ Some(result)
+ }
+
+ pub fn finality_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
+ Self::parse_array(data, "finality_branch")
+ }
+
+ pub fn pubkeys(data: &[u8]) -> Option<Vec<(usize, usize)>> {
+ Self::parse_array(data, "pubkeys")
+ }
+
+ pub fn next_sync_committee_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
+ Self::parse_array(data, "next_sync_committee_branch")
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

pub fn finality_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, b"\"finality_branch\":[")?;
    let mut pos = start + b"\"finality_branch\":[".len();
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}

pub fn pubkeys(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, b"\"pubkeys\":[")?;
    let mut pos = start + b"\"pubkeys\":[".len();
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}

pub fn next_sync_committee_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, b"\"next_sync_committee_branch\":[")?;
    let mut pos = start + b"\"next_sync_committee_branch\":[".len();
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}

Suggested:

fn parse_array(data: &[u8], key: &str) -> Option<Vec<(usize, usize)>> {
    let start = memchr::memmem::find(data, format!("\"{}\":[", key).as_bytes())?;
    let mut pos = start + key.len() + 4; // 4 for the surrounding "\":[" characters
    let mut result = Vec::new();

    while data[pos] != b']' {
        while data[pos] != b'"' && data[pos] != b']' {
            pos += 1;
        }
        if data[pos] == b']' {
            break;
        }
        pos += 1;
        let committee_start = pos;

        while data[pos] != b'"' {
            pos += 1;
        }
        result.push((committee_start, pos));
        pos += 1;
    }

    Some(result)
}

pub fn finality_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    Self::parse_array(data, "finality_branch")
}

pub fn pubkeys(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    Self::parse_array(data, "pubkeys")
}

pub fn next_sync_committee_branch(data: &[u8]) -> Option<Vec<(usize, usize)>> {
    Self::parse_array(data, "next_sync_committee_branch")
}
#[tokio::test]
async fn test_sync() {
    let a = LightClientSyncer::new(vec!["https://eth-beacon-chain.drpc.org/rest/".to_string()]);
    // println!("{:?}", a.get_latest_finality_update().await);
    println!("{:?}", a.get_latest_update(1, 1).await);
    // println!("{:?}", a.get_optimistic_update().await);
}
Improve test coverage and assertions.
The test function has several issues:
- Contains commented-out code
- Tests only a single endpoint
- Lacks assertions
Consider implementing comprehensive tests:
- Test multiple endpoints
- Add error cases
- Add proper assertions
- Remove commented code
Example improvement:
- #[tokio::test]
- async fn test_sync() {
- let a = LightClientSyncer::new(vec!["https://eth-beacon-chain.drpc.org/rest/".to_string()]);
- // println!("{:?}", a.get_latest_finality_update().await);
- println!("{:?}", a.get_latest_update(1, 1).await);
- // println!("{:?}", a.get_optimistic_update().await);
- }
+ #[tokio::test]
+ async fn test_get_latest_update() {
+ let syncer = LightClientSyncer::new(vec![
+ "https://endpoint1.example.com".to_string(),
+ "https://endpoint2.example.com".to_string(),
+ ]);
+
+ let result = syncer.get_latest_update(1, 1).await;
+ assert!(result.is_ok(), "Should successfully get update");
+
+ let update = result.unwrap();
+ assert!(update.version.len() > 0, "Version should not be empty");
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

#[tokio::test]
async fn test_sync() {
    let a = LightClientSyncer::new(vec!["https://eth-beacon-chain.drpc.org/rest/".to_string()]);
    // println!("{:?}", a.get_latest_finality_update().await);
    println!("{:?}", a.get_latest_update(1, 1).await);
    // println!("{:?}", a.get_optimistic_update().await);
}

Suggested:

#[tokio::test]
async fn test_get_latest_update() {
    let syncer = LightClientSyncer::new(vec![
        "https://endpoint1.example.com".to_string(),
        "https://endpoint2.example.com".to_string(),
    ]);

    let result = syncer.get_latest_update(1, 1).await;
    assert!(result.is_ok(), "Should successfully get update");

    let update = result.unwrap();
    assert!(update.version.len() > 0, "Version should not be empty");
}
Light client partial impl
Summary by CodeRabbit
New Features
Refactor
Chores