
Doc: Ethereum Header MMR Spec #163

Closed
hackfisher opened this issue Jun 11, 2020 · 3 comments

hackfisher commented Jun 11, 2020

Same as #162.

Ethereum does not define a detailed spec for the MMR structure, so we need to define one for the ETH relay we will develop.

There are basically several parts that need to be defined:

  • Hashing Algorithm
    type Hashing: Hash<Output = Self::Hash>;

type Hashing = BlakeTwo256;
pub type Hash = sp_core::H256;
Note: for easier library reuse and less development effort, this mainly benefits the MMR-related functions for chains that do not have an MMR included in the header.

  • Leaf Append
    If the value (the Ethereum header hash) is already an H256, append it directly as a leaf; if not, hash it to an H256 using the hashing algorithm above (see the sketch below).
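
A minimal sketch of this rule (the to_leaf helper is hypothetical; it assumes BlakeTwo256 and H256 from sp-runtime/sp-core):

use sp_core::H256;
use sp_runtime::traits::{BlakeTwo256, Hash as HashT};

/// Hypothetical helper: take the value as the leaf directly if it is already
/// a 32-byte hash, otherwise hash it to H256 with the hashing algorithm above.
fn to_leaf(value: &[u8]) -> H256 {
    if value.len() == 32 {
        H256::from_slice(value)
    } else {
        BlakeTwo256::hash(value)
    }
}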

  • Branch Merge

Use the same MMR merge as the Darwinia Header MMR Spec:

pub struct MMRMerge<T>(PhantomData<T>);
impl<T: Trait> merkle_mountain_range::Merge for MMRMerge<T> {
	type Item = <T as frame_system::Trait>::Hash;
	fn merge(lhs: &Self::Item, rhs: &Self::Item) -> Self::Item {
		let encodable = (lhs, rhs);
		<T as frame_system::Trait>::Hashing::hash_of(&encodable)
	}
}

Note: (lhs, rhs) is encoded with the SCALE codec before hashing (see the sketch below).
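
As a sanity check (a sketch, not relay code; it assumes the sp-core and sp-runtime crates), the SCALE encoding of a pair of H256 values is plain concatenation, so hash_of over the tuple equals BlakeTwo256 over lhs || rhs:

use sp_core::H256;
use sp_runtime::traits::{BlakeTwo256, Hash as HashT};

fn main() {
    let lhs = H256::repeat_byte(1);
    let rhs = H256::repeat_byte(2);

    // hash_of SCALE-encodes its argument, then hashes the encoding.
    let via_hash_of = BlakeTwo256::hash_of(&(lhs, rhs));

    // H256 encodes as its raw 32 bytes, so the tuple encoding is lhs || rhs.
    let mut concat = lhs.as_bytes().to_vec();
    concat.extend_from_slice(rhs.as_bytes());
    let via_concat = BlakeTwo256::hash(&concat);

    assert_eq!(via_hash_of, via_concat);
}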

  • Root's child peaks bagging
    Use the same bagging algorithm as the Darwinia Header MMR Spec:
fn bag_rhs_peaks(&self, mut rhs_peaks: Vec<T>) -> Result<Option<T>> {
    while rhs_peaks.len() > 1 {
        let right_peak = rhs_peaks.pop().expect("pop");
        let left_peak = rhs_peaks.pop().expect("pop");
        rhs_peaks.push(M::merge(&right_peak, &left_peak));
    }
    Ok(rhs_peaks.pop())
}

So if the peaks are [p1, p2, p3, p4, p5], bagging proceeds from the rightmost peak, and following the code above the root is MMRMerge(MMRMerge(MMRMerge(MMRMerge(p5, p4), p3), p2), p1), as the toy sketch below shows.
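
A toy sketch (strings stand in for hashes; the bag helper and main are illustrative only) that reproduces the bagging order of bag_rhs_peaks above:

fn bag(mut peaks: Vec<String>) -> Option<String> {
    while peaks.len() > 1 {
        let right = peaks.pop().expect("pop");
        let left = peaks.pop().expect("pop");
        peaks.push(format!("MMRMerge({}, {})", right, left));
    }
    peaks.pop()
}

fn main() {
    let peaks: Vec<String> = ["p1", "p2", "p3", "p4", "p5"]
        .iter()
        .map(|p| p.to_string())
        .collect();
    // Prints: MMRMerge(MMRMerge(MMRMerge(MMRMerge(p5, p4), p3), p2), p1)
    println!("{}", bag(peaks).unwrap());
}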


clearloop commented Jun 12, 2020

The current implementation in dargo

The ETH MMR implementation in dargo currently follows the design of the MMR in darwinia.

Hashing Algorithm

// `blake2b` is assumed to come from the blake2-rfc crate.
use blake2_rfc::blake2b::blake2b;

/// BlakeTwo256 hash function
pub fn hash(data: &[u8]) -> [u8; 32] {
    let mut dest = [0; 32];
    dest.copy_from_slice(blake2b(32, &[], data).as_bytes());
    dest
}

The code above is actually how <T as frame_system::Trait>::Hashing::hash_of(&encodable) works.

Leaf Append

The leaves in storage are hex strings of Hashes.

Branch Merge

/// MMR Merge trait for ETHash
pub struct MergeHash;
impl Merge for MergeHash {
    type Item = [u8; 32];
    fn merge(lhs: &Self::Item, rhs: &Self::Item) -> Self::Item {
        let mut data = vec![];
        data.append(&mut lhs.to_vec());
        data.append(&mut rhs.to_vec());
        // blakeTwo256
        hash(&data.as_slice())
    }
}

This data equals the result of the SCALE codec used in the source code of header-mmr in darwinia (see the sketch below).
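
To illustrate that claim with a standalone sketch (it assumes the parity-scale-codec crate and is not dargo code): for two fixed-size 32-byte arrays, the SCALE encoding of the tuple is exactly their concatenation, i.e. the same data MergeHash hashes:

use parity_scale_codec::Encode;

fn main() {
    let lhs = [1u8; 32];
    let rhs = [2u8; 32];

    // Build lhs || rhs the same way MergeHash does.
    let mut data = lhs.to_vec();
    data.extend_from_slice(&rhs);

    // Fixed-size byte arrays encode as their raw bytes, so the SCALE
    // encoding of the tuple is exactly the concatenation.
    assert_eq!((lhs, rhs).encode(), data);
}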

Root's child peaks bagging

fn bag_rhs_peaks(&self, mut rhs_peaks: Vec<T>) -> Result<Option<T>> {
    while rhs_peaks.len() > 1 {
        let right_peak = rhs_peaks.pop().expect("pop");
        let left_peak = rhs_peaks.pop().expect("pop");
        rhs_peaks.push(M::merge(&right_peak, &left_peak));
    }
    Ok(rhs_peaks.pop())
}

Same as above, using the implementation of merkle-mountain-range.


If we use the implementation above, there will be no extra workload in dargo, because it is already implemented. However, I don't know whether it is friendly to darwinia-network/apps; feel free to change it if required.


hackfisher commented Jun 23, 2020

@clearloop I'm not quite sure about the branch merge you mention here: is it using the SCALE codec and giving the same result?

/// MMR Merge trait for ETHash
pub struct MergeHash;
impl Merge for MergeHash {
    type Item = [u8; 32];
    fn merge(lhs: &Self::Item, rhs: &Self::Item) -> Self::Item {
        let mut data = vec![];
        data.append(&mut lhs.to_vec());
        data.append(&mut rhs.to_vec());
        // blakeTwo256
        hash(&data.as_slice())
    }
}

clearloop commented:

> @clearloop I'm not quite sure about the branch merge you mention here: is it using the SCALE codec and giving the same result?

Yep, the result is equal to the SCALE codec method. For example:

use parity_scale_codec::Encode; // `codec::Encode` in Substrate crates

let l: [u8; 1] = [1];
let r: [u8; 1] = [2];

// the tuple encoding is just the two arrays concatenated
assert_eq!((l, r).encode(), vec![1u8, 2u8]);

Because l and r in the tuple have fixed-size types, the SCALE codec just concatenates them.

aurexav closed this as completed Sep 2, 2020