LP Message with custom data format for serialization/deserialization #1889
Conversation
Currently, the serialization is the same except for the 0 padding used for enum variants and the
I wish this PR existed 24 hours ago when I decided to start the message refactoring while waiting for compilers 🥲
Would be awesome if this worked out!
I think once we get the correct serialization, it can be easily ported to your branch.
Both issues are now fixed and the serialization & deserialization are the same as expected.
Not sure I like the change. The raw types look weird, but the spec itself now matches the code better.
No strong opinion as long as it is compatible. The main issue for me is that we do not control the serialization here. But tests should cover everything.
Then let's go with this approach.
As I am currently working on the message reordering, I have already added all known fields for the new messages. In order to not do redundant work, I would like this PR to get merged ASAP. However, it should be based on the new LP v2 feature branch. It appears to me that we have found consensus and can finish this PR today, @lemunozm @mustermeiszer?
Why not leave this directly in
And rebase LPv2 over it later. I understand that LPv2 is a branch that should be rebased over main continuously, right?
Also fine with that!
Yes
I'm on the way to clean this up and have it ready.
Force-pushed from 5cf75e3 to d1d7d22
Ok, after some thoughts, I think you need to implement custom serialization for the nested enums, along these lines:

```rust
enum Message {
    UpdateRestriction(UpdateRestriction),
}

enum UpdateRestriction {
    UpdateMember { /* fields */ },
    Freeze { /* fields */ },
    Unfreeze { /* fields */ },
}

impl Serialize for UpdateRestriction {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        match self {
            // Write the inner call type first, then the variant fields
            // (pseudocode: the field handling is elided).
            Self::UpdateMember { .. } => {
                serializer.serialize_u8(1)?;
                serializer.serialize_field(..)
            }
            Self::Freeze { .. } => {
                serializer.serialize_u8(2)?;
                serializer.serialize_field(..)
            }
            // ..
        }
    }
}

impl<'de> Deserialize<'de> for UpdateRestriction {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        // Read the inner call type and dispatch to the matching variant.
        match deserializer.deserialize_u8(U8Visitor)? {
            1 => { /* deserialize UpdateMember fields */ }
            2 => { /* deserialize Freeze fields */ }
            // ..
        }
    }
}
```
That's similar to what I had in mind with my limited understanding of the
I guess so. Or maybe merge this today and then rebase the whole feat/lpv2 branch with all sub-branches. Let's see if I can solve the CI issues from this.
That would be better overall, of course. The lpv2 branch is in sync with
I'm not managing to get past the CI issue. It seems like when
If I make:
I get the CI issue locally. If I do:
It works 😅
I couldn't bypass the CI issue; something is off there. So, I've decided to implement a custom serializer. It was simpler than I expected (around 500 lines in total, quite straightforward). I think it's the purest and most scalable solution, allowing us to define the GMPF encoding correctly and clean the
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main    #1889      +/-   ##
==========================================
- Coverage   47.38%   46.03%    -1.35%
==========================================
  Files         176      179        +3
  Lines       13305    13093      -212
==========================================
- Hits         6304     6028      -276
- Misses       7001     7065       +64
```

☔ View full report in Codecov by Sentry.
@wischli this should be ready to unblock your work on the message reorder.
Why do we need to always call gmpf::to_vec() for a message and not just serialize()?
Both are the same, actually. Outside of that module, you can/should just call to
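For context, this is the conventional entry-point pattern for a custom serde data format (as described in serde's "writing a data format" guide). A minimal sketch, assuming the module defines its own Serializer and Error types; this is not the PR's actual code:

```rust
use serde::Serialize;

// Hypothetical sketch: `Serializer` and `Error` stand for the gmpf module's
// own types. `to_vec` is just a thin driver that runs the value's `Serialize`
// impl against the format's serializer and returns the collected bytes, which
// is why calling either one amounts to the same thing.
pub fn to_vec<T: Serialize>(value: &T) -> Result<Vec<u8>, Error> {
    let mut serializer = Serializer { output: Vec::new() };
    value.serialize(&mut serializer)?;
    Ok(serializer.output)
}
```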
I'm honestly surprised to see that implementing the byte serializer suffices even though it is missing for the deserializer. Thanks for putting so much work into this! From my perspective, we can merge this but I'd like to wait for @mustermeiszer's opinion.
```rust
fn deserialize_bytes<V: Visitor<'de>>(self, _visitor: V) -> Result<V::Value> {
    Err(Error::Unimplemented)
}
```
Am I missing something or should this be implemented before we merge this PR?
Good question! I was surprised too when I implemented this. It turns out that fixed arrays are serialized/deserialized as tuples (which actually makes sense if you think about it). Only dynamic lists of bytes are deserialized with that method, and we do not have such fields right now.
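As a side note, a small sketch of why that holds under serde's standard impls (field names here are illustrative, not taken from the crate):

```rust
use serde::Deserialize;

// serde's std impls decode a fixed-size array through `deserialize_tuple` and
// a `Vec<u8>` through `deserialize_seq`, so neither path reaches
// `deserialize_bytes`. That method is only hit by types that explicitly ask
// for raw bytes, e.g. `&[u8]` or the `serde_bytes` wrappers.
#[derive(Deserialize)]
struct ExampleFields {
    hash: [u8; 32],   // -> Deserializer::deserialize_tuple(32, ...)
    payload: Vec<u8>, // -> Deserializer::deserialize_seq(...)
}
```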
```rust
///
/// NOTE: Each message must immutably map to the same u8. Messages are
/// decoded in other domains and MUST follow the defined standard.
fn call_type(&self) -> u8 {
```
Just to be sure: the call type is now inherited from the order of the enum variants, right? So when I have a nested enum Y inside an LP Message X, Y will have its own inner call type such that X has an outer and an inner one.
The call type is now inherited from the order of the enum variants, right?
Yes
So when I have a nested enum Y inside an LP Message X, Y will have its own inner call type such that X has an outer and an inner one.
And yes too
Probably we should have a test checking that each msg has the expected index in the spec, to avoid errors from variants being swapped in the future.
Yeah, I was thinking of exactly that. It is more or less tested in the message serialization cycle UTs in Message.rs, but these are too opaque to guarantee call type safety. No need for you to do it as part of this PR because I am changing almost all indices anyway. Will do it as part of #1892 then.
It is more or less tested in the message serialization cycle UTs in Message.rs but these are too opaque to guarantee call type safety
fully agree!
To avoid boilerplate in testing, maybe each test construction can go into a function, e.g. increase_invest_order_msg(), and later: `assert_eq!(msg_increase_invest_order().serialize()[0], 5)` or similar (see the sketch below).
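A minimal sketch of what such a test could look like, assuming a hypothetical helper name and an illustrative call-type value (not the crate's actual test code):

```rust
#[cfg(test)]
mod call_type_spec {
    use super::*;

    // Hypothetical helper building a fully-populated message for tests; the
    // body is left as a placeholder here.
    fn increase_invest_order_msg() -> Message {
        todo!("construct an IncreaseInvestOrder message with dummy fields")
    }

    #[test]
    fn increase_invest_order_call_type_matches_spec() {
        // The first encoded byte is the call type defined by the LP spec;
        // pinning it catches accidental reordering of enum variants.
        let encoded = gmpf::to_vec(&increase_invest_order_msg()).unwrap();
        assert_eq!(encoded[0], 5);
    }
}
```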
Also, not sure if it's more manageable to have the message as:

```rust
enum Message {
    AddPool(AddPoolMessage),
    AllowInvestCurrency(AllowInvestCurrencyMessage),
    // ..
}
```

It seems like "more" code at first, but later you can reuse each variant for different purposes. I'm just thinking out loud.
Also, not sure if it's more manageable to have the message as:
enum Message { AddPool(AddPoolMessage), AllowInvestCurrency(AllowInvestCurrencyMessage), .. }
Not sure if I see the win for doing it this way right now.
LGTM!
Enum call type encoding should be checked by integration tests implicitly, if we cover all messages in the next month.
Description
Current serialization/deserialization methods don't scale well and are error-prone. This PR adds a custom serde data format that implements our liquidity pools' Generic Message Passing Format (GMPF). This entirely removes the need for manual serialization/deserialization code.
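As a rough usage sketch of what this enables (the from_slice name, error handling, and message fields below are assumptions for illustration; only gmpf::to_vec appears in the review above):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
enum Message {
    // Illustrative variant and field; the real message set lives in the pallet.
    AddPool { pool_id: u64 },
    // ..
}

// Round-trip through the GMPF data format instead of hand-written byte packing
// per message variant.
fn round_trip(msg: &Message) -> Message {
    let bytes = gmpf::to_vec(msg).expect("GMPF encoding should not fail");
    gmpf::from_slice(&bytes).expect("GMPF decoding should not fail")
}
```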