feat: support linking ethereum accounts #355
Conversation
Force-pushed from f399028 to 1c444d1.
Can we use `Substrate` instead of `Dotsama`?
Force-pushed from 0d5a771 to c299cf0.
Regarding the needed migration: I see about 1.3k AssociationEstablished events on spiritnet, so I guess that just writing a migration and migrating all the storage at once will not work. What would be the best way to deal with that? @wischli
Green CI :) Btw, we have stricter linter settings than Moonbeam! They had implemented some `Into`s for their AccountId20 type and clippy complained that it wants `From`s instead 🤷‍♂️
Theoretically, it should easily fit into one block. However, I would stretch this into multiple blocks as this will be part of a runtime upgrade. When you can fit such migrations into 2-3 blocks, the recommendation is to use the Scheduler pallet.
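The multi-block idea suggested here can be sketched in plain Rust (this is not frame code; the function name, the cursor scheme, and the key widening are all invented for illustration): process a bounded batch of entries per block and return a cursor so a scheduled follow-up call can resume where the previous one stopped.

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: migrate at most `limit` entries per "block", resuming
// from a cursor, so the total work spreads over several blocks.
fn migrate_chunk(
    old: &mut BTreeMap<u32, String>,
    new: &mut BTreeMap<u64, String>,
    cursor: Option<u32>,
    limit: usize,
) -> Option<u32> {
    // Collect the next batch of keys strictly after the cursor.
    let keys: Vec<u32> = old
        .range(cursor.map_or(0, |c| c + 1)..)
        .take(limit)
        .map(|(k, _)| *k)
        .collect();
    let mut last = None;
    for k in keys {
        if let Some(v) = old.remove(&k) {
            // Invented key translation: the key type widens during migration.
            new.insert(u64::from(k), v);
            last = Some(k);
        }
    }
    last // `None` once nothing is left to migrate
}
```

Each scheduled call would pass the cursor returned by the previous one; the migration is finished when `None` comes back.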
@wischli Albi suggested that we could also do it lazily by having two storage entries: we would always read from and write to the new storage item first, and as a fallback read old entries from the old storage and convert them on the fly. What do you think? I guess in the scheduler approach we would also need the two storage entries to coexist.
We could also go that route, but I fear this will make the code less readable, and we have to keep that state until we do a force migration anyway. We cannot rely on users performing the actions which would migrate their entries on the fly. If @weichweich thinks we should go that way, I am fine with it.
@wischli Could you have a look? I left the old storage simply untouched for now and only operate on the new storage items ConnectedAccountsV2 and ConnectedDidsV2. I also provide an extrinsic to migrate all accounts associated with the sender DID. Do you think this is doable like this?
> @wischli Could you have a look? I left the old storage simply untouched for now and only operate on the new storage items ConnectedAccountsV2 and ConnectedDidsV2. I also provide an extrinsic to migrate all accounts associated with the sender DID. Do you think this is doable like this?
I am not happy with the proposed approach as I see two major issues which have to be improved:
- If a user does not migrate, it's impossible for them and anyone else to remove their connected DID and/or association.
- A user can have storage entries in v1 and v2. If they don't migrate, both versions co-exist.
- a. There is no incentive for a user to migrate as they have to pay the transaction fee and don't get any benefit. We should not require users to actively migrate. They should not have to worry/know about this at all.
- b. Moreover, any frontend that uses our API would have to query twice as many records for potential DID connections.
- c. This also breaks/is not aligned with feat: did rpc #348. Of course, we could refactor feat: did rpc #348 such that we handle the fallback from v2 to v1 and as long as services use our optimized RPC, they are fine. However, we still have to do redundant checks on our side then.
I can imagine that Albi's lazy migration was rather meant to migrate on the fly when adding a new association or removing an existing one. But then we would always have to double-check both storage entries as long as we haven't removed the old one. Again: What is the point if we either have to keep the old storage alive and have spaghetti code OR force a migration at some point?
All in all, I am not in favor of migrating lazily. We definitely increase technical debt and create problems/migration paths for the SDK and Sporran, at least. There are fewer than 1400 accounts which we need to migrate. I don't see any issue in going forward with a (scheduled) migration which does not create any secondary problems. Let's ask Shawn to be sure and not implement a half solution.
If this needs further discussion, let's schedule a call together with @ntn-x2.
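For concreteness, the lazy fallback being debated here could look roughly like the following plain-Rust sketch (storage maps modeled as `HashMap`s; the function name and account/DID values are invented): prefer the v2 storage, and on a miss fall back to v1 and migrate the entry on access. Note the drawback raised above: every v2 miss pays for an extra v1 read until a forced migration finally removes the old storage.

```rust
use std::collections::HashMap;

// Hypothetical lazy-migration lookup: read v2 first; on a miss, move the
// matching v1 entry into v2 on the fly.
fn connected_did_lazy(
    v1: &mut HashMap<String, String>,
    v2: &mut HashMap<String, String>,
    account: &str,
) -> Option<String> {
    if let Some(did) = v2.get(account) {
        return Some(did.clone());
    }
    // Fallback: migrate the old entry into the new storage on first access.
    v1.remove(account).map(|did| {
        v2.insert(account.to_string(), did.clone());
        did
    })
}
```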
Very nice solution. I agree with William that we should probably use the scheduled-migration approach. Moreover, I would REALLY like to see some more comments and some clearer variable names, perhaps even splitting up operations into multiple steps to enhance readability, if possible.
Last but not least, I would love to see some test cases for the pallet that test linking a new Ethereum account, as I don't think there are any right now.
I would then like to give another review once at least the comments and the test cases have been added.
Btw, I also did not do a full review yet. I wanted to wait until the migration path was finalized.
Force-pushed from 6963a03 to a6a6e95.
Force-pushed from 40e0587 to 028a0e0.
Co-authored-by: William Freudenberger <[email protected]> Co-authored-by: Antonio <[email protected]>
Force-pushed from 2f377a1 to 9e9eb8e.
Very nice PR! 🙂
I have a few minor things and a question regarding the migration. Didn't we want to do the migration lazily? As far as I can see, we now do the migration in the runtime upgrade?
Also, I think you are looking for the `translate` call in the migration?
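As a plain-Rust analogue (this is not the actual frame_support API, just an illustration of its shape), a `translate`-style migration decodes every value, maps it to the new type, and re-stores the result under the same key — which is exactly why it only helps when the key type is unchanged:

```rust
use std::collections::BTreeMap;

// Illustrative translate-style value migration: each value is mapped to the
// new type and re-stored under its original key. Returning `None` from the
// closure drops the entry.
fn translate_values<K: Ord, Old, New>(
    map: BTreeMap<K, Old>,
    mut f: impl FnMut(&K, Old) -> Option<New>,
) -> BTreeMap<K, New> {
    map.into_iter()
        .filter_map(|(k, v)| f(&k, v).map(|nv| (k, nv)))
        .collect()
}
```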
```rust
// Move TmpStorage
move_storage::<Pallet<T>>(b"TmpConnectedDids", b"ConnectedDids");
move_storage::<Pallet<T>>(b"TmpConnectedAccounts", b"ConnectedAccounts");
```
So we read everything from ConnectedDids into TmpConnectedDids and then write TmpConnectedDids back to ConnectedDids?
I would rather propose to translate the storage in place. This would cost half as many storage accesses.
@wischli wrote the migration, but the basic problem was that the type of the keys of the map changed. Just translating the values would have been straightforward, but because we need to translate the keys, this whole dance is necessary.
Even if the keys change, can't you still simply drain the old storage and write it back into the new one without the additional intermediate storage?
There is no new storage, since we didn't want to change the name of the storage entry. It's like `a_vec.iter().for_each(|i| a_vec.push(2 * i))`: it doesn't work.
I would propose not writing it back to storage immediately, but collecting it into a vec. After that, we can write the content of the vec back into storage.
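A minimal sketch of that proposal, with the storage modeled as a `HashMap` and an invented key translation: because the key type changes, drain all entries into a `Vec` first, then write the translated entries back, so the map is never mutated while being iterated.

```rust
use std::collections::HashMap;

// Drain everything into a Vec, then rebuild under the translated keys.
// The `acct:{k}` key translation is made up for illustration only.
fn migrate_key_type(old: &mut HashMap<u32, String>) -> HashMap<String, String> {
    let entries: Vec<(u32, String)> = old.drain().collect();
    entries
        .into_iter()
        .map(|(k, v)| (format!("acct:{k}"), v))
        .collect()
}
```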
updated in 719e7d9
Definitely a nice improvement over the previous version, but I still have a few questions about some design choices, and I would also love the code to be adjusted and restructured a little bit, so that it follows the style and conventions we have used everywhere else in the codebase.
/bench runtime peregrine pallet-did-lookup
Benchmark Runtime Substrate Pallet for branch "tr-ethereum-did-linking" with command:
`cargo run --quiet --release -p kilt-parachain --features=runtime-benchmarks -- benchmark pallet --chain=dev --steps=50 --repeat=20 --pallet=pallet-did-lookup --extrinsic="*" --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=./runtimes/peregrine/src/weights/pallet_did-lookup.rs --template=.maintain/runtime-weight-template.hbs`
Toolchain: nightly-2022-05-11-x86_64-unknown-linux-gnu (overridden by '/home/benchbot/bench-bot/git/kilt-node/rust-toolchain.toml')
Results
/bench runtime spiritnet-pallet parachain-staking
Benchmark Runtime Pallet for branch "tr-ethereum-did-linking" with command:
`cargo run --quiet --release -p kilt-parachain --features=runtime-benchmarks -- benchmark pallet --chain=spiritnet-dev --steps=50 --repeat=20 --pallet=parachain-staking --extrinsic="*" --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=pallets/parachain-staking/src/default_weights.rs --template=.maintain/weight-template.hbs`
Toolchain: nightly-2022-05-11-x86_64-unknown-linux-gnu (overridden by '/home/benchbot/bench-bot/git/kilt-node/rust-toolchain.toml')
Results
/bench runtime spiritnet-pallet pallet-did-lookup
Benchmark Runtime Pallet for branch "tr-ethereum-did-linking" with command:
`cargo run --quiet --release -p kilt-parachain --features=runtime-benchmarks -- benchmark pallet --chain=spiritnet-dev --steps=50 --repeat=20 --pallet=pallet-did-lookup --extrinsic="*" --execution=wasm --wasm-execution=compiled --heap-pages=4096 --output=pallets/pallet-did-lookup/src/default_weights.rs --template=.maintain/weight-template.hbs`
Toolchain: nightly-2022-05-11-x86_64-unknown-linux-gnu (overridden by '/home/benchbot/bench-bot/git/kilt-node/rust-toolchain.toml')
Results
This reverts commit 8bd953d.
LGTM!
This reverts commit c8f4577.
fixes KILTProtocol/ticket#1936
This adds support for linking Ethereum accounts to a DID. It uses Moonbeam's AccountId20 implementation and the related Signer implementations etc.
Update: It's working now :) Following the instructions below should result in linking your Ethereum account to a DID.
How to test:
It's a bit harder to test, but here are the instructions:
- Using `did-submit`, the DID `did:kilt:4siJtc4dYq2gPre8Xj6KJcSjVAdi1gmjctUzjf3AwrtNnhvy` should exist
- `git clone [email protected]:KILTprotocol/minimal-eth-linking-signer.git` (repo)
- `yarn install && yarn start`
- Copy the encoded call data (without the leading `0x`!), which is your SCALE-encoded extrinsic
- `KILTCTL_STORAGE=./kilt-test.db kiltctl did sign_and_submit --data "$DATA" --payment alice alice`
Checklist: instead of `array[3]` use `get(3)`, …