Read-only Metadata forwarding and hashing scheme #7113
Yes, that's about it. Regarding incompatible change 2., just to clarify that I understood correctly: this is a breaking change because currently the data which gets signed differs from the data actually sent (through the method you linked), but after this change that won't be needed anymore, and you'll be free to send the addresses non-checksummed (i.e. as per your internal representation on the initiator). I'd say this breaking change is worth it because it defines a protocol for this part of the message encoding which hopefully won't need to change again later: new fields in a diversity of formats can be added to the metadata without breaking compatibility or having to coordinate on some specific type serialization format (e.g. for addresses); everything will be accepted and forwarded.
Exactly; this will take some weirdness out of our Metadata hashing scheme, which is currently WIP.
I totally agree.
This will add an `original_data` field on the `class Metadata` message-dataclass. This field can contain the original metadata dict as received from a previous node. The serialization scheme of the Metadata class checks for the existence of `original_data` and, if present, ignores the other fields on dumping, emitting only `original_data`. The originally received metadata dict is now passed through the state machine for later inclusion in following LockedTransfer messages for the next hop. New arguments have been added throughout the state machine to pass the `previous_metadata` to the corresponding creation of SendEvents and StateTransitions. Before this commit, the serialization schema for hashing of the RouteMetadata was hard-coded, and the hashing of the Metadata used an inconsistent mixture of canonicaljson and rlp-encoding for hash generation. Now, the whole message is first serialized with the already employed DictSerializer, where typing information is stripped, and then a canonical serialized representation is built with canonicaljson. Fixes: raiden-network#7113
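The dumping rule from the commit message can be sketched as follows. This is a minimal illustration, not the actual Raiden code: the class names, fields, and `to_dict` method are assumptions made for the example.

```python
# Sketch of the described serialization rule (hypothetical names; the real
# Raiden dataclasses differ). If `original_data` is set, dumping ignores all
# other fields and emits the received dict unchanged.
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class RouteMetadata:
    # Simplified: a route is just a list of address strings here.
    route: List[str]


@dataclass
class Metadata:
    routes: List[RouteMetadata] = field(default_factory=list)
    # Set on mediators: the metadata dict exactly as received from the
    # previous node, including any fields we don't understand.
    original_data: Optional[Dict[str, Any]] = None

    def to_dict(self) -> Dict[str, Any]:
        if self.original_data is not None:
            # Mediator case: forward exactly what was received.
            return self.original_data
        # Initiator case: build the dict from our internal representation.
        return {"routes": [{"route": r.route} for r in self.routes]}
```

Because unknown fields survive inside `original_data`, a mediator forwards them untouched even though its internal representation never models them.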
Abstract
Allow arbitrary Metadata to be sent along a LockedTransfer, and transition to the Metadata being a read-only data structure for mediators: the metadata is passed on as-is from the previous hop, and the hash of that data is included in the signature of the mediator's message.
This supersedes #7021 to some extent.
Motivation
We want to provide a means to have better forwards compatibility with arbitrary Metadata sent along a LockedTransfer, and we want to coordinate this with the LC, so that the transition is as smooth as possible.
Specification
@andrevmatos proposed specs for the protocol for handling the Metadata, which would result in the following PC behaviour:
- the `LockedTransfer`'s `metadata` on mediation is passed as is*, completely unchanged;
- `LockedTransfer.message_hash` remains the `keccak` of the `bytes_packed_data + metadata.hash` - this is the `additionalHash` and is included in the signed data (signed by `LockedTransfer.sender`);
- the `LockedTransfer.metadata.hash` is created as the keccak of the exact canonical-JSON serialization of the received metadata, independent of how the Metadata is deserialised to our internal representation;
- a node may still parse the `metadata`'s known fields and map them to a given schema (e.g. decoding address fields); these mappings are only used for internal logic and don't reach the created/relayed message.

*) Since the signed data (the `LockedTransfer`) is EIP191-prefixed, and is also guaranteed to contain at least some known fields (or else we couldn't route the transfer), there's no risk of it being used elsewhere/replayed.

Changes needed
- pass the received metadata unchanged to the `SendLockedTransfer` event on mediation to the next hop
- deserialise the received metadata into `class Metadata` for internal processing and verification only
- use `class Metadata`'s internal representation for sending a `LockedTransfer` only on initiation

This most likely will boil down to adding a `_original_data` field to the `class Metadata`, which represents the received metadata as is:
- deserialisation: keep the received dict as is in `_original_data`
- serialisation: `_original_data` can only be not present when the Metadata is part of a `LockedTransfer` for which our node is the initiator
  - if `_original_data` is not present, use a well-supported serialisation scheme (LCs should be able to process it internally easily), and use the hashing scheme as described above for signing
  - if `_original_data` is present, just use it as the serialised data and use the hashing scheme as described above for signing

In either case, mediators have no write-access to the forwarded metadata.
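The hashing scheme referenced above can be sketched with the standard library only. This is a hedged illustration under two stated assumptions: canonical JSON is approximated with `json.dumps(sort_keys=True)` (production uses the canonicaljson package), and `sha256` stands in for the keccak-256 hash actually used; all function names are hypothetical.

```python
# Sketch of the signing-hash construction described in the spec:
#   metadata.hash   = hash(canonical_json(received_metadata_dict))
#   additionalHash  = hash(packed_message_fields + metadata.hash)
# sha256 is a stand-in for keccak-256 to keep this dependency-free.
import hashlib
import json
from typing import Any, Dict


def canonical_json(data: Dict[str, Any]) -> bytes:
    # Canonical form: sorted keys, no whitespace, UTF-8. The same dict
    # always serializes to the same bytes regardless of key order.
    return json.dumps(data, sort_keys=True, separators=(",", ":")).encode("utf-8")


def metadata_hash(received_metadata: Dict[str, Any]) -> bytes:
    # Hash the exact canonical serialization of the received dict, so the
    # result is independent of our internal deserialized representation.
    return hashlib.sha256(canonical_json(received_metadata)).digest()


def message_hash(packed_data: bytes, received_metadata: Dict[str, Any]) -> bytes:
    # additionalHash = hash(packed fields || metadata hash); this is what
    # gets signed, so the forwarded metadata is covered by the signature.
    return hashlib.sha256(packed_data + metadata_hash(received_metadata)).digest()
```

Note how two dicts that differ only in key order produce the same hash, while any change to field contents changes it.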
Backwards Compatibility
This feature is backwards incompatible with the current `2.0.0` version, under some conditions:
1. the `Metadata.hash` calculation changes (see raiden/raiden/messages/metadata.py, lines 33 to 48 in 3a3636e)
Since we want to implement 1. very soon, we should assume backwards-incompatibility in any case.
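The incompatibility ties back to the checksummed-address discussion above: if a node re-serializes metadata from its internal representation (e.g. with a lowercased address) instead of forwarding the received dict, the canonical bytes and therefore the hash covered by the signature change. A small stdlib sketch, with a made-up address and `sha256` standing in for keccak:

```python
# Demonstrates why the received dict must be forwarded verbatim: any
# re-serialization that normalizes a field (here, address casing) yields
# different canonical bytes, so signature verification against the
# original hash would fail. sha256 is a stand-in for keccak-256.
import hashlib
import json


def canonical_hash(data: dict) -> str:
    return hashlib.sha256(
        json.dumps(data, sort_keys=True, separators=(",", ":")).encode("utf-8")
    ).hexdigest()


received = {"routes": [{"route": ["0xAbC123"]}]}  # checksummed, as received
internal = {"routes": [{"route": ["0xabc123"]}]}  # lowercased internal form

# The normalized form hashes differently from the received form, so only
# hashing the stored original data reproduces the hash that was signed.
assert canonical_hash(received) != canonical_hash(internal)
```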