Try to include more tx in the bundle and skip including tx that already included in previous bundle #2454
Conversation
@@ -1,32 +1,222 @@
//! Shared domain worker functions.
// Copyright 2020 Parity Technologies (UK) Ltd.
Is this refactoring necessary for this PR? If not, would it be better in its own PR?
I am talking about this commit: 71935c7
Not really necessary for this PR, will do that next time 👍
Makes sense
Left some questions
@@ -204,6 +255,9 @@ where
estimated_bundle_weight = next_estimated_bundle_weight;
bundle_size = next_bundle_size;
extrinsics.push(pending_tx_data.clone());

self.previous_bundled_tx
We seem to just record the bundle after submission, but there is always a possibility that the bundle never made it into the consensus node's tx pool.
I would suggest checking the consensus node's tx pool to confirm this bundle was included without any failures, and only then indexing it here.
but there is always a possibility that the bundle never made it into the consensus node's tx pool.

Do you mean the local consensus node's tx pool? The bundle should always be able to get into the local consensus node's tx pool; if that fails, most likely there is a bug.
The bundled tx hashes are cleared whenever the consensus chain tip changes, because the bundle may not be included in the next consensus block due to:
- the bundle being dropped silently during propagation because of a network issue
- the bundle not fitting into the next consensus block, or not arriving at the block author's tx pool in time, and becoming invalid as the head ER changes
- etc.
We can't predict these situations locally, so we clear the bundled tx hashes whenever the consensus chain tip changes to allow more retries.
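The retry behavior described above can be sketched roughly as follows. This is a minimal illustration, not the actual PR code; the struct and field names (`BundleProducer`, `consensus_tip`, `previous_bundled_tx`) are assumptions for the sketch:

```rust
use std::collections::HashSet;

/// Hypothetical sketch: the set of previously bundled tx hashes is
/// cleared whenever the consensus chain tip changes, so every tx gets
/// another chance to be included in a bundle.
struct BundleProducer {
    consensus_tip: u64,
    previous_bundled_tx: HashSet<[u8; 32]>,
}

impl BundleProducer {
    fn on_new_consensus_tip(&mut self, new_tip: u64) {
        if new_tip != self.consensus_tip {
            self.consensus_tip = new_tip;
            // Earlier bundles may never have made it into a consensus
            // block, so forget what was bundled and retry everything.
            self.previous_bundled_tx.clear();
        }
    }
}
```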
While it is possible to monitor the consensus tx pool and decide based on that whether to clear the cache, the implementation IIUC would be similar to how the Substrate transaction pool monitors block import/finalization, which can be very involved and, in my opinion, not worth the effort.
This PR contains 2 improvements to bundle construction:
- try to include more tx in the bundle
- skip tx that were already included in a previous bundle
More detail on the second one:
Suppose a domain has an operator with a large share of stake that is expected to produce multiple bundles for a domain block. When the operator constructs a bundle it iterates over its tx pool, and tx are returned according to their tip and arrival time. If the operator is going to produce 5 bundles, most likely all the tx inside these bundles will be the same, resulting in a lot of duplicate tx.
With this PR, when the operator produces the second bundle it will skip tx already included in the first bundle; if there are no more tx, it simply stops instead of producing an empty bundle. Once the consensus chain tip changes (whether or not there is a new domain block), the operator stops skipping and retries including these tx in the next bundle.
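The skip-duplicates behavior can be sketched like this. A minimal illustration only, assuming a simplified pool of `(hash, data)` pairs; the function name `build_bundle` and its shape are not the actual PR code:

```rust
use std::collections::HashSet;

/// Hypothetical sketch: when building a bundle, skip any tx whose hash
/// was already included in an earlier bundle, and produce no bundle at
/// all if nothing new remains.
fn build_bundle(
    pool: &[(u64, Vec<u8>)], // (tx_hash, tx_data) pairs from the tx pool
    previous_bundled_tx: &mut HashSet<u64>,
) -> Option<Vec<Vec<u8>>> {
    let mut extrinsics = Vec::new();
    for (hash, data) in pool {
        if previous_bundled_tx.contains(hash) {
            continue; // already in an earlier bundle, skip the duplicate
        }
        previous_bundled_tx.insert(*hash);
        extrinsics.push(data.clone());
    }
    // An empty bundle is not produced at all.
    if extrinsics.is_empty() { None } else { Some(extrinsics) }
}
```

On a consensus chain tip change, `previous_bundled_tx` would be cleared so the same tx become eligible again.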
One scenario that differs from main is when a user sends 2 tx with nonce `N` and `N+1`: if tx `N` is included in a bundle while tx `N+1` is not, tx `N+1` has to wait until the next consensus chain block before it can be included in a bundle again. I spent quite some time trying to support this scenario to match main, but it is much more complicated and I believe this edge case isn't worth it.