fatxpool: handling limits and priorities improvements #6405
Conversation
/cmd prdoc --bump major --audience node_dev
LGTM! Not entirely familiar with the code; some minor nits that could all be ignored or followed up later if need be 🙏
```rust
Some((
	// These transactions are coming from retracted blocks, we should
	// simply consider them external.
	TimedTransactionSource::new_external(false),
```
Should probably not be changed since it was like this before, but any idea why this is `External` instead of `InBlock`?
From what I understand it could be `InBlock`. But I don't understand all the consequences of changing this value, as it goes to the runtime side. I'd rather not touch it in this PR :)
```rust
		self.future_transaction_views.entry(tx_hash).or_default().insert(block_hash);
	},
	TransactionStatus::Ready | TransactionStatus::InBlock(..) => {
		if let Some(mut views) = self.future_transaction_views.remove(&tx_hash) {
```
DQ: Why do we move the entire list of views from future to ready here? This is just one view reporting the transaction as ready now, right? I would expect that we only remove that one block hash from the future transactions view list.
Good question.

It is about whether we want to drop a transaction when the last view referencing it is dropped.

For a transaction that is future, and was never seen as ready on any fork, we want to remove it from the pool when the last view referencing it is gone. We basically don't want to keep future transactions for too long. If the transaction is no longer in any view, it was likely pushed out by some newly incoming transactions.

If the transaction was seen as ready on any fork, it means that it can soon be included in some new fork that may pop up. This is why in the `DroppedWatcher` I decided to treat such a transaction as ready. Due to this we will keep the transaction in the mempool for a while; we will not send a dropped event to the pool.

I treat the internal mempool as a secondary, best-effort buffer for transactions, meaning that it can hold ready transactions that are currently not referenced by any view (note that this applies to the situation when limits are hit). When blocks are built, those txs will be resubmitted and will get into a view. The other, simpler approach would be to just drop everything that is not referenced by views. Maybe that is the way to go; I'll give it more consideration once all the groundwork is done.

Please also note that hitting limits on the transaction pool is not a typical case. I would not expect hitting limits even during spamming tests. Hitting limits may occur during some attack, or maybe when block production is stopped. This should not even happen when finality is stalled (because `FinalityTimeout` events shall be triggered and listeners will be dropped - still to be tested).

I am still experimenting here a bit, so it may happen that this code will get re-iterated.
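The future/ready bookkeeping discussed above can be sketched roughly as follows. This is a simplified, hypothetical model (hash types, field and method names are illustrative, not the actual fork-aware pool code):

```rust
use std::collections::{HashMap, HashSet};

type TxHash = u64;
type BlockHash = u64;

/// Simplified sketch of per-transaction view tracking.
#[derive(Default)]
struct ViewTracker {
    future_transaction_views: HashMap<TxHash, HashSet<BlockHash>>,
    ready_transaction_views: HashMap<TxHash, HashSet<BlockHash>>,
}

impl ViewTracker {
    /// A view at `block_hash` reports the transaction as future.
    fn on_future(&mut self, tx_hash: TxHash, block_hash: BlockHash) {
        self.future_transaction_views.entry(tx_hash).or_default().insert(block_hash);
    }

    /// A view at `block_hash` reports the transaction as ready. The whole
    /// set of views is migrated: once a transaction was seen as ready on
    /// any fork, it is treated as ready when deciding whether to drop it.
    fn on_ready(&mut self, tx_hash: TxHash, block_hash: BlockHash) {
        let mut views = self.future_transaction_views.remove(&tx_hash).unwrap_or_default();
        views.insert(block_hash);
        self.ready_transaction_views.entry(tx_hash).or_default().extend(views);
    }

    /// A transaction is only dropped when no view references it at all.
    fn is_referenced(&self, tx_hash: TxHash) -> bool {
        self.future_transaction_views.contains_key(&tx_hash)
            || self.ready_transaction_views.contains_key(&tx_hash)
    }
}
```

This captures why the whole list moves at once: the ready set intentionally remembers every fork that ever referenced the transaction.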
Okay, I could follow this explanation, but it's kind of complicated behaviour; maybe a comment in the code would help unsuspecting developers :D
All GitHub workflows were cancelled due to the failure of one of the required jobs.
/cmd fmt
Command "fmt" has started 🚀 See logs here
Command "fmt" has finished ✅ See logs here
This PR provides a number of improvements around handling limits and priorities in the fork-aware transaction pool.

#### Notes to reviewers

The notable changes are:

1. #### [Better support](paritytech@414ec3c) for `Usurped` transactions
   When any view reports an `Usurped` transaction (replaced by another with a higher priority), it is removed from all the views (including inactive ones). Removal is implemented by simply submitting the usurper transaction to all the views. It is also ensured that the usurped tx will not sneak into the `view_store` of a newly created view (this is why `ViewStore::pending_txs_replacements` was added).
2. #### [`TimedTransactionSource`](paritytech@f10590f) introduced
   Every view now has the information when the transaction entered the pool. Limit enforcement (currently only for future txs) uses this timestamp to find the worst transactions. Having a common timestamp ensures a coherent assessment of the transaction's importance across different views. This could later also be used to select which ready transaction shall be dropped.
3. #### `DroppedWatcher`: [improved logic](paritytech@560db28) for future transactions
   For a future transaction, if the last referencing view is removed, the transaction will be dropped from the pool. This prevents future unincluded and un-promoted transactions from staying in the pool for a long time.

And some minor changes:

1. [Simplified](paritytech@2d0bbf8) the flow in `update_view_with_mempool` (code duplication + minor bug fix).
2. `graph::BasePool`: [handling priorities](paritytech@c9f2d39) for future transactions improved (previously a transaction with lower prio was reported as failed).
3. `graph::listener`: dedicated `limit_enforced`/`usurped`/`dropped` [calls added](paritytech@7b58a68).
4. Flaky test [fixed](paritytech@e0a7bc6).
5. New tests added.

Related to: paritytech#5809

Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Iulian Barbu <[email protected]>
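The `TimedTransactionSource` idea above could look roughly like the following sketch. All names are illustrative, and the assumption that the "worst" future transactions are the ones that entered the pool earliest is mine, not necessarily the heuristic used in the actual code:

```rust
use std::time::Instant;

/// Hypothetical sketch of a transaction source tagged with the time the
/// transaction entered the pool, mirroring the `TimedTransactionSource` idea.
#[derive(Clone, Debug)]
struct TimedSource {
    external: bool,
    timestamp: Instant,
}

impl TimedSource {
    fn new_external() -> Self {
        Self { external: true, timestamp: Instant::now() }
    }
}

/// Pick up to `count` eviction candidates among future transactions.
/// Here "worst" is assumed to mean "entered the pool earliest"; because
/// the timestamp is shared across views, every view ranks transactions
/// consistently.
fn worst_future<T>(txs: &mut Vec<(T, TimedSource)>, count: usize) -> Vec<&T> {
    txs.sort_by_key(|(_, src)| src.timestamp);
    txs.iter().take(count).map(|(tx, _)| tx).collect()
}
```

The key point is that the timestamp is attached once, at pool entry, so the ranking cannot diverge between views the way per-view timestamps could.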
Higher-priority transactions can now replace lower-priority transactions even when the internal _tx_mem_pool_ is full.

**Notes for reviewers:**

- The _tx_mem_pool_ now maintains information about transaction priority. Although _tx_mem_pool_ itself is stateless, transaction priority is updated after submission to the view. An alternative approach could involve validating transactions at the `at` block, but this is computationally expensive. To avoid additional validation overhead, I opted to use the priority obtained from the runtime during submission to the view. This is the rationale behind introducing the `SubmitOutcome` struct, which synchronously communicates transaction priority from the view to the pool. This results in a very brief window during which the transaction priority remains unknown; those transactions are not taken into consideration while dropping takes place. In the future, if needed, we could update transaction priority using view revalidation results to keep this information fully up-to-date (as the priority of a transaction may change as the chain state evolves).
- When _tx_mem_pool_ becomes full (an event anticipated to be rare), transaction priority must be known to perform priority-based removal. In such cases, the most recent known block is used for validation. I think that speculative submission to the view and re-using the priority from this submission would be an unnecessary complication.
- Once the priority is determined, lower-priority transactions whose cumulative size meets or exceeds the size of the new transaction are collected, to ensure the pool size limit is not exceeded.
- A transaction removed from _tx_mem_pool_ also needs to be removed from all the views with an appropriate event (which is done by `remove_transaction_subtree`). To ensure complete removal, the `PendingTxReplacement` struct was refactored into the more generic `PendingPreInsertTask` (introduced in #6405), which covers removal and submission of a transaction in a view that may potentially be created in the background. This ensures that a removed transaction will not re-enter a newly created view.
- The `submit_local` implementation was also improved to properly handle priorities when the mempool is full. Some missing tests for this method were also added.

Closes: #5809

Co-authored-by: command-bot <>
Co-authored-by: Iulian Barbu <[email protected]>
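The cumulative-size eviction step described above could be sketched as follows. Names and signatures are illustrative, not the actual polkadot-sdk API:

```rust
/// Collect lower-priority transactions (lowest priority first) until their
/// cumulative size covers the size of the incoming transaction. Returns
/// `None` if not enough space can be freed, in which case the new
/// transaction would be rejected instead.
fn collect_evictions(
    pool: &[(u64 /* priority */, usize /* size */, &'static str /* hash */)],
    new_priority: u64,
    new_size: usize,
) -> Option<Vec<&'static str>> {
    // Only transactions with strictly lower priority may be displaced.
    let mut candidates: Vec<_> =
        pool.iter().filter(|(p, _, _)| *p < new_priority).collect();
    // Evict the lowest-priority transactions first.
    candidates.sort_by_key(|(p, _, _)| *p);

    let mut freed = 0usize;
    let mut evicted = Vec::new();
    for (_, size, hash) in candidates {
        if freed >= new_size {
            break;
        }
        freed += size;
        evicted.push(*hash);
    }
    (freed >= new_size).then_some(evicted)
}
```

The `Option` return mirrors the rule that a new transaction only enters a full pool if enough strictly-lower-priority weight can be removed to make room for it.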