limit rpc call #32
base: master
Conversation
Force-pushed from eb2b9d3 to 4ee0299.
I don't know if I like this solution, since there isn't any way for the caller to know to ask for more info; the request is just silently incomplete. This breaks the principle of least astonishment (not to mention the currently defined interface).
@jeffro256 what's the difference between this and the restricted-RPC behavior? The restricted RPC batches in 100 tx/request afaik.
Yes, you're right, I skimmed it too quickly and assumed it was
src/rpc/core_rpc_server.cpp (outdated)

```diff
@@ -661,7 +661,7 @@ namespace cryptonote
       bool incremental;
       std::vector<std::pair<crypto::hash, tx_memory_pool::tx_details>> added_pool_txs;
-      bool success = m_core.get_pool_info((time_t)req.pool_info_since, allow_sensitive, max_tx_count, added_pool_txs, res.remaining_added_pool_txids, res.removed_pool_txids, incremental);
+      bool success = m_core.get_pool_info((time_t)req.pool_info_since, allow_sensitive, max_tx_count, added_pool_txs, res.remaining_added_pool_txids, res.removed_pool_txids, incremental, LEVIN_DEFAULT_MAX_PACKET_SIZE * 0.9);
```
If we also have any blocks sent during this RPC call (i.e. when req.requested_info == COMMAND_RPC_GET_BLOCKS_FAST::BLOCKS_AND_POOL), then this limit is not conservative enough. We care about the message size with all data included, after serialization.
Also, if we limit the transactions sent but set the pool clock time to what it would be if we did NOT limit the transactions sent, then there will be "gaps" in our scanning of the pool, so we need to account for that.
Instead of 0.9, I can add a macro in cryptonote_config. What ratio do you think is ideal here?
I will think about the gap. Off the top of your head, what is the best solution to address the gap issue?
And please keep in mind that this is, logically, the exact solution the restricted RPC is using right now. So if there is any inconsistency issue, it applies to that too.
The best approach, in terms of consistency, would be to return the pool clock time of the last returned pool event, and make sure we only return pool updates in contiguous pool-time chunks.
This applies to both restricted and unrestricted.
Not really; the restricted RPC limits are very much soft limits, set much lower than other hard technical limits. Also, the restricted RPC limits tend to be deterministic before we fetch the data (e.g. a number of entries), as opposed to the number of bytes of a serialized response, which isn't known to the RPC endpoint methods.
Can you elaborate on this with an example? It is not obvious to me how that makes it different logically.
> This applicable to restricted and unrestricted.

It wouldn't be applicable to unrestricted, because an unrestricted fetch will return a daemon time for which pool updates are consistent. Whereas the way the code is structured right now, if we don't include ALL pool updates since pool_info_since BUT return the current wall time in daemon_time, then the wallet will necessarily miss pool updates in the next fetch.
@jeffro256 based on the discussion in the research meeting with jberman, do you think we can go ahead and submit this PR to the main repo?
P.S. Logs should be available here soon: monero-project/meta#1078
The limit for the pool calculation needs to include block data, since it's all returned in the same response.
The general approach is solid, but there are some updates that need to be made to make it consistent and to not overflow the limit again.
src/cryptonote_core/tx_pool.cpp (outdated)

```cpp
  {
    CRITICAL_REGION_LOCAL(m_transactions_lock);
    CRITICAL_REGION_LOCAL1(m_blockchain);

    txs.clear();
    size_t size = 0;
```
This should have a more descriptive name IMO like cumul_txblob_size
src/rpc/core_rpc_server.cpp (outdated)

```diff
@@ -661,7 +661,7 @@ namespace cryptonote
       bool incremental;
       std::vector<std::pair<crypto::hash, tx_memory_pool::tx_details>> added_pool_txs;
-      bool success = m_core.get_pool_info((time_t)req.pool_info_since, allow_sensitive, max_tx_count, added_pool_txs, res.remaining_added_pool_txids, res.removed_pool_txids, incremental);
+      bool success = m_core.get_pool_info((time_t)req.pool_info_since, allow_sensitive, max_tx_count, added_pool_txs, res.remaining_added_pool_txids, res.removed_pool_txids, incremental, LEVIN_DEFAULT_MAX_PACKET_SIZE * 0.9);
```
The limit for the pool calculation needs to include block data, since it's all returned in the same response.
src/cryptonote_core/tx_pool.cpp (outdated)

```diff
@@ -644,19 +644,21 @@ namespace cryptonote
     return true;
   }
   //------------------------------------------------------------------
-  bool tx_memory_pool::get_transactions_info(const std::vector<crypto::hash>& txids, std::vector<std::pair<crypto::hash, tx_details>>& txs, bool include_sensitive) const
+  bool tx_memory_pool::get_transactions_info(const std::vector<crypto::hash>& txids, std::vector<std::pair<crypto::hash, tx_details>>& txs, bool include_sensitive, size_t limit_size) const
```
limit_size could have a more descriptive name IMO, like cumul_txblob_size_limit.
src/cryptonote_core/tx_pool.cpp (outdated)

```diff
@@ -985,7 +987,7 @@ namespace cryptonote
       if (pit.second >= start_time)
         txids.push_back(pit.first);
     }
-    get_transactions_info(txids, added_txs, include_sensitive);
+    get_transactions_info(txids, added_txs, include_sensitive, limit_size);
     if (added_txs.size() > max_tx_count)
     {
       remaining_added_txids.reserve(added_txs.size() - max_tx_count);
```
This for loop adding the remaining TXIDs needs updating to catch when txs were skipped because of the size limit. This logic won't do it correctly if any txs are skipped.
Here's a commit that implements the logic for correctly handling skipped txs for this PR and includes the removed and remaining TXIDs in the size limit: jeffro256@93b35b1
src/cryptonote_core/tx_pool.cpp (outdated)

```diff
@@ -975,7 +977,7 @@ namespace cryptonote
       remaining_added_txids = std::vector<crypto::hash>(txids.begin() + max_tx_count, txids.end());
       txids.erase(txids.begin() + max_tx_count, txids.end());
     }
-    get_transactions_info(txids, added_txs, include_sensitive);
+    get_transactions_info(txids, added_txs, include_sensitive, limit_size);
```
The limit we want to pass here should actually be limit_size - 32ish * txids.size(), because we want full-bodied txs to use only the space that won't be used up by the TXIDs. "32ish" because of encoding details on top of the 32 raw bytes per hash. If the cumulative tx blob size were much greater than the limit, the cumulative size of the remaining TXIDs would push the total data size (full txs + TXIDs) over the limit.
Force-pushed from 4ee0299 to 4824095.
This [1] is the only remaining issue from the issues raised by @jeffro256. All other issues have been addressed by me or @jeffro256.
Co-authored-by: jeffro256 <[email protected]>
Force-pushed from 4824095 to dc30880.