RPC call returns "PrefixKeysForbidden" error code #272

Runtime calls of the `persisted_validation_data` method via `state_call` return a "PrefixKeysForbidden" error. Previously I assumed this was related to #177, since this runtime call uses the `state_root` host function. However, shouldn't the execution proof that the full node delivers include all required state entries? Why do we need this prefix key fetching here? Is it a bug after all?

Comments
The full node indeed delivers all the required state entries. The error is returned simply because this feature isn't fully implemented in smoldot. It's a bit tricky, logic-wise, to determine from a proof the list of entries starting with a given prefix, and since this had so far been useless, I decided in the past to return an error instead of implementing it.
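To illustrate why this is tricky, here is a minimal sketch (the helper is invented, not smoldot code): the naive approach of filtering the keys present in the proof by the prefix gives no way to tell whether the resulting list is complete, because a proof only contains whatever entries the full node chose to include.

```rust
use std::collections::BTreeMap;

/// Naive and incorrect in general: returns the keys *present in the proof*
/// that start with `prefix`. Any key under the prefix that the proof omits
/// is silently missing, and nothing in this function can detect that.
/// Doing this correctly requires walking the proof's trie nodes and
/// checking that the whole subtree under the prefix is covered.
fn keys_with_prefix_naive<'a>(
    proof_entries: &'a BTreeMap<Vec<u8>, Vec<u8>>,
    prefix: &'a [u8],
) -> impl Iterator<Item = &'a [u8]> + 'a {
    proof_entries
        .keys()
        .filter(move |key| key.starts_with(prefix))
        .map(|key| key.as_slice())
}
```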
An unintended side effect of #259 is that the runtime code might request the list of all keys in the entire storage in order to build the trie root cache. This might be why you're getting this error, and this is definitely problematic. I might need to revert #259. However, as a side note, I really feel like there's a problem somewhere with what you're doing.
The problem can be reproduced by sending this JSON-RPC request to a Polkadot light client:

```json
{"id": "id", "method": "state_call", "params": ["ParachainHost_persisted_validation_data", "0xe803000001"], "jsonrpc": "2.0"}
```

This is a valid JSON-RPC call for Polkadot and should request the persisted validation data for Statemint. Looking at the runtime call implementation here and here, the call is indeed read-only when the occupied core assumption is

Also, you are right that this is related to #259: before that commit, the call works fine; after it, it is broken. This is also why the smoldot-internal calls succeed, as they still use the read-only-runtime-host, if I see that correctly.
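For completeness, a minimal reproduction sketch of sending the request above; the `tungstenite` crate and a node or light client exposing its JSON-RPC interface at `ws://127.0.0.1:9944` are assumptions, not part of the original report.

```rust
use tungstenite::{connect, Message};

fn main() {
    // Connect to the JSON-RPC WebSocket endpoint (the URL is an assumption).
    let (mut socket, _response) =
        connect("ws://127.0.0.1:9944").expect("failed to connect");

    // The request quoted above: a read-only `state_call` asking for the
    // persisted validation data of para ID 1000 (Statemint), with a
    // trailing occupied-core-assumption byte.
    let request = r#"{"id": "id", "method": "state_call", "params": ["ParachainHost_persisted_validation_data", "0xe803000001"], "jsonrpc": "2.0"}"#;
    socket.send(Message::text(request)).expect("failed to send");

    // An affected smoldot answers with a "PrefixKeysForbidden" JSON-RPC
    // error instead of the SCALE-encoded persisted validation data.
    let reply = socket.read().expect("failed to read");
    println!("{reply}");
}
```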
Ah ok, I had wrongly assumed that
The issue is that in the read-only case, when the runtime asks for the storage trie root, we can just provide the value found in the header of the block, since we know that the storage hasn't been modified. In the non-read-only case, however, we (re-)calculate this storage trie root.

Unfortunately, this calculation at the moment assumes that we have access to the whole storage. For example, when the calculation needs the trie node hash of a storage item, it just asks for the storage item. And because the state trie root calculation cache isn't populated, we try to access every single storage item in order to calculate the hash of the root. This obviously isn't really compatible with reading from a Merkle proof. If I were to implement the "prefix keys" thing that the issue title mentions, the outcome would be that the Merkle proof is missing entries. So indeed, #259 made things worse.

The proper solution is to refactor the calculation a bit so that it asks only for what is needed and nothing more, and what is needed can be found in the Merkle proof. This is however non-trivial. Another alternative would be for the runtime to not ask for the storage trie root. In principle it shouldn't, but for some reason I've noticed that many (if not all?) runtime calls try to read it.
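To make the two cases concrete, a minimal sketch; the types and names here are invented and are not smoldot's actual API.

```rust
use std::collections::BTreeMap;

/// Invented type: whether the runtime call is allowed to modify storage.
enum RuntimeCallKind {
    /// The call is known not to modify storage.
    ReadOnly,
    /// The call may write to storage.
    ReadWrite,
}

/// Answers the runtime's "give me the storage trie root" host call.
fn storage_trie_root(
    kind: RuntimeCallKind,
    header_state_root: [u8; 32],
    proof_entries: &BTreeMap<Vec<u8>, Vec<u8>>,
) -> Result<[u8; 32], &'static str> {
    match kind {
        // Storage can't have changed: the root in the block header is
        // still valid and can be returned as-is.
        RuntimeCallKind::ReadOnly => Ok(header_state_root),
        // The root must be recomputed; with an empty calculation cache
        // this walks every storage item, but a Merkle proof only contains
        // a subset of the storage, so the recomputation cannot complete.
        RuntimeCallKind::ReadWrite => {
            let _ = proof_entries; // would be walked here
            Err("recomputing the root requires every storage item, \
                 but a Merkle proof only contains a subset")
        }
    }
}
```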
Thanks for the explanation!
You lost me on this part, can you rephrase maybe?
The relay chain storage root is part of the persisted validation data that this call returns, which is why the runtime call has to read the storage trie root.
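For reference, the value returned by this call has roughly the following shape (simplified from polkadot-primitives; exact types approximated for the sketch):

```rust
/// Simplified sketch of polkadot-primitives' PersistedValidationData.
struct PersistedValidationData {
    /// Head data of the parachain's parent block.
    parent_head: Vec<u8>,
    /// Block number of the relay parent.
    relay_parent_number: u32,
    /// Storage root of the relay parent: the field whose computation
    /// triggers the storage-trie-root host call discussed above.
    relay_parent_storage_root: [u8; 32],
    /// Maximum allowed size of the proof-of-validity, in bytes.
    max_pov_size: u32,
}
```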
It seems clear to me that the calculation cache system needs to be "refactored" so that the cache is maintained higher in the stack of layers, and the low-level code asks for node values, which would then come either from the cache or from the Merkle proof (a sketch of that shape follows below). What is giving me a lot of trouble is that there are situations where it's really hard to determine the minimum amount of information required. I am going to take two example situations, which seem tricky to me, to get an idea about this:

- If we imagine a runtime call that just sets a (potentially non-existing) random storage value and then gets the root hash, what you need in order to fulfill this demand is the node value of its parent, but also the node value of the child of its parent that is in the same direction as the modified node, and potentially of other siblings depending on the structure of the trie, because you need to know where in the trie to insert the new entry.
- If we imagine a runtime call that just clears a prefix and then gets the root hash, what you need is the node value of the closest ancestor of the prefix, but also, because the number of nodes being removed is bounded, you need to be able to walk down the trie.

At the moment, we have a

The naive solution would be a

I think the solution is a
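A possible shape for that refactoring, as a minimal sketch; every name here is invented, and this is not smoldot's actual API.

```rust
/// Where a requested trie node value comes from is hidden behind this
/// trait: one implementation reads a cache maintained higher in the
/// stack, another decodes the entries of a Merkle proof.
trait NodeValues {
    /// Returns the node value of the trie node at `key` (in nibbles),
    /// or `None` if the node is absent from cache and proof alike.
    fn node_value(&self, key: &[u8]) -> Option<Vec<u8>>;
}

/// Entry point of the root calculation in this design: only node values
/// are requested, never whole storage items. The state root is the hash
/// of the root node's value (Blake2-256 in Substrate tries).
fn storage_root(source: &impl NodeValues) -> Option<[u8; 32]> {
    let root_node_value = source.node_value(&[])?;
    Some(blake2_256(&root_node_value))
}

/// Stand-in for a real Blake2-256 implementation (e.g. the `blake2` crate).
fn blake2_256(_data: &[u8]) -> [u8; 32] {
    unimplemented!("use a real hash implementation here")
}
```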
- As a first step, we can add a
- Then, add a cache to
- Then, remove the existing cache from
Roadmap:
#639 was supposed to be the bulk of the changes. However, I've realized that the "requests" that it generates are still too loose to operate over proofs.
This should now be fixed after #670. At least calling `ParachainHost_persisted_validation_data` now works. I can't be sure that all runtime calls work, because this thing is insanely complicated. I've tried my best to make the runtime calls use as few proof entries as possible, and I think that I can't reduce this any further.