State difference at mainnet's block 3474317 #822
And BTW, this difference in 7 instructions may be related to one excessive loop iteration here:
There are exactly seven gas-counted instructions between 4536 and 4545.
Contract itself for quick reference:
Log around that block:
I'd also suspected …
Any progress here?
A list of syscalls this contract makes:
We know that we have exactly the same storage state before 3474317, so I can only suspect Find and iterations, but that was already tried. At the same time it's not hard to get a full trace of this contract execution from our VM just to see how those values are formed and maybe that would give some clue. Of course ideally we should compare this trace with the C# VM trace, but that's a bit more complicated.
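For the comparison itself, a throwaway line-by-line diff of two instruction dumps would probably do. The sketch below is generic and makes assumptions: the file names `neogo-trace.log` and `csharp-trace.log` are hypothetical, and it assumes one instruction per line in both dumps; it is not tied to either VM's actual tracing facilities.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

// readLines loads a trace file (one VM instruction per line) into memory.
func readLines(path string) []string {
	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	var lines []string
	s := bufio.NewScanner(f)
	for s.Scan() {
		lines = append(lines, s.Text())
	}
	if err := s.Err(); err != nil {
		log.Fatal(err)
	}
	return lines
}

func main() {
	a := readLines("neogo-trace.log")  // hypothetical neo-go dump
	b := readLines("csharp-trace.log") // hypothetical C# dump
	n := len(a)
	if len(b) < n {
		n = len(b)
	}
	// Report the first line where the two traces diverge.
	for i := 0; i < n; i++ {
		if a[i] != b[i] {
			fmt.Printf("first divergence at line %d:\n  neo-go: %s\n  C#:     %s\n", i+1, a[i], b[i])
			return
		}
	}
	fmt.Printf("no divergence in the first %d lines (lengths: %d vs %d)\n", n, len(a), len(b))
}
```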
I have checked the order of storage.Find iterator:
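For context on what "checking the order" amounts to: if the backing storage is a plain Go map, its iteration order is randomized, so any Find-style prefix scan has to impose an explicit ordering. The sketch below is only an illustration under that assumption; the `memoryStore` type is made up and is not neo-go's implementation.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// memoryStore is a hypothetical in-memory contract storage keyed by the
// serialized storage key; it is NOT neo-go's actual store implementation.
type memoryStore map[string][]byte

// find returns key/value pairs matching prefix in lexicographic key order.
// Iterating the map directly would yield a different (random) order on
// every run, which is exactly the kind of divergence being hunted here.
func (m memoryStore) find(prefix string) [][2]string {
	keys := make([]string, 0, len(m))
	for k := range m {
		if strings.HasPrefix(k, prefix) {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys) // the ordering rule under test
	res := make([][2]string, 0, len(keys))
	for _, k := range keys {
		res = append(res, [2]string{k, string(m[k])})
	}
	return res
}

func main() {
	s := memoryStore{
		"lockups_b71403": []byte("third"),
		"lockups_b51403": []byte("first"),
		"lockups_b61403": []byte("second"),
	}
	for _, kv := range s.find("lockups_") {
		fmt.Println(kv[0], "=>", kv[1])
	}
}
```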
There is at least one function …
There isn't any guaranteed ordering, true, but at the same time our current 3.0-compatible ordering is in fact compatible with neo-storage-audit at least up to this block (that's why I've not yet released the Go version of …
Side note: I'm not sure how storage key serialization affects …
FYI: neo-project/neo#946 and neo-project/neo#950, though it looks like 2.x doesn't have it. |
After I made …
Can you share state differences? Are they caused by the same ordering issue (do they react to any changes wrt this issue)?
I think so, because it is a similar contract, it also uses an iterator returned from `Storage.Find`. Ours:

```
"state": "Changed",
"key": "deb3fc4d7571d3ea0ca4ae9112baf7a9e1e375c86c6f636b7570735f2c51adddf51d59f100bd4d3678e228815b8d665f70b614030001",
"value": "002180040203b714030203b61403020640715b5bb5008201020113000640715b5bb50000"
```

Theirs (note that the key is also different):

```
"state": "Changed",
"key": "deb3fc4d7571d3ea0ca4ae9112baf7a9e1e375c86c6f636b7570735f2c51adddf51d59f100bd4d3678e228815b8d665f70b514030001",
"value": "002180040203b714030203b51403020640715b5bb5008201020113000640715b5bb50000"
```
Well I have tried:
…
Combined 1, 2, 3 in every possible combination without any success. Maybe I am just tired and missed something.
My biggest fear wrt this bug is inter-transaction or inter-block caching; I've not looked at the C# node's caching system closely enough to know whether and how some accesses by previous blocks/transactions affect caching and … But we have a cache instance per transaction, so I'd probably try to start with that: log every put/get happening for our contracts, then look at …
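One way to do that put/get logging is a thin wrapper around whatever store interface the per-transaction cache exposes. The sketch below is purely illustrative: the `store` interface and `mapStore` type are made up for the example and are not neo-go's actual types.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"log"
)

// store is a hypothetical, minimal view of a per-transaction storage cache;
// it is not neo-go's real interface, just enough to hang logging on.
type store interface {
	Get(key []byte) []byte
	Put(key, value []byte)
}

// mapStore is a trivial in-memory store used only for this example.
type mapStore map[string][]byte

func (m mapStore) Get(key []byte) []byte { return m[string(key)] }
func (m mapStore) Put(key, value []byte) { m[string(key)] = value }

// loggingStore wraps a store and logs every access -- the "log every
// put/get happening for our contracts" approach described above.
type loggingStore struct{ inner store }

func (l loggingStore) Get(key []byte) []byte {
	v := l.inner.Get(key)
	log.Printf("GET %s -> %s", hex.EncodeToString(key), hex.EncodeToString(v))
	return v
}

func (l loggingStore) Put(key, value []byte) {
	log.Printf("PUT %s = %s", hex.EncodeToString(key), hex.EncodeToString(value))
	l.inner.Put(key, value)
}

func main() {
	s := loggingStore{inner: mapStore{}}
	s.Put([]byte("lockups_b61403"), []byte{0x01})
	fmt.Printf("stored %d byte(s)\n", len(s.Get([]byte("lockups_b61403"))))
}
```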
It's gone and we have more interesting bugs now.
It comes from contract 0x2ffaafd08e0ba04b2554a7a372b03fcc1e6271d4 (which I wasn't able to find the source for, unfortunately). And it relates to `transferWithLockupPeriod` invocations of it, one of which actually differs a bit in terms of gas spent (though it does successfully transfer the asset). C#: … neo-go: …

I thought that it might be because of a different key order in `Storage.Find` results, but either it's not the case or the order is still not quite right after this change (it doesn't solve the problem).

The key in question is easily parsed like this (notice the "lockup_" prefix and some sort of sequencing being done here in the fourth component: b51403/b61403/b71403/b81403 in different keys):
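Since the original parse output isn't preserved here, a rough way to reproduce it is to locate the ASCII marker inside the raw key and split around it. The sketch below only does that mechanical split; the layout guesses in the comments are assumptions, not a documented key format (note that the ASCII run in the keys dumped above decodes to "lockups_").

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"log"
)

func main() {
	// One of the keys from the state diff above.
	key, err := hex.DecodeString("deb3fc4d7571d3ea0ca4ae9112baf7a9e1e375c86c6f636b7570735f2c51adddf51d59f100bd4d3678e228815b8d665f70b614030001")
	if err != nil {
		log.Fatal(err)
	}
	marker := []byte("lockups_") // the ASCII run present in the dumped keys
	i := bytes.Index(key, marker)
	if i < 0 {
		log.Fatal("marker not found")
	}
	// The split below is a guess at the layout, not a documented format:
	// everything before the ASCII marker (which looks like the contract hash
	// in the raw dump), the marker itself, and whatever follows it, which
	// contains the b51403/b61403/b71403/b81403 "sequencing" component
	// mentioned above.
	fmt.Printf("before marker: %x\n", key[:i])
	fmt.Printf("marker:        %s\n", marker)
	fmt.Printf("after marker:  %x\n", key[i+len(marker):])
}
```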
Values: