tests: external evm benchmarks #24050
Conversation
Force-pushed from 98fc9ab to ef0ac5f
Force-pushed from cf433bd to f4603b1
I changed the implementation to use ... Using one of the ...
@holiman, please check if jumpdest analysis is cached for these runs.
Force-pushed from 6dca006 to d545a21
Yeah, so apparently the jumpdest analysis is not cached for these runs.
It might be possible to fix this via a hack: instead of passing a plain vm.AccountRef as the sender, you make the sender a vm.Contract, so the jumpdest analysis map can be reused.
I suppose Call might be 'correcter', but I also think we should avoid including jumpdest analysis time in these benchmarks.
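For illustration, the hack could look roughly like this (a sketch only, assuming the vm.NewContract and EVM.Call signatures geth had at the time of this PR; msg and evm come from the existing benchmark setup):

// Build the caller wrapper once, before b.ResetTimer(); value and gas
// are irrelevant for the wrapper, so nil and 0 are fine here.
caller := vm.AccountRef(msg.From())
sender := vm.NewContract(caller, caller, nil, 0)
// Inside the b.N loop, `sender` is then passed to evm.Call in place of
// the plain AccountRef. Because it is a *vm.Contract, every contract
// created inside Call inherits its jumpdest map, so the analysis result
// (keyed by code hash) persists across iterations.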
Force-pushed from d545a21 to d766533
Maybe the way to go is to have an API with more explicit code analysis handling.
Any resolution to this?
Do you have any proposal? I suggested a way to hack around it -- do you think there's some better way to do it? I don't really have any good alternatives.
Wrapping with another Contract is one option. An alternative is to have more explicit code analysis cache management, i.e. analyze the code with one API outside of the loop and pass the result to the execution call.
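To illustrate what such an explicit-analysis API could look like (purely a hypothetical sketch; AnalyzeCode and CallWithAnalysis are invented names and do not exist in go-ethereum):

// Hypothetical API sketch, not actual geth code: the JUMPDEST bitmap is
// computed once by the caller and handed to every execution, instead of
// being rebuilt or cached implicitly inside Call.
analysis := vm.AnalyzeCode(code) // hypothetical: returns the jumpdest bitvec for code
b.ResetTimer()
for n := 0; n < b.N; n++ {
	// hypothetical Call variant that accepts the precomputed analysis
	if _, _, err := evm.CallWithAnalysis(sender, to, input, gas, value, analysis); err != nil {
		b.Fatal(err)
	}
}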
BTW, how do you collect "analysis" stats for benchmarks?
I did one more commit which looks a bit better (IMHO). In general the loop seems to be getting a little faster, but this is not visible in benchmarks that do more than just JUMPDEST.
This makes the analysis become reused:
diff --git a/core/vm/analysis.go b/core/vm/analysis.go
index 449cded2a8..5a4fe12d03 100644
--- a/core/vm/analysis.go
+++ b/core/vm/analysis.go
@@ -66,8 +66,11 @@ func (bits *bitvec) codeSegment(pos uint64) bool {
return ((*bits)[pos/8] & (0x80 >> (pos % 8))) == 0
}
+var AnalysisCount = uint64(0)
+
// codeBitmap collects data locations in code.
func codeBitmap(code []byte) bitvec {
+ AnalysisCount++
// The bitmap is 4 bytes longer than necessary, in case the code
// ends with a PUSH32, the algorithm will push zeroes onto the
// bitvector outside the bounds of the actual code.
diff --git a/tests/state_test.go b/tests/state_test.go
index 8ffa001c0b..9905a6cf9f 100644
--- a/tests/state_test.go
+++ b/tests/state_test.go
@@ -184,6 +184,11 @@ func runBenchmark(b *testing.B, t *StateTest) {
key := fmt.Sprintf("%s/%d/Run", subtest.Fork, subtest.Index)
b.Run(key, func(b *testing.B) {
+ vm.AnalysisCount = 0
+ defer func() {
+ b.ReportMetric(float64(vm.AnalysisCount), "analysis")
+ }()
+
vmconfig := vm.Config{}
config, eips, err := GetChainConfig(subtest.Fork)
if err != nil {
@@ -235,6 +240,11 @@ func runBenchmarkCall(b *testing.B, t *StateTest) {
key := fmt.Sprintf("%s/%d/Call", subtest.Fork, subtest.Index)
b.Run(key, func(b *testing.B) {
+ vm.AnalysisCount = 0
+ defer func() {
+ b.ReportMetric(float64(vm.AnalysisCount), "analysis")
+ }()
+
vmconfig := vm.Config{}
config, eips, err := GetChainConfig(subtest.Fork)
@@ -284,7 +294,8 @@ func runBenchmarkCall(b *testing.B, t *StateTest) {
context.BaseFee = baseFee
evm := vm.NewEVM(context, txContext, statedb, config, vmconfig)
- sender := vm.AccountRef(msg.From())
+ sender := vm.NewContract(vm.AccountRef(msg.From()), vm.AccountRef(msg.From()),
+ nil, 0)
b.ResetTimer()
for n := 0; n < b.N; n++ {
// Execute the message.
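For context on why wrapping the sender in a vm.Contract has this effect: NewContract lets the new contract inherit the caller's jumpdest map when the caller itself is a *Contract, and that map is keyed by code hash, so the analysis survives across iterations. A simplified sketch of the relevant logic in core/vm/contract.go (field names may differ slightly between versions):

// Simplified sketch of vm.NewContract's jumpdest handling.
func NewContract(caller ContractRef, object ContractRef, value *big.Int, gas uint64) *Contract {
	c := &Contract{CallerAddress: caller.Address(), caller: caller, self: object}
	if parent, ok := caller.(*Contract); ok {
		// Reuse the JUMPDEST analysis map from the calling context.
		c.jumpdests = parent.jumpdests
	} else {
		c.jumpdests = make(map[common.Hash]bitvec)
	}
	c.Gas = gas
	c.value = value
	return c
}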
I compared two implementation variants: Run (old) and Call (new). The difference is small and does not indicate which variant is faster, so I'm for staying with the "Call" variant.
LGTM
tests/state_test.go (outdated)
@@ -61,6 +68,7 @@ func TestState(t *testing.T) {
for _, dir := range []string{
stateTestDir,
legacyStateTestDir,
benchmarksDir, // FIXME: This does not seem to work, but we want to test benchmarks! |
What is this fixme about? Do the tests in the benchmarks dir still not work? Something we need to look into or can this comment be removed?
I believe it actually works as expected. Let me find it in the logs...
These run; however, the "benchmark" dir name is omitted:
--- PASS: TestState (0.05s)
--- PASS: TestState/main/weierstrudel.json (0.05s)
--- PASS: TestState/main/weierstrudel.json/London/0/trie (0.00s)
--- PASS: TestState/main/weierstrudel.json/London/0/snap (0.00s)
--- PASS: TestState/main/weierstrudel.json/London/1/trie (0.00s)
--- PASS: TestState/main/weierstrudel.json/London/1/snap (0.00s)
--- PASS: TestState/main/weierstrudel.json/London/2/trie (0.00s)
--- PASS: TestState/main/weierstrudel.json/London/2/snap (0.00s)
--- PASS: TestState/main/weierstrudel.json/London/3/trie (0.01s)
--- PASS: TestState/main/weierstrudel.json/London/3/snap (0.01s)
--- PASS: TestState/main/weierstrudel.json/London/4/trie (0.01s)
--- PASS: TestState/main/weierstrudel.json/London/4/snap (0.01s)
--- PASS: TestState/main/blake2b_huff.json (0.17s)
Force-pushed from 8f52d41 to 8b5f3ec
* tests: add ipsilon/evm-benchmarks git submodule
* tests: plug-in evm-benchmarks
I converted evmone's benchmarks to StateTest format and they are available at https://github.com/ipsilon/evm-benchmarks. Here they are added as a git submodule. We are still discussing how exactly to extract and maintain the benchmark suite outside of the evmone project.
Then I copied a lot of code from TestState() to createBenchmarkState() with the goal of using the lowest-level EVM execution function possible. I stopped at EVM.Call(), but I believe even Interpreter.Run() is possible. If some refactoring is done, some code can be shared between TestState() and BenchmarkState(). But the test runner is quite convoluted, with reflect being used at some level...
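For readers who only want the overall shape of the Call-based benchmark, a rough sketch is below. It is not the exact code in this PR; it assumes the EVM.Call signature geth used at the time (value as *big.Int, three return values) and leaves out the EVM/statedb/message setup done in state_test.go.

package tests

import (
	"math/big"
	"testing"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/vm"
)

// benchmarkEVMCall runs the given call b.N times against an already
// prepared EVM instance and fails the benchmark on any execution error.
func benchmarkEVMCall(b *testing.B, evm *vm.EVM, from, to common.Address, input []byte, gas uint64, value *big.Int) {
	sender := vm.AccountRef(from)
	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		if _, _, err := evm.Call(sender, to, input, gas, value); err != nil {
			b.Fatal(err)
		}
	}
}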