
kv: performance degradation in YCSB (A) between 20.1 and 20.2 #54515

Closed
dbist opened this issue Sep 17, 2020 · 15 comments · Fixed by #55214
Assignees
Labels
C-investigation Further steps needed to qualify. C-label will change. release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked.

Comments

@dbist
Contributor

dbist commented Sep 17, 2020

What is your situation?
Seeing about a 5-10k QPS dip in performance on the same machine shapes and an identical setup compared to 20.1.5

Select all that apply:

  • is there a difference between the performance you expect and the performance you observe?
  • do you want to improve the performance of your app?
  • are you surprised by your performance results?

Observed performance

What did you see? How did you measure it?

If you have already run tests, include your test details here:

  • which test code do you use? YCSB workloada
  • how many clients per node? none, clients are external to the nodes
  • how many requests per client / per node? 128 threads across 3 clients

Application profile

Performance depends on the application. Please help us understand how you use CockroachDB before we can discuss performance.

  • What is the storage profile? Tested with Pebble and RocksDB on 20.2 beta 1.
    • how many nodes? 9
    • how much storage? 6TB capacity
    • how much data? 5min load
    • replication factor? 3

Requested resolution

When/how would you consider this issue resolved?

Select all that apply:

  • I seek guidance as to how to tune my application or CockroachDB deployment.
  • I want CockroachDB to be optimized for my use case.

Repro steps are described in the following document

@blathers-crl

blathers-crl bot commented Sep 17, 2020

Hi @dbist, please add a C-ategory label to your issue. Check out the label system docs.

While you're here, please consider adding an A- label to help keep our repository tidy.

🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is otan.

@petermattis
Collaborator

@nvanbenschoten Can you triage for #kv? @dbist initially suspected Pebble was to blame for the regression, but he saw even worse performance on 20.2 beta 1 using RocksDB. So this seems to be outside of the #storage layer.

@nvanbenschoten nvanbenschoten self-assigned this Sep 17, 2020
@nvanbenschoten nvanbenschoten added the C-investigation Further steps needed to qualify. C-label will change. label Sep 17, 2020
@dbist dbist added the A-kv Anything in KV that doesn't belong in a more specific category. label Sep 17, 2020
@nvanbenschoten
Member

Thanks for filing @dbist. I was able to reproduce your results. I was concerned that the reproduction steps switch hardware between tests ("The new IP list is as follows"), but even without that, the ~5% reduction in throughput is visible on this workload.

The other thing to note is that the reproduction steps use a uniform access distribution, which completely changes the behavior of YCSB-A. Such an access distribution eliminates contention, which is the limiting factor for YCSB-A under the zipfian distribution. That means our nightly YCSB-A tests aren't going to be particularly relevant to this investigation. Some of the more basic KV workloads will be more relevant, as they more effectively saturate hardware resources.
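For anyone reproducing this: the distribution is selected by a flag on the YCSB workload generator. A sketch of the two invocations, assuming the `--request-distribution` flag (check `cockroach workload run ycsb --help` on your version for the exact spelling):

```
# Zipfian (the default): traffic concentrates on a few hot keys,
# so YCSB-A becomes contention-bound.
cockroach workload run ycsb --workload=A --request-distribution=zipfian <pgurl>

# Uniform (what the repro doc uses): hot keys disappear and the
# workload becomes hardware-bound instead.
cockroach workload run ycsb --workload=A --request-distribution=uniform <pgurl>
```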

I began digging into this by looking at recent roachperf degradations. The most recent major drop in performance across many different workloads was on September 12th. I've since pinned this on 2d69b3d. That commit enabled jemalloc profiling on all roachtests. The comment added in that change says that this has "minimal performance impact", but our nightly testing begs to differ. Here's the impact on kv95/enc=false/nodes=3:

name                    old ops/sec  new ops/sec  delta
kv95/enc=false/nodes=3   39.4k ± 2%   36.4k ± 5%  -7.61%  (p=0.008 n=5+5)

name                    old p50(ms)  new p50(ms)  delta
kv95/enc=false/nodes=3    3.50 ± 0%    3.76 ± 2%  +7.43%  (p=0.016 n=4+5)

name                    old p99(ms)  new p99(ms)  delta
kv95/enc=false/nodes=3    26.4 ± 5%    28.1 ± 8%    ~     (p=0.190 n=5+5)

A 7.6% hit to throughput is pretty serious. I'm tempted to revert that change, as I'm not aware of any instance where these new profiles have helped in debugging. What do you think @petermattis?

Next, I'm going to look into the throughput dip around August 8th and the subsequent one around August 21st.

@nvanbenschoten
Member

the throughput dip around August 8th

This was due to the revert back to Go 1.13. So the jump in throughput we see on kv95 between June 29th and August 8th was due to Go 1.14. We should expect to get this back once we upgrade again at the beginning of the next release cycle.

@nvanbenschoten
Member

and the subsequent one around August 21st

I believe I have pinned this regression down to 9039cb1:

name                    old ops/sec  new ops/sec  delta
kv95/enc=false/nodes=3   41.4k ± 3%   40.2k ± 3%  -2.84%  (p=0.002 n=10+10)

I haven't dug deep enough into that commit to understand where the regression is coming from, but after a brief look, I suspect it's due to some combination of increased heap allocation and increased lock contention. Concretely, I'm concerned with how mutex-heavy the appStats structure and child structures are, given that we'd expect every session that shares an application_name to contend on it (i.e. the entire kv workload). Some of the new hashing/fingerprinting routines added in that commit might also be problematic. @arulajmani would you be able to take a look into this? As is, this would probably be a release blocker.
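To make the mutex concern concrete, here is a minimal hypothetical sketch (invented names, not the actual code) of the shape of the problem: one appStats instance is shared by every session that reports the same application_name, so a single lock serializes all of them.

```
package sql

import (
	"sync"
	"time"
)

type stmtStats struct {
	count    int64
	totalLat time.Duration
}

// appStats is shared by all sessions with the same application_name.
type appStats struct {
	mu    sync.Mutex
	stmts map[string]*stmtStats // keyed by statement fingerprint ID
}

// recordStatement runs once per executed statement, in every session.
// Under a single application_name (as in the kv workload), all of those
// calls contend on the same mutex.
func (a *appStats) recordStatement(stmtID string, latency time.Duration) {
	a.mu.Lock()
	defer a.mu.Unlock()
	s, ok := a.stmts[stmtID]
	if !ok {
		s = &stmtStats{}
		a.stmts[stmtID] = s
	}
	s.count++
	s.totalLat += latency
}
```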

Also, I wasn't explicit about this in #54515 (comment): the jemalloc profiling could not be contributing to the YCSB regression, because it only affects roachtest results.

@nvanbenschoten nvanbenschoten added release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked. and removed A-kv Anything in KV that doesn't belong in a more specific category. labels Sep 20, 2020
@petermattis
Collaborator

Glad you tracked this down, @nvanbenschoten!

A 7.6% hit to throughput is pretty serious. I'm tempted to revert that change, as I'm not aware of any instance where these new profiles have helped in debugging. What do you think @petermattis?

That's a much more significant hit than I expected. I'm not sure what @knz did for testing here. Perhaps we can disable jemalloc profiles for performance-oriented roachtests. @knz can weigh in with thoughts about the importance of the profiles.

@knz
Contributor

knz commented Sep 21, 2020

The question came up while troubleshooting failed bulk I/O tests and failing fixture imports, where nodes crashed due to insufficient memory.

I agree we should revert this change in light of what we've learned.

@knz
Contributor

knz commented Sep 21, 2020

reverting in #54612

craig bot pushed a commit that referenced this issue Sep 21, 2020
54612: Revert "roachtest: enable jemalloc profiling in all roachtests" r=petermattis a=knz

This reverts commit 2d69b3d.

As discussed in #54515

Release note: None

Co-authored-by: Raphael 'kena' Poss <[email protected]>
@nvanbenschoten
Member

@arulajmani I took a look at a few performance profiles to see whether anything stood out between 9039cb1 and its parent commit. I didn't see much in the mutex profiles, but both the heap and CPU profiles painted a clear picture: the changes made in that commit come with a large cost relative to the rest of the workload.

For each profile, I filtered the samples down to those containing appStats somewhere within their call stacks.

On the parent commit (3e01420), I see no heap allocations at all (alloc_objects) that pass this filter. In terms of CPU, I see 0.32% of CPU time spent within the appStats structure:
[CPU profile screenshot]

On the commit in question (9039cb1), things are different. First, I see 3.05% of all allocated heap objects coming from within the appStats structure:
[heap profile screenshot]

I also see 1.34% of CPU time spent here:
[CPU profile screenshot]

So it appears our suspicion was correct: appStats maintenance got a lot more expensive in that commit, and this is likely the root cause of the regression.

From those profiles, it appears that ConstructStatementID is the biggest reason why maintaining the appStats is more expensive. The function seems pretty heavyweight. It allocates four times per call:

Active filters:
   focus=ConstructStatementID
Showing nodes accounting for 7443978, 2.90% of 256305045 total
----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context 	 	 
----------------------------------------------------------+-------------
                                           1638424   100% |   github.com/cockroachdb/cockroach/pkg/sql.constructStatementIDFromStmtKey /go/src/github.com/cockroachdb/cockroach/pkg/sql/app_stats.go:557
   1638424  0.64%  0.64%    1638424  0.64%                | github.com/cockroachdb/cockroach/pkg/roachpb.ConstructStatementID /go/src/github.com/cockroachdb/cockroach/pkg/roachpb/app_stats.go:35
----------------------------------------------------------+-------------
                                           3620957   100% |   github.com/cockroachdb/cockroach/pkg/sql.constructStatementIDFromStmtKey /go/src/github.com/cockroachdb/cockroach/pkg/sql/app_stats.go:557
   1409066  0.55%  1.19%    3620957  1.41%                | github.com/cockroachdb/cockroach/pkg/roachpb.ConstructStatementID /go/src/github.com/cockroachdb/cockroach/pkg/roachpb/app_stats.go:37
                                           1228837 33.94% |   fmt.Sprintf /usr/local/go/src/fmt/print.go:220
                                            983054 27.15% |   hash/fnv.(*sum128).Sum /usr/local/go/src/hash/fnv/fnv.go:200
----------------------------------------------------------+-------------
                                           1070469   100% |   github.com/cockroachdb/cockroach/pkg/sql.constructStatementIDFromStmtKey /go/src/github.com/cockroachdb/cockroach/pkg/sql/app_stats.go:557
   1070469  0.42%  1.61%    1070469  0.42%                | github.com/cockroachdb/cockroach/pkg/roachpb.ConstructStatementID /go/src/github.com/cockroachdb/cockroach/pkg/roachpb/app_stats.go:30
----------------------------------------------------------+-------------
                                           1114128   100% |   github.com/cockroachdb/cockroach/pkg/sql.constructStatementIDFromStmtKey /go/src/github.com/cockroachdb/cockroach/pkg/sql/app_stats.go:557
         0     0%  1.61%    1114128  0.43%                | github.com/cockroachdb/cockroach/pkg/roachpb.ConstructStatementID /go/src/github.com/cockroachdb/cockroach/pkg/roachpb/app_stats.go:29
                                           1114128   100% |   hash/fnv.New128 /go/src/github.com/cockroachdb/cockroach/pkg/sql/conn_executor_exec.go:29
----------------------------------------------------------+-------------

I think the key to avoiding this regression is going to be making ConstructStatementID more efficient. But before doing so, we should test the theory. One way to do so would be to turn the function into a no-op and see if that removes the regression.
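A sketch of that experiment, assuming we simply stub out the body (correctness doesn't matter here; collapsing all statements into a single stats bucket is fine for a throughput A/B test):

```
// Hypothetical no-op variant: skip the hashing and allocations entirely,
// then re-run kv95 against this build and compare with benchstat.
func ConstructStatementID(anonymizedStmt string, failed bool, implicitTxn bool) StmtID {
	return StmtID("noop")
}
```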

@knz
Contributor

knz commented Oct 1, 2020

I've looked at the code a little.

Assuming we can't reorganize the code to avoid recomputing an ID from scratch every time, here's what we can do with ConstructStatementID:

  • use FNV-64 instead of FNV-128
  • inline all the things (note that fnv.New128() returns a hash.Hash, so the Write calls cannot be inlined).

I get this:


name                     old time/op    new time/op    delta
ConstructStatementID-32     405ns ±17%      39ns ±12%  -90.34%  (p=0.008 n=5+5)

name                     old alloc/op   new alloc/op   delta
ConstructStatementID-32      120B ± 0%       16B ± 0%  -86.67%  (p=0.008 n=5+5)

name                     old allocs/op  new allocs/op  delta
ConstructStatementID-32      6.00 ± 0%      1.00 ± 0%  -83.33%  (p=0.008 n=5+5)
package roachpb

import "unsafe"

// StmtID is a statement fingerprint ID: the hex encoding of an FNV-64 hash.
type StmtID string

func ConstructStatementID(anonymizedStmt string, failed bool, implicitTxn bool) StmtID {
        // Magic constants suitable for an FNV-64 hash.
        s0 := uint64(14695981039346656037) // FNV-64 offset basis

        for _, c := range anonymizedStmt {
                // These two assignments, as well as those below, should be moved to
                // a helper function to share the code.
                s0 *= 1099511628211 // FNV-64 prime
                s0 ^= uint64(c)
        }

        if failed {
                s0 *= 1099511628211
                s0 ^= uint64('F')
        } else {
                s0 *= 1099511628211
                s0 ^= uint64('S')
        }
        if implicitTxn {
                s0 *= 1099511628211
                s0 ^= uint64('I')
        } else {
                s0 *= 1099511628211
                s0 ^= uint64('E')
        }

        // Hex-encode the hash by hand instead of going through fmt.Sprintf.
        b := make([]byte, 16)
        const hex = "0123456789abcdef"
        for i := 0; i < 16; i++ {
                b[i] = hex[s0&0xf]
                s0 = s0 >> 4
        }
        // Reinterpret the byte slice as a string to avoid a second allocation;
        // b is never mutated after this point, so this is safe.
        return StmtID(*(*string)(unsafe.Pointer(&b)))
}

@knz
Contributor

knz commented Oct 1, 2020

Benchmarking as usual with this:

package roachpb

import "testing"

func BenchmarkConstructStatementID(b *testing.B) {
        for i := 0; i < b.N; i++ {
                _ = ConstructStatementID("hello", false, true)
        }
}
make bench PKG=./pkg/roachpb BENCHES=BenchmarkConstructStatementID TESTFLAGS='-count 5 -benchmem -benchtime 2s'

before/after comparison with benchstat

(explaining for Arul in case you've never done that before)
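Concretely, that workflow looks something like this (file names are illustrative):

```
# on the old commit:
make bench PKG=./pkg/roachpb BENCHES=BenchmarkConstructStatementID \
  TESTFLAGS='-count 5 -benchmem -benchtime 2s' 2>&1 | tee old.txt
# check out the new commit, rerun:
make bench PKG=./pkg/roachpb BENCHES=BenchmarkConstructStatementID \
  TESTFLAGS='-count 5 -benchmem -benchtime 2s' 2>&1 | tee new.txt
# compare:
benchstat old.txt new.txt
```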

@petermattis
Collaborator

You could remove that final allocation by changing the type of roachpb.StmtID to [16]byte. It looks like StmtID is used as a map key, but that will still work with a fixed-length array, since arrays are comparable in Go.

@knz
Contributor

knz commented Oct 1, 2020

You could remove that final allocation by changing the type of roachpb.StmtID to [16]byte.

It's used for a proto field. I'm not sure if it's possible to change the string in the proto to [16]byte.

@knz
Contributor

knz commented Oct 1, 2020

Oh, what we could do instead is use a uint64. The FNV hash computes a uint64, so there's no need for a string.
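A sketch of what that could look like (hypothetical; whether the proto field can move from string to uint64 is the open question):

```
// If StmtID were a uint64, the FNV-64 result could be returned directly,
// removing the hex encoding and its allocation entirely.
type StmtID uint64

// A uint64 (like a [16]byte) is comparable, so it still works as a map key:
var execCountByID = make(map[StmtID]int64)
```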

@arulajmani
Collaborator

Thanks @nvanbenschoten for helping narrow down the allocations and @knz for the suggestions on how to improve this. I tried to validate the theory above by making ConstructStatementID a no-op, and I got the following result:

name             old ops/s   new ops/s   delta
kv95-throughput  36.7k ± 4%  37.0k ± 2%   ~     (p=0.796 n=9+9)

That seems to have clawed back some, but not all, of the regression. For comparison, these are the numbers between the parent commit (3e01420) and my commit (9039cb1), which is the gap I'm looking to close:

name             old ops/s   new ops/s   delta
kv95-throughput  37.7k ± 5%  36.7k ± 4%  -2.64%  (p=0.035 n=10+9)

The next step for me is to re-organize the code a bit so that we don't always have to compute the statement ID from scratch. Then, I'll take the other suggestions on this thread, run the numbers again, and report back.
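One plausible shape for that reorganization (a hypothetical sketch, not necessarily what the eventual PR will do) is to memoize IDs by statement key, so the hot path only hashes the first time a statement is seen:

```
package sql

import "sync"

// StmtID mirrors the roachpb type; redeclared so the sketch is self-contained.
type StmtID string

// stmtKey carries the inputs the ID is derived from.
type stmtKey struct {
	anonymizedStmt string
	failed         bool
	implicitTxn    bool
}

// stmtIDCache memoizes ConstructStatementID results so repeated executions
// of the same statement don't recompute the hash from scratch.
type stmtIDCache struct {
	mu  sync.Mutex
	ids map[stmtKey]StmtID
}

func (c *stmtIDCache) get(k stmtKey, construct func(stmtKey) StmtID) StmtID {
	c.mu.Lock()
	defer c.mu.Unlock()
	id, ok := c.ids[k]
	if !ok {
		id = construct(k) // pay the hashing cost only on a cache miss
		c.ids[k] = id
	}
	return id
}
```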

arulajmani added a commit to arulajmani/cockroach that referenced this issue Oct 4, 2020
We recently increased allocations in the `appStats` structure when
adding support for transaction level statistics in cockroachdb#52704. This was
because the interactions with the `fnv` library were expensive in terms
of allocations. This patch aims to claw back the regression by:

- Using our own implementation of the FNV algorithm instead of the
library, which is significantly lighter weight (microbenchmarks below).
- Reorganizing the code to only construct the statement IDs (deemed the
expensive operation) if required.

When comparing the difference between the commit that introduced the
regression and the changes proposed by this diff, I got the following
improvements on the KV workload:

```
name             old ops/s   new ops/s   delta
kv95-throughput  34.5k ± 6%  35.7k ± 4%  +3.42%  (p=0.023 n=10+10)

```

Microbenchmarks for the new hashing algorithm (written/run by @knz):
```
name                     old time/op    new time/op    delta
ConstructStatementID-32     405ns ±17%      39ns ±12%  -90.34%  (p=0.008 n=5+5)

name                     old alloc/op   new alloc/op   delta
ConstructStatementID-32      120B ± 0%       16B ± 0%  -86.67%  (p=0.008 n=5+5)

name                     old allocs/op  new allocs/op  delta
ConstructStatementID-32      6.00 ± 0%      1.00 ± 0%  -83.33%  (p=0.008 n=5+5)
```

Closes cockroachdb#54515

Release note: None
arulajmani added a commit to arulajmani/cockroach that referenced this issue Oct 5, 2020
arulajmani added a commit to arulajmani/cockroach that referenced this issue Oct 6, 2020
arulajmani added a commit to arulajmani/cockroach that referenced this issue Oct 6, 2020
craig bot pushed a commit that referenced this issue Oct 7, 2020
55062: kvserver: remove unused parameter r=TheSamHuang a=tbg

Release note: None

55214: sql: fix performance regression due to increased hash allocations r=nvanbenschoten,knz a=arulajmani

Closes #54515

Release note: None

55277: server: create engines in NewServer r=irfansharif a=tbg

We were jumping through a number of hoops to create the engines only in
`(*Server).Start` since that seems to be the "idiomatic" place to start
moving parts. However, it creates a lot of complexity since various
callbacks have to be registered with access to engines. Move engine
creation to `NewServer`.

This unblocks #54936.

Release note: None


55280: roachtest: recognize GEOS dlls on more platforms r=otan a=knz

This makes roachtest work on the BSDs again.

Release note: None

Co-authored-by: Tobias Grieger <[email protected]>
Co-authored-by: arulajmani <[email protected]>
Co-authored-by: David Hartunian <[email protected]>
Co-authored-by: Raphael 'kena' Poss <[email protected]>
@craig craig bot closed this as completed in e587e13 Oct 7, 2020
arulajmani added a commit to arulajmani/cockroach that referenced this issue Oct 7, 2020
jayshrivastava pushed a commit that referenced this issue Oct 8, 2020