storage: avoid acquiring raftMu in Replica.propose #24990
Conversation
This is sweet - can't wait to hear whether it affects TPCC.
pkg/storage/replica.go, line 564 at r1:
#24920 is getting rid of this logic. You and Ben will have to work out the sequencing of these PRs.

pkg/storage/replica.go, line 3209 at r1:
It's interesting that you are seeing a benefit. Perhaps the rest of the system has evolved enough to make this a bottleneck now when it wasn't previously.
Reminder, we're in the stabilization period before the next 2.1 alpha release. We should consider the risk vs. reward for this change and consider merging next week (after the 2.1 alpha has been cut) instead of this week.
TFTR. I'll hold off on merging this until next week and will coordinate with #24920.
This is also related to #19156. Thanks @nvanbenschoten!
pkg/storage/replica.go, line 528 at r1:
This is a weird and subtle rule. I don't like introducing non-standard synchronization rules because they're hard to reason about. How confident are we that the old comment ("It appears that we only need to hold Replica.raftMu when calling raft.NewRawNode. We could get fancier with the locking here to…") is still true? The race detector can't help us here because…

pkg/storage/replica.go, line 3209 at r1:
Previously, petermattis (Peter Mattis) wrote…
I bet that comment came before we were fsyncing all log writes. I wonder if we could refactor…
pkg/storage/replica.go, line 3209 at r1:
I did an audit a long time ago about long holds of…
pkg/storage/replica.go, line 3209 at r1:
Previously, petermattis (Peter Mattis) wrote…
As called out in #19156, …
Force-pushed from 1008280 to 4b83913.
pkg/storage/replica.go, line 528 at r1:
Previously, bdarnell (Ben Darnell) wrote…
All external mutations to…

pkg/storage/replica.go, line 564 at r1:
Previously, petermattis (Peter Mattis) wrote…
#24920 is a much larger change, so that should go in first. It looks like it will be merged tomorrow.
Force-pushed from 4b83913 to c943275.
pkg/storage/replica.go, line 528 at r1:
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
It's certainly simpler, but is it safe? I didn't mean to say that we never need to hold raftMu when calling withRaftGroupLocked; I was asking whether even the lesser claim from that old comment (that we could avoid holding raftMu while calling withRaftGroupLocked if the group already exists) still holds. I think we need a fresh analysis of what exactly raftMu is needed for so we can figure out whether it's safe to make this change. I can't remember all the subtleties here and I'm not confident that the tests we have are enough to expose any issues.
Force-pushed from c943275 to 5ec656c.
pkg/storage/replica.go, line 528 at r1:
Previously, bdarnell (Ben Darnell) wrote…
It's certainly simpler, but is it safe? I didn't mean to say that we never need to hold raftMu when calling withRaftGroupLocked; I was asking whether even the lesser claim from that old comment (that we could avoid holding raftMu while calling withRaftGroupLocked if the group already exists) still holds.
I think we need a fresh analysis of what exactly raftMu is needed for so we can figure out whether it's safe to make this change. I can't remember all the subtleties here and I'm not confident that the tests we have are enough to expose any issues.
I went back through and audited the use of `Replica.mu` and `Replica.raftMu`. I also audited all manipulation of `Replica.mu.internalRaftGroup`. I'm now pretty convinced that the original version of this change was correct. I've reverted the code to that.

If the `internalRaftGroup` is non-nil and we hold `Replica.mu`, then `withRaftGroupLocked` should be safe to call, because anyone who resets it to nil (`initRaftMuLockedReplicaMuLocked`, `setReplicaIDRaftMuLockedMuLocked`, `removeReplicaImpl`) also holds `Replica.mu` and handles local proposals in the same critical section. Additionally, calling `Propose` on the raft group should be safe even without `Replica.raftMu`, because it's fine if it ends up being concurrent with a call to `handleRaftReadyRaftMuLocked` - either the `raft.Ready` will contain the new proposals or it won't.

The only thing that looks fishy to me is that this will allow us to call `Replica.unquiesceLocked()` without the `raftMu`, but we seem to do that elsewhere as well. Do you have any insight into this?
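For readers following along, here is a minimal, self-contained sketch of the locking rule described above. It is not the actual CockroachDB implementation - the types and the lazy-creation path are simplified placeholders - but it shows the shape of the fast path (propose under `mu` alone when the raft group already exists) and the slow path (take `raftMu` first when the group may need to be created).

```go
// Sketch only: simplified stand-ins for the pkg/storage types, not the real
// CockroachDB code. It illustrates the rule under discussion - if the lazily
// created raft group already exists, proposing under mu alone is safe, because
// every function that resets the group to nil also holds mu.
package main

import (
	"fmt"
	"sync"
)

// rawNode stands in for raft.RawNode.
type rawNode struct{}

func (rn *rawNode) Propose(data []byte) error {
	fmt.Printf("proposed %d bytes\n", len(data))
	return nil
}

type Replica struct {
	// raftMu protects raft processing (group creation, handleRaftReady).
	raftMu sync.Mutex
	mu     struct {
		sync.Mutex
		// internalRaftGroup is lazily created and only reset to nil while
		// both raftMu and mu are held.
		internalRaftGroup *rawNode
	}
}

// withRaftGroupLocked requires r.mu to be held. If the group does not exist
// yet, the caller must also hold r.raftMu so creation is serialized with
// raft processing.
func (r *Replica) withRaftGroupLocked(f func(*rawNode) error) error {
	if r.mu.internalRaftGroup == nil {
		r.mu.internalRaftGroup = &rawNode{}
	}
	return f(r.mu.internalRaftGroup)
}

func (r *Replica) propose(data []byte) error {
	propose := func(g *rawNode) error { return g.Propose(data) }

	// Fast path: the group already exists, so r.mu alone is sufficient.
	r.mu.Lock()
	if r.mu.internalRaftGroup != nil {
		err := r.withRaftGroupLocked(propose)
		r.mu.Unlock()
		return err
	}
	r.mu.Unlock()

	// Slow path: the group may need to be created. Acquire raftMu before mu
	// (the usual lock ordering) and re-check under both locks.
	r.raftMu.Lock()
	defer r.raftMu.Unlock()
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.withRaftGroupLocked(propose)
}

func main() {
	var r Replica
	_ = r.propose([]byte("a")) // first proposal takes the slow path
	_ = r.propose([]byte("b")) // subsequent proposals stay on the fast path
}
```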
pkg/storage/replica.go, line 528 at r1:
> Do you have any insight into this?

Not really. Unquiescing without raftMu seems about the same as Proposing without it.
TFTR. bors r+
Timed out
bors r+
24990: storage: avoid acquiring raftMu in Replica.propose r=nvanbenschoten a=nvanbenschoten

Related to #9827.

This addresses a TODO to avoid acquiring `Replica.raftMu` in `Replica.propose` in the common case where the replica's internal raft group is not null.

This showed up in blocking profiles when running sysbench's `oltp_insert` workload. Specifically, the locking of `raftMu` in `Replica.propose` was responsible for **29.3%** of blocking in the profile:

_raftMu locking in Replica.propose is responsible for the second large pillar_ (blocking profile screenshot omitted)

This change increases throughput of the sysbench workload by **11%** and reduces the reported average latency by **10%**. This was averaged over a few runs. Below is the output of a representative run before and after.

Before

```
SQL statistics:
    queries performed:
        read:                            0
        write:                           202253
        other:                           0
        total:                           202253
    transactions:                        202253 (3368.51 per sec.)
    queries:                             202253 (3368.51 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

Throughput:
    events/s (eps):                      3368.5102
    time elapsed:                        60.0423s
    total number of events:              202253

Latency (ms):
         min:                                    0.99
         avg:                                   37.98
         max:                                  141.98
         95th percentile:                       58.92
         sum:                              7681469.72

Threads fairness:
    events (avg/stddev):           1580.1016/4.11
    execution time (avg/stddev):   60.0115/0.01
```

After

```
SQL statistics:
    queries performed:
        read:                            0
        write:                           228630
        other:                           0
        total:                           228630
    transactions:                        228630 (3808.46 per sec.)
    queries:                             228630 (3808.46 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

Throughput:
    events/s (eps):                      3808.4610
    time elapsed:                        60.0321s
    total number of events:              228630

Latency (ms):
         min:                                    3.59
         avg:                                   33.60
         max:                                  156.34
         95th percentile:                       51.02
         sum:                              7680843.03

Threads fairness:
    events (avg/stddev):           1786.1719/3.89
    execution time (avg/stddev):   60.0066/0.01
```

After this change, the blocking profile shows that `Replica.propose` is no longer a problem. (second blocking profile screenshot omitted)

Release note (performance improvement): Reduce lock contention in Replica write path.

Co-authored-by: Nathan VanBenschoten <[email protected]>
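As an aside on the methodology mentioned in the description: the PR doesn't spell out how the blocking profile was captured, but a generic way to collect one from a Go process is to enable block profiling and scrape it through `net/http/pprof`, roughly as in the sketch below (the address and sampling rate are arbitrary placeholders, not values taken from CockroachDB).

```go
// Generic Go blocking-profile setup - not CockroachDB-specific.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
	"runtime"
)

func main() {
	// Record every blocking event. Larger rates sample less often and add
	// less overhead; 1 gives the most detailed profile.
	runtime.SetBlockProfileRate(1)

	// Serve the profiles; fetch the blocking profile with:
	//   go tool pprof http://localhost:6060/debug/pprof/block
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```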
Build succeeded