storage: remove slow path for MVCCResolveWriteIntentRange #81063
Conversation
Reviewed 2 of 2 files at r1, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @sumeerbhola)
pkg/storage/mvcc.go
line 3369 at r1 (raw file):

```go
	num++
}
if max > 0 && num == max {
```

Do we want to move this up right before `// Parse the MVCCMetadata to see if it is a relevant intent.`? Generally, resume spans start at the first key that could not be returned/processed, not at the key following the last key that was returned/processed. This avoids returning a resume span in cases where the limit exactly matches the number of keys in the span.
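To make the convention concrete, here is a minimal, self-contained Go sketch (not the actual mvcc.go code; `scanWithLimit` and its key handling are invented for illustration) of checking the limit *before* processing each key, so the resume key is the first unprocessed key and no resume span is returned when the limit exactly matches the key count:

```go
package main

import "fmt"

// scanWithLimit processes up to max keys and returns the keys processed
// plus a resume key (empty if the scan completed). Checking the limit
// before processing each key means the resume key is the first key that
// could NOT be processed, so a limit that exactly matches the number of
// keys produces no resume span.
func scanWithLimit(keys []string, max int) (processed []string, resumeKey string) {
	for _, k := range keys {
		if max > 0 && len(processed) == max {
			return processed, k // first key that could not be processed
		}
		processed = append(processed, k)
	}
	return processed, "" // iterator exhausted: no resume span
}

func main() {
	done, resume := scanWithLimit([]string{"a", "b", "c"}, 3)
	fmt.Println(done, resume == "") // limit equals key count: no resume span
	done, resume = scanWithLimit([]string{"a", "b", "c"}, 2)
	fmt.Println(done, resume) // resume at the first unprocessed key "c"
}
```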
(force-pushed from 0bed344 to 40549d8)
TFTR!
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @nvanbenschoten)
pkg/storage/mvcc.go
line 3369 at r1 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…

> Do we want to move this up right before `// Parse the MVCCMetadata to see if it is a relevant intent.`? Generally, resume spans start at the first key that could not be returned/processed, not at the key following the last key that was returned/processed. This avoids returning a resume span in cases where the limit exactly matches the number of keys in the span.
Done
Reviewed all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @nvanbenschoten)
pkg/storage/mvcc.go
line 3369 at r1 (raw file):
Previously, sumeerbhola wrote…
Done
We're still returning `lastResolvedKey.Next()`. Do we want to? Can we return `sepIter.Key().Key` (the key of the first unprocessed intent, if there is one)? Also, can we move this after the `if intent.Txn.ID != meta.Txn.ID {` block?
(force-pushed from 40549d8 to eaecc76)
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @nvanbenschoten)
pkg/storage/mvcc.go
line 3369 at r1 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…

> We're still returning `lastResolvedKey.Next()`. Do we want to? Can we return `sepIter.Key().Key` (the key of the first unprocessed intent, if there is one)? Also, can we move this after the `if intent.Txn.ID != meta.Txn.ID {` block?
- I wanted to keep the same non-allocation behavior that was there before this change (may not be important).
- The max checking happens after the previous intent has been resolved and `sepIter` has been stepped once. So from a wasted-work perspective when resuming, returning `lastResolvedKey.Next()` or `sepIter.Key().Key` are equivalent, since seeking to either will place the iter in the same position.
- If we moved the max checking to after `if intent.Txn.ID != meta.Txn.ID` we would definitely want to use `sepIter.Key().Key` to avoid wasting work when resuming. Given that there have been PRs (e.g. `ExportRequest` using `ResourceLimiter`) to bound the work performed in a KV request, I'm a bit wary of continuing to do work after the max has been reached. I realize that the current code is insufficient since there could already have been a large amount of iteration over other txns' intents until we reached max.
The code prior to this refactor, for both the slow and fast paths, would return a resume span even if the iterator was exhausted, which is why `TestEvaluateBatch/ranged_intent_resolution_with_MaxSpanRequestKeys=3` failed. Now fixed.
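The equivalence argued in the second bullet can be sketched with a toy sorted-key iterator (the `iter` type here is hypothetical, standing in for `sepIter`, and appending `"\x00"` plays the role of `Key.Next()`): after resolving `lastResolvedKey` the live iterator has been stepped to the next key, and a fresh iterator seeked to `lastResolvedKey.Next()` lands on that same key.

```go
package main

import (
	"fmt"
	"sort"
)

// iter is a toy sorted-key iterator, a stand-in for sepIter.
type iter struct {
	keys []string
	pos  int
}

// seekGE positions the iterator at the first key >= k.
func (it *iter) seekGE(k string) {
	it.pos = sort.SearchStrings(it.keys, k)
}

func (it *iter) key() string { return it.keys[it.pos] }

func main() {
	keys := []string{"a", "c", "e"}

	// Live iterator: resolve "a", then step once (now on "c").
	it := &iter{keys: keys}
	it.seekGE("a")
	lastResolved := it.key()
	it.pos++

	// Fresh iterator: seek to the analogue of lastResolvedKey.Next().
	fresh := &iter{keys: keys}
	fresh.seekGE(lastResolved + "\x00")

	// Both iterators sit on the same key, "c".
	fmt.Println(it.key() == fresh.key(), it.key())
}
```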
Reviewed 1 of 1 files at r2, 1 of 1 files at r3, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @sumeerbhola)
bors r=nvanbenschoten

Build failed.

Taking the liberty of re-borsing this, since I'm working on implementing MVCC range tombstone handling. bors retry

Build failed.

Failed on bors retry.
Build failed.

Different flake, bors retry

Build failed.
(force-pushed from eaecc76 to add67e3)
Maybe it was unrelated, since CI is green now.

bors r=nvanbenschoten

Build failed.

flake in TestComposeGSSPython #76547

bors retry

Build failed.

another flake in TestComposeGSSPython

bors retry

Build succeeded.
We no longer have physically interleaved intents that needed the slow path.

Release note: None