kv: return uncertainty error on intent in uncertainty interval
This commit is a partial revert of cockroachdb#40600 and cockroachdb#46830. It solves the same
problem that those PRs were solving, but in a different way.

Those two PRs were handling the case where a reading transaction
observes an intent in its uncertainty interval. Before those fixes, we
were not considering intents in a scan's uncertainty interval to be
uncertain. This had the potential to cause stale reads, because an
unresolved intent doesn't indicate that its transaction is uncommitted;
the intent's transaction may already have committed and may be a causal
ancestor of the scan.

The approach the first PR took was to throw a WriteIntentError on
intents in a scan's uncertainty interval. Effectively, it made scans
consider intents in their uncertainty interval to be write-read
conflicts. This had the benefit that the scan would push the intent and
eventually resolve the conflict, either by aborting the intent, pushing
it out of the read's uncertainty interval, or waiting for it to commit.
In this last case (which is by far the most common), the scan would
then throw an uncertainty error, because it now had a committed value in
its uncertainty interval.

The second PR introduced some awkward code to deal with the fallout from
this decision. Once we started trying to keep the lock table in sync
with the MVCC state, we needed to use latches to ensure proper
synchronization between any operations that could conflict because such
conflicts would influence the state of the lock table. This meant that
we needed to start holding latches across a read's uncertainty interval,
because intent writes in this interval were considered write-read
conflicts. This led to some moderately gross code and always felt a little
wrong.

Now that we are complicating the logic around uncertainty intervals even
further, this becomes even more of a burden. This motivates a reworking
of these interactions. This commit replaces the logic that considers
intents in a transaction's uncertainty interval to be write-read
conflicts with logic that considers such intents to be... uncertain.
Doing so means that a transaction will immediately refresh/restart
above the uncertain timestamp and will only then begin conflicting
with the intent.
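
As a rough sketch of the new decision, the hypothetical helper below uses
simplified integer timestamps and ignores the own-intent and
failOnMoreRecent cases that the real scanner handles. An intent at or
below the read timestamp remains a write-read conflict; an intent above
the read timestamp but at or below the uncertainty limit is now treated
as uncertain; an intent above the uncertainty limit is not visible to the
read.

package main

import "fmt"

// classifyIntent is a hypothetical illustration; timestamps are simplified
// to int64 values rather than hlc.Timestamps.
func classifyIntent(intentTS, readTS, maxTS int64) string {
	switch {
	case intentTS <= readTS:
		return "write-read conflict: push the intent's transaction"
	case intentTS <= maxTS:
		return "uncertain: return ReadWithinUncertaintyIntervalError"
	default:
		return "above the uncertainty limit: read below the intent"
	}
}

func main() {
	readTS, maxTS := int64(12), int64(15)
	fmt.Println(classifyIntent(10, readTS, maxTS)) // conflict, as before this commit
	fmt.Println(classifyIntent(14, readTS, maxTS)) // uncertain (was a write-read conflict before)
	fmt.Println(classifyIntent(16, readTS, maxTS)) // invisible to the read
}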

This has a number of nice benefits:
1. it keeps all considerations of read uncertainty intervals down in
   MVCC iteration logic. The lock table no longer needs to be aware of it.
   This is a big win now and an even bigger win once we introduce synthetic
   timestamps.
2. readers that are almost certainly bound to hit an uncertainty error
   and need to restart will now do so sooner. In rare cases, this may
   result in wasted work. In the vast majority of cases, this will allow
   the reader to be more responsive to the commit of its conflict.
3. uncertainty errors will only be thrown for locks in the uncertainty
   interval of a read that are protecting a provisional write (intents).
   Before, any form of a lock in a read's uncertainty interval would be
   considered a write-read conflict, which was pessimistic and not needed
   for correctness.

   In a future with a fully segregated lock table, the change in semantic
   meaning here becomes even clearer. Instead of detecting the lock
   associated with an intent in a read's uncertainty interval and declaring
   a write-read conflict, the read will pass through the lock table
   untouched, detect the provisional value associated with the intent, and
   declare uncertainty. This seems closer to what we're actually trying
   to say about these interactions.

Before making this change, I intend to validate the hypothesis that it
will not affect performance (or may even slightly improve performance)
by running it on the YCSB benchmark suite.
nvanbenschoten committed Jan 6, 2021
1 parent d87fa94 commit 5c037e6
Showing 7 changed files with 108 additions and 229 deletions.
43 changes: 12 additions & 31 deletions pkg/kv/kvserver/batcheval/declare.go
@@ -50,39 +50,20 @@ func DefaultDeclareIsolatedKeys(
access = spanset.SpanReadOnly
if header.Txn != nil {
// For transactional reads, acquire read latches all the way up to
// the transaction's MaxTimestamp, because reads may observe locks
// the transaction's MaxTimestamp, because reads may observe writes
// all the way up to this timestamp.
//
// TODO(nvanbenschoten): this parallels similar logic in
// concurrency.Request.readConflictTimestamp, which indicates that
// there is almost certainly a better way to structure this. There
// are actually two issues here that lead to this duplication:
//
// 1. latch spans and lock spans are declared separately. While these
// concepts are not identical, it appears that lock spans are always
// a subset of latch spans, which means that we can probably unify
// the concepts more closely than we have thus far. This would
// probably also have positive performance implications, as the
// duplication mandates extra memory allocations.
//
// 2. latch spans can each be assigned unique MVCC timestamps but lock
// spans inherit the timestamp of their request's transaction (see
// lockTable and concurrency.Request.{read,write}ConflictTimestamp).
// This difference is strange and confusing. It's not clear that the
// generality of latches each being declared at their own timestamp
// is useful. There may be an emergent pattern that arises here when
// we unify latch and lock spans (see part 1) where latches that are
// in the lock span subset can inherit their request's transaction's
// timestamp and latches that are not are non-MVCC latches.
//
// Note that addressing these issues does not necessarily need to
// lead to the timestamp that MVCC spans are interpretted at being
// the same for the purposes of the latch manager and lock-table.
// For instance, once the lock-table is segregated and all logic
// relating to "lock discovery" is removed, we no longer need to
// acquire read latches up to a txn's max timestamp, just to its
// read timestamp. However, we will still need to search the
// lock-table up to a txn's max timestamp.
// It is critical that reads declare latches up through their
// uncertainty interval so that they are properly synchronized with
// earlier writes that may have a happened-before relationship with
// the read. These writes could not have completed and returned to
// the client until they were durable in the Range's Raft log.
// However, they may not have been applied to the replica's state
// machine by the time the write was acknowledged, because Raft
// entry application occurs asynchronously with respect to the
// writer (see AckCommittedEntriesBeforeApplication). Latching is
// the only mechanism that ensures that any observers of the write
// wait for the write to apply before reading.
timestamp.Forward(header.Txn.MaxTimestamp)
}
}
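
The sketch below is a hypothetical illustration of this scenario with
simplified integer timestamps, not the real latch manager. A write at
timestamp 14 can be acknowledged before it applies; a causally later read
with read timestamp 12 and MaxTimestamp 15 only waits for that write if
its read latch is declared at the top of its uncertainty interval.

// latchesConflict is a hypothetical illustration of latch semantics with
// simplified integer timestamps: a read latch at timestamp t waits for
// write latches at timestamps <= t.
func latchesConflict(readLatchTS, writeTS int64) bool {
	return writeTS <= readLatchTS
}

// A write at ts=14 can be acknowledged to its client before it applies to
// the state machine. A later read with read timestamp 12 and MaxTimestamp
// 15 must wait for it:
//
//   latchesConflict(12, 14) == false  // latch at the read timestamp: misses the write
//   latchesConflict(15, 14) == true   // latch at the uncertainty limit: waits as required
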
1 change: 0 additions & 1 deletion pkg/kv/kvserver/concurrency/concurrency_manager.go
@@ -419,7 +419,6 @@ func (r *Request) readConflictTimestamp() hlc.Timestamp {
ts := r.Timestamp
if r.Txn != nil {
ts = r.Txn.ReadTimestamp
ts.Forward(r.Txn.MaxTimestamp)
}
return ts
}
6 changes: 6 additions & 0 deletions pkg/kv/kvserver/concurrency/lock_table_waiter.go
@@ -613,6 +613,12 @@ func (w *lockTableWaiterImpl) pushHeader(req Request) roachpb.Header {
// could race). Since the subsequent execution of the original request
// might mutate the transaction, make a copy here. See #9130.
h.Txn = req.Txn.Clone()
// We must push at least to req.readConflictTimestamp(), but for
// transactional requests we actually want to go all the way up to the
// top of the transaction's uncertainty interval. This allows us to not
// have to restart for uncertainty if the push succeeds and we come back
// and read.
h.Timestamp.Forward(req.Txn.MaxTimestamp)
}
return h
}
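
The sketch below is a hypothetical stand-in with simplified integer
timestamps, not the real pushHeader. A reader at read timestamp 12 with
MaxTimestamp 15 pushes the conflicting transaction above 15 rather than
12, so a successful push leaves the intent's provisional write outside the
reader's uncertainty interval and the retried read does not need to
restart.

// pushTimestamp is a hypothetical stand-in for the forwarding above: push
// to the top of the reader's uncertainty interval, not just to its read
// timestamp, so e.g. readTS=12 with maxTS=15 pushes the conflicting
// transaction above 15.
func pushTimestamp(readTS, txnMaxTS int64, transactional bool) int64 {
	if transactional && txnMaxTS > readTS {
		return txnMaxTS // mirrors h.Timestamp.Forward(req.Txn.MaxTimestamp)
	}
	return readTS
}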

This file was deleted.

@@ -1,5 +1,5 @@
# -------------------------------------------------------------
# A transactional (txn2) read-only request runs into replicated
# A transactional (txn2) read-only request runs into a replicated
# intent below its read timestamp. It informs the lock table and
# pushes the intent's transaction (txn1) above its uncertainty
# window. The push succeeds and the request is able to proceed.
@@ -59,20 +59,31 @@ reset namespace
----

# -------------------------------------------------------------
# A transactional (txn2) read-only request runs into replicated
# intent above its read timestamp but in its uncertainty window.
# It informs the lock table and pushes the intent's transaction
# (txn1) above its uncertainty window. The push succeeds and the
# request is able to proceed.
# A transactional (txn2) read-only request runs into a replicated
# intent below its read timestamp and informs the lock table.
#
# A second transactional (txn3) read-only request does not wait
# on the intent in the lock table because its read timestamp is
# below the intent timestamp. However, the intent's provisional
# value is in txn3's uncertainty window. The read-only request
# returns a ReadWithinUncertaintyIntervalError and does not
# inform the lock table of the intent.
#
# This causes txn3 to restart with a higher read timestamp. At
# this point, since txn1's intent is still present, it begins
# waiting in the lock table.
# -------------------------------------------------------------

new-txn name=txn1 ts=14,1 epoch=0
----

new-txn name=txn2 ts=12,1 epoch=0 maxts=15,1
new-txn name=txn2 ts=15,1 epoch=0
----

new-request name=req1 txn=txn2 ts=12,1
new-txn name=txn3 ts=12,1 epoch=0 maxts=15,1
----

new-request name=req1 txn=txn2 ts=15,1
get key=k
----

@@ -104,17 +115,63 @@ sequence req=req1
[3] sequence req1: pushing timestamp of txn 00000001 above 15.000000000,1
[3] sequence req1: blocked on select in concurrency_test.(*cluster).PushTransaction

on-txn-updated txn=txn1 status=pending ts=15,2
# txn3 does not wait on lock in lock table, but does end up throwing a
# ReadWithinUncertaintyIntervalError when it scans and finds txn1's
# provisional value in its uncertainty interval.
new-request name=req2 txn=txn3 ts=12,1
get key=k
----
[-] update txn: increasing timestamp of txn1
[3] sequence req1: resolving intent "k" for txn 00000001 with PENDING status

sequence req=req2
----
[4] sequence req2: sequencing request
[4] sequence req2: acquiring latches
[4] sequence req2: scanning lock table for conflicting locks
[4] sequence req2: sequencing complete, returned guard

finish req=req2
----
[-] finish req2: finishing request

# txn3 refreshes/restarts with a higher read timestamp than that of the
# value it saw in its uncertainty interval. It then retries its request.
on-txn-updated txn=txn3 status=pending ts=14,2
----
[-] update txn: increasing timestamp of txn3

new-request name=req2-retry txn=txn3 ts=14,2
get key=k
----

sequence req=req2-retry
----
[5] sequence req2-retry: sequencing request
[5] sequence req2-retry: acquiring latches
[5] sequence req2-retry: scanning lock table for conflicting locks
[5] sequence req2-retry: waiting in lock wait-queues
[5] sequence req2-retry: pushing timestamp of txn 00000001 above 15.000000000,1
[5] sequence req2-retry: blocked on select in concurrency_test.(*cluster).PushTransaction

# txn1 commits and lets both reads through.
on-txn-updated txn=txn1 status=committed
----
[-] update txn: committing txn1
[3] sequence req1: resolving intent "k" for txn 00000001 with COMMITTED status
[3] sequence req1: acquiring latches
[3] sequence req1: scanning lock table for conflicting locks
[3] sequence req1: sequencing complete, returned guard
[5] sequence req2-retry: resolving intent "k" for txn 00000001 with COMMITTED status
[5] sequence req2-retry: acquiring latches
[5] sequence req2-retry: scanning lock table for conflicting locks
[5] sequence req2-retry: sequencing complete, returned guard

finish req=req1
----
[-] finish req1: finishing request

finish req=req2-retry
----
[-] finish req2-retry: finishing request

reset namespace
----
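
The restart above takes txn3 from read timestamp 12,1 to 14,2, just past
the provisional value it found at 14,1. The sketch below is a hypothetical
rendering of that rule with simplified (wall, logical) timestamps, not the
real retry-timestamp logic.

// hlcTS is a simplified (wall, logical) timestamp like "14,1" in the
// transcript above; hlcTS and refreshedReadTimestamp are hypothetical names.
type hlcTS struct {
	wall    int64
	logical int32
}

func (t hlcTS) next() hlcTS { return hlcTS{t.wall, t.logical + 1} }

func (t hlcTS) less(o hlcTS) bool {
	return t.wall < o.wall || (t.wall == o.wall && t.logical < o.logical)
}

// refreshedReadTimestamp forwards the read timestamp to just above the
// uncertain value, e.g. (12,1) and (14,1) yield (14,2).
func refreshedReadTimestamp(readTS, uncertainValueTS hlcTS) hlcTS {
	if bumped := uncertainValueTS.next(); readTS.less(bumped) {
		return bumped
	}
	return readTS
}
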
17 changes: 10 additions & 7 deletions pkg/storage/pebble_mvcc_scanner.go
@@ -374,20 +374,23 @@ func (p *pebbleMVCCScanner) getAndAdvance() bool {
}

ownIntent := p.txn != nil && p.meta.Txn.ID.Equal(p.txn.ID)
maxVisibleTS := p.ts
if p.checkUncertainty {
maxVisibleTS = p.txn.MaxTimestamp
}
otherIntentVisible := metaTS.LessEq(maxVisibleTS) || p.failOnMoreRecent
visibleIntent := metaTS.LessEq(p.ts) || p.failOnMoreRecent

if !ownIntent && !otherIntentVisible {
if !ownIntent && !visibleIntent {
// 8. The key contains an intent, but we're reading below the intent.
// Seek to the desired version, checking for uncertainty if necessary.
//
// Note that if we own the intent (i.e. we're reading transactionally)
// we want to read the intent regardless of our read timestamp and fall
// into case 8 below.
return p.seekVersion(maxVisibleTS, p.checkUncertainty)
if p.checkUncertainty {
if metaTS.LessEq(p.txn.MaxTimestamp) {
return p.uncertaintyError(metaTS)
}

return p.seekVersion(p.txn.MaxTimestamp, true)
}
return p.seekVersion(p.ts, false)
}

if p.inconsistent {