
schema changes: retries of view creation because of stats collection result in non-deterministic table ids #37751

Closed
cockroach-teamcity opened this issue May 23, 2019 · 18 comments · Fixed by #85366
Assignees
Labels
A-schema-descriptors Relating to SQL table/db descriptor handling. branch-master Failures and bugs on the master branch. C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions)
Milestone

Comments

@cockroach-teamcity
Member

cockroach-teamcity commented May 23, 2019

The following tests appear to have failed on master (test): TestLogic/local: TestLogic/local/dependencies, TestLogic/local, TestLogic

You may want to check for open issues.

#1303610:

TestLogic/local: TestLogic/local/dependencies
--- FAIL: test/TestLogic: TestLogic/local: TestLogic/local/dependencies (3.810s)

------- Stdout: -------
=== PAUSE TestLogic/local/dependencies
            logic.go:2316: 
                 
                testdata/logic_test/dependencies:111: SELECT * FROM crdb_internal.backward_dependencies WHERE descriptor_name LIKE 'moretest_%' ORDER BY descriptor_id, index_id, dependson_type, dependson_id, dependson_index_id
                expected:
                    descriptor_id  descriptor_name  index_id  column_id  dependson_id  dependson_type  dependson_index_id  dependson_name  dependson_details
                    62             moretest_v       NULL      NULL       61            view            NULL                NULL            NULL
                    
                but found (query options: "colnames") :
                    descriptor_id  descriptor_name  index_id  column_id  dependson_id  dependson_type  dependson_index_id  dependson_name  dependson_details
                    63             moretest_v       NULL      NULL       61            view            NULL                NULL            NULL
                    
                
            logic.go:2348: 
                testdata/logic_test/dependencies:117: error while processing
            logic.go:2349: testdata/logic_test/dependencies:117: too many errors encountered, skipping the rest of the input



TestLogic/local
--- FAIL: test/TestLogic: TestLogic/local (0.450s)




TestLogic
--- FAIL: test/TestLogic (571.650s)

------- Stdout: -------
    test_log_scope.go:81: test logs captured to: /tmp/logTestLogic691072255
    test_log_scope.go:62: use -show-logs to present logs inline




Please assign, take a look and update the issue accordingly.

Jira issue: CRDB-5663

@cockroach-teamcity cockroach-teamcity added this to the 19.2 milestone May 23, 2019
@cockroach-teamcity cockroach-teamcity added C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. labels May 23, 2019
@nvanbenschoten
Member

I can repro this in about a minute of running

make stress PKG=./pkg/sql/logictest TESTS=TestLogic/local/dependencies

@nvanbenschoten
Member

I190523 17:08:42.632682 1048 sql/event_log.go:135  [n1,client=127.0.0.1:52966,user=root] Event: "create_view", target: 62, info: {ViewName:test.public.moretest_v Statement:CREATE VIEW moretest_v AS SELECT v FROM test.public.moretest_t WHERE false User:root}
I190523 17:08:42.632709 1048 sql/event_log.go:135  [n1,client=127.0.0.1:52966,user=root] Event: "create_view", target: 63, info: {ViewName:test.public.moretest_v Statement:CREATE VIEW moretest_v AS SELECT v FROM test.public.moretest_t WHERE false User:root}

testdata/logic_test/dependencies:111: SELECT * FROM crdb_internal.backward_dependencies WHERE descriptor_name LIKE 'moretest_%' ORDER BY descriptor_id, index_id, dependson_type, dependson_id, dependson_index_id
expected:
    descriptor_id  descriptor_name  index_id  column_id  dependson_id  dependson_type  dependson_index_id  dependson_name  dependson_details
    62             moretest_v       NULL      NULL       61            view            NULL                NULL            NULL

but found (query options: "colnames") :
    descriptor_id  descriptor_name  index_id  column_id  dependson_id  dependson_type  dependson_index_id  dependson_name  dependson_details
    63             moretest_v       NULL      NULL       61            view            NULL                NULL            NULL

@nvanbenschoten
Member

I190523 18:11:19.430255 1031 sql/create_view.go:180  [n1,client=127.0.0.1:53588,user=root] NEW LOG create view done 62 "sql txn" id=a34185b8 key=/Table/SystemConfigSpan/Start rw=true pri=0.01056497 stat=PENDING epo=0 ts=1558635079.427853029,1 orig=1558635079.422331802,0 max=1558635079.922331802,0 wto=false seq=3
I190523 18:11:19.436152 1031 sql/create_view.go:180  [n1,client=127.0.0.1:53588,user=root] NEW LOG create view done 63 "sql txn" id=a34185b8 key=/Table/SystemConfigSpan/Start rw=true pri=0.01056497 stat=PENDING epo=1 ts=1558635079.427853029,1 orig=1558635079.427853029,1 max=1558635079.922331802,0 wto=false seq=8
I190523 18:11:19.440500 1031 sql/event_log.go:135  [n1,client=127.0.0.1:53588,user=root] Event: "create_view", target: 62, info: {ViewName:test.public.moretest_v Statement:CREATE VIEW moretest_v AS SELECT v FROM test.public.moretest_t WHERE false User:root}
I190523 18:11:19.440567 1031 sql/event_log.go:135  [n1,client=127.0.0.1:53588,user=root] Event: "create_view", target: 63, info: {ViewName:test.public.moretest_v Statement:CREATE VIEW moretest_v AS SELECT v FROM test.public.moretest_t WHERE false User:root}

It looks like this is due to a transaction restart.

@nvanbenschoten
Member

We're hitting a RETRY_SERIALIZABLE retry error. That means that this is not the same issue as #35977.

I190523 18:22:33.125995 1132 sql/create_view.go:180  [n1,client=127.0.0.1:40642,user=root] NEW LOG create view done 62 "sql txn" id=0a89301a key=/Table/SystemConfigSpan/Start rw=true pri=0.01447941 stat=PENDING epo=0 ts=1558635753.116826665,1 orig=1558635753.104986430,0 max=1558635753.604986430,0 wto=false seq=3
I190523 18:22:33.130027 1132 kv/txn_coord_sender.go:1092  [n1,client=127.0.0.1:40642,user=root,txn=0a89301a] NEW LOG bumping txn epoch "sql txn" id=0a89301a key=/Table/SystemConfigSpan/Start rw=true pri=0.01447941 stat=PENDING epo=1 ts=1558635753.116826665,1 orig=1558635753.116826665,1 max=1558635753.604986430,0 wto=false seq=8: TransactionRetryError: retry txn (RETRY_SERIALIZABLE): "sql txn" id=0a89301a key=/Table/SystemConfigSpan/Start rw=true pri=0.01447941 stat=PENDING epo=0 ts=1558635753.116826665,1 orig=1558635753.104986430,0 max=1558635753.104986430,0 wto=false seq=9
I190523 18:22:33.132274 1132 sql/create_view.go:180  [n1,client=127.0.0.1:40642,user=root] NEW LOG create view done 63 "sql txn" id=0a89301a key=/Table/SystemConfigSpan/Start rw=true pri=0.01447941 stat=PENDING epo=1 ts=1558635753.116826665,1 orig=1558635753.116826665,1 max=1558635753.604986430,0 wto=false seq=8

@nvanbenschoten
Member

I'm seeing the transaction get its timestamp bumped by the timestamp cache when it goes to Put key /Table/3/1/61/2/1. This is a Put on descriptor 61 in the descriptor table, which means that some other process is reading the descriptor of the new view's (moretest_v) underlying table (moretest_t) concurrently with the schema change that is creating the view.

@nvanbenschoten
Member

The timestamp cache entry has a transaction ID, so this does not have to do with lease changes or closed timestamps.

@nvanbenschoten
Member

The transaction that is performing this read is running under an internalExecutor with the create-stats operation name.

@RaduBerinde @rytaft has anything changed in this area lately? Are we creating stats more frequently or is there any reason why this might now be running in logic tests?

@nvanbenschoten
Member

Disabling automatic statistics collection appears to fix this flake.
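For reference, a minimal Go sketch of flipping that switch from a client session, assuming the `sql.stats.automatic_collection.enabled` cluster setting (the knob behind automatic stats collection) and a node listening on the default local port; the connection string is illustrative:

```go
// Sketch only: disable background stats collection for the whole cluster.
// Assumes the sql.stats.automatic_collection.enabled setting and a local node.
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres-wire driver commonly used with CockroachDB
)

func main() {
	// Illustrative connection string; adjust user/host/port/database as needed.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Turn off automatic statistics collection cluster-wide.
	if _, err := db.Exec(
		`SET CLUSTER SETTING sql.stats.automatic_collection.enabled = false`); err != nil {
		log.Fatal(err)
	}
	log.Println("automatic stats collection disabled")
}
```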

@RaduBerinde
Member

The recent change in this area is #37611 - table creation now triggers a stats run.

danhhz added a commit to danhhz/cockroach that referenced this issue May 23, 2019
testlogic: temporarily switch auto stats default to off to deflake TestLogic

Background stats collection is enabled by default, but we've seen tests
flake with it on. When the issue manifests, it seems to be around a
schema change transaction getting pushed, which causes it to increment a
table ID twice instead of once, causing non-determinism.

In the short term, in TestLogic, we disable auto stats by default to
avoid the flakes.

In the long run, these tests should be running with default settings as
much as possible, so we likely want to address this. Two options are
either making schema changes more resilient to being pushed or possibly
making auto stats avoid pushing schema change transactions. There might
be other better alternatives than these.

Touches cockroachdb#37751

Release note: None
@andreimatei
Contributor

I have some thoughts on the topic.
Here's my half-understanding-half-speculation about what's going on:

  • we create a table
  • we start creating a view
  • we start collecting stats
  • the view txn does a non-transactional increment of the id allocator key
  • the stats txn reads the table desc
  • the view txn tries to write the descriptor but can't write under the stats' read, and so is forced to refresh/restart
    • the view txn, like all schema change txns, is not eligible for a refresh because it has "observed its commit timestamp" by calling txn.CommitTimestamp(). This unfortunate method locks the timestamp in place; there can be no refresh that moves the timestamp forward. Why do we observe this timestamp? Because we want to put it in the updated table descriptor as its ModificationTime.
  • the view txn restarts (we probably do an automatic retry at the SQL level) and now succeeds, but in the process it allocates a new table (view) id

The first question is: why is the id allocation non-transactional? It turns out that I made it so in 74f5728.
The reason the commit message gives is ORMs, which don't like txn retries for schema changes (I think it was motivated at the time in part by the Jepsen test suite, which couldn't handle these retries). So the point of making the id alloc non-txn was to isolate the contentious part of different schema change txns, so that the broader txns wouldn't have to conflict.

Now, auto-stats collection seems to have reintroduced some retries for schema changes, which I guess defeats the purpose of making that id allocation non-transactional and is presumably again a problem for Jepsen and tools/ORMs. So I think the problem here might be bigger than just tests that hardcode table ids. (In the case of logic tests I'm guessing we're able to perform auto-retries, but in the wild we often can't, and that trips up tools.)

In my opinion, the root of all evil here is the existence of the txn.CommitTimestamp() method

func (txn *Txn) CommitTimestamp() hlc.Timestamp {

which prevents the refreshes. Were it not for this, the view txn would presumably refresh just fine and succeed the first time.
I've always hated the use of that function for schema changes (which I think might also be the only use) because the information that it provides - the "modification time" of a table descriptor - can be "trivially" deduced from KV: it's the MVCC timestamp of the key in question. No need to record it explicitly. I believe a ScanRequest even carries the MVCC timestamps in it; it's just that they get lost on the way to SQL.
I think this is a good opportunity to kill this wart. Giving SQL access to these timestamps would also open the door to all sorts of features - such as the ones requested in #19749
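To make that concrete, here is a toy Go model (simplified stand-ins, not the real kv.Txn API) of how observing the commit timestamp forecloses a refresh and forces a full restart:

```go
// Toy model only -- not the real kv.Txn implementation. It illustrates why
// calling CommitTimestamp() pins the transaction's timestamp: once the
// timestamp has been observed (e.g. to stamp a descriptor's modification
// time), a refresh that would move the timestamp forward is no longer
// allowed, so a pushed transaction must restart instead.
package main

import "fmt"

type toyTxn struct {
	readTS     int64 // provisional commit timestamp (simplified to an int)
	tsObserved bool  // set once CommitTimestamp() has been called
}

// CommitTimestamp returns the provisional commit timestamp and fixes it in place.
func (t *toyTxn) CommitTimestamp() int64 {
	t.tsObserved = true
	return t.readTS
}

// tryRefresh simulates being pushed to a higher timestamp. The refresh is only
// legal if the commit timestamp has not already been observed.
func (t *toyTxn) tryRefresh(newTS int64) bool {
	if t.tsObserved {
		return false // must restart: new epoch, fresh id allocations, etc.
	}
	t.readTS = newTS
	return true
}

func main() {
	txn := &toyTxn{readTS: 100}
	_ = txn.CommitTimestamp()                            // schema change records the modification time
	fmt.Println("refresh allowed:", txn.tryRefresh(200)) // false: forced restart
}
```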

If I can snipe someone to do this (and assuming the theory checks out), I think it would be better than papering over these id allocations somehow - for example by remembering allocated but unused ids, as Nathan and others have suggested.
Curious to see if the cat licks it up.

cc @dt

@andreimatei
Contributor

also cc @danhhz

@knz
Contributor

knz commented May 24, 2019

I loved reading the analyses.

I think the requirement that DDL does not encounter retries is still there; retries still really cause problems with 3rd-party tools. They should still be minimized for DDL (or at least occur as rarely as possible, and thus not become statistically probable whenever stats collection is enabled).

The fact that ID increments are non-transactional is a red herring in this discussion -- we can work around that easily in tests, and it is largely invisible to clients.

Andrei, I like your idea to simplify the DDL handling code in SQL so as to minimize the ways it blocks a txn from being auto-pushed. If it can use MVCC timestamps to achieve that (and thus avoid a few more retries), that would be a good thing.

craig bot pushed a commit that referenced this issue May 24, 2019
37779: testlogic: temporarily switch auto stats default to off to deflake TestLogic r=danhhz a=danhhz

Background stats collection is enabled by default, but we've seen tests
flake with it on. When the issue manifests, it seems to be around a
schema change transaction getting pushed, which causes it to increment a
table ID twice instead of once, causing non-determinism.

In the short term, in TestLogic, we disable auto stats by default to
avoid the flakes.

In the long run, these tests should be running with default settings as
much as possible, so we likely want to address this. Two options are
either making schema changes more resilient to being pushed or possibly
making auto stats avoid pushing schema change transactions. There might
be other better alternatives than these.

Touches #37751

Release note: None

Co-authored-by: Daniel Harrison <[email protected]>
@nvanbenschoten
Member

@andreimatei do you want to keep this open and reorient it around #37751 (comment)?

@andreimatei
Contributor

Yeah. I believe it's oriented already :P

@andreimatei andreimatei added the A-schema-descriptors Relating to SQL table/db descriptor handling. label Jul 11, 2019
@tbg tbg added the branch-master Failures and bugs on the master branch. label Jan 22, 2020
@thoszhang
Contributor

@ajwerner Do you think this is fixed by #40793? Maybe we should turn stats collection back on for logic tests.

@ajwerner
Contributor

ajwerner commented May 5, 2020

Certainly possible. We should at least experiment with it.

@ajwerner ajwerner self-assigned this May 5, 2020
@jlinder jlinder added the T-sql-schema-deprecated Use T-sql-foundations instead label Jun 16, 2021
craig bot pushed a commit that referenced this issue Aug 1, 2022
84865: kvserver: always return NLHE on lease acquisition timeouts r=nvanbenschoten a=erikgrinaker

In ab74b97 we added internal timeouts for lease acquisitions. These
were wrapped in `RunWithTimeout()`, as mandated for context timeouts.
However, this would mask the returned `NotLeaseHolderError` as a
`TimeoutError`, preventing the DistSender from retrying it and instead
propagating it out to the client. Additionally, context cancellation
errors from the actual RPC call were never wrapped as a
`NotLeaseHolderError` in the first place.

This ended up only happening in a very specific scenario where the outer
timeout added to the client context did not trigger, but the inner
timeout for the coalesced request context did trigger while the lease
request was in flight. Accidentally, the outer `RunWithTimeout()` call
did not return the `roachpb.Error` from the closure but instead passed
it via a captured variable, bypassing the error wrapping.

This patch replaces the `RunWithTimeout()` calls with regular
`context.WithTimeout()` calls to avoid the error wrapping, and returns a
`NotLeaseHolderError` from `requestLease()` if the RPC request fails and
the context was cancelled (presumably causing the error). Another option
would be to extract an NLHE from the error chain, but this would require
correct propagation of the structured error chain across RPC boundaries,
so out of an abundance of caution and with an eye towards backports, we
instead choose to return a bare `NotLeaseHolderError`.

The empty lease in the returned error prevents the DistSender from
updating its caches on context cancellation.
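As a rough sketch of that pattern (the names `requestLease`, `rpcSend`, and the error value are stand-ins, not the actual kvserver code): use a plain `context.WithTimeout` so the closure's error isn't re-wrapped, and translate a context-cancellation failure of the RPC into the retriable lease error rather than a generic timeout:

```go
// Sketch with hypothetical names: the shape of returning a retriable domain
// error on a lease-request timeout instead of an opaque timeout error.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errNotLeaseHolder = errors.New("not lease holder") // stand-in for NotLeaseHolderError

// rpcSend is a placeholder for the lease-request RPC.
func rpcSend(ctx context.Context) error {
	<-ctx.Done() // pretend the request was still in flight when the timeout fired
	return ctx.Err()
}

func requestLease(ctx context.Context) error {
	// Plain context.WithTimeout: no wrapper re-packages the returned error.
	ctx, cancel := context.WithTimeout(ctx, 50*time.Millisecond)
	defer cancel()
	if err := rpcSend(ctx); err != nil {
		// If the context was cancelled (presumably the cause of the failure),
		// return the retriable error so the caller can retry, instead of
		// propagating an opaque timeout out to the client.
		if ctx.Err() != nil {
			return errNotLeaseHolder
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(requestLease(context.Background())) // not lease holder
}
```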

Resolves #84258.
Resolves #85115.

Release note (bug fix): Fixed a bug where clients could sometimes
receive errors due to lease acquisition timeouts of the form
`operation "storage.pendingLeaseRequest: requesting lease" timed out after 6s`.

84946: distsql: make the number of DistSQL runners dynamic r=yuzefovich a=yuzefovich

**distsql: make the number of DistSQL runners dynamic**

This commit improves the infrastructure around a pool of "DistSQL
runners" that are used for issuing SetupFlow RPCs in parallel.
Previously, we had a hard-coded number of 16 goroutines, which was
probably insufficient in many cases. This commit makes it so that we use
the default value of `4 x N(cpus)` to make it proportional to how beefy
the node is (under the expectation that the larger the node is, the more
distributed queries it will be handling). The choice of four as the
multiple was made so that we keep the previous default on machines with
4 CPUs.

Additionally, this commit introduces a mechanism to dynamically adjust
the number of runners based on a cluster setting: whenever the setting
is reduced, some of the workers are stopped; if the setting is
increased, new workers are spun up accordingly. A coordinator listens on
two channels: one for server quiescence, and another for the new target
pool size. Whenever a new target size is received, the coordinator spins
up / shuts down one worker at a time until that target size is achieved.
The workers, however, don't watch the quiescence channel themselves and
instead rely on the coordinator to tell them to exit (either by closing
the channel on quiescence or by sending a single message when the target
size is decreased).
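A minimal Go sketch of that coordinator/worker shape (illustrative names only, not the distsql code):

```go
// Sketch: a coordinator resizes a worker pool from a target-size channel and
// shuts everything down on quiescence; workers only ever watch "stop".
package main

import (
	"fmt"
	"sync"
	"time"
)

func runPool(targetSize <-chan int, quiesce <-chan struct{}) {
	var wg sync.WaitGroup
	stop := make(chan struct{}) // one message per worker to shrink; closed on quiescence
	workers := 0

	startWorker := func() {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case <-stop:
					return
				default:
					time.Sleep(10 * time.Millisecond) // placeholder for handling one SetupFlow request
				}
			}
		}()
	}

	for {
		select {
		case <-quiesce:
			close(stop) // tell every remaining worker to exit
			wg.Wait()
			return
		case n := <-targetSize:
			for workers < n { // grow one worker at a time
				startWorker()
				workers++
			}
			for workers > n { // shrink one worker at a time
				stop <- struct{}{}
				workers--
			}
		}
	}
}

func main() {
	target := make(chan int)
	quiesce := make(chan struct{})
	done := make(chan struct{})
	go func() { runPool(target, quiesce); close(done) }()
	target <- 4 // e.g. 4 x N(cpus) in the real setting
	target <- 2 // setting reduced: two workers are stopped
	close(quiesce)
	<-done
	fmt.Println("pool shut down cleanly")
}
```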

Fixes: #84459.

Release note: None

**distsql: change the flow setup code a bit**

Previously, when setting up a distributed plan, we would wait for all
SetupFlow RPCs to come back before setting up the flow on the gateway.
Most likely (in the happy scenario) all those RPCs would be successful,
so we can parallelize the happy path a bit by setting up the local flow
while the RPCs are in-flight which is what this commit does. This seems
especially beneficial given the change in the previous commit to
increase the number of DistSQL runners for beefy machines - we are now
more likely to issue SetupFlow RPCs asynchronously.

Release note: None

85091: flowinfra: disable queueing mechanism of the flow scheduler by default r=yuzefovich a=yuzefovich

This commit disables the queueing mechanism of the flow scheduler as
part of the effort to remove that queueing altogether during the 23.1
release cycle. To get there, though, we choose a conservative approach
of introducing a cluster setting that determines whether the queueing is
enabled or not; if it is disabled, we effectively treat the
`sql.distsql.max_running_flows` limit as infinite. By default, the
queueing is now disabled, since recent experiments have shown that
admission control does a good job of protecting the nodes from an
influx of remote flows.

Addresses: #34229.

Release note: None

85134: sql: allow NULL in create view definition r=mgartner a=rafiss

fixes #84000

Release note (sql change): CREATE VIEW statements can now have a
constant NULL column definition. The resulting column is of type TEXT.

85178: kvserver: record batch requests with no gateway r=kvoli a=kvoli

Previously, batch requests with no `GatewayNodeID` would not be
accounted for on the QPS of a replica. By extension, the store QPS would
also not aggregate this missing QPS over replicas it holds. This patch
introduces tracking for all requests, regardless of the `GatewayNodeID`.

Requests without a gateway were originally excluded because
follow-the-workload lease transfers consider the per-locality counts, so
untagged localities were not useful. That logic has since been updated
to filter out such localities directly, so it is no longer necessary to
exclude them here.

`leaseholderStats`, which previously tracked the QPS, and `writeStats`,
which tracked the MVCC keys written, have also been removed. They are
duplicated by `batchRequest` and `writeKeys` respectively, within the
`loadStats` of a replica.

resolves #85157

Release note: None

85355: sql: improve physical planning of window functions r=yuzefovich a=yuzefovich

**sql: remove shouldNotDistribute recommendation**

It doesn't seem to be used much.

Release note: None

**sql: improve physical planning of window functions**

This commit improves the physical planning of window functions in
several ways.

First, the optimizer is updated so that all window functions with a
PARTITION BY clause are constructed first, followed by the remaining
window functions without PARTITION BY. This is needed by the execution
engine, which can only evaluate functions with PARTITION BY in a
distributed fashion - as a result of this change, we are now more likely
to get partially distributed execution (previously this depended on the
order in which the window functions were mentioned in the query).

Second, the physical planner now thinks that we "should distribute" the
plan if it finds at least one window function with a PARTITION BY
clause. Previously, we didn't make any recommendation about the
distribution based on the presence of window functions (i.e. we relied
on the rest of the plan to do so), but they can be quite
computation-intensive, so whenever we can distribute the execution, we
should do so.

Additionally, this commit removes some of the code in the physical
planner which tries to find window functions with the same PARTITION BY
and ORDER BY clauses - that code has been redundant for a long time
given that the optimizer does that too.

Release note: None

85366: sql,logictest,descidgen: abstract descriptor ID generation, make deterministic in logictests r=ajwerner a=ajwerner

The first commit adds an interface for descriptor ID generation and propagates the interface from the ExecCfg into the EvalContext. There is some minor refactoring to avoid propagating an ExecCfg further up the stack by making the parameters more specific. The second commit adds a testing knob to use a transactional implementation in the EvalContext.
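A rough sketch of the shape of that abstraction (hypothetical names, not the actual descidgen package):

```go
// Sketch with illustrative names: descriptor ID generation hidden behind an
// interface so production code can keep its allocator while tests swap in a
// deterministic, transactional implementation.
package main

import (
	"context"
	"fmt"
	"sync/atomic"
)

// DescIDGenerator is the abstraction threaded from the ExecCfg into the EvalContext.
type DescIDGenerator interface {
	GenerateUniqueDescID(ctx context.Context) (int64, error)
}

// counterIDGenerator: a plain in-memory counter standing in for the
// non-transactional increment of the ID-allocator key. Every call hands out a
// new ID, so a schema-change txn that retries and calls it twice burns an ID.
type counterIDGenerator struct{ counter int64 }

func (g *counterIDGenerator) GenerateUniqueDescID(context.Context) (int64, error) {
	return atomic.AddInt64(&g.counter, 1), nil
}

// planViewCreation shows the consumer side: it only depends on the interface,
// so a testing knob can install a transactional generator that keeps the
// allocated IDs deterministic across automatic retries in logic tests.
func planViewCreation(ctx context.Context, gen DescIDGenerator) (int64, error) {
	return gen.GenerateUniqueDescID(ctx)
}

func main() {
	gen := &counterIDGenerator{counter: 60} // pretend 60 descriptors already exist
	id, _ := planViewCreation(context.Background(), gen)
	fmt.Println("new view descriptor id:", id) // 61; a retried call would hand out 62
}
```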

Fixes #37751
Fixes #69226

85406: schemachanger: check explain diagrams during rollback test r=postamar a=postamar

This commit enriches the declarative schema changer integration tests by
making data-driven EXPLAIN output assertions easier to add as
a complement to otherwise unrelated tests. In particular, this commit
improves the rollback test to check the explained rollback plan for each
post-commit revertible stage. This should make it easier to debug bad
rule definitions which would otherwise manifest themselves by causing
the schema change to hang during the rollback.

Release note: None

85414: colflow: fix a recent flake r=yuzefovich a=yuzefovich

In 0866ddc we merged a change that
relied on the assumption that the allocator passed to the parallel
unordered synchronizer was not used by anyone else, but this assumption
was broken in a test and is now fixed.

Fixes: #85360.

Release note: None

Co-authored-by: Erik Grinaker <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
Co-authored-by: Austen McClernon <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
Co-authored-by: Marius Posta <[email protected]>
@craig craig bot closed this as completed in 85ce24d Aug 1, 2022
@exalate-issue-sync exalate-issue-sync bot added T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions) and removed T-sql-schema-deprecated Use T-sql-foundations instead labels May 10, 2023