
sql/tests: TestErrorDuringExtendedProtocolCommit failed #101673

Closed
cockroach-teamcity opened this issue Apr 17, 2023 · 9 comments
Labels
C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions)
Milestone

Comments

@cockroach-teamcity
Member

cockroach-teamcity commented Apr 17, 2023

sql/tests.TestErrorDuringExtendedProtocolCommit failed with artifacts on release-23.1.0 @ 9e9d9d1cc4337038bd901e9f7bb3d2b8817f8cc1:

I230417 18:56:30.135019 7810 upgrade/upgrademanager/manager.go:283  [T1,n1] 710  running permanent upgrade for version 22.2-98
I230417 18:56:30.135211 9653 server/autoconfig/auto_config.go:97  [T1,n1,job=AUTO CONFIG RUNNER id=102] 711  using provider with type acprovider.NoTaskProvider
I230417 18:56:30.135270 9653 server/autoconfig/auto_config.go:74  [T1,n1,job=AUTO CONFIG RUNNER id=102] 712  waiting for environment activation...
I230417 18:56:30.135294 9653 server/autoconfig/auto_config.go:74  [T1,n1,job=AUTO CONFIG RUNNER id=102] 713  waiting for environment activation...
I230417 18:56:30.142168 9677 2@rpc/context.go:2565  [T1,n1,rnode=1,raddr=127.0.0.1:40341,class=default,rpc] 714  connection is now ready
I230417 18:56:30.158868 7810 jobs/registry.go:608  [T1,n1] 715  active version is 22.2-98
I230417 18:56:30.166662 7810 upgrade/upgrademanager/manager.go:742  [T1,n1] 716  running Upgrade to 22.2-98: "change TTL for SQL Stats system tables"
I230417 18:56:30.166736 9290 jobs/adopt.go:261  [T1,n1] 717  job 857497239994499073: resuming execution
I230417 18:56:30.179861 9691 jobs/registry.go:1538  [T1,n1] 718  MIGRATION job 857497239994499073: stepping through state running
I230417 18:56:30.194021 9760 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=857497239994499073,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 719 ={"Timestamp":1681757790190059624,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":42,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230417 18:56:30.199206 9794 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=857497239994499073,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 720 ={"Timestamp":1681757790194358757,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":43,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230417 18:56:30.202006 9799 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=857497239994499073,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 721 ={"Timestamp":1681757790199472773,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":60,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230417 18:56:30.204519 9722 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=857497239994499073,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 722 ={"Timestamp":1681757790202293875,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":61,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230417 18:56:30.206111 9691 jobs/registry.go:1538  [T1,n1] 723  MIGRATION job 857497239994499073: stepping through state succeeded
I230417 18:56:30.225345 7810 jobs/wait.go:145  [T1,n1] 724  waited for 1 [857497239994499073] queued jobs to complete 58.636756ms
I230417 18:56:30.225402 7810 server/server_sql.go:1618  [T1,n1] 725  done ensuring all necessary startup migrations have run
I230417 18:56:30.225665 9843 jobs/job_scheduler.go:407  [T1,n1] 726  waiting 4m0s before scheduled jobs daemon start
I230417 18:56:30.225997 9849 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 727  tenant-settings-watcher: established range feed cache
I230417 18:56:30.226591 9851 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 728  tenant-capability-watcher: established range feed cache
I230417 18:56:30.227742 9851 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149  [T1,n1] 729  received results of a full table scan for tenant capabilities
I230417 18:56:30.227928 9738 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r12/1:/Table/{8-11}] 730  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230417 18:56:30.228229 9822 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r58/1:/Table/5{7-8}] 731  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230417 18:56:30.229124 9847 server/auto_upgrade.go:78  [T1,n1] 732  no need to upgrade, cluster already at the newest version
I230417 18:56:30.230852 6138 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r52/1:/Table/5{1-2}] 733  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230417 18:56:30.231906 9845 sql/syntheticprivilegecache/cache.go:199  [T1,n1] 734  warmed privileges for virtual tables in 6.201753ms
I230417 18:56:30.260768 9787 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r4/1:/System{/tsd-tse}] 735  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230417 18:56:30.277540 8071 3@pebble/event.go:734  [n1,s1,pebble] 736  [JOB 7] WAL created 000006
I230417 18:56:30.280228 7810 1@server/server_sql.go:1736  [T1,n1] 737  serving sql connections
I230417 18:56:30.281203 7810 testutils/testcluster/testcluster.go:1357  [-] 738  WaitForFullReplication
I230417 18:56:30.281229 7810 testutils/testcluster/testcluster.go:1361  [-] 739  WaitForFullReplication took: 259ns
    autocommit_extended_protocol_test.go:187: 
        	Error Trace:	github.com/cockroachdb/cockroach/pkg/sql/tests/autocommit_extended_protocol_test.go:187
        	Error:      	An error is expected but got nil.
        	Test:       	TestErrorDuringExtendedProtocolCommit
I230417 18:56:30.292348 7810 testutils/testcluster/testcluster.go:145  [-] 740  TestCluster quiescing nodes
I230417 18:56:30.292652 7987 server/start_listen.go:103  [T1,n1] 741  server shutting down: instructing cmux to stop accepting
W230417 18:56:30.292681 9601 kv/kvserver/intentresolver/intent_resolver.go:826  [-] 742  failed to gc transaction record: could not GC completed transaction anchored at /Table/15/1/857497239751557121/0: node unavailable; try another peer
W230417 18:56:30.292994 8268 sql/sqlliveness/slinstance/slinstance.go:334  [T1,n1] 743  exiting heartbeat loop
W230417 18:56:30.293045 8268 sql/sqlliveness/slinstance/slinstance.go:321  [T1,n1] 744  exiting heartbeat loop with error: node unavailable; try another peer
I230417 18:56:30.293042 9677 rpc/context.go:2302  [T1,n1,rnode=1,raddr=127.0.0.1:40341,class=default,rpc] 745  connection heartbeat loop ended with err: <nil>
E230417 18:56:30.293076 8268 server/server_sql.go:508  [T1,n1] 746  failed to run update of instance with new session ID: node unavailable; try another peer
I230417 18:56:30.293338 8658 jobs/registry.go:1538  [T1,n1] 747  AUTO SPAN CONFIG RECONCILIATION job 857497237218000897: stepping through state succeeded
E230417 18:56:30.293462 8658 jobs/registry.go:943  [T1,n1] 748  error getting live session: node unavailable; try another peer
E230417 18:56:30.298756 8405 sql/stats/automatic_stats.go:629  [T1,n1] 749  failed to get tables for automatic stats: get-tables: node unavailable; try another peer
I230417 18:56:30.298970 8405 sql/stats/automatic_stats.go:550  [T1,n1] 750  quiescing auto stats refresher
W230417 18:56:30.300336 9653 jobs/adopt.go:518  [T1,n1] 751  could not clear job claim: clear-job-claim: failed to send RPC: sending to all replicas failed; last error: ba: Scan [/Table/15/1/102,/Table/15/1/103), [txn: 390a2564], [can-forward-ts] RPC error: node unavailable; try another peer
I230417 18:56:30.303946 7810 server/server_controller_orchestration.go:263  [T1,n1] 752  server controller shutting down ungracefully
I230417 18:56:30.304001 7810 server/server_controller_orchestration.go:274  [T1,n1] 753  waiting for tenant servers to report stopped
W230417 18:56:30.304080 7810 server/server_sql.go:1696  [T1,n1] 754  server shutdown without a prior graceful drain
--- FAIL: TestErrorDuringExtendedProtocolCommit (1.29s)

Parameters: TAGS=bazel,gss

Help

See also: How To Investigate a Go Test Failure (internal)

Same failure on other branches

/cc @cockroachdb/sql-queries

This test on roachdash | Improve this report!

Jira issue: CRDB-27083

@cockroach-teamcity cockroach-teamcity added branch-release-23.1.0 C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. labels Apr 17, 2023
@cockroach-teamcity cockroach-teamcity added this to the 23.1 milestone Apr 17, 2023
@blathers-crl blathers-crl bot added T-sql-queries SQL Queries Team T-sql-foundations SQL Foundations Team (formerly SQL Schema + SQL Sessions) labels Apr 17, 2023
@DrewKimball
Collaborator

Should #100925 be backported to 23.1.0? I don't think it made it in.

@mgartner
Collaborator

cc @rafiss

@cockroach-teamcity
Member Author

sql/tests.TestErrorDuringExtendedProtocolCommit failed with artifacts on release-23.1.0 @ 66bb3e9207c3fbf0cdbd3b55f37f839e4987117b:

I230419 17:57:37.878033 9563 2@rpc/context.go:2565  [T1,n1,rnode=1,raddr=127.0.0.1:39195,class=default,rpc] 702  connection is now ready
I230419 17:57:37.885534 7775 jobs/wait.go:145  [T1,n1] 703  waited for 1 [858051896277172225] queued jobs to complete 59.71751ms
I230419 17:57:37.885783 7775 upgrade/upgrademanager/manager.go:283  [T1,n1] 704  running permanent upgrade for version 22.2-98
I230419 17:57:37.886020 8639 server/autoconfig/auto_config.go:97  [T1,n1,job=AUTO CONFIG RUNNER id=102] 705  using provider with type acprovider.NoTaskProvider
I230419 17:57:37.886077 8639 server/autoconfig/auto_config.go:74  [T1,n1,job=AUTO CONFIG RUNNER id=102] 706  waiting for environment activation...
I230419 17:57:37.886106 8639 server/autoconfig/auto_config.go:74  [T1,n1,job=AUTO CONFIG RUNNER id=102] 707  waiting for environment activation...
I230419 17:57:37.909926 7775 jobs/registry.go:608  [T1,n1] 708  active version is 22.2-98
I230419 17:57:37.920249 7775 upgrade/upgrademanager/manager.go:742  [T1,n1] 709  running Upgrade to 22.2-98: "change TTL for SQL Stats system tables"
I230419 17:57:37.920361 9596 jobs/adopt.go:261  [T1,n1] 710  job 858051896560320513: resuming execution
I230419 17:57:37.935893 9600 jobs/registry.go:1538  [T1,n1] 711  MIGRATION job 858051896560320513: stepping through state running
I230419 17:57:37.951633 9704 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=858051896560320513,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 712 ={"Timestamp":1681927057946923427,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":42,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230419 17:57:37.957253 9746 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=858051896560320513,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 713 ={"Timestamp":1681927057952015768,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":43,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230419 17:57:37.960459 9723 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=858051896560320513,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 714 ={"Timestamp":1681927057957639979,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":60,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230419 17:57:37.963642 9766 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=858051896560320513,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 715 ={"Timestamp":1681927057960820060,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":61,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230419 17:57:37.965761 9600 jobs/registry.go:1538  [T1,n1] 716  MIGRATION job 858051896560320513: stepping through state succeeded
I230419 17:57:37.985475 7775 jobs/wait.go:145  [T1,n1] 717  waited for 1 [858051896560320513] queued jobs to complete 65.164273ms
I230419 17:57:37.985528 7775 server/server_sql.go:1618  [T1,n1] 718  done ensuring all necessary startup migrations have run
I230419 17:57:37.985856 9811 jobs/job_scheduler.go:407  [T1,n1] 719  waiting 2m0s before scheduled jobs daemon start
I230419 17:57:37.986087 9817 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 720  tenant-settings-watcher: established range feed cache
I230419 17:57:37.986657 9819 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 721  tenant-capability-watcher: established range feed cache
I230419 17:57:37.988013 9801 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r58/1:/Table/5{7-8}] 722  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230419 17:57:37.988122 9819 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149  [T1,n1] 723  received results of a full table scan for tenant capabilities
I230419 17:57:37.988342 9804 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r12/1:/Table/{8-11}] 724  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230419 17:57:37.990374 9815 server/auto_upgrade.go:78  [T1,n1] 725  no need to upgrade, cluster already at the newest version
I230419 17:57:37.994096 9892 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r52/1:/Table/5{1-2}] 726  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230419 17:57:37.996882 9813 sql/syntheticprivilegecache/cache.go:199  [T1,n1] 727  warmed privileges for virtual tables in 10.968369ms
I230419 17:57:38.024952 8086 3@pebble/event.go:734  [n1,s1,pebble] 728  [JOB 7] WAL created 000006
I230419 17:57:38.028384 7775 1@server/server_sql.go:1736  [T1,n1] 729  serving sql connections
I230419 17:57:38.029680 7775 testutils/testcluster/testcluster.go:1357  [-] 730  WaitForFullReplication
I230419 17:57:38.029725 7775 testutils/testcluster/testcluster.go:1361  [-] 731  WaitForFullReplication took: 364ns
    autocommit_extended_protocol_test.go:187: 
        	Error Trace:	github.com/cockroachdb/cockroach/pkg/sql/tests/autocommit_extended_protocol_test.go:187
        	Error:      	An error is expected but got nil.
        	Test:       	TestErrorDuringExtendedProtocolCommit
I230419 17:57:38.041348 7775 testutils/testcluster/testcluster.go:145  [-] 732  TestCluster quiescing nodes
E230419 17:57:38.041380 8358 sql/stats/automatic_stats.go:629  [T1,n1] 733  failed to get tables for automatic stats: injected error
I230419 17:57:38.041453 8358 sql/stats/automatic_stats.go:550  [T1,n1] 734  quiescing auto stats refresher
W230419 17:57:38.041630 9741 kv/kvserver/intentresolver/intent_resolver.go:826  [-] 735  failed to gc transaction record: could not GC completed transaction anchored at /Table/15/1/858051896560320513/0: node unavailable; try another peer
I230419 17:57:38.041910 9563 rpc/context.go:2302  [T1,n1,rnode=1,raddr=127.0.0.1:39195,class=default,rpc] 736  connection heartbeat loop ended with err: <nil>
I230419 17:57:38.041944 7782 server/start_listen.go:103  [T1,n1] 737  server shutting down: instructing cmux to stop accepting
W230419 17:57:38.042101 8342 sql/sqlliveness/slinstance/slinstance.go:334  [T1,n1] 738  exiting heartbeat loop
W230419 17:57:38.042144 8342 sql/sqlliveness/slinstance/slinstance.go:321  [T1,n1] 739  exiting heartbeat loop with error: node unavailable; try another peer
E230419 17:57:38.042174 8342 server/server_sql.go:508  [T1,n1] 740  failed to run update of instance with new session ID: node unavailable; try another peer
I230419 17:57:38.042463 8655 jobs/registry.go:1538  [T1,n1] 741  AUTO SPAN CONFIG RECONCILIATION job 858051893457027073: stepping through state succeeded
E230419 17:57:38.042622 8655 jobs/registry.go:943  [T1,n1] 742  error getting live session: node unavailable; try another peer
W230419 17:57:38.047515 8639 jobs/adopt.go:518  [T1,n1] 743  could not clear job claim: clear-job-claim: failed to send RPC: sending to all replicas failed; last error: ba: Scan [/Table/15/1/102,/Table/15/1/103), [txn: 829f5048], [can-forward-ts] RPC error: node unavailable; try another peer
I230419 17:57:38.052212 7775 server/server_controller_orchestration.go:263  [T1,n1] 744  server controller shutting down ungracefully
I230419 17:57:38.052250 7775 server/server_controller_orchestration.go:274  [T1,n1] 745  waiting for tenant servers to report stopped
W230419 17:57:38.052331 7775 server/server_sql.go:1696  [T1,n1] 746  server shutdown without a prior graceful drain
--- FAIL: TestErrorDuringExtendedProtocolCommit (1.31s)

Parameters: TAGS=bazel,gss

Help

See also: How To Investigate a Go Test Failure (internal)

Same failure on other branches

This test on roachdash | Improve this report!

@cockroach-teamcity
Member Author

sql/tests.TestErrorDuringExtendedProtocolCommit failed with artifacts on release-23.1.0 @ 360923fbbf41db7a30e0b5af66b7b2fbf04fd290:

I230501 17:24:23.371815 7884 jobs/registry.go:616  [T1,n1] 728  active version is 23.1
I230501 17:24:23.379580 7884 upgrade/upgrademanager/manager.go:742  [T1,n1] 729  running Upgrade to 22.2-98: "change TTL for SQL Stats system tables"
I230501 17:24:23.379662 9918 jobs/adopt.go:261  [T1,n1] 730  job 861442747105771521: resuming execution
I230501 17:24:23.392819 9950 jobs/registry.go:1584  [T1,n1] 731  MIGRATION job 861442747105771521: stepping through state running
I230501 17:24:23.406763 9989 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=861442747105771521,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 732 ={"Timestamp":1682961863401563188,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":42,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230501 17:24:23.411922 9980 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=861442747105771521,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 733 ={"Timestamp":1682961863407300622,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":43,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230501 17:24:23.415136 9994 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=861442747105771521,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 734 ={"Timestamp":1682961863412189346,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":60,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230501 17:24:23.417976 9966 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=861442747105771521,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 735 ={"Timestamp":1682961863415549778,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":61,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230501 17:24:23.419575 9950 jobs/registry.go:1584  [T1,n1] 736  MIGRATION job 861442747105771521: stepping through state succeeded
I230501 17:24:23.440773 7884 jobs/wait.go:145  [T1,n1] 737  waited for 1 [861442747105771521] queued jobs to complete 61.143803ms
I230501 17:24:23.440965 7884 upgrade/upgrademanager/manager.go:283  [T1,n1] 738  running permanent upgrade for version 22.2-102
I230501 17:24:23.473439 7884 jobs/registry.go:616  [T1,n1] 739  active version is 23.1
I230501 17:24:23.483453 7884 upgrade/upgrademanager/manager.go:742  [T1,n1] 740  running Upgrade to 22.2-102: "create sql activity updater job"
I230501 17:24:23.483560 10037 jobs/adopt.go:261  [T1,n1] 741  job 861442747403468801: resuming execution
I230501 17:24:23.497035 10042 jobs/registry.go:1584  [T1,n1] 742  MIGRATION job 861442747403468801: stepping through state running
I230501 17:24:23.513978 10042 jobs/registry.go:1584  [T1,n1] 743  MIGRATION job 861442747403468801: stepping through state succeeded
I230501 17:24:23.532654 7884 jobs/wait.go:145  [T1,n1] 744  waited for 1 [861442747403468801] queued jobs to complete 49.124889ms
I230501 17:24:23.532721 7884 server/server_sql.go:1615  [T1,n1] 745  done ensuring all necessary startup migrations have run
I230501 17:24:23.532969 10162 jobs/job_scheduler.go:407  [T1,n1] 746  waiting 4m0s before scheduled jobs daemon start
I230501 17:24:23.533342 10168 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 747  tenant-settings-watcher: established range feed cache
I230501 17:24:23.534075 10170 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 748  tenant-capability-watcher: established range feed cache
I230501 17:24:23.535171 8744 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r58/1:/Table/5{7-8}] 749  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230501 17:24:23.535550 9672 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r12/1:/Table/{8-11}] 750  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230501 17:24:23.535597 10170 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149  [T1,n1] 751  received results of a full table scan for tenant capabilities
I230501 17:24:23.536701 10166 server/auto_upgrade.go:78  [T1,n1] 752  no need to upgrade, cluster already at the newest version
I230501 17:24:23.538898 10216 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r52/1:/Table/5{1-2}] 753  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230501 17:24:23.539710 10164 sql/syntheticprivilegecache/cache.go:199  [T1,n1] 754  warmed privileges for virtual tables in 6.670774ms
I230501 17:24:23.582376 8313 3@pebble/event.go:734  [n1,s1,pebble] 755  [JOB 7] WAL created 000006
I230501 17:24:23.584951 7884 1@server/server_sql.go:1733  [T1,n1] 756  serving sql connections
I230501 17:24:23.586216 7884 testutils/testcluster/testcluster.go:1357  [-] 757  WaitForFullReplication
I230501 17:24:23.586259 7884 testutils/testcluster/testcluster.go:1361  [-] 758  WaitForFullReplication took: 261ns
    autocommit_extended_protocol_test.go:197: 
        	Error Trace:	github.com/cockroachdb/cockroach/pkg/sql/tests/autocommit_extended_protocol_test.go:197
        	Error:      	An error is expected but got nil.
        	Test:       	TestErrorDuringExtendedProtocolCommit
I230501 17:24:23.598091 7884 testutils/testcluster/testcluster.go:145  [-] 759  TestCluster quiescing nodes
I230501 17:24:23.598249 8605 sql/stats/automatic_stats.go:550  [T1,n1] 760  quiescing auto stats refresher
I230501 17:24:23.598499 8229 server/start_listen.go:103  [T1,n1] 761  server shutting down: instructing cmux to stop accepting
W230501 17:24:23.598768 8518 sql/sqlliveness/slinstance/slinstance.go:334  [T1,n1] 762  exiting heartbeat loop
I230501 17:24:23.598879 9904 rpc/context.go:2302  [T1,n1,rnode=1,raddr=127.0.0.1:43871,class=default,rpc] 763  connection heartbeat loop ended with err: <nil>
W230501 17:24:23.598891 8518 sql/sqlliveness/slinstance/slinstance.go:321  [T1,n1] 765  exiting heartbeat loop with error: node unavailable; try another peer
E230501 17:24:23.598938 8518 server/server_sql.go:507  [T1,n1] 767  failed to run update of instance with new session ID: node unavailable; try another peer
W230501 17:24:23.598920 8922 kv/kvserver/intentresolver/intent_resolver.go:826  [-] 766  failed to gc transaction record: could not GC completed transaction anchored at /Table/3/1/102/2/1: node unavailable; try another peer
I230501 17:24:23.598895 8863 jobs/registry.go:1584  [T1,n1] 764  AUTO SPAN CONFIG RECONCILIATION job 861442744329338881: stepping through state succeeded
E230501 17:24:23.599090 8863 jobs/registry.go:989  [T1,n1] 768  error getting live session: node unavailable; try another peer
W230501 17:24:23.604076 9833 jobs/adopt.go:518  [T1,n1] 769  could not clear job claim: clear-job-claim: failed to send RPC: sending to all replicas failed; last error: ba: Scan [/Table/15/1/102,/Table/15/1/103), [txn: 9d283fec], [can-forward-ts] RPC error: node unavailable; try another peer
I230501 17:24:23.609864 7884 server/server_controller_orchestration.go:263  [T1,n1] 770  server controller shutting down ungracefully
I230501 17:24:23.609897 7884 server/server_controller_orchestration.go:274  [T1,n1] 771  waiting for tenant servers to report stopped
W230501 17:24:23.609960 7884 server/server_sql.go:1693  [T1,n1] 772  server shutdown without a prior graceful drain
--- FAIL: TestErrorDuringExtendedProtocolCommit (1.36s)

Parameters: TAGS=bazel,gss

Help

See also: How To Investigate a Go Test Failure (internal)

Same failure on other branches

This test on roachdash | Improve this report!

@rafiss
Collaborator

rafiss commented May 1, 2023

That PR has now been backported to 23.1.0 in #102145, but that fix was incomplete.

#102187 deflakes more of the test. But since we're being more selective about backports now, I didn't backport it to 23.1.0.
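
For reference, the failing check in the reports above is a testify-style require.Error assertion: the test expects the commit step of the extended protocol to return an error, but it comes back nil, which produces the "An error is expected but got nil." message. A minimal, self-contained sketch of that assertion pattern (hypothetical names, not the actual test code):

```go
// Hypothetical sketch, not the real CockroachDB test: it only demonstrates
// the testify assertion whose failure message appears in the reports above.
package example

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// runExtendedProtocolCommit stands in for the commit step that is expected
// to return an error; returning nil reproduces the reported failure mode.
func runExtendedProtocolCommit() error {
	return nil
}

func TestExpectedCommitError(t *testing.T) {
	err := runExtendedProtocolCommit()
	// require.Error fails with "An error is expected but got nil." when err == nil.
	require.Error(t, err)
}
```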

@mgartner mgartner removed the T-sql-queries SQL Queries Team label May 2, 2023
@cockroach-teamcity
Member Author

sql/tests.TestErrorDuringExtendedProtocolCommit failed with artifacts on release-23.1.0 @ 5efa6ff20f56d2ce8111b710190d1b378adb26d9:

I230503 17:39:36.065299 7993 upgrade/upgrademanager/manager.go:742  [T1,n1] 726  running Upgrade to 22.2-98: "change TTL for SQL Stats system tables"
I230503 17:39:36.065374 9885 jobs/adopt.go:261  [T1,n1] 727  job 862011968822575105: resuming execution
I230503 17:39:36.079467 9840 jobs/registry.go:1584  [T1,n1] 728  MIGRATION job 862011968822575105: stepping through state running
I230503 17:39:36.092949 9941 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=862011968822575105,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 729 ={"Timestamp":1683135576087954413,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":42,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230503 17:39:36.097144 9905 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=862011968822575105,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 730 ={"Timestamp":1683135576093207787,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":43,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230503 17:39:36.099233 9529 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r21/1:/Table/{19-20}] 731  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230503 17:39:36.100381 9972 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=862011968822575105,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 732 ={"Timestamp":1683135576097464533,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":60,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230503 17:39:36.103668 8697 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=862011968822575105,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 733 ={"Timestamp":1683135576100697900,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":61,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230503 17:39:36.105489 9840 jobs/registry.go:1584  [T1,n1] 734  MIGRATION job 862011968822575105: stepping through state succeeded
I230503 17:39:36.106901 9917 2@rpc/context.go:2565  [T1,n1,rnode=1,raddr=127.0.0.1:44191,class=default,rpc] 735  connection is now ready
I230503 17:39:36.124815 7993 jobs/wait.go:145  [T1,n1] 736  waited for 1 [862011968822575105] queued jobs to complete 59.460967ms
I230503 17:39:36.125038 7993 upgrade/upgrademanager/manager.go:283  [T1,n1] 737  running permanent upgrade for version 22.2-102
I230503 17:39:36.149721 7993 jobs/registry.go:616  [T1,n1] 738  active version is 23.1
I230503 17:39:36.159213 7993 upgrade/upgrademanager/manager.go:742  [T1,n1] 739  running Upgrade to 22.2-102: "create sql activity updater job"
I230503 17:39:36.159297 10030 jobs/adopt.go:261  [T1,n1] 740  job 862011969126662145: resuming execution
I230503 17:39:36.172588 10038 jobs/registry.go:1584  [T1,n1] 741  MIGRATION job 862011969126662145: stepping through state running
I230503 17:39:36.190152 10038 jobs/registry.go:1584  [T1,n1] 742  MIGRATION job 862011969126662145: stepping through state succeeded
I230503 17:39:36.207695 7993 jobs/wait.go:145  [T1,n1] 743  waited for 1 [862011969126662145] queued jobs to complete 48.418708ms
I230503 17:39:36.207746 7993 server/server_sql.go:1615  [T1,n1] 744  done ensuring all necessary startup migrations have run
I230503 17:39:36.208038 10049 jobs/job_scheduler.go:407  [T1,n1] 745  waiting 2m0s before scheduled jobs daemon start
I230503 17:39:36.208344 10135 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 746  tenant-settings-watcher: established range feed cache
I230503 17:39:36.208877 10137 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 747  tenant-capability-watcher: established range feed cache
I230503 17:39:36.210066 10137 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149  [T1,n1] 748  received results of a full table scan for tenant capabilities
I230503 17:39:36.210049 10148 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r12/1:/Table/{8-11}] 749  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230503 17:39:36.210373 8636 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r58/1:/Table/5{7-8}] 750  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230503 17:39:36.211437 10133 server/auto_upgrade.go:78  [T1,n1] 751  no need to upgrade, cluster already at the newest version
I230503 17:39:36.215583 10181 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r52/1:/Table/5{1-2}] 752  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230503 17:39:36.216891 10131 sql/syntheticprivilegecache/cache.go:199  [T1,n1] 753  warmed privileges for virtual tables in 8.789053ms
I230503 17:39:36.245808 8316 3@pebble/event.go:734  [n1,s1,pebble] 754  [JOB 7] WAL created 000006
I230503 17:39:36.248971 7993 1@server/server_sql.go:1733  [T1,n1] 755  serving sql connections
I230503 17:39:36.249883 7993 testutils/testcluster/testcluster.go:1357  [-] 756  WaitForFullReplication
I230503 17:39:36.249912 7993 testutils/testcluster/testcluster.go:1361  [-] 757  WaitForFullReplication took: 183ns
    autocommit_extended_protocol_test.go:197: 
        	Error Trace:	github.com/cockroachdb/cockroach/pkg/sql/tests/autocommit_extended_protocol_test.go:197
        	Error:      	An error is expected but got nil.
        	Test:       	TestErrorDuringExtendedProtocolCommit
I230503 17:39:36.261701 7993 testutils/testcluster/testcluster.go:145  [-] 758  TestCluster quiescing nodes
I230503 17:39:36.261786 8645 sql/stats/automatic_stats.go:550  [T1,n1] 759  quiescing auto stats refresher
W230503 17:39:36.261917 6108 sql/sqlliveness/slinstance/slinstance.go:334  [T1,n1] 760  exiting heartbeat loop
W230503 17:39:36.261975 6108 sql/sqlliveness/slinstance/slinstance.go:321  [T1,n1] 761  exiting heartbeat loop with error: node unavailable; try another peer
E230503 17:39:36.262007 6108 server/server_sql.go:507  [T1,n1] 762  failed to run update of instance with new session ID: node unavailable; try another peer
W230503 17:39:36.262013 10061 kv/kvserver/intentresolver/intent_resolver.go:826  [-] 763  failed to gc transaction record: could not GC completed transaction anchored at /Table/15/1/862011969126662145/0: node unavailable; try another peer
E230503 17:39:36.262266 9832 jobs/registry.go:989  [T1,n1] 764  error getting live session: node unavailable; try another peer
I230503 17:39:36.262298 9917 rpc/context.go:2302  [T1,n1,rnode=1,raddr=127.0.0.1:44191,class=default,rpc] 765  connection heartbeat loop ended with err: <nil>
I230503 17:39:36.262338 8071 server/start_listen.go:103  [T1,n1] 766  server shutting down: instructing cmux to stop accepting
I230503 17:39:36.262457 8793 jobs/registry.go:1584  [T1,n1] 767  AUTO SPAN CONFIG RECONCILIATION job 862011966009475073: stepping through state succeeded
I230503 17:39:36.268048 7993 server/server_controller_orchestration.go:263  [T1,n1] 768  server controller shutting down ungracefully
I230503 17:39:36.268081 7993 server/server_controller_orchestration.go:274  [T1,n1] 769  waiting for tenant servers to report stopped
W230503 17:39:36.268158 7993 server/server_sql.go:1693  [T1,n1] 770  server shutdown without a prior graceful drain
--- FAIL: TestErrorDuringExtendedProtocolCommit (1.27s)

Parameters: TAGS=bazel,gss

Help

See also: How To Investigate a Go Test Failure (internal)

This test on roachdash | Improve this report!

@cockroach-teamcity
Member Author

sql/tests.TestErrorDuringExtendedProtocolCommit failed with artifacts on release-23.1.0 @ 5efa6ff20f56d2ce8111b710190d1b378adb26d9:

I230504 18:01:59.407836 7609 upgrade/upgrademanager/manager.go:742  [T1,n1] 730  running Upgrade to 22.2-98: "change TTL for SQL Stats system tables"
I230504 18:01:59.407920 9345 jobs/adopt.go:261  [T1,n1] 731  job 862299486227791873: resuming execution
I230504 18:01:59.420731 9407 jobs/registry.go:1584  [T1,n1] 732  MIGRATION job 862299486227791873: stepping through state running
I230504 18:01:59.433139 9511 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=862299486227791873,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 733 ={"Timestamp":1683223319428571638,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":42,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230504 18:01:59.437588 9493 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=862299486227791873,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 734 ={"Timestamp":1683223319433443518,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":43,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230504 18:01:59.440447 9499 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=862299486227791873,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 735 ={"Timestamp":1683223319437872915,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":60,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230504 18:01:59.443943 9524 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=862299486227791873,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 736 ={"Timestamp":1683223319441052608,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":61,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230504 18:01:59.447245 9407 jobs/registry.go:1584  [T1,n1] 737  MIGRATION job 862299486227791873: stepping through state succeeded
I230504 18:01:59.465494 7609 jobs/wait.go:145  [T1,n1] 738  waited for 1 [862299486227791873] queued jobs to complete 57.59881ms
I230504 18:01:59.465677 7609 upgrade/upgrademanager/manager.go:283  [T1,n1] 739  running permanent upgrade for version 22.2-102
I230504 18:01:59.467409 9591 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r9/1:/Table/{5-6}] 740  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230504 18:01:59.475073 9596 2@rpc/context.go:2565  [T1,n1,rnode=1,raddr=127.0.0.1:36203,class=default,rpc] 741  connection is now ready
I230504 18:01:59.483922 7609 jobs/registry.go:616  [T1,n1] 742  active version is 23.1
I230504 18:01:59.491741 7609 upgrade/upgrademanager/manager.go:742  [T1,n1] 743  running Upgrade to 22.2-102: "create sql activity updater job"
I230504 18:01:59.491840 9605 jobs/adopt.go:261  [T1,n1] 744  job 862299486505271297: resuming execution
I230504 18:01:59.517573 9544 jobs/registry.go:1584  [T1,n1] 745  MIGRATION job 862299486505271297: stepping through state running
I230504 18:01:59.533985 9544 jobs/registry.go:1584  [T1,n1] 746  MIGRATION job 862299486505271297: stepping through state succeeded
I230504 18:01:59.551465 7609 jobs/wait.go:145  [T1,n1] 747  waited for 1 [862299486505271297] queued jobs to complete 59.660744ms
I230504 18:01:59.551508 7609 server/server_sql.go:1615  [T1,n1] 748  done ensuring all necessary startup migrations have run
I230504 18:01:59.551750 9704 jobs/job_scheduler.go:407  [T1,n1] 749  waiting 2m0s before scheduled jobs daemon start
I230504 18:01:59.552051 9710 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 750  tenant-settings-watcher: established range feed cache
I230504 18:01:59.555389 9712 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 751  tenant-capability-watcher: established range feed cache
I230504 18:01:59.556874 9708 server/auto_upgrade.go:78  [T1,n1] 752  no need to upgrade, cluster already at the newest version
I230504 18:01:59.557473 9706 sql/syntheticprivilegecache/cache.go:199  [T1,n1] 753  warmed privileges for virtual tables in 5.556977ms
I230504 18:01:59.558004 9712 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149  [T1,n1] 754  received results of a full table scan for tenant capabilities
I230504 18:01:59.558084 9753 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r12/1:/Table/{8-11}] 755  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230504 18:01:59.558319 9671 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r52/1:/Table/5{1-2}] 756  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230504 18:01:59.558525 9672 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r58/1:/Table/5{7-8}] 757  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230504 18:01:59.580776 7896 3@pebble/event.go:734  [n1,s1,pebble] 758  [JOB 7] WAL created 000006
I230504 18:01:59.582791 7609 1@server/server_sql.go:1733  [T1,n1] 759  serving sql connections
I230504 18:01:59.583936 7609 testutils/testcluster/testcluster.go:1357  [-] 760  WaitForFullReplication
I230504 18:01:59.583967 7609 testutils/testcluster/testcluster.go:1361  [-] 761  WaitForFullReplication took: 252ns
    autocommit_extended_protocol_test.go:197: 
        	Error Trace:	github.com/cockroachdb/cockroach/pkg/sql/tests/autocommit_extended_protocol_test.go:197
        	Error:      	An error is expected but got nil.
        	Test:       	TestErrorDuringExtendedProtocolCommit
I230504 18:01:59.594455 7609 testutils/testcluster/testcluster.go:145  [-] 762  TestCluster quiescing nodes
I230504 18:01:59.594630 7871 sql/stats/automatic_stats.go:550  [T1,n1] 763  quiescing auto stats refresher
W230504 18:01:59.594870 9071 kv/kvserver/intentresolver/intent_resolver.go:826  [-] 764  failed to gc transaction record: could not GC completed transaction anchored at /Table/15/1/862299485249568769/0: node unavailable; try another peer
I230504 18:01:59.594994 7832 server/start_listen.go:103  [T1,n1] 765  server shutting down: instructing cmux to stop accepting
W230504 18:01:59.595642 8156 sql/sqlliveness/slinstance/slinstance.go:334  [T1,n1] 766  exiting heartbeat loop
W230504 18:01:59.595679 8156 sql/sqlliveness/slinstance/slinstance.go:321  [T1,n1] 767  exiting heartbeat loop with error: node unavailable; try another peer
E230504 18:01:59.595703 8156 server/server_sql.go:507  [T1,n1] 768  failed to run update of instance with new session ID: node unavailable; try another peer
I230504 18:01:59.595876 8376 jobs/registry.go:1584  [T1,n1] 769  AUTO SPAN CONFIG RECONCILIATION job 862299483319762945: stepping through state succeeded
E230504 18:01:59.595986 8376 jobs/registry.go:989  [T1,n1] 770  error getting live session: node unavailable; try another peer
I230504 18:01:59.596208 9596 rpc/context.go:2302  [T1,n1,rnode=1,raddr=127.0.0.1:36203,class=default,rpc] 771  connection heartbeat loop ended with err: <nil>
I230504 18:01:59.600629 7609 server/server_controller_orchestration.go:263  [T1,n1] 772  server controller shutting down ungracefully
I230504 18:01:59.600662 7609 server/server_controller_orchestration.go:274  [T1,n1] 773  waiting for tenant servers to report stopped
W230504 18:01:59.600736 7609 server/server_sql.go:1693  [T1,n1] 774  server shutdown without a prior graceful drain
--- FAIL: TestErrorDuringExtendedProtocolCommit (1.23s)

Parameters: TAGS=bazel,gss

Help

See also: How To Investigate a Go Test Failure (internal)

This test on roachdash | Improve this report!

@cockroach-teamcity
Member Author

sql/tests.TestErrorDuringExtendedProtocolCommit failed with artifacts on release-23.1.0 @ 358e0d87912365b8976c55ab9b3292e999cf720d:

I230509 17:47:26.560679 7685 upgrade/upgrademanager/manager.go:742  [T1,n1] 728  running Upgrade to 22.2-98: "change TTL for SQL Stats system tables"
I230509 17:47:26.560805 9514 jobs/adopt.go:261  [T1,n1] 729  job 863712203604262913: resuming execution
I230509 17:47:26.574892 9372 jobs/registry.go:1584  [T1,n1] 730  MIGRATION job 863712203604262913: stepping through state running
I230509 17:47:26.587246 9542 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=863712203604262913,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 731 ={"Timestamp":1683654446583880734,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":42,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230509 17:47:26.591875 9435 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=863712203604262913,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 732 ={"Timestamp":1683654446587526915,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":43,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230509 17:47:26.595136 9572 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=863712203604262913,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 733 ={"Timestamp":1683654446592134453,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":60,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230509 17:47:26.598664 9551 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=863712203604262913,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 734 ={"Timestamp":1683654446595362255,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":61,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230509 17:47:26.600214 9372 jobs/registry.go:1584  [T1,n1] 735  MIGRATION job 863712203604262913: stepping through state succeeded
I230509 17:47:26.616694 7685 jobs/wait.go:145  [T1,n1] 736  waited for 1 [863712203604262913] queued jobs to complete 55.918314ms
I230509 17:47:26.616858 7685 upgrade/upgrademanager/manager.go:283  [T1,n1] 737  running permanent upgrade for version 22.2-102
I230509 17:47:26.631619 7685 jobs/registry.go:616  [T1,n1] 738  active version is 23.1
I230509 17:47:26.638528 7685 upgrade/upgrademanager/manager.go:742  [T1,n1] 739  running Upgrade to 22.2-102: "create sql activity updater job"
I230509 17:47:26.638605 9585 jobs/adopt.go:261  [T1,n1] 740  job 863712203954225153: resuming execution
I230509 17:47:26.652268 9650 jobs/registry.go:1584  [T1,n1] 741  MIGRATION job 863712203954225153: stepping through state running
I230509 17:47:26.652763 9638 2@rpc/context.go:2565  [T1,n1,rnode=1,raddr=127.0.0.1:33929,class=default,rpc] 742  connection is now ready
I230509 17:47:26.678649 9650 jobs/registry.go:1584  [T1,n1] 743  MIGRATION job 863712203954225153: stepping through state succeeded
I230509 17:47:26.700206 7685 jobs/wait.go:145  [T1,n1] 744  waited for 1 [863712203954225153] queued jobs to complete 61.625022ms
I230509 17:47:26.700252 7685 server/server_sql.go:1615  [T1,n1] 745  done ensuring all necessary startup migrations have run
I230509 17:47:26.700439 9166 jobs/job_scheduler.go:407  [T1,n1] 746  waiting 2m0s before scheduled jobs daemon start
I230509 17:47:26.700753 9748 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 747  tenant-settings-watcher: established range feed cache
I230509 17:47:26.701229 9750 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 748  tenant-capability-watcher: established range feed cache
I230509 17:47:26.702609 9750 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149  [T1,n1] 749  received results of a full table scan for tenant capabilities
I230509 17:47:26.702837 9781 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r12/1:/Table/{8-11}] 750  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230509 17:47:26.704904 9722 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r58/1:/Table/5{7-8}] 751  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230509 17:47:26.704949 9746 server/auto_upgrade.go:78  [T1,n1] 752  no need to upgrade, cluster already at the newest version
I230509 17:47:26.705370 9723 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r52/1:/Table/5{1-2}] 753  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230509 17:47:26.706391 9168 sql/syntheticprivilegecache/cache.go:199  [T1,n1] 754  warmed privileges for virtual tables in 5.838664ms
I230509 17:47:26.749385 7880 3@pebble/event.go:734  [n1,s1,pebble] 755  [JOB 7] WAL created 000006
I230509 17:47:26.753853 7685 1@server/server_sql.go:1733  [T1,n1] 756  serving sql connections
I230509 17:47:26.754935 7685 testutils/testcluster/testcluster.go:1357  [-] 757  WaitForFullReplication
I230509 17:47:26.754966 7685 testutils/testcluster/testcluster.go:1361  [-] 758  WaitForFullReplication took: 271ns
    autocommit_extended_protocol_test.go:197: 
        	Error Trace:	github.com/cockroachdb/cockroach/pkg/sql/tests/autocommit_extended_protocol_test.go:197
        	Error:      	An error is expected but got nil.
        	Test:       	TestErrorDuringExtendedProtocolCommit
I230509 17:47:26.769150 7685 testutils/testcluster/testcluster.go:145  [-] 759  TestCluster quiescing nodes
I230509 17:47:26.769310 8244 sql/stats/automatic_stats.go:550  [T1,n1] 760  quiescing auto stats refresher
W230509 17:47:26.769501 9036 kv/kvserver/intentresolver/intent_resolver.go:826  [-] 761  failed to gc transaction record: could not GC completed transaction anchored at /Table/3/1/102/2/1: node unavailable; try another peer
I230509 17:47:26.769603 7860 server/start_listen.go:103  [T1,n1] 762  server shutting down: instructing cmux to stop accepting
W230509 17:47:26.769904 8206 sql/sqlliveness/slinstance/slinstance.go:334  [T1,n1] 763  exiting heartbeat loop
W230509 17:47:26.769956 8206 sql/sqlliveness/slinstance/slinstance.go:321  [T1,n1] 764  exiting heartbeat loop with error: node unavailable; try another peer
E230509 17:47:26.769981 8206 server/server_sql.go:507  [T1,n1] 765  failed to run update of instance with new session ID: node unavailable; try another peer
I230509 17:47:26.770077 9638 rpc/context.go:2302  [T1,n1,rnode=1,raddr=127.0.0.1:33929,class=default,rpc] 766  connection heartbeat loop ended with err: <nil>
I230509 17:47:26.770145 8402 jobs/registry.go:1584  [T1,n1] 767  AUTO SPAN CONFIG RECONCILIATION job 863712200825569281: stepping through state succeeded
E230509 17:47:26.770358 8402 jobs/registry.go:989  [T1,n1] 768  error getting live session: node unavailable; try another peer
W230509 17:47:26.774847 9447 jobs/adopt.go:518  [T1,n1] 769  could not clear job claim: clear-job-claim: failed to send RPC: sending to all replicas failed; last error: ba: Scan [/Table/15/1/102,/Table/15/1/103), [txn: 75208d33], [can-forward-ts] RPC error: node unavailable; try another peer
I230509 17:47:26.780371 7685 server/server_controller_orchestration.go:263  [T1,n1] 770  server controller shutting down ungracefully
I230509 17:47:26.780424 7685 server/server_controller_orchestration.go:274  [T1,n1] 771  waiting for tenant servers to report stopped
W230509 17:47:26.780552 7685 server/server_sql.go:1693  [T1,n1] 772  server shutdown without a prior graceful drain
--- FAIL: TestErrorDuringExtendedProtocolCommit (1.24s)

Parameters: TAGS=bazel,gss

Help

See also: How To Investigate a Go Test Failure (internal)

This test on roachdash | Improve this report!

@cockroach-teamcity
Member Author

sql/tests.TestErrorDuringExtendedProtocolCommit failed with artifacts on release-23.1.0 @ 358e0d87912365b8976c55ab9b3292e999cf720d:

I230512 17:12:47.504153 9473 jobs/registry.go:1584  [T1,n1] 727  MIGRATION job 864554737560748033: stepping through state running
I230512 17:12:47.514555 9509 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=864554737560748033,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 728 ={"Timestamp":1683911567511443625,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":42,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230512 17:12:47.518390 9525 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=864554737560748033,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 729 ={"Timestamp":1683911567514795550,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_statistics CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":43,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_statistics","Options":["\"gc.ttlseconds\" = 3600"]}
I230512 17:12:47.521038 9533 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=864554737560748033,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 730 ={"Timestamp":1683911567518609560,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.statement_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":60,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.statement_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230512 17:12:47.523305 9542 1@util/log/event_log.go:32  [T1,n1,job=MIGRATION id=864554737560748033,upgrade=22.2-98,intExec=set-SQLStatsTables-TTL] 731 ={"Timestamp":1683911567521283401,"EventType":"set_zone_config","Statement":"ALTER TABLE \"\".system.transaction_activity CONFIGURE ZONE USING \"gc.ttlseconds\" = $1","Tag":"CONFIGURE ZONE","User":"root","DescriptorID":61,"ApplicationName":"$ internal-set-SQLStatsTables-TTL","PlaceholderValues":["3600"],"Target":"TABLE system.public.transaction_activity","Options":["\"gc.ttlseconds\" = 3600"]}
I230512 17:12:47.524701 9473 jobs/registry.go:1584  [T1,n1] 732  MIGRATION job 864554737560748033: stepping through state succeeded
I230512 17:12:47.539452 7732 jobs/wait.go:145  [T1,n1] 733  waited for 1 [864554737560748033] queued jobs to complete 47.346281ms
I230512 17:12:47.539593 7732 upgrade/upgrademanager/manager.go:283  [T1,n1] 734  running permanent upgrade for version 22.2-102
I230512 17:12:47.557481 7732 jobs/registry.go:616  [T1,n1] 735  active version is 23.1
I230512 17:12:47.564735 7732 upgrade/upgrademanager/manager.go:742  [T1,n1] 736  running Upgrade to 22.2-102: "create sql activity updater job"
I230512 17:12:47.564890 9612 jobs/adopt.go:261  [T1,n1] 737  job 864554737793859585: resuming execution
I230512 17:12:47.579335 9520 jobs/registry.go:1584  [T1,n1] 738  MIGRATION job 864554737793859585: stepping through state running
I230512 17:12:47.594329 9520 jobs/registry.go:1584  [T1,n1] 739  MIGRATION job 864554737793859585: stepping through state succeeded
I230512 17:12:47.611502 7732 jobs/wait.go:145  [T1,n1] 740  waited for 1 [864554737793859585] queued jobs to complete 46.627343ms
I230512 17:12:47.611552 7732 server/server_sql.go:1615  [T1,n1] 741  done ensuring all necessary startup migrations have run
I230512 17:12:47.611806 9658 jobs/job_scheduler.go:407  [T1,n1] 742  waiting 2m0s before scheduled jobs daemon start
I230512 17:12:47.612029 9664 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 743  tenant-settings-watcher: established range feed cache
I230512 17:12:47.612685 9682 kv/kvclient/rangefeed/rangefeedcache/watcher.go:335  [T1,n1] 744  tenant-capability-watcher: established range feed cache
I230512 17:12:47.613689 9682 multitenant/tenantcapabilities/tenantcapabilitieswatcher/watcher.go:149  [T1,n1] 745  received results of a full table scan for tenant capabilities
I230512 17:12:47.613827 9701 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r12/1:/Table/{8-11}] 746  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230512 17:12:47.614904 9676 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r58/1:/Table/5{7-8}] 747  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230512 17:12:47.615263 9662 server/auto_upgrade.go:78  [T1,n1] 748  no need to upgrade, cluster already at the newest version
I230512 17:12:47.617932 9719 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r52/1:/Table/5{1-2}] 749  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230512 17:12:47.626903 9660 sql/syntheticprivilegecache/cache.go:199  [T1,n1] 750  warmed privileges for virtual tables in 15.020066ms
I230512 17:12:47.653286 7880 3@pebble/event.go:734  [n1,s1,pebble] 751  [JOB 7] WAL created 000006
I230512 17:12:47.655404 7732 1@server/server_sql.go:1733  [T1,n1] 752  serving sql connections
I230512 17:12:47.656343 7732 testutils/testcluster/testcluster.go:1357  [-] 753  WaitForFullReplication
I230512 17:12:47.656362 7732 testutils/testcluster/testcluster.go:1361  [-] 754  WaitForFullReplication took: 339ns
I230512 17:12:47.657842 9751 13@kv/kvserver/replicate_queue.go:839  [T1,n1,replicate,s1,r49/1:/Table/4{7-8}] 755  error processing replica: 0 of 1 live stores are able to take a new replica for the range (1 already has a voter, 0 already have a non-voter); likely not enough nodes in cluster
I230512 17:12:47.661416 9738 2@rpc/context.go:2565  [T1,n1,rnode=1,raddr=127.0.0.1:43857,class=default,rpc] 756  connection is now ready
    autocommit_extended_protocol_test.go:197: 
        	Error Trace:	github.com/cockroachdb/cockroach/pkg/sql/tests/autocommit_extended_protocol_test.go:197
        	Error:      	An error is expected but got nil.
        	Test:       	TestErrorDuringExtendedProtocolCommit
I230512 17:12:47.677421 7732 testutils/testcluster/testcluster.go:145  [-] 757  TestCluster quiescing nodes
W230512 17:12:47.677657 8522 kv/kvserver/intentresolver/intent_resolver.go:826  [-] 758  failed to gc transaction record: could not GC completed transaction anchored at /Table/15/1/864554734961590273/0: node unavailable; try another peer
I230512 17:12:47.678156 4990 server/start_listen.go:103  [T1,n1] 759  server shutting down: instructing cmux to stop accepting
I230512 17:12:47.678245 9738 rpc/context.go:2302  [T1,n1,rnode=1,raddr=127.0.0.1:43857,class=default,rpc] 760  connection heartbeat loop ended with err: <nil>
I230512 17:12:47.678932 8434 jobs/registry.go:1584  [T1,n1] 761  AUTO SPAN CONFIG RECONCILIATION job 864554734938750977: stepping through state succeeded
W230512 17:12:47.678987 8047 sql/sqlliveness/slinstance/slinstance.go:334  [T1,n1] 762  exiting heartbeat loop
W230512 17:12:47.679009 8047 sql/sqlliveness/slinstance/slinstance.go:321  [T1,n1] 763  exiting heartbeat loop with error: node unavailable; try another peer
E230512 17:12:47.679085 8047 server/server_sql.go:507  [T1,n1] 764  failed to run update of instance with new session ID: node unavailable; try another peer
E230512 17:12:47.679147 8434 jobs/registry.go:989  [T1,n1] 765  error getting live session: node unavailable; try another peer
E230512 17:12:47.680594 8267 sql/stats/automatic_stats.go:629  [T1,n1] 766  failed to get tables for automatic stats: get-tables: node unavailable; try another peer
I230512 17:12:47.680722 8267 sql/stats/automatic_stats.go:550  [T1,n1] 767  quiescing auto stats refresher
W230512 17:12:47.683641 9400 jobs/adopt.go:518  [T1,n1] 768  could not clear job claim: clear-job-claim: failed to send RPC: sending to all replicas failed; last error: ba: Scan [/Table/15/1/102,/Table/15/1/103), [txn: 93c86663], [can-forward-ts] RPC error: node unavailable; try another peer
I230512 17:12:47.691871 7732 server/server_controller_orchestration.go:263  [T1,n1] 769  server controller shutting down ungracefully
I230512 17:12:47.691912 7732 server/server_controller_orchestration.go:274  [T1,n1] 770  waiting for tenant servers to report stopped
W230512 17:12:47.691982 7732 server/server_sql.go:1693  [T1,n1] 771  server shutdown without a prior graceful drain
--- FAIL: TestErrorDuringExtendedProtocolCommit (1.13s)

Parameters: TAGS=bazel,gss

Help

See also: How To Investigate a Go Test Failure (internal)

Same failure on other branches

This test on roachdash | Improve this report!
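
For context on the assertion at autocommit_extended_protocol_test.go:197 above, "An error is expected but got nil." is the standard failure text produced by testify's `require.Error`/`assert.Error` when the checked error value is nil. Below is a minimal, hypothetical sketch of that kind of check; the helper and parameter names (`checkCommitFails`, `commitErr`) are illustrative and not the actual test code:

```go
package tests

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// checkCommitFails is a hypothetical helper mirroring the failing check:
// the test injects an error during the COMMIT of an extended-protocol
// (prepare/bind/execute) batch and expects that error to be returned to
// the client. If commitErr is nil, require.Error fails with
// "An error is expected but got nil."
func checkCommitFails(t *testing.T, commitErr error) {
	require.Error(t, commitErr, "expected injected commit error to reach the client")
}
```

In the runs above the commit apparently succeeded (no error reached the client), so the trace points at the assertion line rather than at the injection site.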
