teamcity: failed tests on master: testrace/TestSystemZoneConfigs #19201

Closed
cockroach-teamcity opened this issue Oct 12, 2017 · 1 comment
Labels
C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.


@cockroach-teamcity
Member

The following tests appear to have failed:

#375665:

--- FAIL: testrace/TestSystemZoneConfigs (0.000s)
Race detected!

------- Stdout: -------
W171012 01:36:59.142028 82779 server/status/runtime.go:109  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I171012 01:36:59.148520 82779 server/config.go:507  [n?] 1 storage engine initialized
I171012 01:36:59.148573 82779 server/config.go:510  [n?] RocksDB cache size: 128 MiB
I171012 01:36:59.148607 82779 server/config.go:510  [n?] store 0: in-memory, size 0 B
I171012 01:36:59.187541 82779 server/node.go:347  [n?] **** cluster 05078b23-e5fe-4577-b70d-3143ce805e42 has been created
I171012 01:36:59.187688 82779 server/server.go:882  [n?] **** add additional nodes by specifying --join=127.0.0.1:37687
I171012 01:36:59.207090 82779 storage/store.go:1209  [n1,s1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I171012 01:36:59.207842 82779 server/node.go:472  [n1] initialized store [n1,s1]: disk (capacity=512 MiB, available=512 MiB, used=0 B, logicalBytes=3.2 KiB), ranges=1, leases=0, writes=0.00, bytesPerReplica={p10=3234.00 p25=3234.00 p50=3234.00 p75=3234.00 p90=3234.00}, writesPerReplica={p10=0.00 p25=0.00 p50=0.00 p75=0.00 p90=0.00}
I171012 01:36:59.208137 82779 server/node.go:329  [n1] node ID 1 initialized
I171012 01:36:59.208439 82779 gossip/gossip.go:331  [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:37687" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:3 > 
I171012 01:36:59.209136 82779 storage/stores.go:332  [n1] read 0 node addresses from persistent storage
I171012 01:36:59.209843 82779 server/node.go:617  [n1] connecting to gossip network to verify cluster ID...
I171012 01:36:59.210016 82779 server/node.go:642  [n1] node connected via gossip and verified as part of cluster "05078b23-e5fe-4577-b70d-3143ce805e42"
I171012 01:36:59.210286 82779 server/node.go:414  [n1] node=1: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I171012 01:36:59.212870 82779 sql/executor.go:408  [n1] creating distSQLPlanner with address {tcp 127.0.0.1:37687}
I171012 01:36:59.229198 83194 storage/replica_command.go:2792  [split,n1,s1,r1/1:/M{in-ax}] initiating a split of this range at key /System/"" [r2]
I171012 01:36:59.243600 82779 server/server.go:987  [n1] starting https server at 127.0.0.1:42553
I171012 01:36:59.243749 82779 server/server.go:988  [n1] starting grpc/postgres server at 127.0.0.1:37687
I171012 01:36:59.243787 82779 server/server.go:989  [n1] advertising CockroachDB node at 127.0.0.1:37687
W171012 01:36:59.244508 82779 sql/jobs/registry.go:167  [n1] unable to get node liveness: node not in the liveness table
I171012 01:36:59.442221 83194 storage/replica_command.go:2792  [split,n1,s1,r2/1:/{System/-Max}] initiating a split of this range at key /System/NodeLiveness [r3]
I171012 01:36:59.555036 82779 sql/event_log.go:103  [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN "uniqueID" SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]}
I171012 01:36:59.597169 82783 storage/queue.go:723  [n1,replicate] purgatory is now empty
I171012 01:36:59.599993 83194 storage/replica_command.go:2792  [split,n1,s1,r3/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/NodeLivenessMax [r4]
W171012 01:36:59.668085 83344 storage/intent_resolver.go:335  [n1,s1,r1/1:/{Min-System/}]: failed to push during intent resolution: failed to push "split" id=0139a4d0 key=/Local/Range/System/NodeLiveness/RangeDescriptor rw=true pri=0.00711395 iso=SERIALIZABLE stat=PENDING epo=0 ts=1507772219.600147112,0 orig=1507772219.600147112,0 max=1507772219.600147112,0 wto=false rop=false seq=3
I171012 01:36:59.686313 83194 storage/replica_command.go:2792  [split,n1,s1,r4/1:/{System/NodeL…-Max}] initiating a split of this range at key /System/tsd [r5]
I171012 01:36:59.702342 82779 sql/lease.go:341  [n1] publish: descID=12 (eventlog) version=2 mtime=2017-10-12 01:36:59.699640079 +0000 UTC
I171012 01:36:59.811468 83194 storage/replica_command.go:2792  [split,n1,s1,r5/1:/{System/tsd-Max}] initiating a split of this range at key /System/"tse" [r6]
I171012 01:36:59.923187 83194 storage/replica_command.go:2792  [split,n1,s1,r6/1:/{System/tse-Max}] initiating a split of this range at key /Table/SystemConfigSpan/Start [r7]
I171012 01:37:00.005935 83194 storage/replica_command.go:2792  [split,n1,s1,r7/1:/{Table/System…-Max}] initiating a split of this range at key /Table/11 [r8]
W171012 01:37:00.009320 83476 storage/intent_resolver.go:335  [n1,s1,r7/1:/{Table/System…-Max}]: failed to push during intent resolution: failed to push "sql txn implicit" id=cd4d7d8b key=/Table/SystemConfigSpan/Start rw=true pri=0.01023882 iso=SERIALIZABLE stat=PENDING epo=0 ts=1507772219.895308221,0 orig=1507772219.895308221,0 max=1507772219.895308221,0 wto=false rop=false seq=9
I171012 01:37:00.035506 82779 sql/event_log.go:103  [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:diagnostics.reporting.enabled Value:true User:node}
I171012 01:37:00.122271 83194 storage/replica_command.go:2792  [split,n1,s1,r8/1:/{Table/11-Max}] initiating a split of this range at key /Table/12 [r9]
W171012 01:37:00.189011 83570 storage/intent_resolver.go:335  [n1,s1,r1/1:/{Min-System/}]: failed to push during intent resolution: failed to push "split" id=9881d2ff key=/Local/Range/Table/11/RangeDescriptor rw=true pri=0.01913020 iso=SERIALIZABLE stat=PENDING epo=0 ts=1507772220.122434356,0 orig=1507772220.122434356,0 max=1507772220.122434356,0 wto=false rop=false seq=3
I171012 01:37:00.244641 83194 storage/replica_command.go:2792  [split,n1,s1,r9/1:/{Table/12-Max}] initiating a split of this range at key /Table/13 [r10]
I171012 01:37:00.280963 82779 sql/event_log.go:103  [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:version Value:1.1-3 User:node}
I171012 01:37:00.343829 83194 storage/replica_command.go:2792  [split,n1,s1,r10/1:/{Table/13-Max}] initiating a split of this range at key /Table/14 [r11]
I171012 01:37:00.405233 82779 sql/event_log.go:103  [n1] Event: "set_cluster_setting", target: 0, info: {SettingName:trace.debug.enable Value:false User:node}
W171012 01:37:00.422405 83596 storage/intent_resolver.go:335  [n1,s1,r1/1:/{Min-System/}]: failed to push during intent resolution: failed to push "split" id=21b80354 key=/Local/Range/Table/13/RangeDescriptor rw=true pri=0.02482547 iso=SERIALIZABLE stat=PENDING epo=0 ts=1507772220.344007451,0 orig=1507772220.344007451,0 max=1507772220.344007451,0 wto=false rop=false seq=3
I171012 01:37:00.452123 82779 server/server.go:1131  [n1] done ensuring all necessary migrations have run
I171012 01:37:00.452317 82779 server/server.go:1133  [n1] serving sql connections
I171012 01:37:00.482341 83194 storage/replica_command.go:2792  [split,n1,s1,r11/1:/{Table/14-Max}] initiating a split of this range at key /Table/15 [r12]
I171012 01:37:00.511819 83601 sql/event_log.go:103  [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:37687} Attrs: Locality: ServerVersion:1.1-3} ClusterID:05078b23-e5fe-4577-b70d-3143ce805e42 StartedAt:1507772219210056462 LastUp:1507772219210056462}
I171012 01:37:00.580900 83194 storage/replica_command.go:2792  [split,n1,s1,r12/1:/{Table/15-Max}] initiating a split of this range at key /Table/16 [r13]
I171012 01:37:00.688024 83194 storage/replica_command.go:2792  [split,n1,s1,r13/1:/{Table/16-Max}] initiating a split of this range at key /Table/17 [r14]
I171012 01:37:00.800664 83194 storage/replica_command.go:2792  [split,n1,s1,r14/1:/{Table/17-Max}] initiating a split of this range at key /Table/18 [r15]
I171012 01:37:00.855336 83194 storage/replica_command.go:2792  [split,n1,s1,r15/1:/{Table/18-Max}] initiating a split of this range at key /Table/19 [r16]
W171012 01:37:01.011461 82779 server/status/runtime.go:109  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
W171012 01:37:01.016848 82779 gossip/gossip.go:1277  [n?] no incoming or outgoing connections
I171012 01:37:01.034647 82779 server/config.go:507  [n?] 1 storage engine initialized
I171012 01:37:01.034784 82779 server/config.go:510  [n?] RocksDB cache size: 128 MiB
I171012 01:37:01.034821 82779 server/config.go:510  [n?] store 0: in-memory, size 0 B
I171012 01:37:01.034978 82779 server/server.go:884  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I171012 01:37:01.076247 83903 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:37687
I171012 01:37:01.076894 83988 gossip/server.go:232  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:50082}
I171012 01:37:01.081654 82779 storage/stores.go:332  [n?] read 0 node addresses from persistent storage
I171012 01:37:01.082032 82779 storage/stores.go:351  [n?] wrote 1 node addresses to persistent storage
I171012 01:37:01.082403 82779 server/node.go:617  [n?] connecting to gossip network to verify cluster ID...
I171012 01:37:01.082640 82779 server/node.go:642  [n?] node connected via gossip and verified as part of cluster "05078b23-e5fe-4577-b70d-3143ce805e42"
I171012 01:37:01.084601 83937 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:01.091152 83936 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:01.096502 82779 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:01.102835 82779 server/node.go:322  [n?] new node allocated ID 2
I171012 01:37:01.103891 82779 gossip/gossip.go:331  [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:50082" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:3 > 
I171012 01:37:01.104627 82779 server/node.go:400  [n2] node=2: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>]
I171012 01:37:01.105060 82779 server/node.go:414  [n2] node=2: started with [] engine(s) and attributes []
I171012 01:37:01.107664 82779 sql/executor.go:408  [n2] creating distSQLPlanner with address {tcp 127.0.0.1:50082}
I171012 01:37:01.127744 84004 storage/stores.go:351  [n1] wrote 1 node addresses to persistent storage
I171012 01:37:01.128706 82779 server/server.go:987  [n2] starting https server at 127.0.0.1:57584
I171012 01:37:01.128864 82779 server/server.go:988  [n2] starting grpc/postgres server at 127.0.0.1:50082
I171012 01:37:01.128934 82779 server/server.go:989  [n2] advertising CockroachDB node at 127.0.0.1:50082
W171012 01:37:01.129274 82779 sql/jobs/registry.go:167  [n2] unable to get node liveness: node not in the liveness table
I171012 01:37:01.161264 82779 server/server.go:1131  [n2] done ensuring all necessary migrations have run
I171012 01:37:01.161491 82779 server/server.go:1133  [n2] serving sql connections
I171012 01:37:01.170783 83855 server/node.go:598  [n2] bootstrapped store [n2,s2]
W171012 01:37:01.235337 82779 server/status/runtime.go:109  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
W171012 01:37:01.239769 82779 gossip/gossip.go:1277  [n?] no incoming or outgoing connections
I171012 01:37:01.243194 82779 server/config.go:507  [n?] 1 storage engine initialized
I171012 01:37:01.243302 82779 server/config.go:510  [n?] RocksDB cache size: 128 MiB
I171012 01:37:01.243336 82779 server/config.go:510  [n?] store 0: in-memory, size 0 B
I171012 01:37:01.243497 82779 server/server.go:884  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I171012 01:37:01.285637 83884 sql/event_log.go:103  [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:50082} Attrs: Locality: ServerVersion:1.1-3} ClusterID:05078b23-e5fe-4577-b70d-3143ce805e42 StartedAt:1507772221104835839 LastUp:1507772221104835839}
I171012 01:37:01.319839 84166 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:37687
I171012 01:37:01.321488 84000 gossip/server.go:232  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:46079}
I171012 01:37:01.328802 82779 storage/stores.go:332  [n?] read 0 node addresses from persistent storage
I171012 01:37:01.329060 82779 server/node.go:617  [n?] connecting to gossip network to verify cluster ID...
I171012 01:37:01.329197 82779 server/node.go:642  [n?] node connected via gossip and verified as part of cluster "05078b23-e5fe-4577-b70d-3143ce805e42"
I171012 01:37:01.330480 84201 storage/stores.go:351  [n?] wrote 1 node addresses to persistent storage
I171012 01:37:01.330747 84201 storage/stores.go:351  [n?] wrote 2 node addresses to persistent storage
I171012 01:37:01.332546 84001 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:01.337912 84203 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:01.342656 82779 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:01.349616 82779 server/node.go:322  [n?] new node allocated ID 3
I171012 01:37:01.350182 82779 gossip/gossip.go:331  [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:46079" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:3 > 
I171012 01:37:01.350837 82779 server/node.go:400  [n3] node=3: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>]
I171012 01:37:01.351280 82779 server/node.go:414  [n3] node=3: started with [] engine(s) and attributes []
I171012 01:37:01.353583 82779 sql/executor.go:408  [n3] creating distSQLPlanner with address {tcp 127.0.0.1:46079}
I171012 01:37:01.359087 83966 storage/stores.go:351  [n1] wrote 2 node addresses to persistent storage
I171012 01:37:01.363967 84213 storage/stores.go:351  [n2] wrote 2 node addresses to persistent storage
I171012 01:37:01.399498 84227 server/node.go:598  [n3] bootstrapped store [n3,s3]
I171012 01:37:01.449290 82779 server/server.go:987  [n3] starting https server at 127.0.0.1:42311
I171012 01:37:01.449443 82779 server/server.go:988  [n3] starting grpc/postgres server at 127.0.0.1:46079
I171012 01:37:01.449482 82779 server/server.go:989  [n3] advertising CockroachDB node at 127.0.0.1:46079
W171012 01:37:01.449754 82779 sql/jobs/registry.go:167  [n3] unable to get node liveness: node not in the liveness table
I171012 01:37:01.461847 82779 server/server.go:1131  [n3] done ensuring all necessary migrations have run
I171012 01:37:01.462007 82779 server/server.go:1133  [n3] serving sql connections
I171012 01:37:01.503692 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r12/1:/Table/1{5-6}] generated preemptive snapshot d30b59cd at index 19
I171012 01:37:01.532441 84371 sql/event_log.go:103  [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:46079} Attrs: Locality: ServerVersion:1.1-3} ClusterID:05078b23-e5fe-4577-b70d-3143ce805e42 StartedAt:1507772221351018558 LastUp:1507772221351018558}
W171012 01:37:01.582707 82779 server/status/runtime.go:109  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I171012 01:37:01.582913 83302 storage/store.go:3506  [replicate,n1,s1,r12/1:/Table/1{5-6}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 5ms
I171012 01:37:01.584108 84368 storage/replica_raftstorage.go:732  [n3,s3,r12/?:{-}] applying preemptive snapshot at index 19 (id=d30b59cd, encoded size=3623, 1 rocksdb batches, 9 log entries)
I171012 01:37:01.586870 84368 storage/replica_raftstorage.go:738  [n3,s3,r12/?:/Table/1{5-6}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
W171012 01:37:01.587318 82779 gossip/gossip.go:1277  [n?] no incoming or outgoing connections
I171012 01:37:01.591352 83302 storage/replica_command.go:3736  [replicate,n1,s1,r12/1:/Table/1{5-6}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r12:/Table/1{5-6} [(n1,s1):1, next=2]
I171012 01:37:01.609068 82779 server/config.go:507  [n?] 1 storage engine initialized
I171012 01:37:01.609370 82779 server/config.go:510  [n?] RocksDB cache size: 128 MiB
I171012 01:37:01.609416 82779 server/config.go:510  [n?] store 0: in-memory, size 0 B
I171012 01:37:01.609593 82779 server/server.go:884  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I171012 01:37:01.624224 84482 storage/replica.go:3071  [n1,s1,r12/1:/Table/1{5-6}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171012 01:37:01.630650 83195 storage/replica_raftstorage.go:526  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] generated preemptive snapshot 0cacaaf6 at index 17
I171012 01:37:01.642996 84504 storage/raft_transport.go:455  [n3] raft transport stream to node 1 established
I171012 01:37:01.688195 84379 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:37687
I171012 01:37:01.689846 84432 gossip/server.go:232  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:52618}
I171012 01:37:01.695769 82779 storage/stores.go:332  [n?] read 0 node addresses from persistent storage
I171012 01:37:01.695988 82779 server/node.go:617  [n?] connecting to gossip network to verify cluster ID...
I171012 01:37:01.697701 84491 storage/stores.go:351  [n?] wrote 1 node addresses to persistent storage
I171012 01:37:01.698049 84491 storage/stores.go:351  [n?] wrote 2 node addresses to persistent storage
I171012 01:37:01.698345 84491 storage/stores.go:351  [n?] wrote 3 node addresses to persistent storage
I171012 01:37:01.699771 82779 server/node.go:642  [n?] node connected via gossip and verified as part of cluster "05078b23-e5fe-4577-b70d-3143ce805e42"
I171012 01:37:01.701876 84526 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:01.709791 84525 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:01.711966 83195 storage/store.go:3506  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 7, rate-limit: 8.0 MiB/sec, 3ms
I171012 01:37:01.717015 84532 storage/replica_raftstorage.go:732  [n2,s2,r6/?:{-}] applying preemptive snapshot at index 17 (id=0cacaaf6, encoded size=4688, 1 rocksdb batches, 7 log entries)
I171012 01:37:01.722800 84532 storage/replica_raftstorage.go:738  [n2,s2,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=4ms commit=0ms]
I171012 01:37:01.728185 82779 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:01.736042 83195 storage/replica_command.go:3736  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, next=2]
I171012 01:37:01.744335 82779 server/node.go:322  [n?] new node allocated ID 4
I171012 01:37:01.744964 82779 gossip/gossip.go:331  [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:52618" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:3 > 
I171012 01:37:01.745720 82779 server/node.go:400  [n4] node=4: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>]
I171012 01:37:01.746208 82779 server/node.go:414  [n4] node=4: started with [] engine(s) and attributes []
I171012 01:37:01.754027 84580 storage/stores.go:351  [n1] wrote 3 node addresses to persistent storage
I171012 01:37:01.756802 84581 storage/stores.go:351  [n3] wrote 3 node addresses to persistent storage
I171012 01:37:01.759029 82779 sql/executor.go:408  [n4] creating distSQLPlanner with address {tcp 127.0.0.1:52618}
I171012 01:37:01.781454 82779 server/server.go:987  [n4] starting https server at 127.0.0.1:34827
I171012 01:37:01.781582 82779 server/server.go:988  [n4] starting grpc/postgres server at 127.0.0.1:52618
I171012 01:37:01.781634 82779 server/server.go:989  [n4] advertising CockroachDB node at 127.0.0.1:52618
W171012 01:37:01.781929 82779 sql/jobs/registry.go:167  [n4] unable to get node liveness: node not in the liveness table
I171012 01:37:01.783779 84539 storage/stores.go:351  [n2] wrote 3 node addresses to persistent storage
I171012 01:37:01.808456 84593 storage/replica.go:3071  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171012 01:37:01.810664 82779 server/server.go:1131  [n4] done ensuring all necessary migrations have run
I171012 01:37:01.810836 82779 server/server.go:1133  [n4] serving sql connections
I171012 01:37:01.812559 84474 server/node.go:598  [n4] bootstrapped store [n4,s4]
I171012 01:37:01.823343 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot 1e2e3ca5 at index 17
I171012 01:37:01.827998 83302 storage/store.go:3506  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n3,s3):?: kv pairs: 10, log entries: 7, rate-limit: 8.0 MiB/sec, 4ms
I171012 01:37:01.838253 84754 storage/replica_raftstorage.go:732  [n3,s3,r14/?:{-}] applying preemptive snapshot at index 17 (id=1e2e3ca5, encoded size=2858, 1 rocksdb batches, 7 log entries)
I171012 01:37:01.839379 84717 storage/raft_transport.go:455  [n2] raft transport stream to node 1 established
I171012 01:37:01.854213 84754 storage/replica_raftstorage.go:738  [n3,s3,r14/?:/Table/1{7-8}] applied preemptive snapshot in 16ms [clear=0ms batch=0ms entries=15ms commit=0ms]
I171012 01:37:01.872383 83302 storage/replica_command.go:3736  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2]
W171012 01:37:01.908339 82779 server/status/runtime.go:109  Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
W171012 01:37:01.912855 82779 gossip/gossip.go:1277  [n?] no incoming or outgoing connections
I171012 01:37:01.914028 84347 storage/store.go:2165  [replicaGC,n3,s3,r14/?:/Table/1{7-8}] removing replica
I171012 01:37:01.915083 84347 storage/replica.go:763  [replicaGC,n3,s3,r14/?:/Table/1{7-8}] removed 9 (0+9) keys in 0ms [clear=0ms commit=0ms]
I171012 01:37:01.915601 82779 server/config.go:507  [n?] 1 storage engine initialized
I171012 01:37:01.915739 82779 server/config.go:510  [n?] RocksDB cache size: 128 MiB
I171012 01:37:01.915771 82779 server/config.go:510  [n?] store 0: in-memory, size 0 B
I171012 01:37:01.915923 82779 server/server.go:884  [n?] no stores bootstrapped and --join flag specified, awaiting init command.
I171012 01:37:01.938110 84456 sql/event_log.go:103  [n4] Event: "node_join", target: 4, info: {Descriptor:{NodeID:4 Address:{NetworkField:tcp AddressField:127.0.0.1:52618} Attrs: Locality: ServerVersion:1.1-3} ClusterID:05078b23-e5fe-4577-b70d-3143ce805e42 StartedAt:1507772221745952357 LastUp:1507772221745952357}
I171012 01:37:01.968358 83302 storage/replica_command.go:3736  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, next=2]
I171012 01:37:02.011200 84851 storage/replica.go:3071  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171012 01:37:02.028860 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot 23701fbb at index 74
I171012 01:37:02.040295 83198 storage/replica_raftstorage.go:526  [raftsnapshot,n1,s1,r14/1:/Table/1{7-8}] generated Raft snapshot 1747b6d9 at index 21
I171012 01:37:02.046044 84812 gossip/server.go:232  [n1] received initial cluster-verification connection from {tcp 127.0.0.1:50719}
I171012 01:37:02.046060 83198 storage/store.go:3506  [raftsnapshot,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n3,s3):2: kv pairs: 12, log entries: 11, rate-limit: 8.0 MiB/sec, 5ms
I171012 01:37:02.046584 84762 gossip/client.go:129  [n?] started gossip client to 127.0.0.1:37687
I171012 01:37:02.046873 84853 storage/replica_raftstorage.go:732  [n3,s3,r14/2:{-}] applying Raft snapshot at index 21 (id=1747b6d9, encoded size=4959, 1 rocksdb batches, 11 log entries)
I171012 01:37:02.050070 84853 storage/replica_raftstorage.go:738  [n3,s3,r14/2:/Table/1{7-8}] applied Raft snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I171012 01:37:02.060204 82779 storage/stores.go:332  [n?] read 0 node addresses from persistent storage
I171012 01:37:02.062769 82779 storage/stores.go:351  [n?] wrote 4 node addresses to persistent storage
I171012 01:37:02.062912 82779 server/node.go:617  [n?] connecting to gossip network to verify cluster ID...
I171012 01:37:02.063052 82779 server/node.go:642  [n?] node connected via gossip and verified as part of cluster "05078b23-e5fe-4577-b70d-3143ce805e42"
I171012 01:37:02.065851 84917 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:02.073449 84916 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:02.078163 82779 kv/dist_sender.go:355  [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I171012 01:37:02.094147 82779 server/node.go:322  [n?] new node allocated ID 5
I171012 01:37:02.095699 82779 gossip/gossip.go:331  [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:50719" > attrs:<> locality:<> ServerVersion:<major_val:1 minor_val:1 patch:0 unstable:3 > 
I171012 01:37:02.098924 82779 server/node.go:400  [n5] node=5: asynchronously bootstrapping engine(s) [<no-attributes>=<in-mem>]
I171012 01:37:02.099331 82779 server/node.go:414  [n5] node=5: started with [] engine(s) and attributes []
I171012 01:37:02.104678 82779 sql/executor.go:408  [n5] creating distSQLPlanner with address {tcp 127.0.0.1:50719}
I171012 01:37:02.105175 83302 storage/store.go:3506  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n4,s4):?: kv pairs: 47, log entries: 64, rate-limit: 8.0 MiB/sec, 10ms
I171012 01:37:02.111605 84812 gossip/server.go:282  [n1] refusing gossip from node 5 (max 3 conns); forwarding to 4 ({tcp 127.0.0.1:52618})
I171012 01:37:02.113161 84762 gossip/client.go:134  [n5] closing client to node 1 (127.0.0.1:37687): received forward from node 1 to 4 (127.0.0.1:52618)
I171012 01:37:02.113549 84931 storage/stores.go:351  [n1] wrote 4 node addresses to persistent storage
I171012 01:37:02.115035 84833 gossip/gossip.go:1291  [n5] node has connected to cluster via gossip
I171012 01:37:02.115518 84833 storage/stores.go:351  [n5] wrote 4 node addresses to persistent storage
I171012 01:37:02.118182 84881 storage/stores.go:351  [n2] wrote 4 node addresses to persistent storage
I171012 01:37:02.120498 85010 storage/stores.go:351  [n4] wrote 4 node addresses to persistent storage
I171012 01:37:02.120895 84947 storage/stores.go:351  [n3] wrote 4 node addresses to persistent storage
I171012 01:37:02.145726 84861 storage/replica_raftstorage.go:732  [n4,s4,r1/?:{-}] applying preemptive snapshot at index 74 (id=23701fbb, encoded size=35507, 1 rocksdb batches, 64 log entries)
I171012 01:37:02.178796 82779 server/server.go:987  [n5] starting https server at 127.0.0.1:57447
I171012 01:37:02.179001 82779 server/server.go:988  [n5] starting grpc/postgres server at 127.0.0.1:50719
I171012 01:37:02.179070 82779 server/server.go:989  [n5] advertising CockroachDB node at 127.0.0.1:50719
W171012 01:37:02.179436 82779 sql/jobs/registry.go:167  [n5] unable to get node liveness: node not in the liveness table
I171012 01:37:02.214485 84794 server/node.go:598  [n5] bootstrapped store [n5,s5]
I171012 01:37:02.216353 82779 server/server.go:1131  [n5] done ensuring all necessary migrations have run
I171012 01:37:02.216489 82779 server/server.go:1133  [n5] serving sql connections
I171012 01:37:02.217621 84861 storage/replica_raftstorage.go:738  [n4,s4,r1/?:/{Min-System/}] applied preemptive snapshot in 72ms [clear=0ms batch=0ms entries=47ms commit=4ms]
I171012 01:37:02.235367 83302 storage/replica_command.go:3736  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n4,s4):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2]
I171012 01:37:02.260749 84875 gossip/client.go:129  [n5] started gossip client to 127.0.0.1:52618
I171012 01:37:02.278367 84585 storage/store.go:2165  [replicaGC,n4,s4,r1/?:/{Min-System/}] removing replica
I171012 01:37:02.279728 84585 storage/replica.go:763  [replicaGC,n4,s4,r1/?:/{Min-System/}] removed 27 (17+10) keys in 1ms [clear=0ms commit=1ms]
I171012 01:37:02.306825 83302 storage/replica_command.go:3736  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n4,s4):2): read existing descriptor r1:/{Min-System/} [(n1,s1):1, next=2]
I171012 01:37:02.332978 84886 sql/event_log.go:103  [n5] Event: "node_join", target: 5, info: {Descriptor:{NodeID:5 Address:{NetworkField:tcp AddressField:127.0.0.1:50719} Attrs: Locality: ServerVersion:1.1-3} ClusterID:05078b23-e5fe-4577-b70d-3143ce805e42 StartedAt:1507772222099129197 LastUp:1507772222099129197}
I171012 01:37:02.363702 85131 storage/replica.go:3071  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n4,s4):2): updated=[(n1,s1):1 (n4,s4):2] next=3
I171012 01:37:02.372933 83195 storage/replica_raftstorage.go:526  [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot 468e1fc8 at index 22
I171012 01:37:02.402807 85102 storage/raft_transport.go:455  [n4] raft transport stream to node 1 established
I171012 01:37:02.408382 83198 storage/replica_raftstorage.go:526  [raftsnapshot,n1,s1,r1/1:/{Min-System/}] generated Raft snapshot f43c760d at index 78
I171012 01:37:02.414819 83198 storage/store.go:3506  [raftsnapshot,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n4,s4):2: kv pairs: 51, log entries: 68, rate-limit: 8.0 MiB/sec, 5ms
I171012 01:37:02.418250 85173 storage/replica_raftstorage.go:732  [n4,s4,r1/2:{-}] applying Raft snapshot at index 78 (id=f43c760d, encoded size=38276, 1 rocksdb batches, 68 log entries)
I171012 01:37:02.443866 83195 storage/store.go:3506  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n5,s5):?: kv pairs: 11, log entries: 12, rate-limit: 8.0 MiB/sec, 7ms
I171012 01:37:02.446249 85173 storage/replica_raftstorage.go:738  [n4,s4,r1/2:/{Min-System/}] applied Raft snapshot in 28ms [clear=0ms batch=0ms entries=26ms commit=0ms]
I171012 01:37:02.446297 85192 storage/replica_raftstorage.go:732  [n5,s5,r14/?:{-}] applying preemptive snapshot at index 22 (id=468e1fc8, encoded size=4651, 1 rocksdb batches, 12 log entries)
I171012 01:37:02.455961 85192 storage/replica_raftstorage.go:738  [n5,s5,r14/?:/Table/1{7-8}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=8ms commit=0ms]
I171012 01:37:02.467084 83195 storage/replica_command.go:3736  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n5,s5):3): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, (n3,s3):2, next=3]
I171012 01:37:02.508882 85232 storage/replica.go:3071  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n5,s5):3): updated=[(n1,s1):1 (n3,s3):2 (n5,s5):3] next=4
I171012 01:37:02.517693 83195 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=4.20, overfull-threshold=5
I171012 01:37:02.520486 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r11/1:/Table/1{4-5}] generated preemptive snapshot 0ebe0fe6 at index 19
I171012 01:37:02.524209 83302 storage/store.go:3506  [replicate,n1,s1,r11/1:/Table/1{4-5}] streamed snapshot to (n4,s4):?: kv pairs: 10, log entries: 9, rate-limit: 8.0 MiB/sec, 3ms
I171012 01:37:02.525545 85175 storage/replica_raftstorage.go:732  [n4,s4,r11/?:{-}] applying preemptive snapshot at index 19 (id=0ebe0fe6, encoded size=3620, 1 rocksdb batches, 9 log entries)
I171012 01:37:02.530320 85175 storage/replica_raftstorage.go:738  [n4,s4,r11/?:/Table/1{4-5}] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I171012 01:37:02.540510 83302 storage/replica_command.go:3736  [replicate,n1,s1,r11/1:/Table/1{4-5}] change replicas (ADD_REPLICA (n4,s4):2): read existing descriptor r11:/Table/1{4-5} [(n1,s1):1, next=2]
I171012 01:37:02.543314 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=4.40, overfull-threshold=5
I171012 01:37:02.545591 85285 storage/raft_transport.go:455  [n5] raft transport stream to node 1 established
I171012 01:37:02.602695 85314 storage/replica.go:3071  [n1,s1,r11/1:/Table/1{4-5}] proposing ADD_REPLICA((n4,s4):2): updated=[(n1,s1):1 (n4,s4):2] next=3
I171012 01:37:02.612430 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=4.40, overfull-threshold=5
I171012 01:37:02.615027 83195 storage/replica_raftstorage.go:526  [replicate,n1,s1,r1/1:/{Min-System/}] generated preemptive snapshot fbc4c944 at index 82
I171012 01:37:02.621968 83195 storage/store.go:3506  [replicate,n1,s1,r1/1:/{Min-System/}] streamed snapshot to (n5,s5):?: kv pairs: 53, log entries: 72, rate-limit: 8.0 MiB/sec, 6ms
I171012 01:37:02.625672 85289 storage/replica_raftstorage.go:732  [n5,s5,r1/?:{-}] applying preemptive snapshot at index 82 (id=fbc4c944, encoded size=38867, 1 rocksdb batches, 72 log entries)
I171012 01:37:02.650733 85289 storage/replica_raftstorage.go:738  [n5,s5,r1/?:/{Min-System/}] applied preemptive snapshot in 24ms [clear=0ms batch=0ms entries=16ms commit=2ms]
I171012 01:37:02.655893 83195 storage/replica_command.go:3736  [replicate,n1,s1,r1/1:/{Min-System/}] change replicas (ADD_REPLICA (n5,s5):3): read existing descriptor r1:/{Min-System/} [(n1,s1):1, (n4,s4):2, next=3]
I171012 01:37:02.690625 85363 storage/replica.go:3071  [n1,s1,r1/1:/{Min-System/}] proposing ADD_REPLICA((n5,s5):3): updated=[(n1,s1):1 (n4,s4):2 (n5,s5):3] next=4
I171012 01:37:02.705244 83195 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=4.60, overfull-threshold=5
I171012 01:37:02.707848 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=4.60, overfull-threshold=5
I171012 01:37:02.712905 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r10/1:/Table/1{3-4}] generated preemptive snapshot 09b23a74 at index 47
I171012 01:37:02.724607 83302 storage/store.go:3506  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n3,s3):?: kv pairs: 136, log entries: 37, rate-limit: 8.0 MiB/sec, 11ms
I171012 01:37:02.754819 85321 storage/replica_raftstorage.go:732  [n3,s3,r10/?:{-}] applying preemptive snapshot at index 47 (id=09b23a74, encoded size=50116, 1 rocksdb batches, 37 log entries)
I171012 01:37:02.796979 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=4.60, overfull-threshold=5
I171012 01:37:02.804384 85321 storage/replica_raftstorage.go:738  [n3,s3,r10/?:/Table/1{3-4}] applied preemptive snapshot in 49ms [clear=0ms batch=0ms entries=48ms commit=1ms]
I171012 01:37:02.807738 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=4.60, overfull-threshold=5
I171012 01:37:02.810941 83302 storage/replica_command.go:3736  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, next=2]
I171012 01:37:02.857019 85369 storage/replica.go:3071  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171012 01:37:02.863308 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] generated preemptive snapshot f06a0ac1 at index 21
I171012 01:37:02.879903 83302 storage/store.go:3506  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n4,s4):?: kv pairs: 9, log entries: 11, rate-limit: 8.0 MiB/sec, 16ms
I171012 01:37:02.885131 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=4.80, overfull-threshold=6
I171012 01:37:02.885986 85399 storage/replica_raftstorage.go:732  [n4,s4,r2/?:{-}] applying preemptive snapshot at index 21 (id=f06a0ac1, encoded size=53509, 1 rocksdb batches, 11 log entries)
I171012 01:37:02.896782 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=4.80, overfull-threshold=6
I171012 01:37:02.899234 85399 storage/replica_raftstorage.go:738  [n4,s4,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 13ms [clear=0ms batch=0ms entries=12ms commit=1ms]
I171012 01:37:02.906795 83302 storage/replica_command.go:3736  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n4,s4):2): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, next=2]
I171012 01:37:02.955281 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.00, overfull-threshold=6
I171012 01:37:02.955664 85427 storage/replica.go:3071  [n1,s1,r2/1:/System/{-NodeLive…}] proposing ADD_REPLICA((n4,s4):2): updated=[(n1,s1):1 (n4,s4):2] next=3
I171012 01:37:02.965914 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.00, overfull-threshold=6
I171012 01:37:02.971358 83195 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.00, overfull-threshold=6
I171012 01:37:02.972834 83195 storage/replica_raftstorage.go:526  [replicate,n1,s1,r14/1:/Table/1{7-8}] generated preemptive snapshot c31c537d at index 25
I171012 01:37:02.983713 83195 storage/store.go:3506  [replicate,n1,s1,r14/1:/Table/1{7-8}] streamed snapshot to (n2,s2):?: kv pairs: 12, log entries: 15, rate-limit: 2.0 MiB/sec, 10ms
I171012 01:37:02.988357 85460 storage/replica_raftstorage.go:732  [n2,s2,r14/?:{-}] applying preemptive snapshot at index 25 (id=c31c537d, encoded size=5911, 1 rocksdb batches, 15 log entries)
I171012 01:37:02.991366 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.00, overfull-threshold=6
I171012 01:37:02.997480 85460 storage/replica_raftstorage.go:738  [n2,s2,r14/?:/Table/1{7-8}] applied preemptive snapshot in 9ms [clear=0ms batch=0ms entries=8ms commit=0ms]
I171012 01:37:03.011210 83195 storage/replica_command.go:3736  [replicate,n1,s1,r14/1:/Table/1{7-8}] change replicas (ADD_REPLICA (n2,s2):4): read existing descriptor r14:/Table/1{7-8} [(n1,s1):1, (n3,s3):2, (n5,s5):3, next=4]
I171012 01:37:03.026025 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.20, overfull-threshold=6
I171012 01:37:03.055979 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r14/1:/Table/1{7-8}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.20, overfull-threshold=6
I171012 01:37:03.072687 85475 storage/replica.go:3071  [n1,s1,r14/1:/Table/1{7-8}] proposing ADD_REPLICA((n2,s2):4): updated=[(n1,s1):1 (n3,s3):2 (n5,s5):3 (n2,s2):4] next=5
I171012 01:37:03.094850 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r5/1:/System/ts{d-e}] generated preemptive snapshot 70e6b68a at index 28
I171012 01:37:03.094996 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.20, overfull-threshold=6
I171012 01:37:03.142952 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.20, overfull-threshold=6
I171012 01:37:03.162120 83302 storage/store.go:3506  [replicate,n1,s1,r5/1:/System/ts{d-e}] streamed snapshot to (n3,s3):?: kv pairs: 1421, log entries: 3, rate-limit: 8.0 MiB/sec, 66ms
I171012 01:37:03.164871 85506 storage/replica_raftstorage.go:732  [n3,s3,r5/?:{-}] applying preemptive snapshot at index 28 (id=70e6b68a, encoded size=201878, 1 rocksdb batches, 3 log entries)
I171012 01:37:03.200543 85506 storage/replica_raftstorage.go:738  [n3,s3,r5/?:/System/ts{d-e}] applied preemptive snapshot in 35ms [clear=3ms batch=0ms entries=30ms commit=1ms]
I171012 01:37:03.204516 83302 storage/replica_command.go:3736  [replicate,n1,s1,r5/1:/System/ts{d-e}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r5:/System/ts{d-e} [(n1,s1):1, next=2]
I171012 01:37:03.210882 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.40, overfull-threshold=6
I171012 01:37:03.279140 85467 storage/replica.go:3071  [n1,s1,r5/1:/System/ts{d-e}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171012 01:37:03.310134 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] generated preemptive snapshot 72bc7360 at index 21
I171012 01:37:03.319927 83302 storage/store.go:3506  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n5,s5):?: kv pairs: 11, log entries: 11, rate-limit: 8.0 MiB/sec, 9ms
I171012 01:37:03.322243 85554 storage/replica_raftstorage.go:732  [n5,s5,r6/?:{-}] applying preemptive snapshot at index 21 (id=72bc7360, encoded size=6094, 1 rocksdb batches, 11 log entries)
I171012 01:37:03.325688 85554 storage/replica_raftstorage.go:738  [n5,s5,r6/?:/{System/tse-Table/System…}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I171012 01:37:03.341404 83302 storage/replica_command.go:3736  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n5,s5):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n2,s2):2, next=3]
I171012 01:37:03.345601 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.40, overfull-threshold=6
I171012 01:37:03.406464 85081 storage/store.go:2165  [replicaGC,n5,s5,r6/?:/{System/tse-Table/System…}] removing replica
I171012 01:37:03.408058 85081 storage/replica.go:763  [replicaGC,n5,s5,r6/?:/{System/tse-Table/System…}] removed 9 (0+9) keys in 0ms [clear=0ms commit=0ms]
I171012 01:37:03.448182 83302 storage/replica_command.go:3736  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] change replicas (ADD_REPLICA (n5,s5):3): read existing descriptor r6:/{System/tse-Table/SystemConfigSpan/Start} [(n1,s1):1, (n2,s2):2, next=3]
I171012 01:37:03.467182 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.40, overfull-threshold=6
I171012 01:37:03.523354 85310 storage/replica.go:3071  [n1,s1,r6/1:/{System/tse-Table/System…}] proposing ADD_REPLICA((n5,s5):3): updated=[(n1,s1):1 (n2,s2):2 (n5,s5):3] next=4
I171012 01:37:03.534112 83302 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.40, overfull-threshold=6
I171012 01:37:03.540688 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.40, overfull-threshold=6
I171012 01:37:03.542514 83195 storage/replica_raftstorage.go:526  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated preemptive snapshot 0a252d51 at index 46
I171012 01:37:03.549265 83195 storage/store.go:3506  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n4,s4):?: kv pairs: 39, log entries: 7, rate-limit: 8.0 MiB/sec, 6ms
I171012 01:37:03.565024 85618 storage/replica_raftstorage.go:732  [n4,s4,r4/?:{-}] applying preemptive snapshot at index 46 (id=0a252d51, encoded size=131852, 1 rocksdb batches, 7 log entries)
I171012 01:37:03.590564 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.40, overfull-threshold=6
I171012 01:37:03.597557 83198 storage/replica_raftstorage.go:526  [raftsnapshot,n1,s1,r6/1:/{System/tse-Table/System…}] generated Raft snapshot 548359d4 at index 26
I171012 01:37:03.606887 83198 storage/store.go:3506  [raftsnapshot,n1,s1,r6/1:/{System/tse-Table/System…}] streamed snapshot to (n5,s5):3: kv pairs: 12, log entries: 16, rate-limit: 8.0 MiB/sec, 9ms
I171012 01:37:03.607686 85551 storage/replica_raftstorage.go:732  [n5,s5,r6/3:{-}] applying Raft snapshot at index 26 (id=548359d4, encoded size=8011, 1 rocksdb batches, 16 log entries)
I171012 01:37:03.609759 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.40, overfull-threshold=6
I171012 01:37:03.614305 85551 storage/replica_raftstorage.go:738  [n5,s5,r6/3:/{System/tse-Table/System…}] applied Raft snapshot in 5ms [clear=0ms batch=0ms entries=4ms commit=0ms]
I171012 01:37:03.617542 85618 storage/replica_raftstorage.go:738  [n4,s4,r4/?:/System/{NodeLive…-tsd}] applied preemptive snapshot in 52ms [clear=0ms batch=0ms entries=49ms commit=2ms]
I171012 01:37:03.624195 83195 storage/replica_command.go:3736  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n4,s4):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2]
I171012 01:37:03.643409 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.80, overfull-threshold=7
I171012 01:37:03.668408 84585 storage/store.go:2165  [replicaGC,n4,s4,r4/?:/System/{NodeLive…-tsd}] removing replica
I171012 01:37:03.670653 84585 storage/replica.go:763  [replicaGC,n4,s4,r4/?:/System/{NodeLive…-tsd}] removed 27 (19+8) keys in 2ms [clear=0ms commit=2ms]
I171012 01:37:03.677105 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.60, overfull-threshold=6
I171012 01:37:03.710083 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.60, overfull-threshold=6
I171012 01:37:03.712227 83195 storage/replica_command.go:3736  [replicate,n1,s1,r4/1:/System/{NodeLive…-tsd}] change replicas (ADD_REPLICA (n4,s4):2): read existing descriptor r4:/System/{NodeLivenessMax-tsd} [(n1,s1):1, next=2]
I171012 01:37:03.754530 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.60, overfull-threshold=6
I171012 01:37:03.769445 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.60, overfull-threshold=6
I171012 01:37:03.770081 85661 storage/replica.go:3071  [n1,s1,r4/1:/System/{NodeLive…-tsd}] proposing ADD_REPLICA((n4,s4):2): updated=[(n1,s1):1 (n4,s4):2] next=3
I171012 01:37:03.787608 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r15/1:/Table/1{8-9}] generated preemptive snapshot 2855925f at index 16
I171012 01:37:03.795595 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.60, overfull-threshold=6
I171012 01:37:03.797236 83302 storage/store.go:3506  [replicate,n1,s1,r15/1:/Table/1{8-9}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 6, rate-limit: 8.0 MiB/sec, 6ms
I171012 01:37:03.804281 83198 storage/replica_raftstorage.go:526  [raftsnapshot,n1,s1,r4/1:/System/{NodeLive…-tsd}] generated Raft snapshot 8a2ee091 at index 50
I171012 01:37:03.812456 83198 storage/store.go:3506  [raftsnapshot,n1,s1,r4/1:/System/{NodeLive…-tsd}] streamed snapshot to (n4,s4):2: kv pairs: 41, log entries: 11, rate-limit: 8.0 MiB/sec, 7ms
I171012 01:37:03.812600 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=5.60, overfull-threshold=6
I171012 01:37:03.814111 85590 storage/replica_raftstorage.go:732  [n2,s2,r15/?:{-}] applying preemptive snapshot at index 16 (id=2855925f, encoded size=2739, 1 rocksdb batches, 6 log entries)
I171012 01:37:03.816837 85590 storage/replica_raftstorage.go:738  [n2,s2,r15/?:/Table/1{8-9}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I171012 01:37:03.817533 85592 storage/replica_raftstorage.go:732  [n4,s4,r4/2:{-}] applying Raft snapshot at index 50 (id=8a2ee091, encoded size=134257, 1 rocksdb batches, 11 log entries)
I171012 01:37:03.826443 83302 storage/replica_command.go:3736  [replicate,n1,s1,r15/1:/Table/1{8-9}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r15:/Table/1{8-9} [(n1,s1):1, next=2]
I171012 01:37:03.835135 85592 storage/replica_raftstorage.go:738  [n4,s4,r4/2:/System/{NodeLive…-tsd}] applied Raft snapshot in 17ms [clear=0ms batch=0ms entries=16ms commit=1ms]
I171012 01:37:03.842386 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.00, overfull-threshold=7
I171012 01:37:03.874739 85698 storage/replica.go:3071  [n1,s1,r15/1:/Table/1{8-9}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171012 01:37:03.884237 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r8/1:/Table/1{1-2}] generated preemptive snapshot ba8de8f7 at index 16
I171012 01:37:03.904118 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.00, overfull-threshold=7
I171012 01:37:03.909865 83302 storage/store.go:3506  [replicate,n1,s1,r8/1:/Table/1{1-2}] streamed snapshot to (n2,s2):?: kv pairs: 10, log entries: 6, rate-limit: 8.0 MiB/sec, 17ms
I171012 01:37:03.911510 85688 storage/replica_raftstorage.go:732  [n2,s2,r8/?:{-}] applying preemptive snapshot at index 16 (id=ba8de8f7, encoded size=4056, 1 rocksdb batches, 6 log entries)
I171012 01:37:03.914213 85688 storage/replica_raftstorage.go:738  [n2,s2,r8/?:/Table/1{1-2}] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I171012 01:37:03.923925 83302 storage/replica_command.go:3736  [replicate,n1,s1,r8/1:/Table/1{1-2}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r8:/Table/1{1-2} [(n1,s1):1, next=2]
I171012 01:37:03.964904 85738 storage/replica.go:3071  [n1,s1,r8/1:/Table/1{1-2}] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I171012 01:37:03.966731 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.20, overfull-threshold=7
I171012 01:37:03.969237 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.20, overfull-threshold=7
I171012 01:37:03.976329 83195 storage/replica_raftstorage.go:526  [replicate,n1,s1,r10/1:/Table/1{3-4}] generated preemptive snapshot d51f9200 at index 67
I171012 01:37:03.983517 83195 storage/store.go:3506  [replicate,n1,s1,r10/1:/Table/1{3-4}] streamed snapshot to (n2,s2):?: kv pairs: 182, log entries: 57, rate-limit: 8.0 MiB/sec, 6ms
I171012 01:37:03.986053 85721 storage/replica_raftstorage.go:732  [n2,s2,r10/?:{-}] applying preemptive snapshot at index 67 (id=d51f9200, encoded size=77668, 1 rocksdb batches, 57 log entries)
I171012 01:37:03.996618 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.20, overfull-threshold=7
I171012 01:37:04.006503 85721 storage/replica_raftstorage.go:738  [n2,s2,r10/?:/Table/1{3-4}] applied preemptive snapshot in 20ms [clear=0ms batch=0ms entries=19ms commit=0ms]
I171012 01:37:04.012447 83195 storage/replica_command.go:3736  [replicate,n1,s1,r10/1:/Table/1{3-4}] change replicas (ADD_REPLICA (n2,s2):3): read existing descriptor r10:/Table/1{3-4} [(n1,s1):1, (n3,s3):2, next=3]
I171012 01:37:04.030082 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.40, overfull-threshold=7
I171012 01:37:04.053783 85750 storage/replica.go:3071  [n1,s1,r10/1:/Table/1{3-4}] proposing ADD_REPLICA((n2,s2):3): updated=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I171012 01:37:04.061248 83195 storage/allocator_scorer.go:623  [replicate,n1,s1,r10/1:/Table/1{3-4}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.40, overfull-threshold=7
I171012 01:37:04.064539 83302 storage/replica_raftstorage.go:526  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] generated preemptive snapshot 3e7ac766 at index 25
I171012 01:37:04.078767 83302 storage/store.go:3506  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] streamed snapshot to (n3,s3):?: kv pairs: 15, log entries: 15, rate-limit: 8.0 MiB/sec, 13ms
I171012 01:37:04.080894 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.40, overfull-threshold=7
I171012 01:37:04.088465 85763 storage/replica_raftstorage.go:732  [n3,s3,r3/?:{-}] applying preemptive snapshot at index 25 (id=3e7ac766, encoded size=7228, 1 rocksdb batches, 15 log entries)
I171012 01:37:04.092345 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.40, overfull-threshold=7
I171012 01:37:04.105448 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r10/1:/Table/1{3-4}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.40, overfull-threshold=7
I171012 01:37:04.108724 85763 storage/replica_raftstorage.go:738  [n3,s3,r3/?:/System/NodeLiveness{-Max}] applied preemptive snapshot in 20ms [clear=0ms batch=0ms entries=18ms commit=0ms]
I171012 01:37:04.112335 83302 storage/replica_command.go:3736  [replicate,n1,s1,r3/1:/System/NodeLiveness{-Max}] change replicas (ADD_REPLICA (n3,s3):2): read existing descriptor r3:/System/NodeLiveness{-Max} [(n1,s1):1, next=2]
I171012 01:37:04.137732 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.60, overfull-threshold=7
I171012 01:37:04.148909 85708 storage/replica.go:3071  [n1,s1,r3/1:/System/NodeLiveness{-Max}] proposing ADD_REPLICA((n3,s3):2): updated=[(n1,s1):1 (n3,s3):2] next=3
I171012 01:37:04.155894 83195 storage/replica_raftstorage.go:526  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] generated preemptive snapshot 3b087d61 at index 25
I171012 01:37:04.165589 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r10/1:/Table/1{3-4}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.60, overfull-threshold=7
I171012 01:37:04.169190 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.60, overfull-threshold=7
I171012 01:37:04.171100 83195 storage/store.go:3506  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] streamed snapshot to (n5,s5):?: kv pairs: 10, log entries: 15, rate-limit: 8.0 MiB/sec, 14ms
I171012 01:37:04.175812 85756 storage/replica_raftstorage.go:732  [n5,s5,r2/?:{-}] applying preemptive snapshot at index 25 (id=3b087d61, encoded size=54892, 1 rocksdb batches, 15 log entries)
I171012 01:37:04.189831 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r1/1:/{Min-System/}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.60, overfull-threshold=7
I171012 01:37:04.190344 85756 storage/replica_raftstorage.go:738  [n5,s5,r2/?:/System/{-NodeLive…}] applied preemptive snapshot in 14ms [clear=0ms batch=0ms entries=13ms commit=0ms]
I171012 01:37:04.205056 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r10/1:/Table/1{3-4}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.80, overfull-threshold=8
I171012 01:37:04.211799 83195 storage/replica_command.go:3736  [replicate,n1,s1,r2/1:/System/{-NodeLive…}] change replicas (ADD_REPLICA (n5,s5):3): read existing descriptor r2:/System/{-NodeLiveness} [(n1,s1):1, (n4,s4):2, next=3]
I171012 01:37:04.233328 83201 storage/allocator_scorer.go:623  [replicate,n1,s1,r6/1:/{System/tse-Table/System…}] s1: should-rebalance(ranges-overfull): rangeCount=16, mean=6.80, overfull-threshold=8
I171012 01:37:04.245140 85081 storage/store.go:2165  [replicaGC,n5,s5,r2/?:/System/{-NodeLive…}] removing replica
I171012 01:37:04.246214 85081 storage/replica.go:763  [replicaGC,n5,s5,r2/?:/System/{-NodeLive…}] removed 8 (0+8) keys in 0ms [clear=0ms commit=0ms]
I171012 01:37:04.277375 83195 storage/replica_command.go:3736  [repl

Please assign, take a look, and update the issue accordingly.

@cockroach-teamcity cockroach-teamcity added O-robot Originated from a bot. C-test-failure Broken test (automatically or manually discovered). labels Oct 12, 2017
@a-robinson
Contributor

#19180
