[2019-02-09T06:28:45,968][INFO ][o.e.c.c.RareClusterStateIT] [testDelayedMappingPropagationOnReplica] before test
[2019-02-09T06:28:45,968][INFO ][o.e.c.c.RareClusterStateIT] [testDelayedMappingPropagationOnReplica] [RareClusterStateIT#testDelayedMappingPropagationOnReplica [seed=[7F2D32832D1B9B3B:353F34C7DFCA0D8] #35]]: setting up test
[2019-02-09T06:28:45,969][INFO ][o.e.t.InternalTestCluster] [testDelayedMappingPropagationOnReplica] Setup InternalTestCluster [TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster] with seed [8965F527D4A280AA] using [0] dedicated masters, [0] (data) nodes and [0] coord only nodes (min_master_nodes are [auto-managed])
[2019-02-09T06:28:45,969][DEBUG][o.e.t.InternalTestCluster] [testDelayedMappingPropagationOnReplica] Reset test cluster with transport client ratio: [0.0]
[2019-02-09T06:28:45,969][DEBUG][o.e.t.InternalTestCluster] [testDelayedMappingPropagationOnReplica] Cluster hasn't changed - moving out - nodes: [[]] nextNodeId: [0] numSharedNodes: [0]
[2019-02-09T06:28:45,969][INFO ][o.e.c.c.RareClusterStateIT] [testDelayedMappingPropagationOnReplica] [RareClusterStateIT#testDelayedMappingPropagationOnReplica [seed=[7F2D32832D1B9B3B:353F34C7DFCA0D8] #35]]: all set up test
[2019-02-09T06:28:45,971][DEBUG][o.e.e.NodeEnvironment ] [testDelayedMappingPropagationOnReplica] using node location [[NodePath{path=/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data/nodes/0, indicesPath=/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data/nodes/0/indices, fileStore=/ (/dev/nvme0n1p2), majorDeviceNumber=259, minorDeviceNumber=2}]], local_lock_id [0]
[2019-02-09T06:28:45,972][DEBUG][o.e.e.NodeEnvironment ] [testDelayedMappingPropagationOnReplica] node data locations details: -> /tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data/nodes/0, free_space [206.6gb], usable_space [182.8gb], total_space [467.9gb], mount [/ (/dev/nvme0n1p2)], type [ext4]
[2019-02-09T06:28:45,972][INFO ][o.e.e.NodeEnvironment ] [testDelayedMappingPropagationOnReplica] heap size [7.8gb], compressed ordinary object pointers [true]
[2019-02-09T06:28:45,972][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] node name [node_t0], node ID [gm38AL3aTZmKYs3giOXGJw]
[2019-02-09T06:28:45,972][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] version[8.0.0], pid[27110], build[unknown/unknown/Unknown/Unknown], OS[Linux/4.18.0-15-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13-Ubuntu-3ubuntu3.18.10.1]
[2019-02-09T06:28:45,972][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] JVM arguments [-ea, -Dtests.seed=7F2D32832D1B9B3B, -Didea.test.cyclic.buffer.size=1048576, -Dfile.encoding=UTF-8]
[2019-02-09T06:28:45,972][DEBUG][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] using config [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/config], data [[/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data]], logs [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/logs], plugins [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/plugins]
[2019-02-09T06:28:45,972][DEBUG][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/plugins] directory does not exist.
[2019-02-09T06:28:45,972][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] no modules loaded
[2019-02-09T06:28:45,972][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.index.mapper.MockFieldFilterPlugin]
[2019-02-09T06:28:45,972][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.node.NodeMocksPlugin]
[2019-02-09T06:28:45,972][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.test.ESIntegTestCase$AssertActionNamePlugin]
[2019-02-09T06:28:45,972][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.test.ESIntegTestCase$TestSeedPlugin]
[2019-02-09T06:28:45,972][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.test.MockHttpTransport$TestPlugin]
[2019-02-09T06:28:45,972][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.test.store.MockFSIndexStore$TestPlugin]
[2019-02-09T06:28:45,972][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.transport.nio.MockNioTransportPlugin]
[2019-02-09T06:28:45,972][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [force_merge], size [1], queue size [unbounded]
[2019-02-09T06:28:45,972][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [fetch_shard_started], core [1], max [2], keep alive [5m]
[2019-02-09T06:28:45,972][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [listener], size [1], queue size [unbounded]
[2019-02-09T06:28:45,972][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [refresh], core [1], max [1], keep alive [5m]
[2019-02-09T06:28:45,972][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [generic], core [4], max [128], keep alive [30s]
[2019-02-09T06:28:45,972][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [warmer], core [1], max [1], keep alive [5m]
[2019-02-09T06:28:45,973][DEBUG][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] [testDelayedMappingPropagationOnReplica] thread pool [node_t0/search] will adjust queue by [50] when determining automatic queue size
[2019-02-09T06:28:45,973][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [search], size [2], queue size [1k]
[2019-02-09T06:28:45,973][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [flush], core [1], max [1], keep alive [5m]
[2019-02-09T06:28:45,973][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [fetch_shard_store], core [1], max [2], keep alive [5m]
[2019-02-09T06:28:45,973][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [management], core [1], max [5], keep alive [5m]
[2019-02-09T06:28:45,973][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [get], size [1], queue size [1k]
[2019-02-09T06:28:45,973][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [analyze], size [1], queue size [16]
[2019-02-09T06:28:45,973][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [write], size [1], queue size [200]
[2019-02-09T06:28:45,973][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [snapshot], core [1], max [1], keep alive [5m]
[2019-02-09T06:28:45,973][DEBUG][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] [testDelayedMappingPropagationOnReplica] thread pool [node_t0/search_throttled] will adjust queue by [50] when determining automatic queue size
[2019-02-09T06:28:45,973][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [search_throttled], size [1], queue size [100]
[2019-02-09T06:28:45,973][DEBUG][o.e.s.ScriptService ] [testDelayedMappingPropagationOnReplica] using script cache with max_size [100], expire [0s]
[2019-02-09T06:28:45,975][DEBUG][o.e.m.j.JvmGcMonitorService] [testDelayedMappingPropagationOnReplica] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
[2019-02-09T06:28:45,975][DEBUG][o.e.m.o.OsService ] [testDelayedMappingPropagationOnReplica] using refresh_interval [1s]
[2019-02-09T06:28:45,975][DEBUG][o.e.m.p.ProcessService ] [testDelayedMappingPropagationOnReplica] using refresh_interval [1s]
[2019-02-09T06:28:45,976][DEBUG][o.e.m.j.JvmService ] [testDelayedMappingPropagationOnReplica] using refresh_interval [1s]
[2019-02-09T06:28:45,976][DEBUG][o.e.m.f.FsService ] [testDelayedMappingPropagationOnReplica] using refresh_interval [1s]
[2019-02-09T06:28:45,976][DEBUG][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] [testDelayedMappingPropagationOnReplica] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2019-02-09T06:28:45,976][DEBUG][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] [testDelayedMappingPropagationOnReplica] using [cluster_concurrent_rebalance] with [2]
[2019-02-09T06:28:45,976][DEBUG][o.e.c.r.a.d.ThrottlingAllocationDecider] [testDelayedMappingPropagationOnReplica] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
[2019-02-09T06:28:45,977][DEBUG][o.e.i.IndicesQueryCache ] [testDelayedMappingPropagationOnReplica] using [node] query cache with size [804.5mb] max filter count [10000]
[2019-02-09T06:28:45,977][DEBUG][o.e.i.IndexingMemoryController] [testDelayedMappingPropagationOnReplica] using indexing buffer size [804.5mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
[2019-02-09T06:28:45,979][DEBUG][o.e.g.GatewayMetaState ] [testDelayedMappingPropagationOnReplica] took 0s to load state
[2019-02-09T06:28:45,979][DEBUG][o.e.d.SettingsBasedSeedHostsProvider] [testDelayedMappingPropagationOnReplica] using initial hosts []
[2019-02-09T06:28:45,979][INFO ][o.e.d.DiscoveryModule ] [testDelayedMappingPropagationOnReplica] using discovery type [zen] and seed hosts providers [settings, file]
[2019-02-09T06:28:46,010][DEBUG][o.e.i.r.RecoverySettings ] [testDelayedMappingPropagationOnReplica] using max_bytes_per_sec[9mb]
[2019-02-09T06:28:46,016][DEBUG][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] initializing HTTP handlers ...
[2019-02-09T06:28:46,017][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] initialized
[2019-02-09T06:28:46,019][DEBUG][o.e.e.NodeEnvironment ] [testDelayedMappingPropagationOnReplica] using node location [[NodePath{path=/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data/nodes/1, indicesPath=/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data/nodes/1/indices, fileStore=/ (/dev/nvme0n1p2), majorDeviceNumber=259, minorDeviceNumber=2}]], local_lock_id [1]
[2019-02-09T06:28:46,019][DEBUG][o.e.e.NodeEnvironment ] [testDelayedMappingPropagationOnReplica] node data locations details: -> /tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data/nodes/1, free_space [206.6gb], usable_space [182.8gb], total_space [467.9gb], mount [/ (/dev/nvme0n1p2)], type [ext4]
[2019-02-09T06:28:46,019][INFO ][o.e.e.NodeEnvironment ] [testDelayedMappingPropagationOnReplica] heap size [7.8gb], compressed ordinary object pointers [true]
[2019-02-09T06:28:46,020][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] node name [node_t1], node ID [18eWrETQRQm3w1D52tHH-A]
[2019-02-09T06:28:46,020][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] version[8.0.0], pid[27110], build[unknown/unknown/Unknown/Unknown], OS[Linux/4.18.0-15-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13-Ubuntu-3ubuntu3.18.10.1]
[2019-02-09T06:28:46,020][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] JVM arguments [-ea, -Dtests.seed=7F2D32832D1B9B3B, -Didea.test.cyclic.buffer.size=1048576, -Dfile.encoding=UTF-8]
[2019-02-09T06:28:46,020][DEBUG][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] using config [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/config], data [[/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data]], logs [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/logs], plugins [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/plugins]
[2019-02-09T06:28:46,020][DEBUG][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/plugins] directory does not exist.
[2019-02-09T06:28:46,020][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] no modules loaded
[2019-02-09T06:28:46,020][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.index.mapper.MockFieldFilterPlugin]
[2019-02-09T06:28:46,020][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.node.NodeMocksPlugin]
[2019-02-09T06:28:46,020][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.test.ESIntegTestCase$AssertActionNamePlugin]
[2019-02-09T06:28:46,020][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.test.ESIntegTestCase$TestSeedPlugin]
[2019-02-09T06:28:46,020][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.test.MockHttpTransport$TestPlugin]
[2019-02-09T06:28:46,020][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.test.store.MockFSIndexStore$TestPlugin]
[2019-02-09T06:28:46,020][INFO ][o.e.p.PluginsService ] [testDelayedMappingPropagationOnReplica] loaded plugin [org.elasticsearch.transport.nio.MockNioTransportPlugin]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [force_merge], size [1], queue size [unbounded]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [fetch_shard_started], core [1], max [6], keep alive [5m]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [listener], size [2], queue size [unbounded]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [refresh], core [1], max [2], keep alive [5m]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [generic], core [4], max [128], keep alive [30s]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [warmer], core [1], max [2], keep alive [5m]
[2019-02-09T06:28:46,020][DEBUG][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] [testDelayedMappingPropagationOnReplica] thread pool [node_t1/search] will adjust queue by [50] when determining automatic queue size
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [search], size [5], queue size [1k]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [flush], core [1], max [2], keep alive [5m]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [fetch_shard_store], core [1], max [6], keep alive [5m]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [management], core [1], max [5], keep alive [5m]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [get], size [3], queue size [1k]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [analyze], size [1], queue size [16]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [write], size [3], queue size [200]
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [snapshot], core [1], max [2], keep alive [5m]
[2019-02-09T06:28:46,020][DEBUG][o.e.c.u.c.QueueResizingEsThreadPoolExecutor] [testDelayedMappingPropagationOnReplica] thread pool [node_t1/search_throttled] will adjust queue by [50] when determining automatic queue size
[2019-02-09T06:28:46,020][DEBUG][o.e.t.ThreadPool ] [testDelayedMappingPropagationOnReplica] created thread pool: name [search_throttled], size [1], queue size [100]
[2019-02-09T06:28:46,021][DEBUG][o.e.s.ScriptService ] [testDelayedMappingPropagationOnReplica] using script cache with max_size [1381], expire [0s]
[2019-02-09T06:28:46,023][DEBUG][o.e.m.j.JvmGcMonitorService] [testDelayedMappingPropagationOnReplica] enabled [true], interval [1s], gc_threshold [{default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}, old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}}], overhead [50, 25, 10]
[2019-02-09T06:28:46,023][DEBUG][o.e.m.o.OsService ] [testDelayedMappingPropagationOnReplica] using refresh_interval [1s]
[2019-02-09T06:28:46,023][DEBUG][o.e.m.p.ProcessService ] [testDelayedMappingPropagationOnReplica] using refresh_interval [1s]
[2019-02-09T06:28:46,024][DEBUG][o.e.m.j.JvmService ] [testDelayedMappingPropagationOnReplica] using refresh_interval [1s]
[2019-02-09T06:28:46,024][DEBUG][o.e.m.f.FsService ] [testDelayedMappingPropagationOnReplica] using refresh_interval [1s]
[2019-02-09T06:28:46,024][DEBUG][o.e.c.r.a.d.ClusterRebalanceAllocationDecider] [testDelayedMappingPropagationOnReplica] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
[2019-02-09T06:28:46,024][DEBUG][o.e.c.r.a.d.ConcurrentRebalanceAllocationDecider] [testDelayedMappingPropagationOnReplica] using [cluster_concurrent_rebalance] with [2]
[2019-02-09T06:28:46,024][DEBUG][o.e.c.r.a.d.ThrottlingAllocationDecider] [testDelayedMappingPropagationOnReplica] using node_concurrent_outgoing_recoveries [2], node_concurrent_incoming_recoveries [2], node_initial_primaries_recoveries [4]
[2019-02-09T06:28:46,025][DEBUG][o.e.i.IndicesQueryCache ] [testDelayedMappingPropagationOnReplica] using [node] query cache with size [804.5mb] max filter count [10000]
[2019-02-09T06:28:46,025][DEBUG][o.e.i.IndexingMemoryController] [testDelayedMappingPropagationOnReplica] using indexing buffer size [804.5mb] with indices.memory.shard_inactive_time [5m], indices.memory.interval [5s]
[2019-02-09T06:28:46,027][DEBUG][o.e.g.GatewayMetaState ] [testDelayedMappingPropagationOnReplica] took 0s to load state
[2019-02-09T06:28:46,027][DEBUG][o.e.d.SettingsBasedSeedHostsProvider] [testDelayedMappingPropagationOnReplica] using initial hosts []
[2019-02-09T06:28:46,027][INFO ][o.e.d.DiscoveryModule ] [testDelayedMappingPropagationOnReplica] using discovery type [zen] and seed hosts providers [settings, file]
[2019-02-09T06:28:46,058][DEBUG][o.e.i.r.RecoverySettings ] [testDelayedMappingPropagationOnReplica] using max_bytes_per_sec[48mb]
[2019-02-09T06:28:46,064][DEBUG][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] initializing HTTP handlers ...
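The fixed pools logged above (`write`, `get`, `search`) are bounded executors: a fixed worker count plus a bounded queue, with overflow rejected. A minimal sketch using only the JDK, mirroring the shape of node_t0's `write` pool (`size [1], queue size [200]`); this is an illustrative stand-in, not Elasticsearch's actual `EsThreadPoolExecutor`:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedPoolSketch {
    public static void main(String[] args) {
        // Hypothetical stand-in for the "write" pool logged above:
        // one worker, a 200-slot queue, and rejection once both are full.
        ThreadPoolExecutor write = new ThreadPoolExecutor(
                1, 1,                                   // fixed size: core == max == 1
                0L, TimeUnit.MILLISECONDS,              // fixed pools keep workers alive
                new ArrayBlockingQueue<>(200),          // bounded queue: "queue size [200]"
                new ThreadPoolExecutor.AbortPolicy());  // shed load instead of queueing forever
        write.execute(() -> System.out.println("indexing op"));
        write.shutdown();
    }
}
```

Note that the two nodes get different randomized sizes from the test seed (node_t0: search size 2, write size 1; node_t1: search size 5, write size 3), which is why the same pool names appear twice with different numbers.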
[2019-02-09T06:28:46,065][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] initialized
[2019-02-09T06:28:46,065][DEBUG][o.e.t.InternalTestCluster] [testDelayedMappingPropagationOnReplica] configuring discovery with [] at [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/config]
[2019-02-09T06:28:46,065][INFO ][o.e.n.Node ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] starting ...
[2019-02-09T06:28:46,065][INFO ][o.e.n.Node ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] starting ...
[2019-02-09T06:28:46,066][DEBUG][o.e.t.TcpTransport ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] binding server bootstrap to: [::1, 127.0.0.1]
[2019-02-09T06:28:46,066][DEBUG][o.e.t.TcpTransport ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] binding server bootstrap to: [::1, 127.0.0.1]
[2019-02-09T06:28:46,067][DEBUG][o.e.t.TcpTransport ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] Bound profile [default] to address {[::1]:34695}
[2019-02-09T06:28:46,067][DEBUG][o.e.t.TcpTransport ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] Bound profile [default] to address {[::1]:43791}
[2019-02-09T06:28:46,067][DEBUG][o.e.t.TcpTransport ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] Bound profile [default] to address {127.0.0.1:44153}
[2019-02-09T06:28:46,067][DEBUG][o.e.t.TcpTransport ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] Bound profile [default] to address {127.0.0.1:40829}
[2019-02-09T06:28:46,067][INFO ][o.e.t.TransportService ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] publish_address {127.0.0.1:44153}, bound_addresses {[::1]:43791}, {127.0.0.1:44153}
[2019-02-09T06:28:46,067][INFO ][o.e.t.TransportService ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] publish_address {127.0.0.1:40829}, bound_addresses {[::1]:34695}, {127.0.0.1:40829}
[2019-02-09T06:28:46,067][DEBUG][o.e.t.InternalTestCluster] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] configuring discovery with [127.0.0.1:44153, 127.0.0.1:40829] at [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/config]
[2019-02-09T06:28:46,067][DEBUG][o.e.t.InternalTestCluster] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] configuring discovery with [127.0.0.1:44153, 127.0.0.1:40829] at [/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/config]
[2019-02-09T06:28:46,067][DEBUG][o.e.d.SeedHostsResolver ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] using max_concurrent_resolvers [10], resolver timeout [5s]
[2019-02-09T06:28:46,067][DEBUG][o.e.c.c.Coordinator ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] startInitialJoin: coordinator becoming CANDIDATE in term 0 (was null, lastKnownLeader was [Optional.empty])
[2019-02-09T06:28:46,068][DEBUG][o.e.n.Node ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] waiting to join the cluster. timeout [30s]
[2019-02-09T06:28:46,069][DEBUG][o.e.d.FileBasedSeedHostsProvider] [node_t1] seed addresses: []
[2019-02-09T06:28:46,070][DEBUG][o.e.d.SeedHostsResolver ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] using max_concurrent_resolvers [10], resolver timeout [5s]
[2019-02-09T06:28:46,070][DEBUG][o.e.c.c.Coordinator ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] startInitialJoin: coordinator becoming CANDIDATE in term 0 (was null, lastKnownLeader was [Optional.empty])
[2019-02-09T06:28:46,070][DEBUG][o.e.n.Node ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] waiting to join the cluster. timeout [30s]
[2019-02-09T06:28:46,070][DEBUG][o.e.d.FileBasedSeedHostsProvider] [node_t0] seed addresses: [127.0.0.1:40829]
[2019-02-09T06:28:46,072][DEBUG][o.e.t.ConnectionManager ] [node_t0] connected to node [{node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}]
[2019-02-09T06:28:46,072][INFO ][o.e.c.c.Coordinator ] [node_t0] setting initial configuration to VotingConfiguration{gm38AL3aTZmKYs3giOXGJw,18eWrETQRQm3w1D52tHH-A}
[2019-02-09T06:28:46,074][DEBUG][o.e.c.c.ElectionSchedulerFactory] [node_t0] scheduling scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=45, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
[2019-02-09T06:28:46,074][DEBUG][o.e.t.ConnectionManager ] [node_t1] connected to node [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}]
[2019-02-09T06:28:46,119][DEBUG][o.e.c.c.ElectionSchedulerFactory] [node_t0] scheduleNextElection{gracePeriod=0s, thisAttempt=0, maxDelayMillis=100, delayMillis=45, ElectionScheduler{attempt=1, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} starting election
[2019-02-09T06:28:46,119][DEBUG][o.e.c.c.ElectionSchedulerFactory] [node_t0] scheduling scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=697, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}}
[2019-02-09T06:28:46,119][DEBUG][o.e.c.c.PreVoteCollector ] [node_t0] PreVotingRound{preVotesReceived=[], electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, currentTerm=0}, isClosed=false} requesting pre-votes from [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, {node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}]
[2019-02-09T06:28:46,119][DEBUG][o.e.c.c.PreVoteCollector ] [node_t0] PreVotingRound{preVotesReceived=[{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}], electionStarted=false, preVoteRequest=PreVoteRequest{sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, currentTerm=0}, isClosed=false} added PreVoteResponse{currentTerm=0, lastAcceptedTerm=0, lastAcceptedVersion=0} from {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, no quorum yet
[2019-02-09T06:28:46,120][DEBUG][o.e.c.c.PreVoteCollector ] [node_t0] PreVotingRound{preVotesReceived=[{node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}, {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}], electionStarted=true, preVoteRequest=PreVoteRequest{sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, currentTerm=0}, isClosed=false} added PreVoteResponse{currentTerm=0, lastAcceptedTerm=0, lastAcceptedVersion=0} from {node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}, starting election
[2019-02-09T06:28:46,120][DEBUG][o.e.c.c.Coordinator ] [node_t0] starting election with StartJoinRequest{term=1,node={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}
[2019-02-09T06:28:46,120][DEBUG][o.e.c.c.Coordinator ] [node_t0] joinLeaderInTerm: for [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}] with term 1
[2019-02-09T06:28:46,120][DEBUG][o.e.c.c.CoordinationState] [node_t0] handleStartJoin: leaving term [0] due to StartJoinRequest{term=1,node={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}
[2019-02-09T06:28:46,120][DEBUG][o.e.c.c.Coordinator ] [node_t1] joinLeaderInTerm: for [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}] with term 1
[2019-02-09T06:28:46,120][DEBUG][o.e.c.c.CoordinationState] [node_t1] handleStartJoin: leaving term [0] due to StartJoinRequest{term=1,node={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}
[2019-02-09T06:28:46,121][DEBUG][o.e.c.c.JoinHelper ] [node_t0] attempting to join {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153} with JoinRequest{sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, targetNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]}
[2019-02-09T06:28:46,121][DEBUG][o.e.c.c.JoinHelper ] [node_t0] successful response to StartJoinRequest{term=1,node={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}} from {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}
[2019-02-09T06:28:46,121][DEBUG][o.e.c.c.CoordinationState] [node_t0] handleJoin: added join Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, targetNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}} from [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}] for election, electionWon=false lastAcceptedTerm=0 lastAcceptedVersion=0
[2019-02-09T06:28:46,121][DEBUG][o.e.c.c.JoinHelper ] [node_t1] attempting to join {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153} with JoinRequest{sourceNode={node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}, targetNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]}
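The two `scheduleNextElection` entries above are consistent with linear backoff plus jitter: the random delay is drawn from an upper bound that grows by `backoffTime` per attempt, capped at `maxTimeout`, then offset by the grace period (attempt 0: 45ms within 100ms; attempt 1: 697ms = 500ms grace + 197ms within 200ms). A minimal sketch of that arithmetic with hypothetical names; the real `ElectionSchedulerFactory` has more machinery around it:

```java
import java.util.Random;

public class ElectionBackoffSketch {
    // Values printed by ElectionSchedulerFactory in the entries above.
    static final long INITIAL_TIMEOUT_MS = 100, BACKOFF_MS = 100, MAX_TIMEOUT_MS = 10_000;

    // Hypothetical helper: upper bound for the random delay on a given attempt.
    static long maxDelayMillis(int attempt) {
        return Math.min(MAX_TIMEOUT_MS, INITIAL_TIMEOUT_MS + (long) attempt * BACKOFF_MS);
    }

    public static void main(String[] args) {
        Random random = new Random();
        for (int attempt = 0; attempt < 3; attempt++) {
            long gracePeriodMs = attempt == 0 ? 0 : 500; // 0s then 500ms, as logged
            long delay = gracePeriodMs + random.nextInt((int) maxDelayMillis(attempt) + 1);
            System.out.printf("attempt=%d maxDelayMillis=%d delayMillis=%d%n",
                    attempt, maxDelayMillis(attempt), delay);
        }
    }
}
```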
[2019-02-09T06:28:46,121][DEBUG][o.e.c.c.CoordinationState] [node_t0] handleJoin: added join Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}, targetNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}} from [{node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}] for election, electionWon=true lastAcceptedTerm=0 lastAcceptedVersion=0
[2019-02-09T06:28:46,121][DEBUG][o.e.c.c.JoinHelper ] [node_t0] successful response to StartJoinRequest{term=1,node={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}} from {node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}
[2019-02-09T06:28:46,121][DEBUG][o.e.c.c.CoordinationState] [node_t0] handleJoin: election won in term [1] with VoteCollection{gm38AL3aTZmKYs3giOXGJw,18eWrETQRQm3w1D52tHH-A}
[2019-02-09T06:28:46,121][DEBUG][o.e.c.c.Coordinator ] [node_t0] handleJoinRequest: coordinator becoming LEADER in term 1 (was CANDIDATE, lastKnownLeader was [Optional.empty])
[2019-02-09T06:28:46,122][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [elected-as-master ([2] nodes joined)[{node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829} elect leader, {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]: execute
[2019-02-09T06:28:46,122][DEBUG][o.e.c.c.JoinHelper ] [node_t0] received a join request for an existing node [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}]
[2019-02-09T06:28:46,122][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [1], source [elected-as-master ([2] nodes joined)[{node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829} elect leader, {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
[2019-02-09T06:28:46,122][INFO ][o.e.c.s.MasterService ] [node_t0] elected-as-master ([2] nodes joined)[{node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829} elect leader, {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, reason: master node changed {previous [], current [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}]}, added {{node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829},}
[2019-02-09T06:28:46,122][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [1]
[2019-02-09T06:28:46,123][DEBUG][o.e.c.c.Coordinator ] [node_t1] onFollowerCheckRequest: coordinator becoming FOLLOWER of [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}] in term 1 (was CANDIDATE, lastKnownLeader was [Optional.empty])
[2019-02-09T06:28:46,123][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received full cluster state version [1] with size [376]
[2019-02-09T06:28:46,129][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=1, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:46,129][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [1], source [ApplyCommitRequest{term=1, version=1, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]
[2019-02-09T06:28:46,129][INFO ][o.e.c.s.ClusterApplierService] [node_t1] master node changed {previous [], current [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}]}, added {{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153},}, term: 1, version: 1, reason: ApplyCommitRequest{term=1, version=1, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}
[2019-02-09T06:28:46,129][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 1
[2019-02-09T06:28:46,129][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 1
[2019-02-09T06:28:46,129][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=1, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [0s] done applying updated cluster state (version: 1, uuid: XSLc-k8yR0uDAT--siAU4g)
[2019-02-09T06:28:46,130][INFO ][o.e.n.Node ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#2]]] started
[2019-02-09T06:28:46,130][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=1}]: execute
[2019-02-09T06:28:46,130][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [1], source [Publication{term=1, version=1}]
[2019-02-09T06:28:46,130][INFO ][o.e.c.s.ClusterApplierService] [node_t0] master node changed {previous [], current [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}]}, added {{node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829},}, term: 1, version: 1, reason: Publication{term=1, version=1}
[2019-02-09T06:28:46,130][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 1
[2019-02-09T06:28:46,130][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 1
[2019-02-09T06:28:46,130][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=1}]: took [0s] done applying updated cluster state (version: 1, uuid: XSLc-k8yR0uDAT--siAU4g)
[2019-02-09T06:28:46,130][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=1}
[2019-02-09T06:28:46,130][INFO ][o.e.n.Node ] [[test_TEST-CHILD_VM=[0]-CLUSTER_SEED=[-8546155166445698902]-HASH=[18BD7DE3ECF8]-cluster[T#1]]] started
[2019-02-09T06:28:46,131][DEBUG][o.e.c.c.JoinHelper ] [node_t0] successfully joined {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153} with JoinRequest{sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}, targetNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]}
[2019-02-09T06:28:46,131][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [elected-as-master ([2] nodes joined)[{node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829} elect leader, {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]: took [9ms] done publishing updated cluster state (version: 1, uuid: XSLc-k8yR0uDAT--siAU4g)
[2019-02-09T06:28:46,131][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [reconfigure]: execute
[2019-02-09T06:28:46,131][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [2], source [reconfigure]
[2019-02-09T06:28:46,131][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [2]
[2019-02-09T06:28:46,131][DEBUG][o.e.c.c.JoinHelper ] [node_t1] successfully joined {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153} with JoinRequest{sourceNode={node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}, optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode={node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}, targetNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]}
[2019-02-09T06:28:46,132][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received full cluster state version [2] with size [377]
[2019-02-09T06:28:46,134][INFO ][o.e.c.c.RareClusterStateIT] [testDelayedMappingPropagationOnReplica] using custom data_path for index: [gkLUDxVDix]
[2019-02-09T06:28:46,138][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=2, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:46,138][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [2], source [ApplyCommitRequest{term=1, version=2, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]
[2019-02-09T06:28:46,138][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 2
[2019-02-09T06:28:46,138][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 2
[2019-02-09T06:28:46,138][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=2, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [0s] done applying updated cluster state (version: 2, uuid: j71HC9EGRSys61jBPMYeVw)
[2019-02-09T06:28:46,138][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=2}]: execute
[2019-02-09T06:28:46,138][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [2], source [Publication{term=1, version=2}]
[2019-02-09T06:28:46,138][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 2
[2019-02-09T06:28:46,138][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 2
[2019-02-09T06:28:46,139][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=2}]: took [0s] done applying updated cluster state (version: 2, uuid: j71HC9EGRSys61jBPMYeVw)
[2019-02-09T06:28:46,139][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=2}
[2019-02-09T06:28:46,139][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [reconfigure]: took [7ms] done publishing updated cluster state (version: 2, uuid: j71HC9EGRSys61jBPMYeVw)
[2019-02-09T06:28:46,139][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [local-gateway-elected-state]: execute
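Tracing the `handleJoin` entries above: node_t0's own join leaves `electionWon=false` (one of two voting nodes), and node_t1's join flips it to `electionWon=true` once a majority of the `VotingConfiguration` has joined. A minimal model of that quorum rule, with hypothetical names; the real `CoordinationState` also validates terms and versions:

```java
import java.util.HashSet;
import java.util.Set;

public class VoteCollectionSketch {
    private final Set<String> votingConfiguration;          // e.g. the two node IDs in the log
    private final Set<String> joinsReceived = new HashSet<>();

    VoteCollectionSketch(Set<String> votingConfiguration) {
        this.votingConfiguration = votingConfiguration;
    }

    /** Records a join and reports whether a strict majority of voting nodes has now joined. */
    boolean handleJoin(String nodeId) {
        if (votingConfiguration.contains(nodeId)) {
            joinsReceived.add(nodeId);
        }
        return joinsReceived.size() * 2 > votingConfiguration.size();
    }

    public static void main(String[] args) {
        VoteCollectionSketch votes = new VoteCollectionSketch(
                Set.of("gm38AL3aTZmKYs3giOXGJw", "18eWrETQRQm3w1D52tHH-A"));
        System.out.println(votes.handleJoin("gm38AL3aTZmKYs3giOXGJw")); // false: 1 of 2, as logged
        System.out.println(votes.handleJoin("18eWrETQRQm3w1D52tHH-A")); // true: 2 of 2, election won
    }
}
```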
[2019-02-09T06:28:46,139][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [3], source [local-gateway-elected-state]
[2019-02-09T06:28:46,139][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [3]
[2019-02-09T06:28:46,140][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received full cluster state version [3] with size [322]
[2019-02-09T06:28:46,141][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=3, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:46,141][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [3], source [ApplyCommitRequest{term=1, version=3, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]
[2019-02-09T06:28:46,141][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] applying settings from cluster state with version 3
[2019-02-09T06:28:46,141][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 3
[2019-02-09T06:28:46,141][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 3
[2019-02-09T06:28:46,142][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=3, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [0s] done applying updated cluster state (version: 3, uuid: 4RQRAOFUT9mz8B66CeWudg)
[2019-02-09T06:28:46,142][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=3}]: execute
[2019-02-09T06:28:46,142][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [3], source [Publication{term=1, version=3}]
[2019-02-09T06:28:46,142][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] applying settings from cluster state with version 3
[2019-02-09T06:28:46,142][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 3
[2019-02-09T06:28:46,142][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 3
[2019-02-09T06:28:46,142][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=3}]: took [0s] done applying updated cluster state (version: 3, uuid: 4RQRAOFUT9mz8B66CeWudg)
[2019-02-09T06:28:46,143][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=3}
[2019-02-09T06:28:46,143][INFO ][o.e.g.GatewayService ] [node_t0] recovered [0] indices into cluster_state
[2019-02-09T06:28:46,143][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [local-gateway-elected-state]: took [3ms] done publishing updated cluster state (version: 3, uuid: 4RQRAOFUT9mz8B66CeWudg)
[2019-02-09T06:28:46,143][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [create-index [index], cause [api]]: execute
[2019-02-09T06:28:46,143][DEBUG][o.e.i.IndicesService ] [node_t0] creating Index [[index/JCwqBx5LQdqRvDfhfRYEBw]], shards [1]/[1] - reason [create index]
[2019-02-09T06:28:46,144][INFO ][o.e.c.m.MetaDataCreateIndexService] [node_t0] [index] creating index, cause [api], templates [], shards [1]/[1], mappings []
[2019-02-09T06:28:46,145][DEBUG][o.e.i.IndicesService ] [node_t0] [index] closing ... (reason [NO_LONGER_ASSIGNED])
[2019-02-09T06:28:46,145][DEBUG][o.e.i.IndicesService ] [node_t0] [index/JCwqBx5LQdqRvDfhfRYEBw] closing index service (reason [NO_LONGER_ASSIGNED][cleaning up after validating index on master])
[2019-02-09T06:28:46,145][DEBUG][o.e.i.c.b.BitsetFilterCache] [node_t0] [index] clearing all bitsets because [close]
[2019-02-09T06:28:46,145][DEBUG][o.e.i.c.q.IndexQueryCache] [node_t0] [index] full cache clear, reason [close]
[2019-02-09T06:28:46,145][DEBUG][o.e.i.c.b.BitsetFilterCache] [node_t0] [index] clearing all bitsets because [close]
[2019-02-09T06:28:46,145][DEBUG][o.e.i.IndicesService ] [node_t0] [index/JCwqBx5LQdqRvDfhfRYEBw] closed... (reason [NO_LONGER_ASSIGNED][cleaning up after validating index on master])
[2019-02-09T06:28:46,145][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [4], source [create-index [index], cause [api]]
[2019-02-09T06:28:46,145][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [4]
[2019-02-09T06:28:46,146][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received diff cluster state version [4] with uuid [qrG3ZyPKQhmTy-ptroWwGA], diff size [441]
[2019-02-09T06:28:46,150][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=4, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:46,150][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [4], source [ApplyCommitRequest{term=1, version=4, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]
[2019-02-09T06:28:46,150][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] applying settings from cluster state with version 4
[2019-02-09T06:28:46,150][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 4
[2019-02-09T06:28:46,150][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 4
[2019-02-09T06:28:46,151][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=4, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [0s] done applying updated cluster state (version: 4, uuid: qrG3ZyPKQhmTy-ptroWwGA)
[2019-02-09T06:28:46,151][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=4}]: execute
[2019-02-09T06:28:46,151][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [4], source [Publication{term=1, version=4}]
[2019-02-09T06:28:46,151][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] applying settings from cluster state with version 4
[2019-02-09T06:28:46,151][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 4
[2019-02-09T06:28:46,151][DEBUG][o.e.i.c.IndicesClusterStateService] [node_t0] [[index/JCwqBx5LQdqRvDfhfRYEBw]] creating index
[2019-02-09T06:28:46,151][DEBUG][o.e.i.IndicesService ] [node_t0] creating Index [[index/JCwqBx5LQdqRvDfhfRYEBw]], shards [1]/[1] - reason [create index]
[2019-02-09T06:28:46,152][DEBUG][o.e.i.c.IndicesClusterStateService] [node_t0] [index][0] creating shard with primary term [1]
[2019-02-09T06:28:46,153][DEBUG][o.e.i.IndexService ] [node_t0] [index] [index][0] creating using a new path [ShardPath{path=/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data/nodes/0/indices/JCwqBx5LQdqRvDfhfRYEBw/0, shard=[index][0]}]
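Versions 1 through 3 went out as full cluster states; version 4 above is the first `received diff cluster state`. A diff is only usable by a follower that holds the immediately preceding version, and right after an election the master conservatively falls back to full states, which matches the full/diff pattern in this log. A simplified model of just the core rule, with hypothetical names; the real `PublicationTransportHandler` serializes real state diffs, not strings:

```java
import java.util.HashMap;
import java.util.Map;

public class PublicationSketch {
    // Last version each follower is known to have applied (simplified model).
    private final Map<String, Long> applied = new HashMap<>();

    /** A diff only applies cleanly on top of version v-1; otherwise send the full state. */
    String payloadFor(String nodeId, long newVersion) {
        boolean canDiff = applied.getOrDefault(nodeId, -1L) == newVersion - 1;
        return (canDiff ? "diff" : "full") + " cluster state version [" + newVersion + "]";
    }

    void ack(String nodeId, long version) { applied.put(nodeId, version); }

    public static void main(String[] args) {
        PublicationSketch master = new PublicationSketch();
        System.out.println(master.payloadFor("node_t1", 1)); // full: no prior state to diff against
        master.ack("node_t1", 1);
        System.out.println(master.payloadFor("node_t1", 2)); // diff in this model
    }
}
```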
[2019-02-09T06:28:46,153][DEBUG][o.e.i.IndexService ] [node_t0] [index] creating shard_id [index][0]
[2019-02-09T06:28:46,153][DEBUG][o.e.t.s.MockFSDirectoryService] [node_t0] [index][0] Using MockDirWrapper with seed [0] throttle: [NEVER] crashIndex: [true]
[2019-02-09T06:28:46,154][DEBUG][o.e.i.s.Store ] [node_t0] [index][0] store stats are refreshed with refresh_interval [10s]
[2019-02-09T06:28:46,154][DEBUG][o.e.i.s.IndexShard ] [node_t0] [index][0] state: [CREATED]
[2019-02-09T06:28:46,155][DEBUG][o.e.i.s.IndexShard ] [node_t0] [index][0] state: [CREATED]->[RECOVERING], reason [from store]
[2019-02-09T06:28:46,155][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 4
[2019-02-09T06:28:46,155][DEBUG][o.e.i.s.IndexShard ] [node_t0] [index][0] starting recovery from store ...
[2019-02-09T06:28:46,156][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=4}]: took [4ms] done applying updated cluster state (version: 4, uuid: qrG3ZyPKQhmTy-ptroWwGA)
[2019-02-09T06:28:46,156][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=4}
[2019-02-09T06:28:46,156][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [create-index [index], cause [api]]: took [13ms] done publishing updated cluster state (version: 4, uuid: qrG3ZyPKQhmTy-ptroWwGA)
[2019-02-09T06:28:46,168][DEBUG][o.e.i.t.Translog ] [node_t0] [index][0] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-1, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
[2019-02-09T06:28:46,171][DEBUG][o.e.i.e.Engine ] [node_t0] [index][0] Safe commit [CommitPoint{segment[segments_2], userData[{history_uuid=mAvM-UwsSgiIXuSFcETd4A, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_generation=1, translog_uuid=JUSSDZ1TRK2klkEuL3tX9Q}]}], last commit [CommitPoint{segment[segments_2], userData[{history_uuid=mAvM-UwsSgiIXuSFcETd4A, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_generation=1, translog_uuid=JUSSDZ1TRK2klkEuL3tX9Q}]}]
[2019-02-09T06:28:46,173][DEBUG][o.e.i.s.IndexShard ] [node_t0] [index][0] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]
[2019-02-09T06:28:46,173][DEBUG][o.e.i.s.IndexShard ] [node_t0] [index][0] recovery completed from [shard_store], took [20ms]
[2019-02-09T06:28:46,173][DEBUG][o.e.c.a.s.ShardStateAction] [node_t0] sending [internal:cluster/shard/started] to [gm38AL3aTZmKYs3giOXGJw] for shard entry [StartedShardEntry{shardId [[index][0]], allocationId [mrqPodr3Qb2kOFPu_vFLeQ], primary term [1], message [after new shard recovery]}]
[2019-02-09T06:28:46,174][DEBUG][o.e.c.a.s.ShardStateAction] [node_t0] [index][0] received shard started for [StartedShardEntry{shardId [[index][0]], allocationId [mrqPodr3Qb2kOFPu_vFLeQ], primary term [1], message [after new shard recovery]}]
[2019-02-09T06:28:46,174][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [shard-started StartedShardEntry{shardId [[index][0]], allocationId [mrqPodr3Qb2kOFPu_vFLeQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[index][0]], allocationId [mrqPodr3Qb2kOFPu_vFLeQ], primary term [1], message [after new shard recovery]}]]: execute
[2019-02-09T06:28:46,174][DEBUG][o.e.c.a.s.ShardStateAction] [node_t0] [index][0] starting shard [index][0], node[gm38AL3aTZmKYs3giOXGJw], [P], recovery_source[new shard recovery], s[INITIALIZING], a[id=mrqPodr3Qb2kOFPu_vFLeQ], unassigned_info[[reason=INDEX_CREATED], at[2019-02-08T19:28:46.144Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[index][0]], allocationId [mrqPodr3Qb2kOFPu_vFLeQ], primary term [1], message [after new shard recovery]}])
[2019-02-09T06:28:46,175][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [5], source [shard-started StartedShardEntry{shardId [[index][0]], allocationId [mrqPodr3Qb2kOFPu_vFLeQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[index][0]], allocationId [mrqPodr3Qb2kOFPu_vFLeQ], primary term [1], message [after new shard recovery]}]]
[2019-02-09T06:28:46,175][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [5]
[2019-02-09T06:28:46,176][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received diff cluster state version [5] with uuid [0ReOsmOcRZeUSJcD05gp3Q], diff size [435]
[2019-02-09T06:28:46,179][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=5, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:46,180][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [5], source [ApplyCommitRequest{term=1, version=5, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]
[2019-02-09T06:28:46,180][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] applying settings from cluster state with version 5
[2019-02-09T06:28:46,180][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 5
[2019-02-09T06:28:46,180][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 5
[2019-02-09T06:28:46,180][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=5, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [0s] done applying updated cluster state (version: 5, uuid: 0ReOsmOcRZeUSJcD05gp3Q)
[2019-02-09T06:28:46,180][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=5}]: execute
[2019-02-09T06:28:46,180][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [5], source [Publication{term=1, version=5}]
[2019-02-09T06:28:46,180][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] applying settings from cluster state with version 5
[2019-02-09T06:28:46,180][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 5
[2019-02-09T06:28:46,181][DEBUG][o.e.i.s.IndexShard ] [node_t0] [index][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2019-02-09T06:28:46,181][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 5
[2019-02-09T06:28:46,182][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=5}]: took [1ms] done applying updated cluster state (version: 5, uuid: 0ReOsmOcRZeUSJcD05gp3Q)
[2019-02-09T06:28:46,182][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=5}
[2019-02-09T06:28:46,182][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [shard-started StartedShardEntry{shardId [[index][0]], allocationId [mrqPodr3Qb2kOFPu_vFLeQ], primary term [1], message [after new shard recovery]}[StartedShardEntry{shardId [[index][0]], allocationId [mrqPodr3Qb2kOFPu_vFLeQ], primary term [1], message [after new shard recovery]}]]: took [8ms] done publishing updated cluster state (version: 5, uuid: 0ReOsmOcRZeUSJcD05gp3Q)
[2019-02-09T06:28:46,183][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [update-settings]: execute
[2019-02-09T06:28:46,184][DEBUG][o.e.i.IndicesQueryCache ] [node_t0] using [node] query cache with size [804.5mb] max filter count [10000]
[2019-02-09T06:28:46,185][DEBUG][o.e.i.IndicesService ] [node_t0] creating Index [[index/JCwqBx5LQdqRvDfhfRYEBw]], shards [1]/[1] - reason [metadata verification]
[2019-02-09T06:28:46,186][DEBUG][o.e.i.c.b.BitsetFilterCache] [node_t0] [index] clearing all bitsets because [close]
[2019-02-09T06:28:46,186][DEBUG][o.e.i.c.q.IndexQueryCache] [node_t0] [index] full cache clear, reason [close]
[2019-02-09T06:28:46,186][DEBUG][o.e.i.c.b.BitsetFilterCache] [node_t0] [index] clearing all bitsets because [close]
[2019-02-09T06:28:46,186][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [6], source [update-settings]
[2019-02-09T06:28:46,186][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [6]
[2019-02-09T06:28:46,188][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received diff cluster state version [6] with uuid [u3YDbc_aTPSzfHvKKoxiBQ], diff size [474]
[2019-02-09T06:28:46,192][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=6, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:46,192][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [6], source [ApplyCommitRequest{term=1, version=6, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]
[2019-02-09T06:28:46,192][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] applying settings from cluster state with version 6
[2019-02-09T06:28:46,192][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 6
[2019-02-09T06:28:46,192][DEBUG][o.e.i.c.IndicesClusterStateService] [node_t1] [[index/JCwqBx5LQdqRvDfhfRYEBw]] creating index
[2019-02-09T06:28:46,193][DEBUG][o.e.i.IndicesService ] [node_t1] creating Index [[index/JCwqBx5LQdqRvDfhfRYEBw]], shards [1]/[1] - reason [create index]
[2019-02-09T06:28:46,193][DEBUG][o.e.i.c.IndicesClusterStateService] [node_t1] [index][0] creating shard with primary term [1]
[2019-02-09T06:28:46,194][DEBUG][o.e.i.IndexService ] [node_t1] [index] [index][0] creating using a new path [ShardPath{path=/tmp/org.elasticsearch.cluster.coordination.RareClusterStateIT_7F2D32832D1B9B3B-001/tempDir-036/data/nodes/1/indices/JCwqBx5LQdqRvDfhfRYEBw/0, shard=[index][0]}]
[2019-02-09T06:28:46,194][DEBUG][o.e.i.IndexService ] [node_t1] [index] creating shard_id [index][0]
[2019-02-09T06:28:46,194][DEBUG][o.e.t.s.MockFSDirectoryService] [node_t1] [index][0] Using MockDirWrapper with seed [0] throttle: [NEVER] crashIndex: [true]
[2019-02-09T06:28:46,195][DEBUG][o.e.i.s.Store ] [node_t1] [index][0] store stats are refreshed with refresh_interval [10s]
[2019-02-09T06:28:46,195][DEBUG][o.e.i.s.IndexShard ] [node_t1] [index][0] state: [CREATED]
[2019-02-09T06:28:46,197][DEBUG][o.e.i.s.IndexShard ] [node_t1] [index][0] state: [CREATED]->[RECOVERING], reason [from {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}]
[2019-02-09T06:28:46,197][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 6
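Both shards open an empty translog checkpoint (the primary above at 46,168, the replica below at 46,257). The sentinel values follow Elasticsearch's sequence-number conventions: -1 (`NO_OPS_PERFORMED`) when nothing has been indexed yet, and -2 (`UNASSIGNED_SEQ_NO`) for a value not yet known, which is why the recovering replica's `globalCheckpoint` is -2 while the primary's is -1. A small record mirroring the printed fields, for reading the log rather than standing in for ES's actual `Checkpoint` class:

```java
/** Mirrors the fields printed in the "open uncommitted translog checkpoint" entries. */
record Checkpoint(long offset, int numOps, long generation, long minSeqNo, long maxSeqNo,
                  long globalCheckpoint, long minTranslogGeneration, long trimmedAboveSeqNo) {

    static final long NO_OPS_PERFORMED = -1;  // no operations indexed yet
    static final long UNASSIGNED_SEQ_NO = -2; // value not yet known/assigned

    boolean isEmpty() {
        return numOps == 0 && maxSeqNo == NO_OPS_PERFORMED;
    }

    public static void main(String[] args) {
        // The primary's checkpoint from the log: empty, global checkpoint initialized to -1.
        Checkpoint primary = new Checkpoint(55, 0, 1, -1, -1, -1, 1, -2);
        // The replica's (below): identical except globalCheckpoint=-2, not yet received.
        Checkpoint replica = new Checkpoint(55, 0, 1, -1, -1, -2, 1, -2);
        System.out.println(primary.isEmpty() + " "
                + (replica.globalCheckpoint() == UNASSIGNED_SEQ_NO));
    }
}
```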
[2019-02-09T06:28:46,197][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=6, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [4ms] done applying updated cluster state (version: 6, uuid: u3YDbc_aTPSzfHvKKoxiBQ)
[2019-02-09T06:28:46,198][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=6}]: execute
[2019-02-09T06:28:46,198][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [6], source [Publication{term=1, version=6}]
[2019-02-09T06:28:46,198][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] applying settings from cluster state with version 6
[2019-02-09T06:28:46,198][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 6
[2019-02-09T06:28:46,198][DEBUG][o.e.i.r.RecoverySourceHandler] [node_t0] [index][0][recover to node_t1] delaying recovery of [index][0] as it is not listed as assigned to target node {node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}
[2019-02-09T06:28:46,198][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 6
[2019-02-09T06:28:46,198][DEBUG][o.e.i.r.RecoveryTarget ] [node_t1] [index][0] reset of recovery with shard [index][0] and id [61]
[2019-02-09T06:28:46,199][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=6}]: took [1ms] done applying updated cluster state (version: 6, uuid: u3YDbc_aTPSzfHvKKoxiBQ)
[2019-02-09T06:28:46,199][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=6}
[2019-02-09T06:28:46,199][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [update-settings]: took [15ms] done publishing updated cluster state (version: 6, uuid: u3YDbc_aTPSzfHvKKoxiBQ)
[2019-02-09T06:28:46,199][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [cluster_health (wait_for_events [LANGUID])]: execute
[2019-02-09T06:28:46,199][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [cluster_health (wait_for_events [LANGUID])]: took [0s] no change in cluster state
[2019-02-09T06:28:46,257][DEBUG][o.e.i.t.Translog ] [node_t1] [index][0] open uncommitted translog checkpoint Checkpoint{offset=55, numOps=0, generation=1, minSeqNo=-1, maxSeqNo=-1, globalCheckpoint=-2, minTranslogGeneration=1, trimmedAboveSeqNo=-2}
[2019-02-09T06:28:46,259][DEBUG][o.e.i.e.Engine ] [node_t1] [index][0] Safe commit [CommitPoint{segment[segments_3], userData[{history_uuid=mAvM-UwsSgiIXuSFcETd4A, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_generation=1, translog_uuid=GCln5sr8SbOIbh8-IcUTdA}]}], last commit [CommitPoint{segment[segments_3], userData[{history_uuid=mAvM-UwsSgiIXuSFcETd4A, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_generation=1, translog_uuid=GCln5sr8SbOIbh8-IcUTdA}]}]
[2019-02-09T06:28:46,262][DEBUG][o.e.i.s.IndexShard ] [node_t1] [index][0] state: [RECOVERING]->[POST_RECOVERY], reason [peer recovery done]
[2019-02-09T06:28:46,262][DEBUG][o.e.c.a.s.ShardStateAction] [node_t1] sending [internal:cluster/shard/started] to [gm38AL3aTZmKYs3giOXGJw] for shard entry [StartedShardEntry{shardId [[index][0]], allocationId [-M1Q4k1rSgmgzCIw3M6Gtw], primary term [1], message [after peer recovery]}]
[2019-02-09T06:28:46,262][DEBUG][o.e.i.r.PeerRecoveryTargetService] [node_t1] [index][0] recovery done from [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}], took [68ms]
[2019-02-09T06:28:46,262][DEBUG][o.e.c.a.s.ShardStateAction] [node_t0] [index][0] received shard started for [StartedShardEntry{shardId [[index][0]], allocationId [-M1Q4k1rSgmgzCIw3M6Gtw], primary term [1], message [after peer recovery]}]
[2019-02-09T06:28:46,262][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [shard-started StartedShardEntry{shardId [[index][0]], allocationId [-M1Q4k1rSgmgzCIw3M6Gtw], primary term [1], message [after peer recovery]}[StartedShardEntry{shardId [[index][0]], allocationId [-M1Q4k1rSgmgzCIw3M6Gtw], primary term [1], message [after peer recovery]}]]: execute
[2019-02-09T06:28:46,262][DEBUG][o.e.c.a.s.ShardStateAction] [node_t0] [index][0] starting shard [index][0], node[18eWrETQRQm3w1D52tHH-A], [R], recovery_source[peer recovery], s[INITIALIZING], a[id=-M1Q4k1rSgmgzCIw3M6Gtw], unassigned_info[[reason=INDEX_CREATED], at[2019-02-08T19:28:46.144Z], delayed=false, allocation_status[no_attempt]] (shard started task: [StartedShardEntry{shardId [[index][0]], allocationId [-M1Q4k1rSgmgzCIw3M6Gtw], primary term [1], message [after peer recovery]}])
[2019-02-09T06:28:46,263][INFO ][o.e.c.r.a.AllocationService] [node_t0] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[index][0]] ...]).
[2019-02-09T06:28:46,263][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [7], source [shard-started StartedShardEntry{shardId [[index][0]], allocationId [-M1Q4k1rSgmgzCIw3M6Gtw], primary term [1], message [after peer recovery]}[StartedShardEntry{shardId [[index][0]], allocationId [-M1Q4k1rSgmgzCIw3M6Gtw], primary term [1], message [after peer recovery]}]]
[2019-02-09T06:28:46,263][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [7]
[2019-02-09T06:28:46,263][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received diff cluster state version [7] with uuid [qNJhCU_PSAmE5mzyJrsQZw], diff size [462]
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=7, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [7], source [ApplyCommitRequest{term=1, version=7, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] applying settings from cluster state with version 7
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 7
[2019-02-09T06:28:46,266][DEBUG][o.e.i.s.IndexShard ] [node_t1] [index][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 7
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=7, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [0s] done applying updated cluster state (version: 7, uuid: qNJhCU_PSAmE5mzyJrsQZw)
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=7}]: execute
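The YELLOW-to-GREEN transition logged above is what the test framework's ensureGreen() waits for, and the earlier cluster_health (wait_for_events [LANGUID]) task corresponds to a health request like this sketch (hypothetical snippet inside an ESIntegTestCase subclass; assertFalse comes from JUnit):

import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.common.Priority;

// Hypothetical helper inside an ESIntegTestCase subclass.
private void waitForGreen() {
    ClusterHealthResponse health = client().admin().cluster().prepareHealth("index")
            .setWaitForGreenStatus()
            .setWaitForEvents(Priority.LANGUID) // drain even the lowest-priority pending tasks first
            .get();
    assertFalse(health.isTimedOut());
}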
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [7], source [Publication{term=1, version=7}]
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] applying settings from cluster state with version 7
[2019-02-09T06:28:46,266][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 7
[2019-02-09T06:28:46,267][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 7
[2019-02-09T06:28:46,267][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=7}]: took [0s] done applying updated cluster state (version: 7, uuid: qNJhCU_PSAmE5mzyJrsQZw)
[2019-02-09T06:28:46,267][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=7}
[2019-02-09T06:28:46,267][DEBUG][o.e.c.c.RareClusterStateIT] [testDelayedMappingPropagationOnReplica] indices [_all] are green
[2019-02-09T06:28:46,267][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [shard-started StartedShardEntry{shardId [[index][0]], allocationId [-M1Q4k1rSgmgzCIw3M6Gtw], primary term [1], message [after peer recovery]}[StartedShardEntry{shardId [[index][0]], allocationId [-M1Q4k1rSgmgzCIw3M6Gtw], primary term [1], message [after peer recovery]}]]: took [4ms] done publishing updated cluster state (version: 7, uuid: qNJhCU_PSAmE5mzyJrsQZw)
[2019-02-09T06:28:46,267][INFO ][o.e.t.d.BlockClusterStateProcessing] [testDelayedMappingPropagationOnReplica] delaying cluster state updates on node [node_t1]
[2019-02-09T06:28:46,267][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [service_disruption_block]: execute
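The "delaying cluster state updates on node [node_t1]" entry marks the activation of the BlockClusterStateProcessing disruption that the cleanup entries further below clear. A minimal sketch of how a test wires that up, assuming an ESIntegTestCase subclass; the node name is taken from the log, the helper method is hypothetical:

import org.elasticsearch.test.disruption.BlockClusterStateProcessing;

// Hypothetical helper inside an ESIntegTestCase subclass.
private BlockClusterStateProcessing blockApplierOn(String nodeName) {
    BlockClusterStateProcessing disruption = new BlockClusterStateProcessing(nodeName, random());
    internalCluster().setDisruptionScheme(disruption);
    disruption.startDisrupting(); // the node keeps receiving cluster states but stops applying them
    return disruption;
}

Calling disruption.stopDisrupting() later lets the queued states apply, which is what the 06:28:56 entries below show once the scheme is cleared.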
[2019-02-09T06:28:46,268][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [put-mapping[type]]: execute
[2019-02-09T06:28:46,269][DEBUG][o.e.c.m.MetaDataMappingService] [node_t0] [index/JCwqBx5LQdqRvDfhfRYEBw] create_mapping [type] with source [{"type":{"properties":{"field":{"type":"long"}}}}]
[2019-02-09T06:28:46,269][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [8], source [put-mapping[type]]
[2019-02-09T06:28:46,269][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [8]
[2019-02-09T06:28:46,270][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received diff cluster state version [8] with uuid [zPZMAXsfTZG8dUB7Hqswgg], diff size [454]
[2019-02-09T06:28:46,271][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=8}]: execute
[2019-02-09T06:28:46,271][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [8], source [Publication{term=1, version=8}]
[2019-02-09T06:28:46,271][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] applying settings from cluster state with version 8
[2019-02-09T06:28:46,271][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 8
[2019-02-09T06:28:46,272][DEBUG][o.e.i.m.MapperService ] [node_t0] [index] [[index/JCwqBx5LQdqRvDfhfRYEBw]] added mapping [type], source [{"type":{"properties":{"field":{"type":"long"}}}}]
[2019-02-09T06:28:46,272][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 8
[2019-02-09T06:28:46,272][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=8}]: took [1ms] done applying updated cluster state (version: 8, uuid: zPZMAXsfTZG8dUB7Hqswgg)
[2019-02-09T06:28:46,272][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=8}
[2019-02-09T06:28:46,272][DEBUG][o.e.c.c.LagDetector ] [node_t0] starting lag detector for version 8: [NodeAppliedStateTracker{discoveryNode={node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}, appliedVersion=7}]
[2019-02-09T06:28:46,272][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [put-mapping[type]]: took [4ms] done publishing updated cluster state (version: 8, uuid: zPZMAXsfTZG8dUB7Hqswgg)
[2019-02-09T06:28:46,283][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [put-mapping[type]]: execute
[2019-02-09T06:28:46,286][DEBUG][o.e.c.m.MetaDataMappingService] [node_t0] [index/JCwqBx5LQdqRvDfhfRYEBw] update_mapping [type] with source [{"type":{"properties":{"field":{"type":"long"},"field2":{"type":"long"}}}}]
[2019-02-09T06:28:46,286][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [9], source [put-mapping[type]]
[2019-02-09T06:28:46,286][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [9]
[2019-02-09T06:28:46,288][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received diff cluster state version [9] with uuid [MxDFx8k_T5G-Ax_94HN8hQ], diff size [467]
[2019-02-09T06:28:46,816][DEBUG][o.e.c.c.ElectionSchedulerFactory] [node_t0] scheduleNextElection{gracePeriod=500ms, thisAttempt=1, maxDelayMillis=200, delayMillis=697, ElectionScheduler{attempt=2, ElectionSchedulerFactory{initialTimeout=100ms, backoffTime=100ms, maxTimeout=10s}}} not starting election
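The two put-mapping[type] tasks above, whose sources add field and then field2, match the shape of requests like this sketch (hypothetical snippet inside an ESIntegTestCase subclass; index, type, and field names are taken from the log entries; the test itself, per the executeAndCancelCommittedPublication frames in the trace below, intercepts the committed publication rather than waiting for acknowledgement):

// Hypothetical helper inside an ESIntegTestCase subclass.
private void putMappings() {
    client().admin().indices().preparePutMapping("index")
            .setType("type")
            .setSource("field", "type=long") // logged above as create_mapping [type], version [8]
            .get();
    client().admin().indices().preparePutMapping("index")
            .setType("type")
            .setSource("field2", "type=long") // logged above as update_mapping [type], version [9]
            .get();
}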
[2019-02-09T06:28:56,285][INFO ][o.e.c.c.RareClusterStateIT] [testDelayedMappingPropagationOnReplica] [RareClusterStateIT#testDelayedMappingPropagationOnReplica [seed=[7F2D32832D1B9B3B:353F34C7DFCA0D8] #35]]: cleaning up after test
[2019-02-09T06:28:56,285][INFO ][o.e.t.InternalTestCluster] [testDelayedMappingPropagationOnReplica] Clearing active scheme org.elasticsearch.test.disruption.BlockClusterStateProcessing@12edc304, expected healing time 0s
[2019-02-09T06:28:56,285][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [service_disruption_block]: took [10s] no change in cluster state
[2019-02-09T06:28:56,286][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=8, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:56,286][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [9], source [ApplyCommitRequest{term=1, version=8, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]
[2019-02-09T06:28:56,286][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] applying settings from cluster state with version 9
[2019-02-09T06:28:56,286][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 9
[2019-02-09T06:28:56,286][DEBUG][o.e.i.m.MapperService ] [node_t1] [index] [[index/JCwqBx5LQdqRvDfhfRYEBw]] added mapping [type], source [{"type":{"properties":{"field":{"type":"long"},"field2":{"type":"long"}}}}]
[2019-02-09T06:28:56,286][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 9
[2019-02-09T06:28:56,286][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=8, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [0s] done applying updated cluster state (version: 9, uuid: MxDFx8k_T5G-Ax_94HN8hQ)
[2019-02-09T06:28:56,287][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=9, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:56,287][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=9, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [0s] no change in cluster state
[2019-02-09T06:28:56,287][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=9}]: execute
[2019-02-09T06:28:56,287][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [9], source [Publication{term=1, version=9}]
[2019-02-09T06:28:56,287][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] applying settings from cluster state with version 9
[2019-02-09T06:28:56,287][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 9
[2019-02-09T06:28:56,288][DEBUG][o.e.i.m.MapperService ] [node_t0] [index] [[index/JCwqBx5LQdqRvDfhfRYEBw]] updated mapping [type], source [{"type":{"properties":{"field":{"type":"long"},"field2":{"type":"long"}}}}]
[2019-02-09T06:28:56,288][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 9
[2019-02-09T06:28:56,289][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=9}]: took [1ms] done applying updated cluster state (version: 9, uuid: MxDFx8k_T5G-Ax_94HN8hQ)
[2019-02-09T06:28:56,289][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=9}
[2019-02-09T06:28:56,289][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [put-mapping[type]]: took [10s] done publishing updated cluster state (version: 9, uuid: MxDFx8k_T5G-Ax_94HN8hQ)
[2019-02-09T06:28:56,296][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [delete-index [[index/JCwqBx5LQdqRvDfhfRYEBw]]]: execute
[2019-02-09T06:28:56,296][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node_t0] [index/JCwqBx5LQdqRvDfhfRYEBw] deleting index
[2019-02-09T06:28:56,296][DEBUG][o.e.c.s.MasterService ] [node_t0] cluster state updated, version [10], source [delete-index [[index/JCwqBx5LQdqRvDfhfRYEBw]]]
[2019-02-09T06:28:56,296][DEBUG][o.e.c.s.MasterService ] [node_t0] publishing cluster state version [10]
[2019-02-09T06:28:56,296][DEBUG][o.e.c.c.PublicationTransportHandler] [node_t1] received diff cluster state version [10] with uuid [7PIPGSyDS0eBWPtcmSg_vg], diff size [268]
[2019-02-09T06:28:56,298][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=10, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: execute
[2019-02-09T06:28:56,298][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [10], source [ApplyCommitRequest{term=1, version=10, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]
[2019-02-09T06:28:56,298][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] applying settings from cluster state with version 10
[2019-02-09T06:28:56,298][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 10
[2019-02-09T06:28:56,298][DEBUG][o.e.i.c.IndicesClusterStateService] [node_t1] [[index/JCwqBx5LQdqRvDfhfRYEBw]] cleaning index, no longer part of the metadata
[2019-02-09T06:28:56,298][DEBUG][o.e.i.IndicesService ] [node_t1] [index] closing ... (reason [DELETED])
[2019-02-09T06:28:56,299][DEBUG][o.e.i.IndicesService ] [node_t1] [index/JCwqBx5LQdqRvDfhfRYEBw] closing index service (reason [DELETED][index no longer part of the metadata])
[2019-02-09T06:28:56,299][DEBUG][o.e.i.IndexService ] [node_t1] [index] [0] closing... (reason: [index no longer part of the metadata])
[2019-02-09T06:28:56,299][DEBUG][o.e.i.s.IndexShard ] [node_t1] [index][0] state: [STARTED]->[CLOSED], reason [index no longer part of the metadata]
[2019-02-09T06:28:56,299][DEBUG][o.e.i.e.Engine ] [node_t1] [index][0] close now acquiring writeLock
[2019-02-09T06:28:56,299][DEBUG][o.e.i.e.Engine ] [node_t1] [index][0] close acquired writeLock
[2019-02-09T06:28:56,299][DEBUG][o.e.i.t.Translog ] [node_t1] [index][0] translog closed
[2019-02-09T06:28:56,299][DEBUG][o.e.i.e.Engine ] [node_t1] [index][0] engine closed [api]
[2019-02-09T06:28:56,299][INFO ][o.e.t.s.M.Listener ] [node_t1] [index][0] start check index
[2019-02-09T06:28:56,300][DEBUG][o.e.t.s.M.Listener ] [node_t1] [index][0] check index [success]
NaN% total deletions; 0 documents; 0 deletions
Segments file=segments_3 numSegments=0 id=6yka9aowji93padhpcrx8kgk4 userData={history_uuid=mAvM-UwsSgiIXuSFcETd4A, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_generation=1, translog_uuid=GCln5sr8SbOIbh8-IcUTdA}
No problems were detected with this index.
Took 0.000 sec total.
[2019-02-09T06:28:56,300][INFO ][o.e.t.s.M.Listener ] [node_t1] [index][0] end check index
[2019-02-09T06:28:56,301][DEBUG][o.e.i.s.Store ] [node_t1] [index][0] store reference count on close: 0
[2019-02-09T06:28:56,301][DEBUG][o.e.i.IndexService ] [node_t1] [index] [0] closed (reason: [index no longer part of the metadata])
[2019-02-09T06:28:56,301][DEBUG][o.e.i.c.b.BitsetFilterCache] [node_t1] [index] clearing all bitsets because [close]
[2019-02-09T06:28:56,301][DEBUG][o.e.i.c.q.IndexQueryCache] [node_t1] [index] full cache clear, reason [close]
[2019-02-09T06:28:56,302][DEBUG][o.e.i.c.b.BitsetFilterCache] [node_t1] [index] clearing all bitsets because [close]
[2019-02-09T06:28:56,302][DEBUG][o.e.i.IndicesService ] [node_t1] [index/JCwqBx5LQdqRvDfhfRYEBw] closed... (reason [DELETED][index no longer part of the metadata])
[2019-02-09T06:28:56,302][DEBUG][o.e.i.IndicesService ] [node_t1] [index/JCwqBx5LQdqRvDfhfRYEBw] deleting index store reason [index no longer part of the metadata]
[2019-02-09T06:28:56,302][DEBUG][o.e.i.IndicesService ] [node_t1] [index/JCwqBx5LQdqRvDfhfRYEBw] processing pending deletes
[2019-02-09T06:28:56,302][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 10
[2019-02-09T06:28:56,302][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [ApplyCommitRequest{term=1, version=10, sourceNode={node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}}]: took [3ms] done applying updated cluster state (version: 10, uuid: 7PIPGSyDS0eBWPtcmSg_vg)
[2019-02-09T06:28:56,302][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=10}]: execute
[2019-02-09T06:28:56,302][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] cluster state updated, version [10], source [Publication{term=1, version=10}]
[2019-02-09T06:28:56,302][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] applying settings from cluster state with version 10
[2019-02-09T06:28:56,302][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] apply cluster state with version 10
[2019-02-09T06:28:56,303][DEBUG][o.e.i.c.IndicesClusterStateService] [node_t0] [[index/JCwqBx5LQdqRvDfhfRYEBw]] cleaning index, no longer part of the metadata
[2019-02-09T06:28:56,303][DEBUG][o.e.i.IndicesService ] [node_t0] [index] closing ... (reason [DELETED])
[2019-02-09T06:28:56,303][DEBUG][o.e.i.IndicesService ] [node_t0] [index/JCwqBx5LQdqRvDfhfRYEBw] closing index service (reason [DELETED][index no longer part of the metadata])
[2019-02-09T06:28:56,303][DEBUG][o.e.i.IndexService ] [node_t0] [index] [0] closing... (reason: [index no longer part of the metadata])
[2019-02-09T06:28:56,303][DEBUG][o.e.i.s.IndexShard ] [node_t0] [index][0] state: [STARTED]->[CLOSED], reason [index no longer part of the metadata]
[2019-02-09T06:28:56,303][DEBUG][o.e.i.e.Engine ] [node_t0] [index][0] close now acquiring writeLock
[2019-02-09T06:28:56,303][DEBUG][o.e.i.e.Engine ] [node_t0] [index][0] close acquired writeLock
[2019-02-09T06:28:56,303][DEBUG][o.e.i.t.Translog ] [node_t0] [index][0] translog closed
[2019-02-09T06:28:56,304][DEBUG][o.e.i.e.Engine ] [node_t0] [index][0] engine closed [api]
[2019-02-09T06:28:56,304][INFO ][o.e.t.s.M.Listener ] [node_t0] [index][0] start check index
[2019-02-09T06:28:56,305][DEBUG][o.e.t.s.M.Listener ] [node_t0] [index][0] check index [success]
NaN% total deletions; 0 documents; 0 deletions
Segments file=segments_2 numSegments=0 id=6yka9aowji93padhpcrx8kgk3 userData={history_uuid=mAvM-UwsSgiIXuSFcETd4A, local_checkpoint=-1, max_seq_no=-1, max_unsafe_auto_id_timestamp=-1, translog_generation=1, translog_uuid=JUSSDZ1TRK2klkEuL3tX9Q}
No problems were detected with this index.
Took 0.000 sec total.
[2019-02-09T06:28:56,305][INFO ][o.e.t.s.M.Listener ] [node_t0] [index][0] end check index
[2019-02-09T06:28:56,308][DEBUG][o.e.i.s.Store ] [node_t0] [index][0] store reference count on close: 0
[2019-02-09T06:28:56,308][DEBUG][o.e.i.IndexService ] [node_t0] [index] [0] closed (reason: [index no longer part of the metadata])
[2019-02-09T06:28:56,308][DEBUG][o.e.i.c.b.BitsetFilterCache] [node_t0] [index] clearing all bitsets because [close]
[2019-02-09T06:28:56,308][DEBUG][o.e.i.c.q.IndexQueryCache] [node_t0] [index] full cache clear, reason [close]
[2019-02-09T06:28:56,308][DEBUG][o.e.i.c.b.BitsetFilterCache] [node_t0] [index] clearing all bitsets because [close]
[2019-02-09T06:28:56,308][DEBUG][o.e.i.IndicesService ] [node_t0] [index/JCwqBx5LQdqRvDfhfRYEBw] closed... (reason [DELETED][index no longer part of the metadata])
[2019-02-09T06:28:56,308][DEBUG][o.e.i.IndicesService ] [node_t0] [index/JCwqBx5LQdqRvDfhfRYEBw] deleting index store reason [index no longer part of the metadata]
[2019-02-09T06:28:56,308][DEBUG][o.e.i.IndicesService ] [node_t0] [index/JCwqBx5LQdqRvDfhfRYEBw] processing pending deletes
[2019-02-09T06:28:56,308][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] set locally applied cluster state to version 10
[2019-02-09T06:28:56,308][DEBUG][o.e.c.s.ClusterApplierService] [node_t0] processing [Publication{term=1, version=10}]: took [6ms] done applying updated cluster state (version: 10, uuid: 7PIPGSyDS0eBWPtcmSg_vg)
[2019-02-09T06:28:56,309][DEBUG][o.e.c.c.C.CoordinatorPublication] [node_t0] publication ended successfully: Publication{term=1, version=10}
[2019-02-09T06:28:56,309][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [delete-index [[index/JCwqBx5LQdqRvDfhfRYEBw]]]: took [12ms] done publishing updated cluster state (version: 10, uuid: 7PIPGSyDS0eBWPtcmSg_vg)
[2019-02-09T06:28:56,309][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [delete_repository [*]]: execute
[2019-02-09T06:28:56,309][DEBUG][o.e.c.s.MasterService ] [node_t0] processing [delete_repository [*]]: took [0s] no change in cluster state
[2019-02-09T06:28:56,309][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] stopping ...
[2019-02-09T06:28:56,310][DEBUG][o.e.c.c.FollowersChecker ] [node_t0] FollowerChecker{discoveryNode={node_t1}{18eWrETQRQm3w1D52tHH-A}{c2zpQgJNRcmHiwIQgGazsg}{127.0.0.1}{127.0.0.1:40829}, failureCountSinceLastSuccess=0, [cluster.fault_detection.follower_check.retry_count]=3} marking node as faulty
[2019-02-09T06:28:56,310][DEBUG][o.e.c.c.Coordinator ] [node_t1] onLeaderFailure: coordinator becoming CANDIDATE in term 1 (was FOLLOWER, lastKnownLeader was [Optional[{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}]])
[2019-02-09T06:28:56,310][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] stopped
[2019-02-09T06:28:56,310][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] closing ...
[2019-02-09T06:28:56,310][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [becoming candidate: onLeaderFailure]: execute
[2019-02-09T06:28:56,310][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] cluster state updated, version [10], source [becoming candidate: onLeaderFailure]
[2019-02-09T06:28:56,310][INFO ][o.e.c.s.ClusterApplierService] [node_t1] master node changed {previous [{node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153}], current []}, term: 1, version: 10, reason: becoming candidate: onLeaderFailure
[2019-02-09T06:28:56,311][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] closed
[2019-02-09T06:28:56,311][DEBUG][o.e.d.FileBasedSeedHostsProvider] [node_t1] seed addresses: [127.0.0.1:44153]
[2019-02-09T06:28:56,311][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] stopping ...
[2019-02-09T06:28:56,311][DEBUG][o.e.c.c.C.CoordinatorPeerFinder] [node_t1] Peer{transportAddress=127.0.0.1:44153, discoveryNode=null, peersRequestInFlight=false} connection failed
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:44153] connect_exception
    at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1298) ~[main/:?]
    at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:99) ~[main/:?]
    at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42) ~[main/:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[?:?]
    at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[?:?]
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[?:?]
    at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[?:?]
    at org.elasticsearch.common.concurrent.CompletableContext.completeExceptionally(CompletableContext.java:57) ~[main/:?]
    at org.elasticsearch.nio.SocketChannelContext.connect(SocketChannelContext.java:119) ~[main/:?]
    at org.elasticsearch.nio.EventHandler.handleConnect(EventHandler.java:97) ~[main/:?]
    at org.elasticsearch.transport.nio.TestingSocketEventHandler.handleConnect(TestingSocketEventHandler.java:43) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.attemptConnect(NioSelector.java:406) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.setUpNewChannels(NioSelector.java:423) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.preSelect(NioSelector.java:260) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:153) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:129) ~[main/:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779) ~[?:?]
    at org.elasticsearch.nio.SocketChannelContext.connect(SocketChannelContext.java:116) ~[main/:?]
    ... 8 more
[2019-02-09T06:28:56,311][WARN ][o.e.c.NodeConnectionsService] [node_t1] failed to connect to node {node_t0}{gm38AL3aTZmKYs3giOXGJw}{-Pd_rhHnTqSrhtmI56tkog}{127.0.0.1}{127.0.0.1:44153} (tried [1] times)
org.elasticsearch.transport.ConnectTransportException: [node_t0][127.0.0.1:44153] connect_exception
    at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:1298) ~[main/:?]
    at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:99) ~[main/:?]
    at org.elasticsearch.common.concurrent.CompletableContext.lambda$addListener$0(CompletableContext.java:42) ~[main/:?]
    at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[?:?]
    at java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883) ~[?:?]
    at java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2251) ~[?:?]
    at org.elasticsearch.common.concurrent.CompletableContext.addListener(CompletableContext.java:45) ~[main/:?]
    at org.elasticsearch.nio.SocketChannelContext.addConnectListener(SocketChannelContext.java:82) ~[main/:?]
    at org.elasticsearch.nio.NioSocketChannel.addConnectListener(NioSocketChannel.java:77) ~[main/:?]
    at org.elasticsearch.transport.nio.MockNioTransport$MockSocketChannel.addConnectListener(MockNioTransport.java:304) ~[main/:?]
    at org.elasticsearch.transport.TcpTransport.initiateConnection(TcpTransport.java:325) ~[main/:?]
    at org.elasticsearch.transport.TcpTransport.openConnection(TcpTransport.java:292) ~[main/:?]
    at org.elasticsearch.transport.ConnectionManager.internalOpenConnection(ConnectionManager.java:223) ~[main/:?]
    at org.elasticsearch.transport.ConnectionManager.connectToNode(ConnectionManager.java:109) ~[main/:?]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:343) ~[main/:?]
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:330) ~[main/:?]
    at org.elasticsearch.cluster.NodeConnectionsService.validateAndConnectIfNeeded(NodeConnectionsService.java:154) [main/:?]
    at org.elasticsearch.cluster.NodeConnectionsService$1.doRun(NodeConnectionsService.java:107) [main/:?]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) [main/:?]
    at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [main/:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:?]
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779) ~[?:?]
    at org.elasticsearch.nio.SocketChannelContext.connect(SocketChannelContext.java:116) ~[main/:?]
    at org.elasticsearch.nio.EventHandler.handleConnect(EventHandler.java:97) ~[main/:?]
    at org.elasticsearch.transport.nio.TestingSocketEventHandler.handleConnect(TestingSocketEventHandler.java:43) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.attemptConnect(NioSelector.java:406) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.setUpNewChannels(NioSelector.java:423) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.preSelect(NioSelector.java:260) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.singleLoop(NioSelector.java:153) ~[main/:?]
    at org.elasticsearch.nio.NioSelector.runLoop(NioSelector.java:129) ~[main/:?]
    ... 1 more
[2019-02-09T06:28:56,311][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] apply cluster state with version 10
[2019-02-09T06:28:56,311][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] set locally applied cluster state to version 10
[2019-02-09T06:28:56,311][DEBUG][o.e.c.s.ClusterApplierService] [node_t1] processing [becoming candidate: onLeaderFailure]: took [1ms] done applying updated cluster state (version: 10, uuid: 7PIPGSyDS0eBWPtcmSg_vg)
[2019-02-09T06:28:56,312][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] stopped
[2019-02-09T06:28:56,312][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] closing ...
[2019-02-09T06:28:56,312][INFO ][o.e.n.Node ] [testDelayedMappingPropagationOnReplica] closed
[2019-02-09T06:28:56,312][INFO ][o.e.c.c.RareClusterStateIT] [testDelayedMappingPropagationOnReplica] [RareClusterStateIT#testDelayedMappingPropagationOnReplica [seed=[7F2D32832D1B9B3B:353F34C7DFCA0D8] #35]]: cleaned up after test
[2019-02-09T06:28:56,312][INFO ][o.e.c.c.RareClusterStateIT] [testDelayedMappingPropagationOnReplica] after test
REPRODUCE WITH: ./gradlew null -Dtests.seed=7F2D32832D1B9B3B -Dtests.class=org.elasticsearch.cluster.coordination.RareClusterStateIT -Dtests.method="testDelayedMappingPropagationOnReplica [seed=[7F2D32832D1B9B3B:353F34C7DFCA0D8] #35]" -Dtests.locale=km-KH -Dtests.timezone=Australia/NSW
java.lang.AssertionError
    at __randomizedtesting.SeedInfo.seed([7F2D32832D1B9B3B:353F34C7DFCA0D8]:0)
    at org.junit.Assert.fail(Assert.java:86)
    at org.junit.Assert.assertTrue(Assert.java:41)
    at org.junit.Assert.assertTrue(Assert.java:52)
    at org.elasticsearch.cluster.coordination.RareClusterStateIT.lambda$executeAndCancelCommittedPublication$0(RareClusterStateIT.java:150)
    at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:858)
    at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:832)
    at org.elasticsearch.cluster.coordination.RareClusterStateIT.executeAndCancelCommittedPublication(RareClusterStateIT.java:150)
    at org.elasticsearch.cluster.coordination.RareClusterStateIT.testDelayedMappingPropagationOnReplica(RareClusterStateIT.java:355)
    at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.base/java.lang.Thread.run(Thread.java:834)
    Suppressed: java.lang.AssertionError
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.elasticsearch.cluster.coordination.RareClusterStateIT.lambda$executeAndCancelCommittedPublication$0(RareClusterStateIT.java:150)
        at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:846)
        ... 38 more
    Suppressed: java.lang.AssertionError
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.elasticsearch.cluster.coordination.RareClusterStateIT.lambda$executeAndCancelCommittedPublication$0(RareClusterStateIT.java:150)
        at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:846)
        ... 38 more
    Suppressed: java.lang.AssertionError
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.elasticsearch.cluster.coordination.RareClusterStateIT.lambda$executeAndCancelCommittedPublication$0(RareClusterStateIT.java:150)
        at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:846)
        ... 38 more
    Suppressed: java.lang.AssertionError
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.elasticsearch.cluster.coordination.RareClusterStateIT.lambda$executeAndCancelCommittedPublication$0(RareClusterStateIT.java:150)
        at org.elasticsearch.test.ESTestCase.assertBusy(ESTestCase.java:846)
        ... 38 more
    [further identical Suppressed: java.lang.AssertionError entries omitted]
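The failure comes out of ESTestCase.assertBusy at RareClusterStateIT.java:150: assertBusy retries its block with backoff until it passes or a timeout elapses (the default is 10 seconds, matching the [10s] the blocked put-mapping publication took above), and on timeout it rethrows the final AssertionError with each earlier attempt attached as a suppressed exception, which is exactly the pattern in the trace above. A minimal sketch of that usage; the condition helper is hypothetical:

// Hypothetical helper inside an ESTestCase subclass.
private void awaitReplicaMapping() throws Exception {
    assertBusy(() -> {
        // Re-evaluated until it passes or the timeout elapses; each failing
        // attempt surfaces later as a "Suppressed:" AssertionError entry.
        assertTrue(replicaHasAppliedMapping()); // hypothetical condition helper
    });
}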