Merge branch 'main' into can-match-doesnt-fail-on-frozen
elasticmachine authored Oct 18, 2024
2 parents a1d3bfd + 3bb20e3 commit d07dc7c
Showing 55 changed files with 4,017 additions and 177 deletions.
5 changes: 5 additions & 0 deletions docs/changelog/114899.yaml
@@ -0,0 +1,5 @@
pr: 114899
summary: "ES|QL: Fix stats by constant expression"
area: ES|QL
type: bug
issues: []
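
For context, a minimal sketch of the query shape this fix concerns: an ES|QL `STATS` grouped by a constant expression, sent through the ES|QL query API. The index name and grouping key are illustrative assumptions, not taken from the PR:

----
POST /_query
{
  "query": "FROM my-index | STATS cnt = COUNT(*) BY grp = 1"
}
----
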
5 changes: 5 additions & 0 deletions docs/changelog/115031.yaml
@@ -0,0 +1,5 @@
pr: 115031
summary: Bool query early termination should also consider `must_not` clauses
area: Search
type: enhancement
issues: []
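
For reference, a minimal sketch of the query shape this enhancement targets: a `bool` query whose only clause is a `must_not`. Index and field names are illustrative:

----
GET /my-index/_search
{
  "query": {
    "bool": {
      "must_not": [
        { "term": { "status": "deleted" } }
      ]
    }
  }
}
----
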
11 changes: 7 additions & 4 deletions docs/reference/cluster/allocation-explain.asciidoc
@@ -159,6 +159,7 @@ node.
<5> The decider which led to the `no` decision for the node.
<6> An explanation as to why the decider returned a `no` decision, with a helpful hint pointing to the setting that led to the decision. In this example, a newly created index has <<indices-get-settings,an index setting>> that requires that it only be allocated to a node named `nonexistent_node`, which does not exist, so the index is unable to allocate.

[[maximum-number-of-retries-exceeded]]
====== Maximum number of retries exceeded

The following response contains an allocation explanation for an unassigned
@@ -195,17 +196,19 @@ primary shard that has reached the maximum number of allocation retry attempts.
{
"decider": "max_retry",
"decision" : "NO",
"explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-07-30T21:04:12.166Z], failed_attempts[5], failed_nodes[[mEKjwwzLT1yJVb8UxT6anw]], delayed=false, details[failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException], allocation_status[deciders_no]]]"
"explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [POST /_cluster/reroute?retry_failed] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-07-30T21:04:12.166Z], failed_attempts[5], failed_nodes[[mEKjwwzLT1yJVb8UxT6anw]], delayed=false, details[failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException], allocation_status[deciders_no]]]"
}
]
}
]
}
----
// NOTCONSOLE

If the decider message indicates a transient allocation issue, use
the <<cluster-reroute,cluster reroute>> API to retry allocation.
When Elasticsearch is unable to allocate a shard, it will attempt to retry allocation up to
the maximum number of retries allowed. After this, Elasticsearch will stop attempting to
allocate the shard in order to prevent infinite retries which may impact cluster
performance. Run the <<cluster-reroute,cluster reroute>> API to retry allocation, which
will allocate the shard if the issue preventing allocation has been resolved.
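
For example, retrying allocation once the underlying issue is resolved takes a single request with no body (a minimal sketch; `retry_failed=true` resets the shard's failed-allocation counter before reattempting):

----
POST /_cluster/reroute?retry_failed=true
----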

[[no-valid-shard-copy]]
====== No valid shard copy
9 changes: 5 additions & 4 deletions docs/reference/snapshot-restore/register-repository.asciidoc
@@ -248,10 +248,11 @@ that you have an archive copy of its contents that you can use to recreate the
repository in its current state at a later date.

You must ensure that {es} does not write to the repository while you are taking
the backup of its contents. You can do this by unregistering it, or registering
it with `readonly: true`, on all your clusters. If {es} writes any data to the
repository during the backup then the contents of the backup may not be
consistent and it may not be possible to recover any data from it in future.
the backup of its contents. If {es} writes any data to the repository during
the backup then the contents of the backup may not be consistent and it may not
be possible to recover any data from it in future. Prevent writes to the
repository by unregistering the repository from the cluster which has write
access to it.
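
For example, unregistering the repository takes a single request (a sketch; `my_repository` is a placeholder name, and unregistering removes only the cluster's reference to the repository, not its contents):

----
DELETE /_snapshot/my_repository
----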

Alternatively, if your repository supports it, you may take an atomic snapshot
of the underlying filesystem and then take a backup of this filesystem
@@ -18,17 +18,19 @@
import org.elasticsearch.action.datastreams.GetDataStreamAction;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateUpdateTask;
import org.elasticsearch.cluster.metadata.ComposableIndexTemplate;
import org.elasticsearch.cluster.metadata.DataStream;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.disruption.IntermittentLongGCDisruption;
import org.elasticsearch.test.disruption.SingleNodeDisruption;
import org.elasticsearch.xcontent.XContentType;

import java.util.Collection;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutionException;

import static org.hamcrest.Matchers.equalTo;
@@ -43,25 +45,38 @@ protected Collection<Class<? extends Plugin>> nodePlugins() {
}

public void testRolloverIsExecutedOnce() throws ExecutionException, InterruptedException {
String masterNode = internalCluster().startMasterOnlyNode();
internalCluster().startMasterOnlyNode();
internalCluster().startDataOnlyNodes(3);
ensureStableCluster(4);

String dataStreamName = "my-data-stream";
createDataStream(dataStreamName);

// Mark the data stream for lazy rollover
new RolloverRequestBuilder(client()).setRolloverTarget(dataStreamName).lazy(true).execute().get();
safeGet(new RolloverRequestBuilder(client()).setRolloverTarget(dataStreamName).lazy(true).execute());

// Verify that the data stream is marked for rollover and currently has one backing index
DataStream dataStream = getDataStream(dataStreamName);
assertThat(dataStream.rolloverOnWrite(), equalTo(true));
assertThat(dataStream.getBackingIndices().getIndices().size(), equalTo(1));

// Introduce a disruption to the master node that should delay the rollover execution
SingleNodeDisruption masterNodeDisruption = new IntermittentLongGCDisruption(random(), masterNode, 100, 200, 30000, 60000);
internalCluster().setDisruptionScheme(masterNodeDisruption);
masterNodeDisruption.startDisrupting();
final var barrier = new CyclicBarrier(2);
internalCluster().getCurrentMasterNodeInstance(ClusterService.class)
.submitUnbatchedStateUpdateTask("block", new ClusterStateUpdateTask() {
@Override
public ClusterState execute(ClusterState currentState) {
safeAwait(barrier); // first rendezvous: tell the test thread the blocking task is running
safeAwait(barrier); // second rendezvous: park the master service thread until the test releases it
return currentState;
}

@Override
public void onFailure(Exception e) {
fail(e);
}
});
safeAwait(barrier); // rendezvous with the task so we know the master service is now blocked

// Start indexing operations
int docs = randomIntBetween(5, 10);
@@ -84,10 +99,10 @@ }
}

// Unblock the master so that all pending tasks will complete
masterNodeDisruption.stopDisrupting();
safeAwait(barrier);

// Wait for all the indexing requests to be processed successfully
countDownLatch.await();
safeAwait(countDownLatch);

// Verify that the rollover has happened once
dataStream = getDataStream(dataStreamName);
@@ -96,10 +111,12 @@ }
}

private DataStream getDataStream(String dataStreamName) {
return client().execute(
GetDataStreamAction.INSTANCE,
new GetDataStreamAction.Request(TEST_REQUEST_TIMEOUT, new String[] { dataStreamName })
).actionGet().getDataStreams().get(0).getDataStream();
return safeGet(
client().execute(
GetDataStreamAction.INSTANCE,
new GetDataStreamAction.Request(TEST_REQUEST_TIMEOUT, new String[] { dataStreamName })
)
).getDataStreams().get(0).getDataStream();
}

private void createDataStream(String dataStreamName) throws InterruptedException, ExecutionException {
@@ -111,19 +128,19 @@ private void createDataStream(String dataStreamName) throws InterruptedException
.dataStreamTemplate(new ComposableIndexTemplate.DataStreamTemplate(false, false))
.build()
);
final AcknowledgedResponse putComposableTemplateResponse = client().execute(
TransportPutComposableIndexTemplateAction.TYPE,
putComposableTemplateRequest
).actionGet();
final AcknowledgedResponse putComposableTemplateResponse = safeGet(
client().execute(TransportPutComposableIndexTemplateAction.TYPE, putComposableTemplateRequest)
);
assertThat(putComposableTemplateResponse.isAcknowledged(), is(true));

final CreateDataStreamAction.Request createDataStreamRequest = new CreateDataStreamAction.Request(
TEST_REQUEST_TIMEOUT,
TEST_REQUEST_TIMEOUT,
dataStreamName
);
final AcknowledgedResponse createDataStreamResponse = client().execute(CreateDataStreamAction.INSTANCE, createDataStreamRequest)
.get();
final AcknowledgedResponse createDataStreamResponse = safeGet(
client().execute(CreateDataStreamAction.INSTANCE, createDataStreamRequest)
);
assertThat(createDataStreamResponse.isAcknowledged(), is(true));
}
}
@@ -56,14 +56,19 @@ public abstract class DotPrefixValidator<RequestType> implements MappedActionFil
*
* .elastic-connectors-* is used by enterprise search
* .ml-* is used by ML
* .slo-observability-* is used by Observability
*/
private static Set<String> IGNORED_INDEX_NAMES = Set.of(
".elastic-connectors-v1",
".elastic-connectors-sync-jobs-v1",
".ml-state",
".ml-anomalies-unrelated"
);
private static Set<Pattern> IGNORED_INDEX_PATTERNS = Set.of(Pattern.compile("\\.ml-state-\\d+"));
private static Set<Pattern> IGNORED_INDEX_PATTERNS = Set.of(
Pattern.compile("\\.ml-state-\\d+"),
Pattern.compile("\\.slo-observability\\.sli-v\\d+.*"),
Pattern.compile("\\.slo-observability\\.summary-v\\d+.*")
);

DeprecationLogger deprecationLogger = DeprecationLogger.getLogger(DotPrefixValidator.class);

@@ -99,10 +104,11 @@ void validateIndices(@Nullable Set<String> indices) {
if (Strings.hasLength(index)) {
char c = getFirstChar(index);
if (c == '.') {
if (IGNORED_INDEX_NAMES.contains(index)) {
final String strippedName = stripDateMath(index);
if (IGNORED_INDEX_NAMES.contains(strippedName)) {
return;
}
if (IGNORED_INDEX_PATTERNS.stream().anyMatch(p -> p.matcher(index).matches())) {
if (IGNORED_INDEX_PATTERNS.stream().anyMatch(p -> p.matcher(strippedName).matches())) {
return;
}
deprecationLogger.warn(
@@ -132,7 +138,18 @@ private static char getFirstChar(String index) {
return c;
}

private boolean isInternalRequest() {
private static String stripDateMath(String index) {
char c = index.charAt(0);
if (c == '<') {
assert index.charAt(index.length() - 1) == '>'
: "expected index name with date math to start with < and end with >, how did this pass request validation? " + index;
return index.substring(1, index.length() - 1);
} else {
return index;
}
}

boolean isInternalRequest() {
final String actionOrigin = threadContext.getTransient(ThreadContext.ACTION_ORIGIN_TRANSIENT_NAME);
final boolean isSystemContext = threadContext.isSystemContext();
final boolean isInternalOrigin = Optional.ofNullable(actionOrigin).map(Strings::hasText).orElse(false);
@@ -0,0 +1,116 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the "Elastic License
* 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
* Public License v 1"; you may not use this file except in compliance with, at
* your election, the "Elastic License 2.0", the "GNU Affero General Public
* License v3.0 only", or the "Server Side Public License, v 1".
*/

package org.elasticsearch.validation;

import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.threadpool.ThreadPool;
import org.junit.BeforeClass;

import java.util.HashSet;
import java.util.Set;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class DotPrefixValidatorTests extends ESTestCase {
private final OperatorValidator<?> opV = new OperatorValidator<>();
private final NonOperatorValidator<?> nonOpV = new NonOperatorValidator<>();
private static final Set<Setting<?>> settings;

private static ClusterService clusterService;
private static ClusterSettings clusterSettings;

static {
Set<Setting<?>> cSettings = new HashSet<>(ClusterSettings.BUILT_IN_CLUSTER_SETTINGS);
cSettings.add(DotPrefixValidator.VALIDATE_DOT_PREFIXES);
settings = cSettings;
}

@BeforeClass
public static void beforeClass() {
clusterService = mock(ClusterService.class);
clusterSettings = new ClusterSettings(Settings.EMPTY, Sets.newHashSet(DotPrefixValidator.VALIDATE_DOT_PREFIXES));
when(clusterService.getClusterSettings()).thenReturn(clusterSettings);
when(clusterService.getSettings()).thenReturn(Settings.EMPTY);
when(clusterService.threadPool()).thenReturn(mock(ThreadPool.class));
}

public void testValidation() {

nonOpV.validateIndices(Set.of("regular"));
opV.validateIndices(Set.of("regular"));
assertFails(Set.of(".regular"));
opV.validateIndices(Set.of(".regular"));
assertFails(Set.of("first", ".second"));
assertFails(Set.of("<.regular-{MM-yy-dd}>"));

// Test ignored names
nonOpV.validateIndices(Set.of(".elastic-connectors-v1"));
nonOpV.validateIndices(Set.of(".elastic-connectors-sync-jobs-v1"));
nonOpV.validateIndices(Set.of(".ml-state"));
nonOpV.validateIndices(Set.of(".ml-anomalies-unrelated"));

// Test ignored patterns
nonOpV.validateIndices(Set.of(".ml-state-21309"));
nonOpV.validateIndices(Set.of(">.ml-state-21309>"));
nonOpV.validateIndices(Set.of(".slo-observability.sli-v2"));
nonOpV.validateIndices(Set.of(".slo-observability.sli-v2.3"));
nonOpV.validateIndices(Set.of(".slo-observability.sli-v2.3-2024-01-01"));
nonOpV.validateIndices(Set.of("<.slo-observability.sli-v3.3.{2024-10-16||/M{yyyy-MM-dd|UTC}}>"));
nonOpV.validateIndices(Set.of(".slo-observability.summary-v2"));
nonOpV.validateIndices(Set.of(".slo-observability.summary-v2.3"));
nonOpV.validateIndices(Set.of(".slo-observability.summary-v2.3-2024-01-01"));
nonOpV.validateIndices(Set.of("<.slo-observability.summary-v3.3.{2024-10-16||/M{yyyy-MM-dd|UTC}}>"));
}

private void assertFails(Set<String> indices) {
nonOpV.validateIndices(indices);
assertWarnings(
"Index ["
+ indices.stream().filter(i -> i.startsWith(".") || i.startsWith("<.")).toList().getFirst()
+ "] name begins with a dot (.), which is deprecated, and will not be allowed in a future Elasticsearch version."
);
}

private class NonOperatorValidator<R> extends DotPrefixValidator<R> {

private NonOperatorValidator() {
super(new ThreadContext(Settings.EMPTY), clusterService);
}

@Override
protected Set<String> getIndicesFromRequest(Object request) {
return Set.of();
}

@Override
public String actionName() {
return "";
}

@Override
boolean isInternalRequest() {
return false;
}
}

private class OperatorValidator<R> extends NonOperatorValidator<R> {
@Override
boolean isInternalRequest() {
return true;
}
}
}
@@ -364,7 +364,8 @@ public BlockLoader blockLoader(BlockLoaderContext blContext) {
SourceValueFetcher fetcher = SourceValueFetcher.toString(blContext.sourcePaths(name()));
// MatchOnlyText never has norms, so we have to use the field names field
BlockSourceReader.LeafIteratorLookup lookup = BlockSourceReader.lookupFromFieldNames(blContext.fieldNames(), name());
return new BlockSourceReader.BytesRefsBlockLoader(fetcher, lookup);
var sourceMode = blContext.indexSettings().getIndexMappingSourceMode();
return new BlockSourceReader.BytesRefsBlockLoader(fetcher, lookup, sourceMode);
}

@Override
@@ -319,7 +319,8 @@ public BlockLoader blockLoader(BlockLoaderContext blContext) {
BlockSourceReader.LeafIteratorLookup lookup = isStored() || isIndexed()
? BlockSourceReader.lookupFromFieldNames(blContext.fieldNames(), name())
: BlockSourceReader.lookupMatchingAll();
return new BlockSourceReader.DoublesBlockLoader(valueFetcher, lookup);
var sourceMode = blContext.indexSettings().getIndexMappingSourceMode();
return new BlockSourceReader.DoublesBlockLoader(valueFetcher, lookup, sourceMode);
}

@Override