Merge branch 'master' of github.com:elastic/kibana into detection-rules-for-case
jonathan-buttner committed Feb 10, 2021
2 parents 8a553c2 + 061cb50 commit f78ba23
Showing 125 changed files with 3,333 additions and 1,047 deletions.
5 changes: 2 additions & 3 deletions .github/CODEOWNERS
@@ -223,9 +223,8 @@
/x-pack/plugins/telemetry_collection_xpack/ @elastic/kibana-core
/.telemetryrc.json @elastic/kibana-core
/x-pack/.telemetryrc.json @elastic/kibana-core
src/plugins/telemetry/schema/legacy_oss_plugins.json @elastic/kibana-core
src/plugins/telemetry/schema/oss_plugins.json @elastic/kibana-core
x-pack/plugins/telemetry_collection_xpack/schema/xpack_plugins.json @elastic/kibana-core
src/plugins/telemetry/schema/ @elastic/kibana-core @elastic/kibana-telemetry @elastic/infra-telemetry
x-pack/plugins/telemetry_collection_xpack/schema/ @elastic/kibana-core @elastic/kibana-telemetry @elastic/infra-telemetry

# Kibana Localization
/src/dev/i18n/ @elastic/kibana-localization @elastic/kibana-core
48 changes: 26 additions & 22 deletions rfcs/text/0013_saved_object_migrations.md
@@ -248,45 +248,49 @@ Note:
6. Use the reindexed legacy `.kibana_pre6.5.0_001` as the source for the rest of the migration algorithm.
3. If `.kibana` and `.kibana_7.10.0` both exist and are pointing to the same index, this version's migration has already been completed.
1. Because the same version can have plugins enabled at any point in time,
perform the mappings update in step (7) and migrate outdated documents
with step (8).
2. Skip to step (10) to start serving traffic.
migrate outdated documents with step (9) and perform the mappings update in step (10).
2. Skip to step (12) to start serving traffic.
4. Fail the migration if:
1. `.kibana` is pointing to an index that belongs to a later version of Kibana, e.g. `.kibana_7.12.0_001`
2. (Only in 8.x) The source index contains documents that belong to an unknown Saved Object type (from a disabled plugin). Log an error explaining that the plugin that created these documents needs to be enabled again or that these objects should be deleted. See section (4.2.1.4).
5. Mark the source index as read-only and wait for all in-flight operations to drain (requires https://github.com/elastic/elasticsearch/pull/58094). This prevents any further writes from outdated nodes. Assuming this API is similar to the existing `/<index>/_close` API, we expect to receive `"acknowledged" : true` and `"shards_acknowledged" : true`. If all shards don’t acknowledge within the timeout, retry the operation until it succeeds.
6. Clone the source index into a new target index which has writes enabled. All nodes on the same version will use the same fixed index name e.g. `.kibana_7.10.0_001`. The `001` postfix isn't used by Kibana, but allows for re-indexing an index should this be required by an Elasticsearch upgrade. E.g. re-index `.kibana_7.10.0_001` into `.kibana_7.10.0_002` and point the `.kibana_7.10.0` alias to `.kibana_7.10.0_002`.
1. `POST /.kibana_n/_clone/.kibana_7.10.0_001?wait_for_active_shards=all {"settings": {"index.blocks.write": false}}`. Ignore errors if the clone already exists.
2. Wait for the cloning to complete `GET /_cluster/health/.kibana_7.10.0_001?wait_for_status=green&timeout=60s` If cloning doesn’t complete within the 60s timeout, log a warning for visibility and poll again.
7. Update the mappings of the target index
5. Set a write block on the source index. This prevents any further writes from outdated nodes.
6. Create a new temporary index `.kibana_7.10.0_reindex_temp` with `dynamic: false` on the top-level mappings so that any kind of document can be written to the index. This allows us to write untransformed documents to the index, even if they contain fields that have been removed from the latest mappings defined by the plugin. Define minimal mappings for the `migrationVersion` and `type` fields so that we're still able to search for outdated documents that need to be transformed.
1. Ignore errors if the target index already exists.
7. Reindex the source index into the new temporary index.
1. Use `op_type=create`, `conflicts=proceed` and `wait_for_completion=false` so that multiple instances can perform the reindex in parallel, but only one write per document will succeed.
2. Wait for the reindex task to complete. If reindexing doesn’t complete within the 60s timeout, log a warning for visibility and poll again.
8. Clone the temporary index into the target index `.kibana_7.10.0_001`. Since any further writes will only happen against the cloned target index, this prevents a lost delete from occurring where one instance finishes the migration and deletes a document and another instance's reindex operation re-creates the deleted document.
1. Set a write block on the temporary index
2. Clone the temporary index into the target index while specifying that the target index should have writes enabled.
3. If the clone operation fails because the target index already exists, ignore the error and wait for the target index to become green before proceeding.
4. (The `001` postfix in the target index name isn't used by Kibana, but allows for re-indexing an index should this be required by an Elasticsearch upgrade. E.g. re-index `.kibana_7.10.0_001` into `.kibana_7.10.0_002` and point the `.kibana_7.10.0` alias to `.kibana_7.10.0_002`.)
9. Transform documents by reading batches of outdated documents from the target index then transforming and updating them with optimistic concurrency control.
1. Ignore any version conflict errors.
2. If a document transform throws an exception, add the document to a failure list and continue trying to transform all other documents. If any failures occurred, log the complete list of documents that failed to transform. Fail the migration.
10. Update the mappings of the target index
1. Retrieve the existing mappings including the `migrationMappingPropertyHashes` metadata.
2. Update the mappings with `PUT /.kibana_7.10.0_001/_mapping`. The API deeply merges any updates so this won't remove the mappings of any plugins that were enabled in a previous version but are now disabled.
2. Update the mappings with `PUT /.kibana_7.10.0_001/_mapping`. The API deeply merges any updates so this won't remove the mappings of any plugins that are disabled on this instance but have been enabled on another instance that also migrated this index.
3. Ensure that fields are correctly indexed using the target index's latest mappings `POST /.kibana_7.10.0_001/_update_by_query?conflicts=proceed`. In the future we could optimize this query by only targeting documents:
1. That belong to a known saved object type.
2. Which don't have outdated migrationVersion numbers since these will be transformed anyway.
3. That belong to a type whose mappings were changed by comparing the `migrationMappingPropertyHashes`. (The metadata, unlike the mappings, isn't commutative, so there is a small chance that the metadata hashes do not accurately reflect the latest mappings; however, this will just result in a less efficient query).
8. Transform documents by reading batches of outdated documents from the target index then transforming and updating them with optimistic concurrency control.
1. Ignore any version conflict errors.
2. If a document transform throws an exception, add the document to a failure list and continue trying to transform all other documents. If any failures occurred, log the complete list of documents that failed to transform. Fail the migration.
9. Mark the migration as complete. This is done as a single atomic
11. Mark the migration as complete. This is done as a single atomic
operation (requires https://github.com/elastic/elasticsearch/pull/58100)
to guarantees when multiple versions of Kibana are performing the
to guarantee that when multiple versions of Kibana are performing the
migration in parallel, only one version will win. E.g. if 7.11 and 7.12
are started in parallel and migrate from a 7.9 index, either 7.11 or 7.12
should succeed and accept writes, but not both.
3. Checks that `.kibana` alias is still pointing to the source index
4. Points the `.kibana_7.10.0` and `.kibana` aliases to the target index.
5. If this fails with a "required alias [.kibana] does not exist" error fetch `.kibana` again:
1. Check that `.kibana` alias is still pointing to the source index
2. Point the `.kibana_7.10.0` and `.kibana` aliases to the target index.
3. Remove the temporary index `.kibana_7.10.0_reindex_temp`
4. If this fails with a "required alias [.kibana] does not exist" error or "index_not_found_exception" for the temporary index, fetch `.kibana` again:
1. If `.kibana` is _not_ pointing to our target index, fail the migration.
2. If `.kibana` is pointing to our target index the migration has succeeded and we can proceed to step (10).
10. Start serving traffic. All saved object reads/writes happen through the
2. If `.kibana` is pointing to our target index the migration has succeeded and we can proceed to step (12).
12. Start serving traffic. All saved object reads/writes happen through the
version-specific alias `.kibana_7.10.0`.
Together with the limitations, this algorithm ensures that migrations are
idempotent. If two nodes are started simultaneously, both of them will start
transforming documents in that version's target index, but because migrations
are idempotent, it doesn’t matter which node’s writes win.
#### Known weaknesses:
(Also present in our existing migration algorithm since v7.4)
When the task manager index gets reindexed a reindex script is applied.
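The algorithm captured in the RFC diff above replaces the earlier single clone step with a write-blocked source, a `dynamic: false` temporary index, a parallel-safe reindex, a clone into the final target, and an atomic alias swap. The sketch below restates those steps as calls to the 7.x `@elastic/elasticsearch` client. It is a simplified illustration, not Kibana's actual migrator: the index names, the placeholder mappings, the fixed 60s waits, and the omission of the document-transform and retry logic are all assumptions made for brevity.

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Index names follow the RFC's examples; in Kibana they are derived at runtime.
const SOURCE = '.kibana_n'; // whatever index the `.kibana` alias currently points to
const TEMP = '.kibana_7.10.0_reindex_temp';
const TARGET = '.kibana_7.10.0_001';

async function migrate(): Promise<void> {
  // (5) Block writes on the source so outdated nodes can no longer write to it.
  await client.indices.addBlock({ index: SOURCE, block: 'write' });

  // (6) The temporary index accepts any document shape; only the fields needed
  // to find outdated documents are mapped. Ignore 400 if it already exists.
  await client.indices.create(
    {
      index: TEMP,
      body: {
        mappings: {
          dynamic: false,
          properties: {
            type: { type: 'keyword' },
            migrationVersion: { type: 'object', dynamic: true },
          },
        },
      },
    },
    { ignore: [400] }
  );

  // (7) Reindex source -> temp. op_type=create plus conflicts=proceed lets several
  // Kibana instances reindex in parallel, with exactly one winner per document.
  const { body: reindexResponse } = await client.reindex({
    wait_for_completion: false,
    body: {
      conflicts: 'proceed',
      source: { index: SOURCE },
      dest: { index: TEMP, op_type: 'create' },
    },
  });
  await client.tasks.get({
    task_id: String(reindexResponse.task),
    wait_for_completion: true,
    timeout: '60s', // the RFC polls again on timeout instead of failing
  });

  // (8) Freeze the temp index and clone it into the write-enabled target.
  await client.indices.addBlock({ index: TEMP, block: 'write' });
  await client.indices.clone(
    {
      index: TEMP,
      target: TARGET,
      wait_for_active_shards: 'all',
      body: { settings: { 'index.blocks.write': false } },
    },
    { ignore: [400] } // ignore "already exists" from a parallel instance
  );

  // (9) Transform outdated documents with optimistic concurrency control (elided).

  // (10) Deep-merge the latest mappings and re-index any fields they introduce.
  await client.indices.putMapping({
    index: TARGET,
    body: { properties: { /* latest saved object mappings go here */ } },
  });
  await client.updateByQuery({ index: TARGET, conflicts: 'proceed' });

  // (11) A single atomic alias update decides the winner if several versions race.
  await client.indices.updateAliases({
    body: {
      actions: [
        { remove: { index: SOURCE, alias: '.kibana' } },
        { add: { index: TARGET, alias: '.kibana' } },
        { add: { index: TARGET, alias: '.kibana_7.10.0' } },
        { remove_index: { index: TEMP } },
      ],
    },
  });
}
```

Because every step either tolerates "already exists" responses or writes with `op_type=create`, two instances running this sequence concurrently converge on the same target index, which is what makes the migration idempotent.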
4 changes: 1 addition & 3 deletions x-pack/plugins/actions/server/action_type_registry.test.ts
@@ -26,9 +26,7 @@ beforeEach(() => {
actionTypeRegistryParams = {
licensing: licensingMock.createSetup(),
taskManager: mockTaskManager,
taskRunnerFactory: new TaskRunnerFactory(
new ActionExecutor({ isESOUsingEphemeralEncryptionKey: false })
),
taskRunnerFactory: new TaskRunnerFactory(new ActionExecutor({ isESOCanEncrypt: true })),
actionsConfigUtils: mockedActionsConfig,
licenseState: mockedLicenseState,
preconfiguredActions: [
8 changes: 2 additions & 6 deletions x-pack/plugins/actions/server/actions_client.test.ts
@@ -59,9 +59,7 @@ beforeEach(() => {
actionTypeRegistryParams = {
licensing: licensingMock.createSetup(),
taskManager: mockTaskManager,
taskRunnerFactory: new TaskRunnerFactory(
new ActionExecutor({ isESOUsingEphemeralEncryptionKey: false })
),
taskRunnerFactory: new TaskRunnerFactory(new ActionExecutor({ isESOCanEncrypt: true })),
actionsConfigUtils: actionsConfigMock.create(),
licenseState: mockedLicenseState,
preconfiguredActions: [],
@@ -411,9 +409,7 @@ describe('create()', () => {
const localActionTypeRegistryParams = {
licensing: licensingMock.createSetup(),
taskManager: mockTaskManager,
taskRunnerFactory: new TaskRunnerFactory(
new ActionExecutor({ isESOUsingEphemeralEncryptionKey: false })
),
taskRunnerFactory: new TaskRunnerFactory(new ActionExecutor({ isESOCanEncrypt: true })),
actionsConfigUtils: localConfigUtils,
licenseState: licenseStateMock.create(),
preconfiguredActions: [],
@@ -33,9 +33,7 @@ export function createActionTypeRegistry(): {
const actionTypeRegistry = new ActionTypeRegistry({
taskManager: taskManagerMock.createSetup(),
licensing: licensingMock.createSetup(),
taskRunnerFactory: new TaskRunnerFactory(
new ActionExecutor({ isESOUsingEphemeralEncryptionKey: false })
),
taskRunnerFactory: new TaskRunnerFactory(new ActionExecutor({ isESOCanEncrypt: true })),
actionsConfigUtils: actionsConfigMock.create(),
licenseState: licenseStateMock.create(),
preconfiguredActions: [],
14 changes: 7 additions & 7 deletions x-pack/plugins/actions/server/create_execute_function.test.ts
@@ -28,7 +28,7 @@ describe('execute()', () => {
const executeFn = createExecutionEnqueuerFunction({
taskManager: mockTaskManager,
actionTypeRegistry,
isESOUsingEphemeralEncryptionKey: false,
isESOCanEncrypt: true,
preconfiguredActions: [],
});
savedObjectsClient.get.mockResolvedValueOnce({
@@ -87,7 +87,7 @@ describe('execute()', () => {
const executeFn = createExecutionEnqueuerFunction({
taskManager: mockTaskManager,
actionTypeRegistry: actionTypeRegistryMock.create(),
isESOUsingEphemeralEncryptionKey: false,
isESOCanEncrypt: true,
preconfiguredActions: [
{
id: '123',
@@ -158,10 +158,10 @@ describe('execute()', () => {
);
});

test('throws when passing isESOUsingEphemeralEncryptionKey with true as a value', async () => {
test('throws when passing isESOCanEncrypt with false as a value', async () => {
const executeFn = createExecutionEnqueuerFunction({
taskManager: mockTaskManager,
isESOUsingEphemeralEncryptionKey: true,
isESOCanEncrypt: false,
actionTypeRegistry: actionTypeRegistryMock.create(),
preconfiguredActions: [],
});
@@ -173,15 +173,15 @@ describe('execute()', () => {
apiKey: null,
})
).rejects.toThrowErrorMatchingInlineSnapshot(
`"Unable to execute action because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."`
`"Unable to execute action because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."`
);
});

test('should ensure action type is enabled', async () => {
const mockedActionTypeRegistry = actionTypeRegistryMock.create();
const executeFn = createExecutionEnqueuerFunction({
taskManager: mockTaskManager,
isESOUsingEphemeralEncryptionKey: false,
isESOCanEncrypt: true,
actionTypeRegistry: mockedActionTypeRegistry,
preconfiguredActions: [],
});
@@ -211,7 +211,7 @@ describe('execute()', () => {
const mockedActionTypeRegistry = actionTypeRegistryMock.create();
const executeFn = createExecutionEnqueuerFunction({
taskManager: mockTaskManager,
isESOUsingEphemeralEncryptionKey: false,
isESOCanEncrypt: true,
actionTypeRegistry: mockedActionTypeRegistry,
preconfiguredActions: [
{
8 changes: 4 additions & 4 deletions x-pack/plugins/actions/server/create_execute_function.ts
@@ -14,7 +14,7 @@ import { isSavedObjectExecutionSource } from './lib';

interface CreateExecuteFunctionOptions {
taskManager: TaskManagerStartContract;
isESOUsingEphemeralEncryptionKey: boolean;
isESOCanEncrypt: boolean;
actionTypeRegistry: ActionTypeRegistryContract;
preconfiguredActions: PreConfiguredAction[];
}
@@ -33,16 +33,16 @@ export type ExecutionEnqueuer = (
export function createExecutionEnqueuerFunction({
taskManager,
actionTypeRegistry,
isESOUsingEphemeralEncryptionKey,
isESOCanEncrypt,
preconfiguredActions,
}: CreateExecuteFunctionOptions) {
return async function execute(
unsecuredSavedObjectsClient: SavedObjectsClientContract,
{ id, params, spaceId, source, apiKey }: ExecuteOptions
) {
if (isESOUsingEphemeralEncryptionKey === true) {
if (!isESOCanEncrypt) {
throw new Error(
`Unable to execute action because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.`
`Unable to execute action because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.`
);
}

8 changes: 4 additions & 4 deletions x-pack/plugins/actions/server/lib/action_executor.test.ts
@@ -17,7 +17,7 @@ import { ActionType } from '../types';
import { actionsMock, actionsClientMock } from '../mocks';
import { pick } from 'lodash';

const actionExecutor = new ActionExecutor({ isESOUsingEphemeralEncryptionKey: false });
const actionExecutor = new ActionExecutor({ isESOCanEncrypt: true });
const services = actionsMock.createServices();

const actionsClient = actionsClientMock.create();
@@ -310,8 +310,8 @@ test('should not throws an error if actionType is preconfigured', async () => {
});
});

test('throws an error when passing isESOUsingEphemeralEncryptionKey with value of true', async () => {
const customActionExecutor = new ActionExecutor({ isESOUsingEphemeralEncryptionKey: true });
test('throws an error when passing isESOCanEncrypt with value of false', async () => {
const customActionExecutor = new ActionExecutor({ isESOCanEncrypt: false });
customActionExecutor.initialize({
logger: loggingSystemMock.create().get(),
spaces: spacesMock,
@@ -325,7 +325,7 @@ test('throws an error when passing isESOUsingEphemeralEncryptionKey with value o
await expect(
customActionExecutor.execute(executeParams)
).rejects.toThrowErrorMatchingInlineSnapshot(
`"Unable to execute action because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."`
`"Unable to execute action because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."`
);
});

10 changes: 5 additions & 5 deletions x-pack/plugins/actions/server/lib/action_executor.ts
@@ -48,10 +48,10 @@ export type ActionExecutorContract = PublicMethodsOf<ActionExecutor>;
export class ActionExecutor {
private isInitialized = false;
private actionExecutorContext?: ActionExecutorContext;
private readonly isESOUsingEphemeralEncryptionKey: boolean;
private readonly isESOCanEncrypt: boolean;

constructor({ isESOUsingEphemeralEncryptionKey }: { isESOUsingEphemeralEncryptionKey: boolean }) {
this.isESOUsingEphemeralEncryptionKey = isESOUsingEphemeralEncryptionKey;
constructor({ isESOCanEncrypt }: { isESOCanEncrypt: boolean }) {
this.isESOCanEncrypt = isESOCanEncrypt;
}

public initialize(actionExecutorContext: ActionExecutorContext) {
@@ -72,9 +72,9 @@ export class ActionExecutor {
throw new Error('ActionExecutor not initialized');
}

if (this.isESOUsingEphemeralEncryptionKey === true) {
if (!this.isESOCanEncrypt) {
throw new Error(
`Unable to execute action because the Encrypted Saved Objects plugin uses an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.`
`Unable to execute action because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.`
);
}

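Across the actions plugin, this commit replaces the `isESOUsingEphemeralEncryptionKey` flag with `isESOCanEncrypt`, inverting its polarity: execution is now blocked only when the Encrypted Saved Objects plugin cannot encrypt. Below is a minimal, hypothetical wiring sketch, not the actual Kibana plugin code; it assumes the Encrypted Saved Objects setup contract exposes a `canEncrypt` boolean that is true only when a persistent `xpack.encryptedSavedObjects.encryptionKey` is configured, and the import path is illustrative.

```ts
// Hypothetical wiring inside the actions plugin setup() — illustrative only.
// The './lib' import path and the `canEncrypt` property are assumptions.
import { ActionExecutor, TaskRunnerFactory } from './lib';

interface EncryptedSavedObjectsSetup {
  canEncrypt: boolean; // true only when a persistent encryption key is configured
}

export function setupActionExecution(encryptedSavedObjects: EncryptedSavedObjectsSetup) {
  // The new flag reads positively: "can the ESO plugin encrypt?"
  const isESOCanEncrypt = encryptedSavedObjects.canEncrypt;

  const actionExecutor = new ActionExecutor({ isESOCanEncrypt });
  const taskRunnerFactory = new TaskRunnerFactory(actionExecutor);

  // When isESOCanEncrypt is false, ActionExecutor.execute() and the execution
  // enqueuer throw the "missing encryption key" error shown in the diffs above.
  return { actionExecutor, taskRunnerFactory };
}
```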
8 changes: 2 additions & 6 deletions x-pack/plugins/actions/server/lib/task_runner_factory.test.ts
@@ -84,18 +84,14 @@ beforeEach(() => {
});

test(`throws an error if factory isn't initialized`, () => {
const factory = new TaskRunnerFactory(
new ActionExecutor({ isESOUsingEphemeralEncryptionKey: false })
);
const factory = new TaskRunnerFactory(new ActionExecutor({ isESOCanEncrypt: true }));
expect(() =>
factory.create({ taskInstance: mockedTaskInstance })
).toThrowErrorMatchingInlineSnapshot(`"TaskRunnerFactory not initialized"`);
});

test(`throws an error if factory is already initialized`, () => {
const factory = new TaskRunnerFactory(
new ActionExecutor({ isESOUsingEphemeralEncryptionKey: false })
);
const factory = new TaskRunnerFactory(new ActionExecutor({ isESOCanEncrypt: true }));
factory.initialize(taskRunnerFactoryInitializerParams);
expect(() =>
factory.initialize(taskRunnerFactoryInitializerParams)