diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/description_readme.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/description_readme.njk
index 9fe32dccb2b2b..f0f393902df26 100644
--- a/x-pack/platform/plugins/shared/integration_assistant/server/templates/description_readme.njk
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/description_readme.njk
@@ -15,6 +15,27 @@ Check the [overview guidelines](https://www.elastic.co/guide/en/integrations-dev

 ## Requirements

+Elastic Agent must be installed. For more information, refer to [these instructions](https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html).
+
+#### Installing and managing an Elastic Agent:
+
+You have a few options for installing and managing an Elastic Agent:
+
+#### Install a Fleet-managed Elastic Agent (recommended):
+
+With this approach, you install Elastic Agent and use Fleet in Kibana to define, configure, and manage your agents in a central location. We recommend using Fleet management because it makes the management and upgrading of your agents considerably easier.
+
+#### Install Elastic Agent in standalone mode (advanced users):
+
+With this approach, you install Elastic Agent and manually configure the agent locally on the system where it’s installed. You are responsible for managing and upgrading the agents. This approach is reserved for advanced users only.
+
+#### Install Elastic Agent in a containerized environment:
+
+You can run Elastic Agent inside a container, either with Fleet Server or standalone. Docker images for all versions of Elastic Agent are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes.
+
+You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it.
+You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware.
+
 The requirements section helps readers to confirm that the integration will work with their systems.

 Check the [requirements guidelines](https://www.elastic.co/guide/en/integrations-developer/current/documentation-guidelines.html#idg-docs-guidelines-requirements) for more information.
@@ -23,8 +44,27 @@ Check the [requirements guidelines](https://www.elastic.co/guide/en/integrations
 Point the reader to the [Observability Getting started guide](https://www.elastic.co/guide/en/observability/master/observability-get-started.html) for generic, step-by-step instructions. Include any additional setup instructions beyond what’s included in the guide, which may include instructions to update the configuration of a third-party service.

 Check the [setup guidelines](https://www.elastic.co/guide/en/integrations-developer/current/documentation-guidelines.html#idg-docs-guidelines-setup) for more information.
+### Enabling the integration in Elastic:
+
+#### Create a new integration from a ZIP file (optional)
+1. In Kibana, go to **Management** > **Integrations**.
+2. Select **Create new integration**.
+3. Select **Upload it as a .zip**.
+4. Upload the ZIP file.
+5. Select **Add to Elastic**.
+
+### Install the integration
+1. In Kibana, go to **Management** > **Integrations**.
+2. In the **Search for integrations** search bar, type {{ package_name }}.
+3. Click the **{{ package_name }}** integration from the search results.
+4. Click the **Add {{ package_name }}** button to add the integration.
+5. Add all the required integration configuration parameters.
+6. Click **Save and continue** to save the integration.
+
 ## Troubleshooting (optional)

+- If some fields appear conflicted under the ``logs-*`` or ``metrics-*`` data views, this issue can be resolved by [reindexing](https://www.elastic.co/guide/en/elasticsearch/reference/current/use-a-data-stream.html#reindex-with-a-data-stream) the impacted data stream.
+
 Provide information about special cases and exceptions that aren’t necessary for getting started or won’t be applicable to all users. Check the [troubleshooting guidelines](https://www.elastic.co/guide/en/integrations-developer/current/documentation-guidelines.html#idg-docs-guidelines-troubleshooting) for more information.

 ## Reference
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/aws-cloudwatch.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/aws-cloudwatch.md.njk
new file mode 100644
index 0000000000000..cc70120b9a23b
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/aws-cloudwatch.md.njk
@@ -0,0 +1,5 @@
+### Collecting logs from AWS CloudWatch
+
+When CloudWatch log collection is enabled, users can retrieve logs from all log streams in a specific log group. The `filterLogEvents` AWS API is used to list log events from the specified log group. Amazon CloudWatch Logs can be used to store log files from Amazon Elastic Compute Cloud (EC2), AWS CloudTrail, Route 53, and other sources.
+
+{% include "ssl-tls.md.njk" %}
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/aws-s3.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/aws-s3.md.njk
new file mode 100644
index 0000000000000..097c6fba28d45
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/aws-s3.md.njk
@@ -0,0 +1,26 @@
+### Collecting logs from Amazon S3 bucket
+
+When S3 bucket log collection is enabled, users can retrieve logs from S3 objects that are pointed to by S3 notification events read from an SQS queue, or by directly polling the list of S3 objects in an S3 bucket.
+
+The use of SQS notification is preferred; polling the list of S3 objects is expensive in terms of performance and cost, and should be used only when no SQS notification can be attached to the S3 buckets. This input integration also supports S3 notification from SNS to SQS.
+
+The SQS notification method is enabled by setting the `queue_url` configuration value. The S3 bucket list polling method is enabled by setting the `bucket_arn` and `number_of_workers` configuration values. Exactly one of the `queue_url` and `bucket_arn` configuration options must be set.
+
+#### To collect data from AWS SQS, follow the steps below:
+1. If data forwarding to an AWS S3 bucket hasn't been configured, first set up an AWS S3 bucket as mentioned in the documentation above.
+2. Follow the steps below for each data stream that has been enabled:
+   1. Create an SQS queue
+      - To set up an SQS queue, follow "Step 1: Create an Amazon SQS queue" in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
+      - While creating an SQS queue, please provide the same bucket ARN that was generated after creating the AWS S3 bucket.
+   2. Set up event notifications from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings:
+      - Event type: `All object create events` (`s3:ObjectCreated:*`)
+      - Destination: SQS Queue
+      - Prefix (filter): enter the prefix for this data stream, e.g. `alert_logs/`
+      - Select the SQS queue that has been created for this data stream
+
+**Note**:
+  - A separate SQS queue and S3 bucket notification are required for each enabled data stream.
+  - Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2).
+  - Data collection via AWS S3 bucket and AWS SQS are mutually exclusive in this case.
+
+{% include "ssl-tls.md.njk" %}
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/azure-blob-storage.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/azure-blob-storage.md.njk
new file mode 100644
index 0000000000000..b08b21aae84af
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/azure-blob-storage.md.njk
@@ -0,0 +1,29 @@
+### Collecting logs from Azure Blob Storage
+
+#### Create a Storage account container
+
+To create the storage account:
+
+1. Sign in to the [Azure Portal](https://portal.azure.com/) and create your storage account.
+2. While configuring your project details, make sure you select the following recommended default settings:
+   - Hierarchical namespace: disabled
+   - Minimum TLS version: Version 1.2
+   - Access tier: Hot
+   - Enable soft delete for blobs: disabled
+   - Enable soft delete for containers: disabled
+
+3. When the new storage account is ready, take note of the storage account name and the storage account access keys, as you will use them later to authenticate your Elastic application’s requests to this storage account.
+
+##### How many Storage account containers?
+
+The Elastic Agent can use one Storage account container for all integrations.
+
+#### Running the integration behind a firewall
+
+When you run the Elastic Agent behind a firewall, you need to allow traffic on port `443` for the Storage Account container to ensure proper communication with the necessary components.
+
+##### Storage Account Container
+
+Port `443` is used for secure communication with the Storage Account container. This port is commonly used for HTTPS traffic. By allowing traffic on port 443, the Elastic Agent can securely access and interact with the Storage Account container, which is essential for storing and retrieving checkpoint data for each event hub partition.
+
+{% include "ssl-tls.md.njk" %}
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/azure-eventhub.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/azure-eventhub.md.njk
new file mode 100644
index 0000000000000..333542791d194
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/azure-eventhub.md.njk
@@ -0,0 +1,178 @@
+### Collecting logs from Azure Event Hub
+
+#### Create an Event Hub
+
+The event hub receives the logs exported from the Azure service and makes them available to the Elastic Agent to pick up.
+ +Here's the high-level overview of the required steps: + +* Create a resource group, or select an existing one. +* Create an Event Hubs namespace. +* Create an Event Hub. + +For a detailed step-by-step guide, check the quickstart [Create an event hub using Azure portal](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-create). +Take note of the event hub **Name**, which you will use later when specifying an **eventhub** in the integration settings. + +##### Event Hubs Namespace vs Event Hub + +You should use the event hub name (not the Event Hubs namespace name) as a value for the **eventhub** option in the integration settings. +If you are new to Event Hubs, think of the Event Hubs namespace as the cluster and the event hub as the topic. You will typically have one cluster and multiple topics. +If you are familiar with Kafka, here's a conceptual mapping between the two: + +| Kafka Concept | Event Hub Concept | +|----------------|-------------------| +| Cluster | Namespace | +| Topic | An event hub | +| Partition | Partition | +| Consumer Group | Consumer Group | +| Offset | Offset | + + +##### How many partitions? + +The number of partitions is essential to balance the event hub cost and performance. +Here are a few examples with one or multiple agents, with recommendations on picking the correct number of partitions for your use case. + +###### Single Agent + +With a single Agent deployment, increasing the number of partitions on the event hub is the primary driver in scale-up performances. The Agent creates one worker for each partition. + +```text +┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ + +│ │ │ │ + +│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ + │ partition 0 │◀───────────│ worker │ +│ └─────────────────┘ │ │ └─────────────────┘ │ + ┌─────────────────┐ ┌─────────────────┐ +│ │ partition 1 │◀──┼────┼───│ worker │ │ + └─────────────────┘ └─────────────────┘ +│ ┌─────────────────┐ │ │ ┌─────────────────┐ │ + │ partition 2 │◀────────── │ worker │ +│ └─────────────────┘ │ │ └─────────────────┘ │ + ┌─────────────────┐ ┌─────────────────┐ +│ │ partition 3 │◀──┼────┼───│ worker │ │ + └─────────────────┘ └─────────────────┘ +│ │ │ │ + +│ │ │ │ + +└ Event Hub ─ ─ ─ ─ ─ ─ ─ ┘ └ Agent ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ +``` + + +###### Two or more Agents + +With more than one Agent, setting the number of partitions is crucial. The agents share the existing partitions to scale out performance and improve availability. +The number of partitions must be at least the number of agents. + +```text +┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ + +│ │ │ ┌─────────────────┐ │ + ┌──────│ worker │ +│ ┌─────────────────┐ │ │ │ └─────────────────┘ │ + │ partition 0 │◀────┘ ┌─────────────────┐ +│ └─────────────────┘ │ ┌──┼───│ worker │ │ + ┌─────────────────┐ │ └─────────────────┘ +│ │ partition 1 │◀──┼─┘ │ │ + └─────────────────┘ ─Agent─ ─ ─ ─ ─ ─ ─ ─ ─ ─ +│ ┌─────────────────┐ │ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ + │ partition 2 │◀────┐ +│ └─────────────────┘ │ │ │ ┌─────────────────┐ │ + ┌─────────────────┐ └─────│ worker │ +│ │ partition 3 │◀──┼─┐ │ └─────────────────┘ │ + └─────────────────┘ │ ┌─────────────────┐ +│ │ └──┼──│ worker │ │ + └─────────────────┘ +│ │ │ │ + +└ Event Hub ─ ─ ─ ─ ─ ─ ─ ┘ └ Agent ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ +``` + + +###### Recommendations + +Create an event hub with at least two partitions. Two partitions allow low-volume deployment to support high availability with two agents. 
Consider creating four partitions or more to handle medium-volume deployments with availability. +To learn more about event hub partitions, read an in-depth guide from Microsoft at https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-create. +To learn more about event hub partition from the performance perspective, check the scalability-focused document at https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-scalability#partitions. + +##### How many Event Hubs? + +Elastic strongly recommends creating one event hub for each Azure service you collect data from. +For example, if you plan to collect Microsoft Entra ID logs and Activity logs, create two event hubs: one for Microsoft Entra ID and one for Activity logs. + +Here's an high-level diagram of the solution: + +```text + ┌───────────────┐ ┌──────────────┐ ┌───────────────┐ + │ MS Entra ID │ │ Diagnostic │ │ adlogs │ + │ <> │──▶│ Settings │──▶│ <> │──┐ + └───────────────┘ └──────────────┘ └───────────────┘ │ ┌───────────┐ + │ │ Elastic │ + ├──▶│ Agent │ + ┌───────────────┐ ┌──────────────┐ ┌───────────────┐ │ └───────────┘ + │ Azure Monitor │ │ Diagnostic │ │ activitylogs │ │ + │ <> ├──▶│ Settings │──▶│ <> │──┘ + └───────────────┘ └──────────────┘ └───────────────┘ +``` + +Having one event hub for each Azure service is beneficial in terms of performance and easy of troubleshooting. +For high-volume deployments, we recommend one event hub for each data stream: + +```text + ┌──────────────┐ ┌─────────────────────┐ + │ Diagnostic │ │ signin (adlogs) │ + ┌─▶│ Settings │──▶│ <> │──┐ + │ └──────────────┘ └─────────────────────┘ │ + │ │ +┌─────────────┐ │ ┌──────────────┐ ┌─────────────────────┐ │ ┌───────────┐ +│ MS Entra ID │ │ │ Diagnostic │ │ audit (adlogs) │ │ │ Elastic │ +│ <> │─┼─▶│ Settings │──▶│ <> │──┼─▶│ Agent │ +└─────────────┘ │ └──────────────┘ └─────────────────────┘ │ └───────────┘ + │ │ + │ ┌──────────────┐ ┌─────────────────────┐ │ + │ │ Diagnostic │ │provisioning (adlogs)│ │ + └─▶│ Settings │──▶│ <> │──┘ + └──────────────┘ └─────────────────────┘ +``` + +##### Consumer Group + +Like all other event hub clients, Elastic Agent needs a consumer group name to access the event hub. +A Consumer Group is a view (state, position, or offset) of an entire event hub. Consumer groups enable multiple agents to each have a separate view of the event stream, and to read the logs independently at their own pace and with their own offsets. +Consumer groups allow multiple Elastic Agents assigned to the same agent policy to work together; this enables horizontal scaling of the logs processing when required. +In most cases, you can use the default consumer group named `$Default`. If `$Default` is already used by other applications, you can create a consumer group dedicated to the Azure Logs integration. + +##### Connection string + +The Elastic Agent requires a connection string to access the event hub and fetch the exported logs. The connection string contains details about the event hub used and the credentials required to access it. +To get the connection string for your Event Hubs namespace: + +1. Visit the **Event Hubs namespace** you created in a previous step. +1. Select **Settings** > **Shared access policies**. + +Create a new Shared Access Policy (SAS): + +1. Select **Add** to open the creation panel. +1. Add a **Policy name** (for example, "ElasticAgent"). +1. Select the **Listen** claim. +1. Select **Create**. + +When the SAS Policy is ready, select it to display the information panel. 
+Take note of the **Connection string–primary key**, which you will use later when specifying a **connection_string** in the integration settings.
+
+### Running the integration behind a firewall
+
+When you run the Elastic Agent behind a firewall, you need to allow traffic on ports `5671` and `5672` for the event hub to ensure proper communication with the necessary components.
+
+##### Event Hub
+
+Ports `5671` and `5672` are commonly used for secure communication with the event hub. These ports are used to receive events. By allowing traffic on these ports, the Elastic Agent can establish a secure connection with the event hub.
+For more information, check the following documents:
+
+- [What ports do I need to open on the firewall?](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-faq#what-ports-do-i-need-to-open-on-the-firewall) from the [Event Hubs frequently asked questions](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-faq#what-ports-do-i-need-to-open-on-the-firewall).
+- [AMQP outbound port requirements](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-amqp-protocol-guide#amqp-outbound-port-requirements).
+
+{% include "ssl-tls.md.njk" %}
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/cel.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/cel.md.njk
new file mode 100644
index 0000000000000..1e015dd1dc3c7
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/cel.md.njk
@@ -0,0 +1,10 @@
+### Collecting logs from Common Expression Language (CEL)
+
+The full documentation for the input is currently available [here](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-cel.html).
+CEL configurations are split into two main parts: Program and State, with an associated named Resource.
+
+The program is a CEL program that will obtain data from the API either by an HTTP request or a local file-system read operation, and then transform the data into a set of events and cursor states. The CEL environment that the program is run in is provided with an HTTP client that has been set up with the user-specified authentication, proxy, rate limit, and other options, and with a set of user-defined regular expressions that are available to use during execution.
+
+The state is passed into the program on start of execution and may contain configuration details not included in the standard set of options. The CEL program's return value will be the state used during the next cycle of CEL execution, and will include the events to be published to Elasticsearch and then removed from the state. State values in general will not persist over restarts, but the state may contain a cursor that is persisted. The cursor part of the state is used when there is a need to keep long-lived state that will persist over restarts.
+
+The named resource is a string value that the CEL program can use to identify the location of the API and will usually be either a URL or a file path. It is included in the state as `state.url`. CEL programs are not required to make use of this configuration, but its presence is required in the configuration.
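+
+As an illustrative sketch only (the API URL below is a placeholder, not part of this guide), a minimal configuration for the Filebeat CEL input shows how the program and the named resource fit together:
+
+```yml
+filebeat.inputs:
+  - type: cel
+    # How often the program is re-evaluated.
+    interval: 1m
+    # The named resource, exposed to the program as state.url.
+    resource.url: https://placeholder.example.com/api/v1/logs
+    # The program fetches the resource and maps the response body to events.
+    program: |
+      bytes(get(state.url).Body).as(body, {
+        "events": [body.decode_json()]
+      })
+```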
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/filestream.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/filestream.md.njk
new file mode 100644
index 0000000000000..bfbbae376703c
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/filestream.md.njk
@@ -0,0 +1,3 @@
+### Collecting logs from Filestream
+
+Identify the log location on the system. Determine the directory or file path where logs are stored. Then, add this path to the integration configuration.
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/gcp-pubsub.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/gcp-pubsub.md.njk
new file mode 100644
index 0000000000000..29290eef4284b
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/gcp-pubsub.md.njk
@@ -0,0 +1,17 @@
+### Collecting logs from GCP Pub/Sub
+
+#### To set up GCP Pub/Sub, follow the steps below:
+
+- [Create a topic for Pub/Sub](https://cloud.google.com/pubsub/docs/create-topic#create_a_topic).
+- [Create a subscription for the topic](https://cloud.google.com/pubsub/docs/create-subscription#create_subscriptions).
+
+#### To collect data from GCP Pub/Sub, follow the steps below:
+
+- [Configure the export of findings to GCP Pub/Sub](https://cloud.google.com/security-command-center/docs/how-to-notifications).
+- [Configure the export of assets to GCP Pub/Sub](https://cloud.google.com/asset-inventory/docs/monitoring-asset-changes).
+- [Configure the export of audit logs to GCP Pub/Sub](https://cloud.google.com/logging/docs/export/configure_export_v2?_ga=2.110932226.-66737431.1679995682#overview).
+
+**NOTE**:
+- Create a unique Pub/Sub topic per data stream.
+
+{% include "ssl-tls.md.njk" %}
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/gcs.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/gcs.md.njk
new file mode 100644
index 0000000000000..5cc525c3ae921
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/gcs.md.njk
@@ -0,0 +1,38 @@
+### Collecting logs from GCS bucket
+
+#### To collect data from a GCS bucket, follow the steps below:
+
+- Assuming you already have a GCS bucket set up, configure it with SentinelOne Cloud Funnel.
+- Enable Cloud Funnel streaming as described here: `[Your Login URL]/docs/en/how-to-enable-cloud-funnel-streaming.html#how-to-enable-cloud-funnel-streaming`.
+- The default value of the field `File Selectors` is `- regex: "s1/cloud_funnel"`. It is commented out by default and resides in the advanced settings section.
+- Configure the integration with your GCS project ID and JSON Credentials key.
+
+### The GCS credentials key file:
+This is a one-time downloadable JSON key file that you get after adding a key to a GCP service account.
+If you are just starting out creating your GCS bucket, do the following:
+
+1. Make sure you have a service account available; if not, follow the steps below:
+   - Navigate to 'APIs & Services' > 'Credentials'
+   - Click on 'Create credentials' > 'Service account'
+2. Once the service account is created, you can navigate to the 'Keys' section and attach/generate your service account key.
+3. Make sure to download the JSON key file once prompted.
+4. Use this JSON key file either inline (as a JSON string object) or by specifying the path to the file on the host machine where the agent is running (see the sketch below).
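+
+GCS collection is typically handled by the Filebeat `gcs` input under the hood. The following sketch is illustrative only (the project ID, bucket name, and file path are placeholders, and the option names should be verified against the Filebeat `gcs` input documentation); it shows the two ways the key can be supplied:
+
+```yml
+filebeat.inputs:
+  - type: gcs
+    project_id: dummy-project
+    # Option 1: path to the downloaded JSON key file on the host running the agent.
+    auth.credentials_file.path: /etc/gcs/dummy-credentials.json
+    # Option 2: supply the JSON key inline instead (use only one of the two options).
+    # auth.credentials_json.account_key: '<contents of the JSON key file>'
+    buckets:
+      - name: dummy-bucket
+        max_workers: 3
+        poll: true
+        poll_interval: 15s
+```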
+
+A sample JSON Credentials file looks as follows:
+```json
+{
+  "type": "dummy_service_account",
+  "project_id": "dummy-project",
+  "private_key_id": "dummy-private-key-id",
+  "private_key": "-----BEGIN PRIVATE KEY-----\nDummyPrivateKey\n-----END PRIVATE KEY-----\n",
+  "client_email": "dummy-service-account@example.com",
+  "client_id": "12345678901234567890",
+  "auth_uri": "https://dummy-auth-uri.com",
+  "token_uri": "https://dummy-token-uri.com",
+  "auth_provider_x509_cert_url": "https://dummy-auth-provider-cert-url.com",
+  "client_x509_cert_url": "https://dummy-client-cert-url.com",
+  "universe_domain": "dummy-universe-domain.com"
+}
+```
+
+{% include "ssl-tls.md.njk" %}
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/http_endpoint.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/http_endpoint.md.njk
new file mode 100644
index 0000000000000..ceb50acbb2804
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/http_endpoint.md.njk
@@ -0,0 +1,5 @@
+### Collecting logs from HTTP endpoint
+
+Specify the address and port that will be used to initialize a listening HTTP server that collects incoming HTTP POST requests containing a JSON body. The body must be either an object or an array of objects. Any other data types will result in an HTTP 400 (Bad Request) response. For arrays, one document is created for each object in the array.
+
+{% include "ssl-tls.md.njk" %}
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/journald.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/journald.md.njk
new file mode 100644
index 0000000000000..b5498d4d5fd6e
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/journald.md.njk
@@ -0,0 +1,3 @@
+### Collecting logs from journald
+
+The journald input is available on Linux systems with systemd installed.
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/kafka.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/kafka.md.njk
new file mode 100644
index 0000000000000..35bae538cf985
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/kafka.md.njk
@@ -0,0 +1,5 @@
+### Collecting logs from Kafka
+
+This integration collects logs and metrics from [Kafka](https://kafka.apache.org) servers.
+
+{% include "ssl-tls.md.njk" %}
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/ssl-tls.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/ssl-tls.md.njk
new file mode 100644
index 0000000000000..dd3530c771405
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/ssl-tls.md.njk
@@ -0,0 +1,12 @@
+### TLS/SSL Configuration (Optional)
+To enhance security, configure the server with TLS/SSL settings. This ensures secure communication between clients and the server.
+Below is an example of how to configure these settings:
+```yml
+ssl.certificate: "/etc/pki/server/cert.pem"
+ssl.key: "/etc/pki/server/cert.key"
+```
+
+- `ssl.key`: The private key used by the server to decrypt data encrypted with its corresponding public key, as well as to sign data to authenticate its identity.
+- `ssl.certificate`: The server's certificate, used to verify its identity. It contains the public key and is validated against a Certificate Authority (CA) to establish an encrypted connection with clients.
+
+In the input settings, include any relevant SSL Configuration and Secret Header values depending on the specific requirements of your endpoint. You may also configure additional options such as certificate, keys, supported_protocols, and verification_mode. Refer to the [Elastic SSL Documentation](https://www.elastic.co/guide/en/beats/filebeat/current/configuration-ssl.html#ssl-server-config) for further details.
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/tcp.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/tcp.md.njk
new file mode 100644
index 0000000000000..5fd8b2cef3651
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/tcp.md.njk
@@ -0,0 +1,5 @@
+### Collecting logs from TCP
+
+Specify the address and port that will be used to initialize a listening TCP socket that collects any TCP traffic received and sends each line as a document to Elasticsearch.
+
+{% include "ssl-tls.md.njk" %}
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/udp.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/udp.md.njk
new file mode 100644
index 0000000000000..036d72db9828b
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/setup/udp.md.njk
@@ -0,0 +1,3 @@
+### Collecting logs from UDP
+
+Specify the address and port that will be used to initialize a listening UDP socket that collects any UDP traffic received and sends each line as a document to Elasticsearch.
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/troubleshooting/gcp.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/troubleshooting/gcp.md.njk
new file mode 100644
index 0000000000000..22c90a7f32df5
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/troubleshooting/gcp.md.njk
@@ -0,0 +1,9 @@
+### Troubleshooting GCP
+
+If you don't see metrics showing up, check the Agent logs to see if there are errors.
+
+Common error types:
+
+- Period is lower than 60 seconds
+- Missing roles in the Service Account
+- Misconfigured settings, like "Project Id"
\ No newline at end of file
diff --git a/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/troubleshooting/http_endpoint.md.njk b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/troubleshooting/http_endpoint.md.njk
new file mode 100644
index 0000000000000..b83a272028096
--- /dev/null
+++ b/x-pack/platform/plugins/shared/integration_assistant/server/templates/readme/troubleshooting/http_endpoint.md.njk
@@ -0,0 +1,10 @@
+### Troubleshooting HTTP endpoint
+
+If you encounter an error while ingesting data, it might be due to the data being collected over a long time span. Generating a response in such cases may take longer and might cause a request timeout if the `HTTP Client Timeout` parameter is set to a small duration. To avoid this error, it is recommended to adjust the `HTTP Client Timeout` and `Interval` parameters based on the duration of data collection.
+```
+{
+  "error": {
+    "message": "failed eval: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
+  }
+}
+```
\ No newline at end of file