[Automatic import] readme input types templates (elastic#194308)
haetamoudi authored Jan 9, 2025
1 parent 3515a0f commit aff9217
Showing 17 changed files with 398 additions and 0 deletions.

## Requirements

Elastic Agent must be installed. For more information, refer to [these instructions](https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html).

#### Installing and managing an Elastic Agent:

You have a few options for installing and managing an Elastic Agent:

#### Install a Fleet-managed Elastic Agent (recommended):

With this approach, you install Elastic Agent and use Fleet in Kibana to define, configure, and manage your agents in a central location. We recommend using Fleet management because it makes the management and upgrading of your agents considerably easier.

#### Install Elastic Agent in standalone mode (advanced users):

With this approach, you install Elastic Agent and manually configure the agent locally on the system where it’s installed. You are responsible for managing and upgrading the agents. This approach is reserved for advanced users only.

#### Install Elastic Agent in a containerized environment:

You can run Elastic Agent inside a container, either with Fleet Server or standalone. Docker images for all versions of Elastic Agent are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes.
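As a sketch, the Fleet-managed and containerized options above might look like the following. The stack version, Fleet Server URL, and enrollment token are placeholders, not values from this template; nothing is downloaded or run unless you opt in via the environment variables.

```shell
#!/bin/sh
# Hypothetical values -- substitute your own stack version, Fleet Server URL,
# and an enrollment token generated in Kibana under Fleet > Agents.
AGENT_VERSION="8.17.0"
FLEET_URL="https://fleet.example.com:8220"
ENROLLMENT_TOKEN="<your-enrollment-token>"

DOWNLOAD_URL="https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-${AGENT_VERSION}-linux-x86_64.tar.gz"
echo "Would download: ${DOWNLOAD_URL}"

# Fleet-managed install: set RUN_INSTALL=1 to actually download and enroll.
if [ "${RUN_INSTALL:-0}" = "1" ]; then
  curl -L -O "${DOWNLOAD_URL}"
  tar xzvf "elastic-agent-${AGENT_VERSION}-linux-x86_64.tar.gz"
  cd "elastic-agent-${AGENT_VERSION}-linux-x86_64"
  sudo ./elastic-agent install --url="${FLEET_URL}" --enrollment-token="${ENROLLMENT_TOKEN}"
fi

# Containerized alternative: run the official image instead of installing.
if [ "${RUN_DOCKER:-0}" = "1" ]; then
  docker run --rm \
    -e FLEET_URL="${FLEET_URL}" \
    -e FLEET_ENROLLMENT_TOKEN="${ENROLLMENT_TOKEN}" \
    "docker.elastic.co/elastic-agent/elastic-agent:${AGENT_VERSION}"
fi
```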

You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it.
You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware.

The requirements section helps readers to confirm that the integration will work with their systems.
Check the [requirements guidelines](https://www.elastic.co/guide/en/integrations-developer/current/documentation-guidelines.html#idg-docs-guidelines-requirements) for more information.

Point the reader to the [Observability Getting started guide](https://www.elastic.co/guide/en/observability/master/observability-get-started.html) for generic, step-by-step instructions. Include any additional setup instructions beyond what's covered in the guide, such as instructions to update the configuration of a third-party service.
Check the [setup guidelines](https://www.elastic.co/guide/en/integrations-developer/current/documentation-guidelines.html#idg-docs-guidelines-setup) for more information.

### Enabling the integration in Elastic:

#### Create a new integration from a ZIP file (optional)
1. In Kibana, go to **Management** > **Integrations**.
2. Select **Create new integration**.
3. Select **Upload it as a .zip**.
4. Upload the ZIP file.
5. Select **Add to Elastic**.
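For the upload step, the ZIP file is simply an archive of the package directory. A minimal sketch, using a hypothetical package name and a stub manifest rather than a real package:

```shell
#!/bin/sh
# Hypothetical package layout -- replace with your real package directory.
PKG_DIR="my_integration-1.0.0"
mkdir -p "${PKG_DIR}"
printf 'format_version: 3.0.0\nname: my_integration\nversion: 1.0.0\n' > "${PKG_DIR}/manifest.yml"

# Create the ZIP that Kibana's "Upload it as a .zip" flow expects.
# python3 -m zipfile is used so the sketch has no dependency on the zip CLI.
python3 -m zipfile -c "${PKG_DIR}.zip" "${PKG_DIR}"
ls -l "${PKG_DIR}.zip"
```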

### Install the integration
1. In Kibana, go to **Management** > **Integrations**.
2. In the **Search for integrations** search bar, type {{ package_name }}.
3. Click the **{{ package_name }}** integration from the search results.
4. Click the **Add {{ package_name }}** button to add the integration.
5. Add all the required integration configuration parameters.
6. Click **Save and continue** to save the integration.
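The same installation can also be scripted against the Fleet package-install API; a hedged sketch, where the Kibana URL, package coordinates, and credentials are all placeholders and the request only fires when you opt in:

```shell
#!/bin/sh
# Hypothetical Kibana endpoint and package coordinates -- substitute your own.
KIBANA_URL="https://kibana.example.com:5601"
PKG_NAME="my_integration"
PKG_VERSION="1.0.0"
ENDPOINT="${KIBANA_URL}/api/fleet/epm/packages/${PKG_NAME}/${PKG_VERSION}"
echo "Install endpoint: ${ENDPOINT}"

# Requires a reachable Kibana and valid credentials; skipped otherwise.
if [ "${RUN_INSTALL:-0}" = "1" ]; then
  curl -sS -X POST "${ENDPOINT}" \
    -H "kbn-xsrf: true" \
    -u "elastic:${ELASTIC_PASSWORD}"
fi
```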

## Troubleshooting (optional)

- If some fields appear conflicted under the ``logs-*`` or ``metrics-*`` data views, you can resolve the issue by [reindexing](https://www.elastic.co/guide/en/elasticsearch/reference/current/use-a-data-stream.html#reindex-with-a-data-stream) the impacted data stream.

Provide information about special cases and exceptions that aren’t necessary for getting started or won’t be applicable to all users. Check the [troubleshooting guidelines](https://www.elastic.co/guide/en/integrations-developer/current/documentation-guidelines.html#idg-docs-guidelines-troubleshooting) for more information.

## Reference
### Collecting logs from AWS CloudWatch

When collecting logs from CloudWatch is enabled, users can retrieve logs from all log streams in a specific log group. The `filterLogEvents` AWS API is used to list log events from the specified log group. Amazon CloudWatch Logs can be used to store log files from Amazon Elastic Compute Cloud (EC2), AWS CloudTrail, Route 53, and other sources.
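To see what the input retrieves, you can call the same API by hand; a hedged sketch with a hypothetical log group name, which only contacts AWS when the CLI is present and you opt in:

```shell
#!/bin/sh
# Hypothetical log group -- substitute the group this integration reads from.
LOG_GROUP="/aws/my-app/example"
# Events from the last hour, in epoch milliseconds as the API expects.
START_TIME=$(( ($(date +%s) - 3600) * 1000 ))
echo "Filtering ${LOG_GROUP} since ${START_TIME}"

# Requires the AWS CLI and valid credentials; skipped otherwise.
if command -v aws >/dev/null 2>&1 && [ "${RUN_AWS:-0}" = "1" ]; then
  aws logs filter-log-events \
    --log-group-name "${LOG_GROUP}" \
    --start-time "${START_TIME}" \
    --max-items 10
fi
```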

{% include "ssl-tls.md.njk" %}
### Collecting logs from Amazon S3 bucket

When S3 bucket log collection is enabled, users can retrieve logs from S3 objects that are pointed to by S3 notification events read from an SQS queue, or by directly polling the list of S3 objects in an S3 bucket.

Using SQS notifications is preferred: polling the list of S3 objects is expensive in terms of performance and cost, and should be used only when no SQS notification can be attached to the S3 buckets. This input integration also supports S3 notification from SNS to SQS.

The SQS notification method is enabled by setting the `queue_url` configuration value. The S3 bucket list polling method is enabled by setting the `bucket_arn` and `number_of_workers` configuration values. Exactly one of the `queue_url` and `bucket_arn` configuration options must be set.
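As an illustration of the two mutually exclusive modes, the following writes a hedged configuration fragment; the queue URL, bucket ARN, and worker count are placeholders, and the key names follow the Filebeat `aws-s3` input options named above:

```shell
#!/bin/sh
# Write a sketch of the two aws-s3 input modes; exactly one would be used.
cat > aws-s3-modes.yml <<'EOF'
# Mode 1: SQS notification (preferred) -- set queue_url only.
queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

# Mode 2: S3 bucket list polling -- set bucket_arn and number_of_workers instead.
# bucket_arn: "arn:aws:s3:::example-bucket"
# number_of_workers: 5
EOF
ls -l aws-s3-modes.yml
```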

#### To collect data from AWS SQS, follow these steps:
1. If data forwarding to an AWS S3 Bucket hasn't been configured, then first set up an AWS S3 Bucket as mentioned in the above documentation.
2. Follow the steps below for each data stream that has been enabled:
1. Create an SQS queue
- To setup an SQS queue, follow "Step 1: Create an Amazon SQS queue" mentioned in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
- While creating an SQS Queue, please provide the same bucket ARN that has been generated after creating an AWS S3 Bucket.
2. Set up event notification from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings:
- Event type: `All object create events` (`s3:ObjectCreated:*`)
- Destination: SQS Queue
- Prefix (filter): enter the prefix for this data stream, e.g. `alert_logs/`
- Select the SQS queue that has been created for this data stream
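The event-notification step above can also be scripted through the S3 API. A hedged sketch: the bucket name and queue ARN are placeholders, the prefix matches the `alert_logs/` example above, and the AWS call only runs when the CLI is present and you opt in:

```shell
#!/bin/sh
# Hypothetical bucket and queue ARN -- substitute your own.
BUCKET="example-bucket"
cat > notification.json <<'EOF'
{
  "QueueConfigurations": [
    {
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:example-queue",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [{ "Name": "prefix", "Value": "alert_logs/" }]
        }
      }
    }
  ]
}
EOF
# Validate the JSON locally before applying it.
python3 -m json.tool notification.json > /dev/null && echo "notification.json is valid JSON"

# Requires the AWS CLI and valid credentials; skipped otherwise.
if command -v aws >/dev/null 2>&1 && [ "${RUN_AWS:-0}" = "1" ]; then
  aws s3api put-bucket-notification-configuration \
    --bucket "${BUCKET}" \
    --notification-configuration file://notification.json
fi
```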

**Note**:
- A separate SQS queue and S3 bucket notification is required for each enabled data stream.
- Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2).
- Data collection via AWS S3 Bucket and AWS SQS are mutually exclusive in this case.

{% include "ssl-tls.md.njk" %}
### Collecting logs from Azure Blob Storage

#### Create a Storage account container

To create the storage account:

1. Sign in to the [Azure Portal](https://portal.azure.com/) and create your storage account.
2. While configuring your project details, make sure you select the following recommended default settings:
- Hierarchical namespace: disabled
- Minimum TLS version: Version 1.2
- Access tier: Hot
- Enable soft delete for blobs: disabled
- Enable soft delete for containers: disabled

3. When the new storage account is ready, you need to take note of the storage account name and the storage account access keys, as you will use them later to authenticate your Elastic application’s requests to this storage account.
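The portal steps above can also be sketched with the Azure CLI. The resource group and storage account names are hypothetical, hierarchical namespace is disabled by default on creation, and the Azure calls only run when the CLI is present and you opt in:

```shell
#!/bin/sh
# Hypothetical names -- substitute your own resource group and a globally
# unique storage account name.
RESOURCE_GROUP="example-rg"
STORAGE_ACCOUNT="exampleelasticstore"
echo "Creating ${STORAGE_ACCOUNT} in ${RESOURCE_GROUP}"

# Requires the Azure CLI and an authenticated session; skipped otherwise.
if command -v az >/dev/null 2>&1 && [ "${RUN_AZ:-0}" = "1" ]; then
  az storage account create \
    --name "${STORAGE_ACCOUNT}" \
    --resource-group "${RESOURCE_GROUP}" \
    --min-tls-version TLS1_2 \
    --access-tier Hot

  # Retrieve the access keys used to authenticate the Elastic Agent.
  az storage account keys list \
    --account-name "${STORAGE_ACCOUNT}" \
    --resource-group "${RESOURCE_GROUP}"
fi
```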

##### How many Storage account containers?

The Elastic Agent can use one Storage account container for all integrations.

#### Running the integration behind a firewall

When you run the Elastic Agent behind a firewall, you need to allow traffic on port `443` for the Storage Account container to ensure proper communication with the necessary components.

##### Storage Account Container

Port `443` is used for secure communication with the Storage Account container. This port is commonly used for HTTPS traffic. By allowing traffic on port 443, the Elastic Agent can securely access and interact with the Storage Account container, which is essential for storing and retrieving checkpoint data for each event hub partition.
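A quick way to confirm the firewall allows this traffic is to probe the blob endpoint over HTTPS; a hedged sketch with a hypothetical storage account name, where the network request only runs when you opt in:

```shell
#!/bin/sh
# Hypothetical storage account name -- substitute your own.
STORAGE_ACCOUNT="exampleelasticstore"
BLOB_ENDPOINT="https://${STORAGE_ACCOUNT}.blob.core.windows.net"
echo "Checking HTTPS (port 443) reachability of ${BLOB_ENDPOINT}"

# A failed request here usually means the firewall blocks outbound port 443.
if [ "${RUN_CHECK:-0}" = "1" ]; then
  curl -sS --max-time 10 -o /dev/null -w '%{http_code}\n' "${BLOB_ENDPOINT}" \
    || echo "Endpoint not reachable on port 443"
fi
```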

{% include "ssl-tls.md.njk" %}
