docs/sources/get-started/labels/structured-metadata.md (+2 -2)
@@ -21,7 +21,7 @@ Structured metadata can also be used to query commonly needed metadata from log
You should only use structured metadata in the following situations:
- - If you are ingesting data in OpenTelemetry format, using the Grafana Agent or an OpenTelemetry Collector. Structured metadata was designed to support native ingestion of OpenTelemetry data.
+ - If you are ingesting data in OpenTelemetry format, using Grafana Alloy or an OpenTelemetry Collector. Structured metadata was designed to support native ingestion of OpenTelemetry data.
- If you have high cardinality metadata that should not be used as a label and does not exist in the log line. Some examples might include `process_id` or `thread_id` or Kubernetes pod names.
It is an antipattern to extract information that already exists in your log lines and put it into structured metadata.
@@ -31,7 +31,7 @@ It is an antipattern to extract information that already exists in your log line
You have the option to attach structured metadata to log lines in the push payload along with each log line and the timestamp.
For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/api/#ingest-logs).
- Alternatively, you can use the Grafana Agent or Promtail to extract and attach structured metadata to your log lines.
+ Alternatively, you can use Grafana Alloy or Promtail to extract and attach structured metadata to your log lines.
See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/structured_metadata/) for more information.
With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/).
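
For a concrete sense of the Promtail route mentioned above, the sketch below shows a pipeline that parses a JSON log line and attaches one field as structured metadata. The `trace_id` field name, file path, and labels are illustrative assumptions rather than values taken from the documentation being changed; refer to the structured metadata stage docs linked above for the authoritative options.

```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.log   # files to tail (placeholder path)
    pipeline_stages:
      # Parse the JSON log line and extract the assumed trace_id field.
      - json:
          expressions:
            trace_id: trace_id
      # Attach the extracted value as structured metadata instead of an index label.
      - structured_metadata:
          trace_id:
```

Keeping `trace_id` out of the index labels avoids the high-cardinality problem described earlier while still making the value queryable.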

docs/sources/get-started/overview.md (+1 -1)
@@ -22,7 +22,7 @@ Log data is then compressed and stored in chunks in an object store such as Amaz
A typical Loki-based logging stack consists of 3 components:
- - **Agent** - An agent or client, for example Promtail, which is distributed with Loki, or the Grafana Agent. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.
+ - **Agent** - An agent or client, for example Grafana Alloy, or Promtail, which is distributed with Loki. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.
- **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations, for more information see [deployment modes]({{< relref "../get-started/deployment-modes" >}}).
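
To make the agent role in this list more concrete, here is a minimal Promtail configuration sketch; the hostname, ports, and label values are placeholders rather than anything prescribed by the docs above. It tails local files, attaches labels to form streams, and pushes those streams to Loki's HTTP push API.

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml            # where Promtail remembers read offsets

clients:
  - url: http://loki:3100/loki/api/v1/push # Loki push endpoint (placeholder host)

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs                     # becomes part of the stream's label set
          __path__: /var/log/*.log         # files to tail
```

A roughly equivalent Grafana Alloy pipeline would use components such as `loki.source.file` and `loki.write`, but the flow (scrape, label, push) is the same.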

docs/sources/release-notes/v3.0.md (+3 -1)
@@ -20,7 +20,7 @@ Key features in Loki 3.0.0 include the following:
- **Query acceleration with Bloom filters** (experimental): This is designed to speed up filter queries, with best results for queries that are looking for a specific text string like an error message or UUID. For more information, refer to [Query acceleration with Blooms](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/query-acceleration-blooms/).
- - **Native OpenTelemetry Support**: A simplified ingestion pipeline (Loki Exporter no longer needed) and a more intuitive query experience for OTel logs. For more information, refer to the [OTEL documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/otel/).
+ - **Native OpenTelemetry Support**: A simplified ingestion pipeline (Loki Exporter no longer needed) and a more intuitive query experience for OTel logs. For more information, refer to the [OTel documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/otel/).
- **Helm charts**: A major upgrade to the Loki helm chart introduces support for `Distributed` mode (also known as [microservices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#microservices-mode) mode), includes memcached by default, and includes several updates to configurations to improve Loki operations.
@@ -46,6 +46,8 @@ One of the focuses of Loki 3.0 was cleaning up unused code and old features that
To learn more about breaking changes in this release, refer to the [Upgrade guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/).
+ The path from 2.9 to 3.0 includes several breaking changes. For important upgrade guidance, refer to the [Upgrade Guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) and the separate [Helm Upgrade Guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/upgrade-to-6x/).
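
To illustrate the "Loki Exporter no longer needed" point above: with native OTLP ingestion, an OpenTelemetry Collector can export logs straight to Loki over OTLP/HTTP. The endpoint host and port below are placeholders and the receiver is assumed to be configured elsewhere, so treat this as a sketch rather than the documented configuration.

```yaml
exporters:
  otlphttp:
    endpoint: http://loki-gateway:3100/otlp  # Loki's native OTLP ingest path (placeholder host)

service:
  pipelines:
    logs:
      receivers: [otlp]      # assumes an otlp receiver is defined in the receivers section
      exporters: [otlphttp]
```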

docs/sources/send-data/_index.md (+17 -11)
@@ -12,20 +12,26 @@ weight: 500
There are a number of different clients available to send log data to Loki.
While all clients can be used simultaneously to cover multiple use cases, which client is initially picked to send logs depends on your use case.
+ {{< youtube id="xtEppndO7F8" >}}
## Grafana Clients
The following clients are developed and supported (for those customers who have purchased a support contract) by Grafana Labs for sending logs to Loki:
- - [Grafana Agent](/docs/agent/latest/) - The Grafana Agent is the recommended client for the Grafana stack. It can collect telemetry data for metrics, logs, traces, and continuous profiles and is fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems.
- - [Promtail]({{< relref "./promtail" >}}) - Promtail is the client of choice when you're running Kubernetes, as you can configure it to automatically scrape logs from pods running on the same node that Promtail runs on. Promtail and Prometheus running together in Kubernetes enables powerful debugging: if Prometheus and Promtail use the same labels, users can use tools like Grafana to switch between metrics and logs based on the label set.
- Promtail is also the client of choice on bare-metal since it can be configured to tail logs from all files given a host path. It is the easiest way to send logs to Loki from plain-text files (for example, things that log to `/var/log/*.log`).
- Lastly, Promtail works well if you want to extract metrics from logs such as counting the occurrences of a particular message.
- - [xk6-loki extension](https://github.com/grafana/xk6-loki) - The k6-loki extension lets you perform [load testing on Loki]({{< relref "./k6" >}}).
+ - [Grafana Alloy](https://grafana.com/docs/alloy/latest/) - Grafana Alloy is a vendor-neutral distribution of the OpenTelemetry (OTel) Collector. Alloy offers native pipelines for OTel, Prometheus, Pyroscope, Loki, and many other metrics, logs, traces, and profile tools. In addition, you can use Alloy pipelines to do different tasks, such as configure alert rules in Loki and Mimir. Alloy is fully compatible with the OTel Collector, Prometheus Agent, and Promtail. You can use Alloy as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents. You can deploy Alloy anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
+ - [Grafana Agent](/docs/agent/latest/) - The Grafana Agent is a client for the Grafana stack. It can collect telemetry data for metrics, logs, traces, and continuous profiles and is fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems.
+ - [Promtail](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/) - Promtail can be configured to automatically scrape logs from Kubernetes pods running on the same node that Promtail runs on. Promtail and Prometheus running together in Kubernetes enables powerful debugging: if Prometheus and Promtail use the same labels, users can use tools like Grafana to switch between metrics and logs based on the label set. Promtail can be configured to tail logs from all files given a host path. It is the easiest way to send logs to Loki from plain-text files (for example, things that log to `/var/log/*.log`).
+ Promtail works well if you want to extract metrics from logs such as counting the occurrences of a particular message.
+ {{< admonition type="note" >}}
+ Promtail is feature complete. All future feature development will occur in Grafana Alloy.
+ {{< /admonition >}}
+ - [xk6-loki extension](https://github.com/grafana/xk6-loki) - The k6-loki extension lets you perform [load testing on Loki](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/k6/).
## OpenTelemetry Collector
Loki natively supports ingesting OpenTelemetry logs over HTTP.
- See [Ingesting logs to Loki using OpenTelemetry Collector]({{< relref "./otel" >}}) for more details.
+ For more information, see [Ingesting logs to Loki using OpenTelemetry Collector](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/otel/).
## Third-party clients
@@ -37,14 +43,14 @@ Grafana Labs cannot provide support for third-party clients. Once an issue has b
The following are popular third-party Loki clients:
- - [Docker Driver]({{< relref "./docker-driver" >}}) - When using Docker and not Kubernetes, the Docker logging driver for Loki should
+ - [Docker Driver](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/docker-driver/) - When using Docker and not Kubernetes, the Docker logging driver for Loki should
be used as it automatically adds labels appropriate to the running container.
- - [Fluent Bit]({{< relref "./fluentbit" >}}) - The Fluent Bit plugin is ideal when you already have Fluentd deployed
+ - [Fluent Bit](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/fluentbit/) - The Fluent Bit plugin is ideal when you already have Fluentd deployed
and you already have configured `Parser` and `Filter` plugins.
- - [Fluentd]({{< relref "./fluentd" >}}) - The Fluentd plugin is ideal when you already have Fluentd deployed
+ - [Fluentd](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/fluentd/) - The Fluentd plugin is ideal when you already have Fluentd deployed
and you already have configured `Parser` and `Filter` plugins. Fluentd also works well for extracting metrics from logs when using its Prometheus plugin.
- - [Lambda Promtail]({{< relref "./lambda-promtail" >}}) - This is a workflow combining the Promtail push-api [scrape config]({{< relref "./promtail/configuration#loki_push_api" >}}) and the [lambda-promtail]({{< relref "./lambda-promtail" >}}) AWS Lambda function which pipes logs from Cloudwatch to Loki. This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS lambda logs in Loki
- - [Logstash]({{< relref "./logstash" >}}) - If you are already using logstash and/or beats, this will be the easiest way to start.
+ - [Lambda Promtail](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/lambda-promtail/) - This is a workflow combining the Promtail push-api [scrape config](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/configuration/#loki_push_api) and the lambda-promtail AWS Lambda function which pipes logs from CloudWatch to Loki. This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS Lambda logs in Loki.
+ - [Logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/) - If you are already using Logstash and/or Beats, this will be the easiest way to start.
By adding our output plugin you can quickly try Loki without doing big configuration changes.
These third-party clients also enable sending logs to Loki:
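
For reference, the Promtail push-api scrape config that the Lambda Promtail entry above combines with the lambda-promtail function could look roughly like the following minimal sketch; the port and label value are illustrative assumptions, so check the linked scrape config documentation for the real options.

```yaml
scrape_configs:
  - job_name: push
    loki_push_api:
      server:
        http_listen_port: 3500      # port lambda-promtail pushes to (placeholder)
      labels:
        source: lambda-promtail     # static label attached to incoming streams (example value)
      use_incoming_timestamp: true  # keep the timestamps assigned upstream
```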

docs/sources/send-data/otel/_index.md (+9 -5)
@@ -1,6 +1,6 @@
---
title: Ingesting logs to Loki using OpenTelemetry Collector
- menuTitle: OTEL Collector
+ menuTitle: OTel Collector
description: Configuring the OpenTelemetry Collector to send logs to Loki.
aliases:
- ../clients/k6/
@@ -72,7 +72,7 @@ service:
Since the OpenTelemetry protocol differs from the Loki storage model, here is how data in the OpenTelemetry format will be mapped by default to the Loki data model during ingestion, which can be changed as explained later:
- - Index labels: Resource attributes map well to index labels in Loki, since both usually identify the source of the logs. Because Loki has a limit of 30 index labels, we have selected the following resource attributes to be stored as index labels, while the remaining attributes are stored as [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}) with each log entry:
+ - Index labels: Resource attributes map well to index labels in Loki, since both usually identify the source of the logs. The default list of Resource Attributes to store as Index labels can be configured using `default_resource_attributes_as_index_labels` under [distributor's otlp_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#distributor). By default, the following resource attributes will be stored as index labels, while the remaining attributes are stored as [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}) with each log entry:
- cloud.availability_zone
- cloud.region
- container.name
@@ -91,9 +91,13 @@ Since the OpenTelemetry protocol differs from the Loki storage model, here is ho
- service.name
- service.namespace
+ {{% admonition type="note" %}}
+ Because Loki has a default limit of 15 index labels, we recommend storing only select resource attributes as index labels. Although the default config selects more than 15 Resource Attributes, it should be fine since a few are mutually exclusive.
+ {{% /admonition %}}
- Timestamp: One of `LogRecord.TimeUnixNano` or `LogRecord.ObservedTimestamp`, based on which one is set. If both are not set, the ingestion timestamp will be used.
- - LogLine: `LogRecord.Body` holds the body of the log. However, since Loki only supports Log body in string format, we will stringify non-string values using the [AsString method from the OTEL collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
+ - LogLine: `LogRecord.Body` holds the body of the log. However, since Loki only supports Log body in string format, we will stringify non-string values using the [AsString method from the OTel collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
- [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}): Anything which can’t be stored in Index labels and LogLine would be stored as Structured Metadata. Here is a non-exhaustive list of what will be stored in Structured Metadata to give a sense of what it will hold:
- Resource Attributes not stored as Index labels are replicated and stored with each log entry.
@@ -105,7 +109,7 @@ Things to note before ingesting OpenTelemetry logs to Loki:
- Dots (.) are converted to underscores (_).
Loki does not support `.` or any other special characters other than `_` in label names. The unsupported characters are replaced with an `_` while converting Attributes to Index Labels or Structured Metadata.
- Also, please note that while writing the queries, you must use the normalized format, i.e. use `_` instead of special characters while querying data using OTEL Attributes.
+ Also, please note that while writing the queries, you must use the normalized format, i.e. use `_` instead of special characters while querying data using OTel Attributes.
For example, `service.name` in OTLP would become `service_name` in Loki.
@@ -116,7 +120,7 @@ Things to note before ingesting OpenTelemetry logs to Loki:
- Stringification of non-string Attribute values
- While converting Attribute values in OTLP to Index label values or Structured Metadata, any non-string values are converted to string using [AsString method from the OTEL collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
+ While converting Attribute values in OTLP to Index label values or Structured Metadata, any non-string values are converted to string using [AsString method from the OTel collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
### Changing the default mapping of OTLP to Loki Format
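
The changed bullet above points at the distributor's `otlp_config` for overriding which resource attributes become index labels. A rough configuration sketch is shown below; the attribute list is an example, and the exact key placement should be verified against the Loki configuration reference linked in that bullet.

```yaml
# Loki configuration fragment (sketch): choose which OTLP resource attributes
# are promoted to index labels; everything else lands in structured metadata.
distributor:
  otlp_config:
    default_resource_attributes_as_index_labels:
      - service.name
      - service.namespace
      - k8s.namespace.name   # example entries; keep the list small per the note above
```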

docs/sources/setup/install/helm/concepts.md (+1 -1)
@@ -21,7 +21,7 @@ By default Loki will be installed in the scalable mode. This consists of a read
## Dashboards
- This chart includes dashboards for monitoring Loki. These require the scrape configs defined in the `monitoring.serviceMonitor` and `monitoring.selfMonitoring` sections described below. The dashboards are deployed via a config map which can be mounted on a Grafana instance. The Dashboard require an installation of the Grafana Agent and the Prometheus operator. The agent is installed with this chart.
+ This chart includes dashboards for monitoring Loki. These require the scrape configs defined in the `monitoring.serviceMonitor` and `monitoring.selfMonitoring` sections described below. The dashboards are deployed via a config map which can be mounted on a Grafana instance. The Dashboard requires an installation of the Grafana Agent and the Prometheus operator. The agent is installed with this chart.
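
For context on the `monitoring.serviceMonitor` and `monitoring.selfMonitoring` sections mentioned above, a values override enabling them might look roughly like the following. The key names are assumptions drawn from those section names and should be confirmed against the chart's default values file; treat this as a sketch, not the chart's documented configuration.

```yaml
# values.yaml fragment (sketch); verify key names against the chart defaults.
monitoring:
  dashboards:
    enabled: true          # ship the Loki dashboards as a ConfigMap
  serviceMonitor:
    enabled: true          # requires the Prometheus operator CRDs
  selfMonitoring:
    enabled: true
    grafanaAgent:
      installOperator: true
```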
0 commit comments