Commit abb31a8

Merge branch 'main' into sample-count-and-bytes
2 parents 33ead60 + 00d3c7a commit abb31a8

64 files changed: +4636 −862 lines

docs/sources/get-started/labels/structured-metadata.md

+2-2
@@ -21,7 +21,7 @@ Structured metadata can also be used to query commonly needed metadata from log
 
 You should only use structured metadata in the following situations:
 
-- If you are ingesting data in OpenTelemetry format, using the Grafana Agent or an OpenTelemetry Collector. Structured metadata was designed to support native ingestion of OpenTelemetry data.
+- If you are ingesting data in OpenTelemetry format, using Grafana Alloy or an OpenTelemetry Collector. Structured metadata was designed to support native ingestion of OpenTelemetry data.
 - If you have high cardinality metadata that should not be used as a label and does not exist in the log line. Some examples might include `process_id` or `thread_id` or Kubernetes pod names.
 
 It is an antipattern to extract information that already exists in your log lines and put it into structured metadata.

@@ -31,7 +31,7 @@ It is an antipattern to extract information that already exists in your log line
 
 You have the option to attach structured metadata to log lines in the push payload along with each log line and the timestamp.
 For more information on how to push logs to Loki via the HTTP endpoint, refer to the [HTTP API documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/api/#ingest-logs).
 
-Alternatively, you can use the Grafana Agent or Promtail to extract and attach structured metadata to your log lines.
+Alternatively, you can use Grafana Alloy or Promtail to extract and attach structured metadata to your log lines.
 See the [Promtail: Structured metadata stage](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/stages/structured_metadata/) for more information.
 
 With Loki version 1.2.0, support for structured metadata has been added to the Logstash output plugin. For more information, see [logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/).
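The push-payload option described in this diff can be sketched as follows. This is a minimal illustration of the `/loki/api/v1/push` JSON body with per-line structured metadata; the label and metadata names are placeholders, not values from this commit.

```python
import json

def build_push_payload(labels: dict, entries: list) -> str:
    """Assemble a /loki/api/v1/push body.

    Each entry is (timestamp_ns, line, metadata); the optional metadata
    dict becomes the third element of the value tuple, which is how
    structured metadata is attached to a single log line.
    """
    values = []
    for ts_ns, line, metadata in entries:
        value = [str(ts_ns), line]
        if metadata:
            value.append(metadata)  # per-line structured metadata
        values.append(value)
    return json.dumps({"streams": [{"stream": labels, "values": values}]})

payload = build_push_payload(
    {"service_name": "checkout"},  # index label (placeholder)
    [(1700000000000000000, "order placed", {"trace_id": "abc123"})],
)
```

The resulting string can be POSTed to Loki's push endpoint with any HTTP client.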

docs/sources/get-started/overview.md

+1-1
@@ -22,7 +22,7 @@ Log data is then compressed and stored in chunks in an object store such as Amaz
 
 A typical Loki-based logging stack consists of 3 components:
 
-- **Agent** - An agent or client, for example Promtail, which is distributed with Loki, or the Grafana Agent. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.
+- **Agent** - An agent or client, for example Grafana Alloy, or Promtail, which is distributed with Loki. The agent scrapes logs, turns the logs into streams by adding labels, and pushes the streams to Loki through an HTTP API.
 
 - **Loki** - The main server, responsible for ingesting and storing logs and processing queries. It can be deployed in three different configurations, for more information see [deployment modes]({{< relref "../get-started/deployment-modes" >}}).

docs/sources/operations/loki-canary/_index.md

+1-1
@@ -29,7 +29,7 @@ array. The contents look something like this:
 The relevant part of the log entry is the timestamp; the `p`s are just filler
 bytes to make the size of the log configurable.
 
-An agent (like Promtail) should be configured to read the log file and ship it
+An agent (like Grafana Alloy) should be configured to read the log file and ship it
 to Loki.
 
 Meanwhile, Loki Canary will open a WebSocket connection to Loki and will tail
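The timestamp-plus-filler log entries described above can be sketched like this; it illustrates the idea only, and is not the canary's exact wire format.

```python
import time

def canary_line(size: int) -> str:
    """Build a canary-style entry: a nanosecond timestamp followed by
    'p' filler bytes so the total line length is configurable.
    (A sketch of the idea, not Loki Canary's actual format.)"""
    ts = str(time.time_ns())
    padding = "p" * max(0, size - len(ts) - 1)
    return f"{ts} {padding}"

line = canary_line(64)
```

The agent tails these lines and ships them; the canary later reads them back and compares the embedded timestamps.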

docs/sources/release-notes/v3.0.md

+3-1
@@ -20,7 +20,7 @@ Key features in Loki 3.0.0 include the following:
 
 - **Query acceleration with Bloom filters** (experimental): This is designed to speed up filter queries, with best results for queries that are looking for a specific text string like an error message or UUID. For more information, refer to [Query acceleration with Blooms](https://grafana.com/docs/loki/<LOKI_VERSION>/operations/query-acceleration-blooms/).
 
-- **Native OpenTelemetry Support**: A simplified ingestion pipeline (Loki Exporter no longer needed) and a more intuitive query experience for OTel logs. For more information, refer to the [OTEL documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/otel/).
+- **Native OpenTelemetry Support**: A simplified ingestion pipeline (Loki Exporter no longer needed) and a more intuitive query experience for OTel logs. For more information, refer to the [OTel documentation](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/otel/).
 
 - **Helm charts**: A major upgrade to the Loki helm chart introduces support for `Distributed` mode (also known as [microservices](https://grafana.com/docs/loki/<LOKI_VERSION>/get-started/deployment-modes/#microservices-mode) mode), includes memcached by default, and includes several updates to configurations to improve Loki operations.

@@ -46,6 +46,8 @@ One of the focuses of Loki 3.0 was cleaning up unused code and old features that
 
 To learn more about breaking changes in this release, refer to the [Upgrade guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/).
 
+{{< docs/shared source="alloy" lookup="agent-deprecation.md" version="next" >}}
+
 ## Upgrade Considerations
 
 The path from 2.9 to 3.0 includes several breaking changes. For important upgrade guidance, refer to the [Upgrade Guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/) and the separate [Helm Upgrade Guide](https://grafana.com/docs/loki/<LOKI_VERSION>/setup/upgrade/upgrade-to-6x/).

docs/sources/send-data/_index.md

+17-11
@@ -12,20 +12,26 @@ weight: 500
 There are a number of different clients available to send log data to Loki.
 While all clients can be used simultaneously to cover multiple use cases, which client is initially picked to send logs depends on your use case.
 
+{{< youtube id="xtEppndO7F8" >}}
+
 ## Grafana Clients
 
 The following clients are developed and supported (for those customers who have purchased a support contract) by Grafana Labs for sending logs to Loki:
 
-- [Grafana Agent](/docs/agent/latest/) - The Grafana Agent is the recommended client for the Grafana stack. It can collect telemetry data for metrics, logs, traces, and continuous profiles and is fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems.
-- [Promtail]({{< relref "./promtail" >}}) - Promtail is the client of choice when you're running Kubernetes, as you can configure it to automatically scrape logs from pods running on the same node that Promtail runs on. Promtail and Prometheus running together in Kubernetes enables powerful debugging: if Prometheus and Promtail use the same labels, users can use tools like Grafana to switch between metrics and logs based on the label set.
-  Promtail is also the client of choice on bare-metal since it can be configured to tail logs from all files given a host path. It is the easiest way to send logs to Loki from plain-text files (for example, things that log to `/var/log/*.log`).
-  Lastly, Promtail works well if you want to extract metrics from logs such as counting the occurrences of a particular message.
-- [xk6-loki extension](https://github.com/grafana/xk6-loki) - The k6-loki extension lets you perform [load testing on Loki]({{< relref "./k6" >}}).
+- [Grafana Alloy](https://grafana.com/docs/alloy/latest/) - Grafana Alloy is a vendor-neutral distribution of the OpenTelemetry (OTel) Collector. Alloy offers native pipelines for OTel, Prometheus, Pyroscope, Loki, and many other metrics, logs, traces, and profile tools. In addition, you can use Alloy pipelines to do different tasks, such as configure alert rules in Loki and Mimir. Alloy is fully compatible with the OTel Collector, Prometheus Agent, and Promtail. You can use Alloy as an alternative to either of these solutions or combine it into a hybrid system of multiple collectors and agents. You can deploy Alloy anywhere within your IT infrastructure and pair it with your Grafana LGTM stack, a telemetry backend from Grafana Cloud, or any other compatible backend from any other vendor.
+  {{< docs/shared source="alloy" lookup="agent-deprecation.md" version="next" >}}
+- [Grafana Agent](/docs/agent/latest/) - The Grafana Agent is a client for the Grafana stack. It can collect telemetry data for metrics, logs, traces, and continuous profiles and is fully compatible with the Prometheus, OpenTelemetry, and Grafana open source ecosystems.
+- [Promtail](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/) - Promtail can be configured to automatically scrape logs from Kubernetes pods running on the same node that Promtail runs on. Promtail and Prometheus running together in Kubernetes enables powerful debugging: if Prometheus and Promtail use the same labels, users can use tools like Grafana to switch between metrics and logs based on the label set. Promtail can be configured to tail logs from all files given a host path. It is the easiest way to send logs to Loki from plain-text files (for example, things that log to `/var/log/*.log`).
+  Promtail works well if you want to extract metrics from logs such as counting the occurrences of a particular message.
+  {{< admonition type="note" >}}
+  Promtail is feature complete. All future feature development will occur in Grafana Alloy.
+  {{< /admonition >}}
+- [xk6-loki extension](https://github.com/grafana/xk6-loki) - The k6-loki extension lets you perform [load testing on Loki](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/k6/).
 
 ## OpenTelemetry Collector
 
 Loki natively supports ingesting OpenTelemetry logs over HTTP.
-See [Ingesting logs to Loki using OpenTelemetry Collector]({{< relref "./otel" >}}) for more details.
+For more information, see [Ingesting logs to Loki using OpenTelemetry Collector](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/otel/).
 
 ## Third-party clients

@@ -37,14 +43,14 @@ Grafana Labs cannot provide support for third-party clients. Once an issue has b
 
 The following are popular third-party Loki clients:
 
-- [Docker Driver]({{< relref "./docker-driver" >}}) - When using Docker and not Kubernetes, the Docker logging driver for Loki should
+- [Docker Driver](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/docker-driver/) - When using Docker and not Kubernetes, the Docker logging driver for Loki should
 be used as it automatically adds labels appropriate to the running container.
-- [Fluent Bit]({{< relref "./fluentbit" >}}) - The Fluent Bit plugin is ideal when you already have Fluentd deployed
+- [Fluent Bit](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/fluentbit/) - The Fluent Bit plugin is ideal when you already have Fluentd deployed
 and you already have configured `Parser` and `Filter` plugins.
-- [Fluentd]({{< relref "./fluentd" >}}) - The Fluentd plugin is ideal when you already have Fluentd deployed
+- [Fluentd](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/fluentd/) - The Fluentd plugin is ideal when you already have Fluentd deployed
 and you already have configured `Parser` and `Filter` plugins. Fluentd also works well for extracting metrics from logs when using its Prometheus plugin.
-- [Lambda Promtail]({{< relref "./lambda-promtail" >}}) - This is a workflow combining the Promtail push-api [scrape config]({{< relref "./promtail/configuration#loki_push_api" >}}) and the [lambda-promtail]({{< relref "./lambda-promtail" >}}) AWS Lambda function which pipes logs from Cloudwatch to Loki. This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS lambda logs in Loki
+- [Lambda Promtail](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/lambda-promtail/) - This is a workflow combining the Promtail push-api [scrape config](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/configuration/#loki_push_api) and the lambda-promtail AWS Lambda function which pipes logs from Cloudwatch to Loki. This is a good choice if you're looking to try out Loki in a low-footprint way or if you wish to monitor AWS lambda logs in Loki
-- [Logstash]({{< relref "./logstash" >}}) - If you are already using logstash and/or beats, this will be the easiest way to start.
+- [Logstash](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/logstash/) - If you are already using logstash and/or beats, this will be the easiest way to start.
 By adding our output plugin you can quickly try Loki without doing big configuration changes.
 
 These third-party clients also enable sending logs to Loki:

docs/sources/send-data/k6/log-generation.md

+2-2
@@ -61,8 +61,8 @@ export default () => {
 
 The second and third argument of the method take the lower and upper bound of
 the batch size. The resulting batch size is a random value between the two
-arguments. This mimics the behaviour of a log client, such as Promtail or
-the Grafana Agent, where logs are buffered and pushed once a certain batch size
+arguments. This mimics the behavior of a log client, such as Grafana Alloy or Promtail,
+where logs are buffered and pushed once a certain batch size
 is reached or after a certain size when no logs have been received.
 
 The batch size is not equal to the payload size, as the batch size only counts
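The buffer-and-flush behavior this passage describes can be sketched as follows; this is a hypothetical client, not xk6-loki's actual implementation, and it omits the flush-on-timeout path.

```python
import random

class BatchingClient:
    """Sketch of the buffering behavior described above: lines are
    buffered and flushed once a batch-size threshold, chosen at random
    between a lower and upper bound, is reached."""

    def __init__(self, min_batch: int, max_batch: int):
        self.threshold = random.randint(min_batch, max_batch)
        self.buffer = []
        self.flushed = []  # stands in for batches pushed to Loki

    def push(self, line: str):
        self.buffer.append(line)
        if len(self.buffer) >= self.threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushed.append(list(self.buffer))
            self.buffer.clear()

client = BatchingClient(5, 10)
for i in range(20):
    client.push(f"log line {i}")
```

Every completed batch contains exactly `threshold` lines; a real client would also flush any remainder after an idle timeout.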

docs/sources/send-data/otel/_index.md

+9-5
@@ -1,6 +1,6 @@
 ---
 title: Ingesting logs to Loki using OpenTelemetry Collector
-menuTitle: OTEL Collector
+menuTitle: OTel Collector
 description: Configuring the OpenTelemetry Collector to send logs to Loki.
 aliases:
 - ../clients/k6/

@@ -72,7 +72,7 @@ service:
 
 Since the OpenTelemetry protocol differs from the Loki storage model, here is how data in the OpenTelemetry format will be mapped by default to the Loki data model during ingestion, which can be changed as explained later:
 
-- Index labels: Resource attributes map well to index labels in Loki, since both usually identify the source of the logs. Because Loki has a limit of 30 index labels, we have selected the following resource attributes to be stored as index labels, while the remaining attributes are stored as [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}) with each log entry:
+- Index labels: Resource attributes map well to index labels in Loki, since both usually identify the source of the logs. The default list of Resource Attributes to store as Index labels can be configured using `default_resource_attributes_as_index_labels` under [distributor's otlp_config](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#distributor). By default, the following resource attributes will be stored as index labels, while the remaining attributes are stored as [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}) with each log entry:
   - cloud.availability_zone
   - cloud.region
   - container.name

@@ -91,9 +91,13 @@ Since the OpenTelemetry protocol differs from the Loki storage model, here is ho
   - service.name
   - service.namespace
 
+  {{% admonition type="note" %}}
+  Because Loki has a default limit of 15 index labels, we recommend storing only select resource attributes as index labels. Although the default config selects more than 15 Resource Attributes, it should be fine since a few are mutually exclusive.
+  {{% /admonition %}}
+
 - Timestamp: One of `LogRecord.TimeUnixNano` or `LogRecord.ObservedTimestamp`, based on which one is set. If both are not set, the ingestion timestamp will be used.
 
-- LogLine: `LogRecord.Body` holds the body of the log. However, since Loki only supports Log body in string format, we will stringify non-string values using the [AsString method from the OTEL collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
+- LogLine: `LogRecord.Body` holds the body of the log. However, since Loki only supports Log body in string format, we will stringify non-string values using the [AsString method from the OTel collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
 
 - [Structured Metadata]({{< relref "../../get-started/labels/structured-metadata" >}}): Anything which can’t be stored in Index labels and LogLine would be stored as Structured Metadata. Here is a non-exhaustive list of what will be stored in Structured Metadata to give a sense of what it will hold:
   - Resource Attributes not stored as Index labels are replicated and stored with each log entry.

@@ -105,7 +109,7 @@ Things to note before ingesting OpenTelemetry logs to Loki:
 - Dots (.) are converted to underscores (_).
 
   Loki does not support `.` or any other special characters other than `_` in label names. The unsupported characters are replaced with an `_` while converting Attributes to Index Labels or Structured Metadata.
-  Also, please note that while writing the queries, you must use the normalized format, i.e. use `_` instead of special characters while querying data using OTEL Attributes.
+  Also, please note that while writing the queries, you must use the normalized format, i.e. use `_` instead of special characters while querying data using OTel Attributes.
 
   For example, `service.name` in OTLP would become `service_name` in Loki.

@@ -116,7 +120,7 @@ Things to note before ingesting OpenTelemetry logs to Loki:
 
 - Stringification of non-string Attribute values
 
-  While converting Attribute values in OTLP to Index label values or Structured Metadata, any non-string values are converted to string using [AsString method from the OTEL collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
+  While converting Attribute values in OTLP to Index label values or Structured Metadata, any non-string values are converted to string using [AsString method from the OTel collector lib](https://github.com/open-telemetry/opentelemetry-collector/blob/ab3d6c5b64701e690aaa340b0a63f443ff22c1f0/pdata/pcommon/value.go#L353).
 
 ### Changing the default mapping of OTLP to Loki Format
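The attribute normalization described in this file's diff can be sketched as follows, assuming the simple replace-unsupported-characters rule stated above; Loki's actual normalization code may handle additional cases.

```python
import re

def normalize_otlp_attribute(name: str) -> str:
    """Replace characters unsupported in Loki label names with '_',
    mirroring the documented rule that dots become underscores."""
    return re.sub(r"[^a-zA-Z0-9_]", "_", name)

examples = {a: normalize_otlp_attribute(a) for a in ("service.name", "k8s.pod.name")}
```

Queries must use the normalized form, so a filter on `service.name` is written against the `service_name` label.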

docs/sources/send-data/promtail/_index.md

+4
@@ -12,6 +12,10 @@ Promtail is an agent which ships the contents of local logs to a private Grafana
 instance or [Grafana Cloud](/oss/loki). It is usually
 deployed to every machine that runs applications which need to be monitored.
 
+{{< admonition type="note" >}}
+Promtail is feature complete. All future feature development will occur in Grafana Alloy.
+{{< /admonition >}}
+
 It primarily:
 
 - Discovers targets

docs/sources/send-data/promtail/installation.md

+4
@@ -9,6 +9,10 @@ weight: 100
 
 # Install Promtail
 
+{{< admonition type="note" >}}
+Promtail is feature complete. All future feature development will occur in Grafana Alloy.
+{{< /admonition >}}
+
 Promtail is distributed as a binary, in a Docker container,
 or there is a Helm chart to install it in a Kubernetes cluster.
docs/sources/setup/install/helm/concepts.md

+1-1
@@ -21,7 +21,7 @@ By default Loki will be installed in the scalable mode. This consists of a read
 
 ## Dashboards
 
-This chart includes dashboards for monitoring Loki. These require the scrape configs defined in the `monitoring.serviceMonitor` and `monitoring.selfMonitoring` sections described below. The dashboards are deployed via a config map which can be mounted on a Grafana instance. The Dashboard require an installation of the Grafana Agent and the Prometheus operator. The agent is installed with this chart.
+This chart includes dashboards for monitoring Loki. These require the scrape configs defined in the `monitoring.serviceMonitor` and `monitoring.selfMonitoring` sections described below. The dashboards are deployed via a config map which can be mounted on a Grafana instance. The Dashboard requires an installation of the Grafana Agent and the Prometheus operator. The agent is installed with this chart.
 
 ## Canary
