Commit bb36007

Merge branch 'main' into boldfield/allow-prefixing-rollout-group
2 parents 22f3370 + 5844fac commit bb36007

File tree

71 files changed (+1113, -723 lines)


.github/renovate.json5

+1
@@ -48,6 +48,7 @@
   "matchManagers": ["helm-requirements", "helm-values", "helmv3"],
   "groupName": "helm-{{packageName}}",
   "matchUpdateTypes": ["major", "minor", "patch"],
+  "matchPackageNames": ["!grafana/loki"], // This is updated via a different job
   "autoApprove": false,
   "automerge": false
 },

docs/sources/query/metric_queries.md

+11
@@ -153,3 +153,14 @@ Examples:
 or
 vector(0) # will return 0
 ```
+
+## Probabilistic aggregation
+
+The `topk` keyword lets you find the largest 1,000 elements in a data stream by sample size. When `topk` hits the maximum series limit, LogQL supports a probabilistic approximation: `approx_topk` is a drop-in replacement.
+
+```logql
+approx_topk(k, <vector expression>)
+```
+
+It is only supported for instant queries and does not support grouping. It is useful when the cardinality of the inner vector is too high, for example, when it uses an aggregation by a structured metadata label.
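To make the swap concrete, here is a hedged sketch of how a failing `topk` query could be rewritten; the selector and `user_id` label are illustrative, not taken from this commit:

```logql
# Instant query; may hit the maximum series limit when user_id has high cardinality:
topk(10, sum by (user_id) (count_over_time({job="app"} | logfmt [5m])))

# Probabilistic drop-in replacement:
approx_topk(10, sum by (user_id) (count_over_time({job="app"} | logfmt [5m])))
```

Note that grouping on `approx_topk` itself (for example a `by (...)` clause on the outer function) is not supported; grouping inside the inner vector expression is unaffected.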

docs/sources/send-data/alloy/examples/alloy-kafka-logs.md

+37-20
@@ -1,20 +1,21 @@
 ---
 title: Sending Logs to Loki via Kafka using Alloy
 menuTitle: Sending Logs to Loki via Kafka using Alloy
-description: Configuring Grafana Alloy to recive logs via Kafka and send them to Loki.
+description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki.
 weight: 250
 killercoda:
   title: Sending Logs to Loki via Kafka using Alloy
-  description: Configuring Grafana Alloy to recive logs via Kafka and send them to Loki.
+  description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki.
   backend:
     imageid: ubuntu
 ---
-
+<!-- vale Grafana.We = NO -->
 <!-- INTERACTIVE page intro.md START -->

-# Sending Logs to Loki via Kafka using Alloy
+# Sending Logs to Loki via Kafka using Alloy
 
 Alloy natively supports receiving logs via Kafka. In this example, we will configure Alloy to receive logs via Kafka using two different methods:
+
 - [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka): reads messages from Kafka using a consumer group and forwards them to other `loki.*` components.
 - [otelcol.receiver.kafka](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/): accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components.

@@ -38,9 +39,10 @@ Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repos
 {{< /admonition >}}
 <!-- INTERACTIVE ignore END -->

-
 ## Scenario
+
 In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services:
+
 - **User Service:** Manages user data and authentication for the application. Such as creating users and logging in.
 - **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created.
 - **Simulation Service:** Generates sensor data for each plant.
@@ -50,7 +52,8 @@ In this scenario, we have a microservices application called the Carnivorous Gre
 - **Database:** A database that stores user and plant data.

 Each service generates logs that are sent to Alloy via Kafka. In this example, they are sent on two different topics:
-- `loki`: This sends a structured log formatted message (json).
+
+- `loki`: This sends a structured log formatted message (json).
 - `otlp`: This sends a serialized OpenTelemetry log message.

 You would not typically do this within your own application, but for the purposes of this example we wanted to show how Alloy can handle different types of log messages over Kafka.
@@ -69,7 +72,8 @@ In this step, we will set up our environment by cloning the repository that cont
 git clone -b microservice-kafka https://github.com/grafana/loki-fundamentals.git
 ```
 <!-- INTERACTIVE exec END -->
-1. Next we will spin up our observability stack using Docker Compose:
+
+1. Next we will spin up our observability stack using Docker Compose:

 <!-- INTERACTIVE ignore START -->
 ```bash
@@ -80,14 +84,15 @@ In this step, we will set up our environment by cloning the repository that cont
 {{< docs/ignore >}}

 <!-- INTERACTIVE exec START -->
-```bash
+```bash
 docker-compose -f loki-fundamentals/docker-compose.yml up -d
 ```
 <!-- INTERACTIVE exec END -->

 {{< /docs/ignore >}}

 This will spin up the following services:
+
 ```console
 ✔ Container loki-fundamentals-grafana-1  Started
 ✔ Container loki-fundamentals-loki-1     Started
@@ -97,6 +102,7 @@ In this step, we will set up our environment by cloning the repository that cont
 ```

 We will access two UI interfaces:
+
 - Alloy at [http://localhost:12345](http://localhost:12345)
 - Grafana at [http://localhost:3000](http://localhost:3000)
 <!-- INTERACTIVE page step1.md END -->
@@ -107,12 +113,13 @@ We will access two UI interfaces:

 In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the `config.alloy` file to include the Kafka logs configuration.

-### Open your Code Editor and Locate the `config.alloy` file
+### Open your code editor and locate the `config.alloy` file

 Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, we will open the `config.alloy` file in the code editor:

 {{< docs/ignore >}}
 **Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.**
+
 1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab.
 1. Locate the `config.alloy` file in the `loki-fundamentals` directory (Top level directory).
 1. Click on the `config.alloy` file to open it in the code editor.
@@ -126,13 +133,14 @@ Grafana Alloy requires a configuration file to define the components and their r

 You will copy all three of the following configuration snippets into the `config.alloy` file.

-### Source logs from kafka
+### Source logs from Kafka

 First, we will configure the Loki Kafka source. `loki.source.kafka` reads messages from Kafka using a consumer group and forwards them to other `loki.*` components.

 The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in `forward_to`.

 Add the following configuration to the `config.alloy` file:
+
 ```alloy
 loki.source.kafka "raw" {
   brokers = ["kafka:9092"]
@@ -145,6 +153,7 @@ loki.source.kafka "raw" {
 ```

 In this configuration:
+
 - `brokers`: The Kafka brokers to connect to.
 - `topics`: The Kafka topics to consume. In this case, we are consuming the `loki` topic.
 - `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`.
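Pieced together, the complete component might look like the following sketch. The hunk above elides the middle of the block, so the `relabel_rules`, `use_incoming_timestamp`, and `labels` arguments are assumptions drawn from the component's documented arguments, not from this commit:

```alloy
loki.source.kafka "raw" {
  brokers                = ["kafka:9092"]              // Kafka bootstrap broker
  topics                 = ["loki"]                    // topic carrying the JSON-formatted logs
  forward_to             = [loki.write.http.receiver]  // ship entries to the loki.write component
  relabel_rules          = loki.relabel.kafka.rules    // apply rules from the loki.relabel "kafka" block
  use_incoming_timestamp = true                        // assumption: keep the Kafka message timestamp
  labels                 = {component = "loki.source.kafka"} // assumption: illustrative static label
}
```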
@@ -159,6 +168,7 @@ For more information on the `loki.source.kafka` configuration, see the [Loki Kaf
 Next, we will configure the Loki relabel rules. The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component’s arguments. In our case we are directly calling the rule from the `loki.source.kafka` component.

 Now add the following configuration to the `config.alloy` file:
+
 ```alloy
 loki.relabel "kafka" {
   forward_to = [loki.write.http.receiver]
@@ -170,6 +180,7 @@ loki.relabel "kafka" {
 ```

 In this configuration:
+
 - `forward_to`: The list of receivers to forward the logs to. In this case we call the rules directly from the `loki.source.kafka` component, so `forward_to` acts only as a placeholder, which the `loki.relabel` component requires.
 - `rule`: The relabeling rule to apply to the incoming logs. In this case, we are renaming the `__meta_kafka_topic` label to `topic`.

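Based on the bullets above, a minimal sketch of the full relabel component; the hunk elides the middle of the block, so the `rule` block shape is an assumption based on the component's documented syntax:

```alloy
loki.relabel "kafka" {
  // Required argument, effectively a placeholder here: the rules are
  // consumed directly by loki.source.kafka via relabel_rules.
  forward_to = [loki.write.http.receiver]

  rule {
    source_labels = ["__meta_kafka_topic"] // internal label set by the Kafka source
    target_label  = "topic"                // exposed as a regular Loki label
  }
}
```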
@@ -180,6 +191,7 @@ For more information on the `loki.relabel` configuration, see the [Loki Relabel
 Lastly, we will configure the Loki write component. `loki.write` receives log entries from other loki components and sends them over the network using the Loki logproto format.

 And finally, add the following configuration to the `config.alloy` file:
+
 ```alloy
 loki.write "http" {
   endpoint {
@@ -189,6 +201,7 @@ loki.write "http" {
 ```

 In this configuration:
+
 - `endpoint`: The endpoint to send the logs to. In this case, we are sending the logs to the Loki HTTP endpoint.

 For more information on the `loki.write` configuration, see the [Loki Write documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.write/).
@@ -209,7 +222,6 @@ The new configuration will be loaded. You can verify this by checking the Alloy

 If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file:

-
 <!-- INTERACTIVE exec START -->
 ```bash
 cp loki-fundamentals/completed/config-raw.alloy loki-fundamentals/config.alloy
@@ -225,16 +237,16 @@ curl -X POST http://localhost:12345/-/reload

 Next we will configure Alloy to also ingest OpenTelemetry logs via Kafka. To do this, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy` file along with the existing components.

-### Open your Code Editor and Locate the `config.alloy` file
+### Open your code editor and locate the `config.alloy` file

 Like before, we generate our next pipeline configuration within the same `config.alloy` file. You will add the following configuration snippets to the file **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file.

-
 ### Source OpenTelemetry logs from Kafka

-First, we will configure the OpenTelemetry Kafaka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components.
+First, we will configure the OpenTelemetry Kafka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components.

 Now add the following configuration to the `config.alloy` file:
+
 ```alloy
 otelcol.receiver.kafka "default" {
   brokers = ["kafka:9092"]
@@ -249,6 +261,7 @@ otelcol.receiver.kafka "default" {
 ```

 In this configuration:
+
 - `brokers`: The Kafka brokers to connect to.
 - `protocol_version`: The Kafka protocol version to use.
 - `topic`: The Kafka topic to consume. In this case, we are consuming the `otlp` topic.
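Filling in the elided middle of the hunk, the whole receiver might look like the following sketch; the `protocol_version` value and the `output` wiring to the batch processor are assumptions based on the surrounding text, not lines from this commit:

```alloy
otelcol.receiver.kafka "default" {
  brokers          = ["kafka:9092"]
  protocol_version = "2.0.0"   // assumption: a typical Kafka protocol version
  topic            = "otlp"    // topic carrying serialized OpenTelemetry logs

  output {
    // assumption: forward decoded logs to the batch processor configured next
    logs = [otelcol.processor.batch.default.input]
  }
}
```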
@@ -257,12 +270,12 @@ In this configuration:

 For more information on the `otelcol.receiver.kafka` configuration, see the [OpenTelemetry Receiver Kafka documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/).

-
 ### Batch OpenTelemetry logs before sending

 Next, we will configure an OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching.

 Now add the following configuration to the `config.alloy` file:
+
 ```alloy
 otelcol.processor.batch "default" {
   output {
@@ -272,6 +285,7 @@ otelcol.processor.batch "default" {
 ```

 In this configuration:
+
 - `output`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `otelcol.exporter.otlphttp.default.input`.

 For more information on the `otelcol.processor.batch` configuration, see the [OpenTelemetry Processor Batch documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/).
@@ -281,6 +295,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op
 Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint.

 Finally, add the following configuration to the `config.alloy` file:
+
 ```alloy
 otelcol.exporter.otlphttp "default" {
   client {
@@ -290,6 +305,7 @@ otelcol.exporter.otlphttp "default" {
 ```

 In this configuration:
+
 - `client`: The client configuration for the exporter. In this case, we are sending the logs to the Loki OTLP endpoint.

 For more information on the `otelcol.exporter.otlphttp` configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.otlphttp/).
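A hedged sketch of the complete exporter; the endpoint URL is an assumption based on Loki's native OTLP ingest path and the `loki` service name used elsewhere in this tutorial's compose stack:

```alloy
otelcol.exporter.otlphttp "default" {
  client {
    // assumption: Loki's OTLP HTTP endpoint inside the compose network
    endpoint = "http://loki:3100/otlp"
  }
}
```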
@@ -341,7 +357,6 @@ docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --
 ```
 <!-- INTERACTIVE ignore END -->

-
 {{< docs/ignore >}}

 <!-- INTERACTIVE exec START -->
@@ -353,6 +368,7 @@ docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --
 {{< /docs/ignore >}}

 This will start the following services:
+
 ```console
 ✔ Container greenhouse-db-1                 Started
 ✔ Container greenhouse-websocket_service-1  Started
@@ -372,7 +388,6 @@ Once started, you can access the Carnivorous Greenhouse application at [http://l

 Finally, to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore).

-
 <!-- INTERACTIVE page step4.md END -->

 <!-- INTERACTIVE page finish.md START -->
@@ -383,14 +398,16 @@ In this example, we configured Alloy to ingest logs via Kafka. We configured All

 {{< docs/ignore >}}

-### Back to Docs
+### Back to docs
+
 Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy)

 {{< /docs/ignore >}}

 ## Further reading

 For more information on Grafana Alloy, refer to the following resources:
+
 - [Grafana Alloy getting started examples](https://grafana.com/docs/alloy/latest/tutorials/)
 - [Grafana Alloy component reference](https://grafana.com/docs/alloy/latest/reference/components/)
@@ -400,5 +417,5 @@ If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, y

 The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp` can also be pushed to Grafana Cloud.

-
-<!-- INTERACTIVE page finish.md END -->
+<!-- INTERACTIVE page finish.md END -->
+<!-- vale Grafana.We = YES -->
