`docs/sources/query/metric_queries.md` (+11)

@@ -153,3 +153,14 @@ Examples:
or
vector(0) # will return 0
```

## Probabilistic aggregation

The `topk` keyword lets you find the largest 1,000 elements in a data stream by sample size. When `topk` hits the maximum series limit, LogQL also supports `approx_topk`, a probabilistic approximation that works as a drop-in replacement:

```logql
approx_topk(k, <vector expression>)
```

`approx_topk` is only supported for instant queries and does not support grouping. It is useful when the cardinality of the inner vector is too high, for example, when it uses an aggregation by a structured metadata label.
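For instance, a sketch of approximating the ten busiest services by log volume (the `service_name` and `env` label names here are hypothetical, not from the original doc):

```logql
approx_topk(10, sum by (service_name) (count_over_time({env="prod"} [5m])))
```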

`docs/sources/send-data/alloy/examples/alloy-kafka-logs.md` (+37 −20)

@@ -1,20 +1,21 @@
---
title: Sending Logs to Loki via Kafka using Alloy
menuTitle: Sending Logs to Loki via Kafka using Alloy
description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki.
weight: 250
killercoda:
  title: Sending Logs to Loki via Kafka using Alloy
  description: Configuring Grafana Alloy to receive logs via Kafka and send them to Loki.
  backend:
    imageid: ubuntu
---
<!-- vale Grafana.We = NO -->
<!-- INTERACTIVE page intro.md START -->

# Sending Logs to Loki via Kafka using Alloy

Alloy natively supports receiving logs via Kafka. In this example, we will configure Alloy to receive logs via Kafka using two different methods:

- [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka): reads messages from Kafka using a consumer group and forwards them to other `loki.*` components.
- [otelcol.receiver.kafka](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/): accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components.
@@ -38,9 +39,10 @@ Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repos
{{< /admonition >}}
<!-- INTERACTIVE ignore END -->

## Scenario

In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services:

- **User Service:** Manages user data and authentication for the application, such as creating users and logging in.
- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created.
- **Simulation Service:** Generates sensor data for each plant.
@@ -50,7 +52,8 @@ In this scenario, we have a microservices application called the Carnivorous Gre
- **Database:** A database that stores user and plant data.

Each service generates logs that are sent to Alloy via Kafka. In this example, they are sent on two different topics:

- `loki`: sends a structured, JSON-formatted log message.
- `otlp`: sends a serialized OpenTelemetry log message.

You would not typically do this within your own application, but for the purposes of this example we wanted to show how Alloy can handle different types of log messages over Kafka.
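To make the difference concrete, here is a hedged sketch of the kind of JSON payload a service might publish to the `loki` topic; the field names (`service`, `level`, `message`) are assumptions for illustration, not the actual schema used by the Carnivorous Greenhouse services:

```python
import json

def make_log_message(service: str, level: str, message: str) -> bytes:
    # Hypothetical structured log record, serialized as UTF-8 JSON bytes,
    # as a Kafka producer would send it on the `loki` topic.
    record = {"service": service, "level": level, "message": message}
    return json.dumps(record).encode("utf-8")

payload = make_log_message("plant_service", "info", "new plant created")
print(json.loads(payload)["service"])  # prints: plant_service
```

The `otlp` topic instead carries OTLP-protobuf-serialized log records, which is why the two topics are routed to different Alloy components.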
@@ -69,7 +72,8 @@ In this step, we will set up our environment by cloning the repository that cont
1. Next we will spin up our observability stack using Docker Compose:

   <!-- INTERACTIVE ignore START -->
   ```bash
@@ -80,14 +84,15 @@ In this step, we will set up our environment by cloning the repository that cont
{{< docs/ignore >}}

<!-- INTERACTIVE exec START -->
```bash
docker-compose -f loki-fundamentals/docker-compose.yml up -d
```
<!-- INTERACTIVE exec END -->

{{< /docs/ignore >}}

This will spin up the following services:

```console
✔ Container loki-fundamentals-grafana-1  Started
✔ Container loki-fundamentals-loki-1     Started
@@ -97,6 +102,7 @@ In this step, we will set up our environment by cloning the repository that cont
```

We will be accessing two UI interfaces:

- Alloy at [http://localhost:12345](http://localhost:12345)
- Grafana at [http://localhost:3000](http://localhost:3000)
<!-- INTERACTIVE page step1.md END -->
@@ -107,12 +113,13 @@ We will be access two UI interfaces:

In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the `config.alloy` file to include the Kafka logs configuration.

### Open your code editor and locate the `config.alloy` file

Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, we will open the `config.alloy` file in the code editor:

{{< docs/ignore >}}
**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.**

1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab.
1. Locate the `config.alloy` file in the `loki-fundamentals` directory (top-level directory).
1. Click on the `config.alloy` file to open it in the code editor.
@@ -126,13 +133,14 @@ Grafana Alloy requires a configuration file to define the components and their r

You will copy all three of the following configuration snippets into the `config.alloy` file.

### Source logs from Kafka

First, we will configure the Loki Kafka source. `loki.source.kafka` reads messages from Kafka using a consumer group and forwards them to other `loki.*` components.

The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in `forward_to`.

Add the following configuration to the `config.alloy` file:

```alloy
loki.source.kafka "raw" {
  brokers = ["kafka:9092"]
@@ -145,6 +153,7 @@ loki.source.kafka "raw" {
```

In this configuration:

- `brokers`: The Kafka brokers to connect to.
- `topics`: The Kafka topics to consume. In this case, we are consuming the `loki` topic.
- `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`.
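Because the hunks above cut the snippet short, here is a hedged sketch of a complete `loki.source.kafka` block consistent with the bullets; the `relabel_rules`, `use_incoming_timestamp`, and `labels` values are assumptions for illustration:

```alloy
loki.source.kafka "raw" {
  brokers                = ["kafka:9092"]
  topics                 = ["loki"]
  forward_to             = [loki.write.http.receiver]
  relabel_rules          = loki.relabel.kafka.rules
  use_incoming_timestamp = true
  labels                 = {service_name = "raw_kafka"}
}
```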
@@ -159,6 +168,7 @@ For more information on the `loki.source.kafka` configuration, see the [Loki Kaf
Next, we will configure the Loki relabel rules. The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules, and forwards the results to the list of receivers in the component's arguments. In our case, we are directly calling the rules from the `loki.source.kafka` component.

Now add the following configuration to the `config.alloy` file:

```alloy
loki.relabel "kafka" {
  forward_to = [loki.write.http.receiver]
@@ -170,6 +180,7 @@ loki.relabel "kafka" {
```

In this configuration:

- `forward_to`: The list of receivers to forward the logs to, in this case the `loki.write.http.receiver`. Because we call the rules directly from the `loki.source.kafka` component, `forward_to` acts only as a placeholder here; it is required by the `loki.relabel` component.
- `rule`: The relabeling rule to apply to the incoming logs. In this case, we are renaming the `__meta_kafka_topic` label to `topic`.
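The `rule` body itself falls outside the hunks above; a hedged sketch of a rule with the behavior the bullet describes (copying `__meta_kafka_topic` into a `topic` label) would look like:

```alloy
rule {
  source_labels = ["__meta_kafka_topic"]
  target_label  = "topic"
  action        = "replace"
}
```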
@@ -180,6 +191,7 @@ For more information on the `loki.relabel` configuration, see the [Loki Relabel
Lastly, we will configure the Loki write component. `loki.write` receives log entries from other loki components and sends them over the network using the Loki logproto format.

And finally, add the following configuration to the `config.alloy` file:

```alloy
loki.write "http" {
  endpoint {
@@ -189,6 +201,7 @@ loki.write "http" {
```

In this configuration:

- `endpoint`: The endpoint to send the logs to. In this case, we are sending the logs to the Loki HTTP endpoint.

For more information on the `loki.write` configuration, see the [Loki Write documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.write/).
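The `endpoint` block is cut off by the hunk above; a hedged sketch of the complete component, assuming Loki's standard push API path and the `loki` service name from the Docker Compose stack:

```alloy
loki.write "http" {
  endpoint {
    // Assumed URL: Loki's standard push endpoint on the compose network.
    url = "http://loki:3100/loki/api/v1/push"
  }
}
```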
@@ -209,7 +222,6 @@ The new configuration will be loaded. You can verify this by checking the Alloy

If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file:
@@ -225,16 +237,16 @@ curl -X POST http://localhost:12345/-/reload

Next, to configure Alloy to also ingest OpenTelemetry logs via Kafka, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy` file along with the existing components.

### Open your code editor and locate the `config.alloy` file

Like before, we generate our next pipeline configuration within the same `config.alloy` file. You will add the following configuration snippets to the file **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file.

### Source OpenTelemetry logs from Kafka

First, we will configure the OpenTelemetry Kafka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components.

Now add the following configuration to the `config.alloy` file:

- `protocol_version`: The Kafka protocol version to use.
- `topic`: The Kafka topic to consume. In this case, we are consuming the `otlp` topic.
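The receiver block itself is cut from the hunks above; a hedged sketch consistent with the bullets (the broker address, component label, and `encoding` are assumptions):

```alloy
otelcol.receiver.kafka "default" {
  brokers          = ["kafka:9092"]
  protocol_version = "2.0.0"
  topic            = "otlp"
  encoding         = "otlp_proto"

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}
```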
@@ -257,12 +270,12 @@ In this configuration:

For more information on the `otelcol.receiver.kafka` configuration, see the [OpenTelemetry Receiver Kafka documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/).

### Batch OpenTelemetry logs before sending

Next, we will configure an OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places it into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size- and time-based batching.

Now add the following configuration to the `config.alloy` file:

- `output`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `otelcol.exporter.otlphttp.default.input`.

For more information on the `otelcol.processor.batch` configuration, see the [OpenTelemetry Processor Batch documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/).
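The processor block is also cut from the hunks; a minimal hedged sketch matching the `output` bullet (component label assumed):

```alloy
otelcol.processor.batch "default" {
  output {
    logs = [otelcol.exporter.otlphttp.default.input]
  }
}
```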
@@ -281,6 +295,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op
Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes it over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to the Loki native OTLP endpoint.

Finally, add the following configuration to the `config.alloy` file:

- `client`: The client configuration for the exporter. In this case, we are sending the logs to the Loki OTLP endpoint.

For more information on the `otelcol.exporter.otlphttp` configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.otlphttp/).
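The exporter block is cut from the hunks; a hedged sketch matching the `client` bullet (the endpoint URL assumes Loki's native OTLP ingestion path on the compose network):

```alloy
otelcol.exporter.otlphttp "default" {
  client {
    // Assumed URL: Loki's native OTLP endpoint.
    endpoint = "http://loki:3100/otlp"
  }
}
```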
@@ -353,6 +368,7 @@ docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --
{{< /docs/ignore >}}

This will start the following services:

```console
✔ Container greenhouse-db-1                 Started
✔ Container greenhouse-websocket_service-1  Started
@@ -372,7 +388,6 @@ Once started, you can access the Carnivorous Greenhouse application at [http://l
Finally, to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore).

<!-- INTERACTIVE page step4.md END -->

<!-- INTERACTIVE page finish.md START -->
@@ -383,14 +398,16 @@ In this example, we configured Alloy to ingest logs via Kafka. We configured All

{{< docs/ignore >}}

### Back to docs

Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy)

{{< /docs/ignore >}}

## Further reading

For more information on Grafana Alloy, refer to the following resources:

- [Grafana Alloy getting started examples](https://grafana.com/docs/alloy/latest/tutorials/)
@@ -400,5 +417,5 @@ If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, y
The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp` can also be pushed to Grafana Cloud.