---
title: Configure Promtail
menuTitle: Configuration reference
description: Configuration parameters for the Promtail agent.
aliases:
- ../../clients/promtail/configuration/
weight: 200
---
# Configure Promtail
Promtail is configured in a YAML file (usually referred to as `config.yaml`)
which contains information on the Promtail server, where positions are stored,
and how to scrape logs from files.
## Printing Promtail Config At Runtime
If you pass Promtail the flag `-print-config-stderr` or `-log-config-reverse-order` (or `-print-config-stderr=true`),
Promtail will dump the entire config object it has created from the built-in defaults, combined first with
overrides from the config file, and second with overrides from flags.
The result is the value for every config object in the Promtail config struct.
Some values may not be relevant to your install; this is expected, as every option has a default value whether or not it is used.
This config is what Promtail will use to run. It can be invaluable for debugging issues related to configuration and
is especially useful for making sure your config files and flags are being read and loaded properly.
`-print-config-stderr` is convenient when running Promtail directly, e.g. `./promtail`, as you can get a quick output of the entire Promtail config.
`-log-config-reverse-order` is the flag we run Promtail with in all our environments; the config entries are reversed so
that the order of configs reads correctly top to bottom when viewed in Grafana's Explore.
## Configuration File Reference
To specify which configuration file to load, pass the `-config.file` flag at the
command line. The file is written in [YAML format](https://en.wikipedia.org/wiki/YAML),
defined by the schema below. Brackets indicate that a parameter is optional. For
non-list parameters the value is set to the specified default.
For more detailed information on configuring how to discover and scrape logs from
targets, see [Scraping]({{< relref "./scraping" >}}). For more information on transforming logs
from scraped targets, see [Pipelines]({{< relref "./pipelines" >}}).
### Use environment variables in the configuration
You can use environment variable references in the configuration file to set values that need to be configurable during deployment.
To do this, pass `-config.expand-env=true` and use:
```
${VAR}
```
Where VAR is the name of the environment variable.
Each variable reference is replaced at startup by the value of the environment variable.
The replacement is case-sensitive and occurs before the YAML file is parsed.
References to undefined variables are replaced by empty strings unless you specify a default value or custom error text.
To specify a default value, use:
```
${VAR:-default_value}
```
Where default_value is the value to use if the environment variable is undefined.
{{% admonition type="note" %}}
With `expand-env=true` the configuration will first run through
[envsubst](https://pkg.go.dev/github.com/drone/envsubst), which will replace double
backslashes with single backslashes. Because of this, every use of a backslash `\` needs to
be replaced with a double backslash `\\`.
{{% /admonition %}}
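For example, with `-config.expand-env=true`, a client URL can be assembled from environment variables at startup. The variable names `LOKI_HOST` and `LOKI_PORT` here are hypothetical; substitute whatever your deployment defines:

```yaml
clients:
  - url: http://${LOKI_HOST:-localhost}:${LOKI_PORT:-3100}/loki/api/v1/push
```

If neither variable is set, the fallbacks after `:-` are used, producing `http://localhost:3100/loki/api/v1/push`.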
### Generic placeholders
- `<boolean>`: a boolean that can take the values `true` or `false`
- `<int>`: any integer matching the regular expression `[1-9]+[0-9]*`
- `<duration>`: a duration matching the regular expression `[0-9]+(ms|[smhdwy])`
- `<labelname>`: a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
- `<labelvalue>`: a string of Unicode characters
- `<filename>`: a valid path relative to current working directory or an
absolute path.
- `<host>`: a valid string consisting of a hostname or IP followed by an optional port number
- `<string>`: a string
- `<secret>`: a string that represents a secret, such as a password
### Supported contents and default values of `config.yaml`:
```yaml
# Configures global settings which impact all targets.
[global: <global_config>]
# Configures the server for Promtail.
[server: <server_config>]
# Describes how Promtail connects to multiple instances
# of Grafana Loki, sending logs to each.
# WARNING: If one of the remote Loki servers fails to respond or responds
# with any error which is retryable, this will impact sending logs to any
# other configured remote Loki servers. Sending is done on a single thread!
# It is generally recommended to run multiple Promtail clients in parallel
# if you want to send to multiple remote Loki instances.
clients:
- [<client_config>]
# Describes how to save read file offsets to disk
[positions: <position_config>]
scrape_configs:
- [<scrape_config>]
# Configures global limits for this instance of Promtail
[limits_config: <limits_config>]
# Configures how tailed targets will be watched.
[target_config: <target_config>]
# Configures additional promtail configurations.
[options: <options_config>]
# Configures tracing support
[tracing: <tracing_config>]
```
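As a rough sketch, the schema above combines into a minimal working `config.yaml` like the following; the ports, file paths, and Loki URL are illustrative and should be adapted to your environment:

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log
```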
## global
The `global` block configures global settings which impact all scrape targets:
```yaml
# Configure how frequently log files from disk get polled for changes.
[file_watch_config: <file_watch_config>]
```
## file_watch_config
The `file_watch_config` block configures how often to poll log files from disk
for changes:
```yaml
# Minimum frequency to poll for files. Any time file changes are detected, the
# poll frequency gets reset to this duration.
[min_poll_frequency: <duration> | default = "250ms"]
# Maximum frequency to poll for files. Any time no file changes are detected,
# the poll frequency doubles in value up to the maximum duration specified by
# this value.
#
# The default is set to the same as min_poll_frequency.
[max_poll_frequency: <duration> | default = "250ms"]
```
## server
The `server` block configures Promtail's behavior as an HTTP server:
```yaml
# Disable the HTTP and GRPC server.
[disable: <boolean> | default = false]
# Enable the /debug/fgprof and /debug/pprof endpoints for profiling.
[profiling_enabled: <boolean> | default = false]
# HTTP server listen host
[http_listen_address: <string>]
# HTTP server listen port (0 means random port)
[http_listen_port: <int> | default = 80]
# gRPC server listen host
[grpc_listen_address: <string>]
# gRPC server listen port (0 means random port)
[grpc_listen_port: <int> | default = 9095]
# Register instrumentation handlers (/metrics, etc.)
[register_instrumentation: <boolean> | default = true]
# Timeout for graceful shutdowns
[graceful_shutdown_timeout: <duration> | default = 30s]
# Read timeout for HTTP server
[http_server_read_timeout: <duration> | default = 30s]
# Write timeout for HTTP server
[http_server_write_timeout: <duration> | default = 30s]
# Idle timeout for HTTP server
[http_server_idle_timeout: <duration> | default = 120s]
# Max gRPC message size that can be received
[grpc_server_max_recv_msg_size: <int> | default = 4194304]
# Max gRPC message size that can be sent
[grpc_server_max_send_msg_size: <int> | default = 4194304]
# Limit on the number of concurrent streams for gRPC calls (0 = unlimited)
[grpc_server_max_concurrent_streams: <int> | default = 100]
# Log only messages with the given severity or above. Supported values [debug,
# info, warn, error]
[log_level: <string> | default = "info"]
# Base path to serve all API routes from (e.g., /v1/).
[http_path_prefix: <string>]
# Target managers check flag for Promtail readiness, if set to false the check is ignored
[health_check_target: <bool> | default = true]
# Enable reload via HTTP request.
[enable_runtime_reload: <bool> | default = false]
```
## clients
The `clients` block configures how Promtail connects to instances of
Loki:
```yaml
# The URL where Loki is listening, denoted in Loki as http_listen_address and
# http_listen_port. If Loki is running in microservices mode, this is the HTTP
# URL for the Distributor. Path to the push API needs to be included.
# Example: http://example.com:3100/loki/api/v1/push
url: <string>
# Custom HTTP headers to be sent along with each push request.
# Be aware that headers that are set by Promtail itself (e.g. X-Scope-OrgID) can't be overwritten.
headers:
# Example: CF-Access-Client-Id: xxx
[ <labelname>: <labelvalue> ... ]
# The tenant ID used by default to push logs to Loki. If omitted or empty
# it assumes Loki is running in single-tenant mode and no X-Scope-OrgID header
# is sent.
[tenant_id: <string>]
# Maximum amount of time to wait before sending a batch, even if that
# batch isn't full.
[batchwait: <duration> | default = 1s]
# Maximum batch size (in bytes) of logs to accumulate before sending
# the batch to Loki.
[batchsize: <int> | default = 1048576]
# If using basic auth, configures the username and password
# sent.
basic_auth:
# The username to use for basic auth
[username: <string>]
# The password to use for basic auth
[password: <string>]
# The file containing the password for basic auth
[password_file: <filename>]
# Optional OAuth 2.0 configuration
# Cannot be used at the same time as basic_auth or authorization
oauth2:
# Client id and secret for oauth2
[client_id: <string>]
[client_secret: <secret>]
# Read the client secret from a file
# It is mutually exclusive with `client_secret`
[client_secret_file: <filename>]
# Optional scopes for the token request
scopes:
[ - <string> ... ]
# The URL to fetch the token from
token_url: <string>
# Optional parameters to append to the token URL
endpoint_params:
[ <string>: <string> ... ]
# Bearer token to send to the server.
[bearer_token: <secret>]
# File containing bearer token to send to the server.
[bearer_token_file: <filename>]
# HTTP proxy server to use to connect to the server.
[proxy_url: <string>]
# If connecting to a TLS server, configures how the TLS
# authentication handshake will operate.
tls_config:
# The CA file to use to verify the server
[ca_file: <string>]
# The cert file to send to the server for client auth
[cert_file: <filename>]
# The key file to send to the server for client auth
[key_file: <filename>]
# Validates that the server name in the server's certificate
# is this value.
[server_name: <string>]
# If true, ignores the server certificate being signed by an
# unknown CA.
[insecure_skip_verify: <boolean> | default = false]
# Configures how to retry requests to Loki when a request
# fails.
# Default backoff schedule:
# 0.5s, 1s, 2s, 4s, 8s, 16s, 32s, 64s, 128s, 256s(4.267m)
# For a total time of 511.5s(8.5m) before logs are lost
backoff_config:
# Initial backoff time between retries
[min_period: <duration> | default = 500ms]
# Maximum backoff time between retries
[max_period: <duration> | default = 5m]
# Maximum number of retries to do
[max_retries: <int> | default = 10]
# Disable retries of batches that Loki responds to with a 429 status code (TooManyRequests). This reduces
# impacts on batches from other tenants, which could end up being delayed or dropped due to exponential backoff.
[drop_rate_limited_batches: <boolean> | default = false]
# Static labels to add to all logs being sent to Loki.
# Use map like {"foo": "bar"} to add a label foo with
# value bar.
# These can also be specified from command line:
# -client.external-labels=k1=v1,k2=v2
# (or --client.external-labels depending on your OS)
# labels supplied by the command line are applied
# to all clients configured in the `clients` section.
# NOTE: values defined in the config file will replace values
# defined on the command line for a given client if the
# label keys are the same.
external_labels:
[ <labelname>: <labelvalue> ... ]
# Maximum time to wait for a server to respond to a request
[timeout: <duration> | default = 10s]
```
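For illustration, a client entry that pushes to a hypothetical Loki endpoint with basic auth and a static `cluster` label might look like this (the hostname, tenant ID, and file path are placeholders):

```yaml
clients:
  - url: http://loki.example.com:3100/loki/api/v1/push
    tenant_id: team-a
    basic_auth:
      username: promtail
      password_file: /etc/promtail/loki-password
    external_labels:
      cluster: prod
```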
## positions
The `positions` block configures where Promtail will save a file
indicating how far it has read into a file. It is needed for when Promtail
is restarted to allow it to continue from where it left off.
```yaml
# Location of positions file
[filename: <string> | default = "/var/log/positions.yaml"]
# How often to update the positions file
[sync_period: <duration> | default = 10s]
# Whether to ignore & later overwrite positions files that are corrupted
[ignore_invalid_yaml: <boolean> | default = false]
```
## scrape_configs
The `scrape_configs` block configures how Promtail can scrape logs from a series
of targets using a specified discovery method. Promtail uses the same [Prometheus scrape_configs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config). This means that if you already run a Prometheus instance, the config will be very similar:
```yaml
# Name to identify this scrape config in the Promtail UI.
job_name: <string>
# Describes how to transform logs from targets.
[pipeline_stages: <pipeline_stages>]
# Defines decompression behavior for the given scrape target.
decompression:
# Whether decompression should be tried or not.
[enabled: <boolean> | default = false]
# Initial delay to wait before starting the decompression.
# Especially useful in scenarios where compressed files are found before the compression is finished.
[initial_delay: <duration> | default = 0s]
# Compression format. Supported formats are: 'gz', 'bz2', and 'z'.
[format: <string> | default = ""]
# Describes how to scrape logs from the journal.
[journal: <journal_config>]
# Describes from which encoding a scraped file should be converted.
[encoding: <iana_encoding_name>]
# Describes how to receive logs from syslog.
[syslog: <syslog_config>]
# Describes how to receive logs via the Loki push API, (e.g. from other Promtails or the Docker Logging Driver)
[loki_push_api: <loki_push_api_config>]
# Describes how to scrape logs from the Windows event logs.
[windows_events: <windows_events_config>]
# Configuration describing how to pull/receive Google Cloud Platform (GCP) logs.
[gcplog: <gcplog_config>]
# Configuration describing how to get Azure Event Hubs messages.
[azure_event_hub: <azure_event_hub_config>]
# Describes how to fetch logs from Kafka via a Consumer group.
[kafka: <kafka_config>]
# Describes how to receive logs from gelf client.
[gelf: <gelf_config>]
# Configuration describing how to pull logs from Cloudflare.
[cloudflare: <cloudflare>]
# Configuration describing how to pull logs from a Heroku LogPlex drain.
[heroku_drain: <heroku_drain>]
# Describes how to relabel targets to determine if they should
# be processed.
relabel_configs:
- [<relabel_config>]
# Static targets to scrape.
static_configs:
- [<static_config>]
# Files containing targets to scrape.
file_sd_configs:
- [<file_sd_configs>]
# Describes how to discover Kubernetes services running on the
# same host.
kubernetes_sd_configs:
- [<kubernetes_sd_config>]
# Describes how to use the Consul Catalog API to discover services registered with the
# consul cluster.
consul_sd_configs:
[ - <consul_sd_config> ... ]
# Describes how to use the Consul Agent API to discover services registered with the consul agent
# running on the same host as Promtail.
consulagent_sd_configs:
[ - <consulagent_sd_config> ... ]
# Describes how to use the Docker daemon API to discover containers running on
# the same host as Promtail.
docker_sd_configs:
[ - <docker_sd_config> ... ]
```
### pipeline_stages
[Pipeline]({{< relref "./pipelines" >}}) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.
In most cases, you extract data from logs with `regex` or `json` stages. The extracted data is transformed into a temporary map object. The data can then be used by Promtail, e.g. as values for `labels` or as an `output`. Additionally, any stage other than `docker` and `cri` can access the extracted data.
```yaml
- [
<docker> |
<cri> |
<regex> |
<json> |
<template> |
<match> |
<timestamp> |
<output> |
<labels> |
<metrics> |
<tenant> |
<replace>
]
```
#### docker
The Docker stage parses the contents of logs from Docker containers, and is defined by name with an empty object:
```yaml
docker: {}
```
The docker stage will match and parse log lines of this format:
```nohighlight
`{"log":"level=info ts=2019-04-30T02:12:41.844179Z caller=filetargetmanager.go:180 msg=\"Adding target\"\n","stream":"stderr","time":"2019-04-30T02:12:41.8443515Z"}`
```
This automatically extracts `time` into the log's timestamp, `stream` into a label, and the `log` field into the output. This can be very helpful because Docker wraps your application log in this way, and this stage unwraps it so that further pipeline stages process just the log content.
The Docker stage is just a convenience wrapper for this definition:
```yaml
- json:
expressions:
output: log
stream: stream
timestamp: time
- labels:
stream:
- timestamp:
source: timestamp
format: RFC3339Nano
- output:
source: output
```
#### cri
The CRI stage parses the contents of logs from CRI containers, and is defined by name with an empty object:
```yaml
cri: {}
```
The CRI stage will match and parse log lines of this format:
```nohighlight
2019-01-01T01:00:00.000000001Z stderr P some log message
```
This automatically extracts `time` into the log's timestamp, `stream` into a label, and the remaining message into the output. This can be very helpful because CRI wraps your application log in this way, and this stage unwraps it so that further pipeline stages process just the log content.
The CRI stage is just a convenience wrapper for this definition:
```yaml
- regex:
expression: "^(?s)(?P<time>\\S+?) (?P<stream>stdout|stderr) (?P<flags>\\S+?) (?P<content>.*)$"
- labels:
stream:
- timestamp:
source: time
format: RFC3339Nano
- output:
source: content
```
#### regex
The Regex stage takes a regular expression and extracts captured named groups to
be used in further stages.
```yaml
regex:
# The RE2 regular expression. Each capture group must be named.
expression: <string>
# Name from extracted data to parse. If empty, uses the log message.
[source: <string>]
```
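As a sketch, a hypothetical log line such as `GET /api/users 200` could be parsed into `method`, `path`, and `status` entries in the extracted map with:

```yaml
- regex:
    expression: '^(?P<method>\S+) (?P<path>\S+) (?P<status>\d+)$'
```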
#### json
The JSON stage parses a log line as JSON and takes
[JMESPath](http://jmespath.org/) expressions to extract data from the JSON to be
used in further stages.
```yaml
json:
# Set of key/value pairs of JMESPath expressions. The key will be
# the key in the extracted data while the expression will be the value,
# evaluated as a JMESPath from the source data.
expressions:
[ <string>: <string> ... ]
# Name from extracted data to parse. If empty, uses the log message.
[source: <string>]
```
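For example, assuming (hypothetically) that your application emits lines like `{"level":"info","message":"done","nested":{"user":"a"}}`, the following extracts a top-level field, a renamed field, and a nested value:

```yaml
- json:
    expressions:
      level: level
      msg: message
      user: nested.user
```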
#### template
The template stage uses Go's
[`text/template`](https://golang.org/pkg/text/template) language to manipulate
values.
```yaml
template:
# Name from extracted data to parse. If key in extract data doesn't exist, an
# entry for it will be created.
source: <string>
# Go template string to use. In additional to normal template
# functions, ToLower, ToUpper, Replace, Trim, TrimLeft, TrimRight,
# TrimPrefix, TrimSuffix, and TrimSpace are available as functions.
template: <string>
```
Example:
```yaml
template:
source: level
template: '{{ if eq .Value "WARN" }}{{ Replace .Value "WARN" "OK" -1 }}{{ else }}{{ .Value }}{{ end }}'
```
#### match
The match stage conditionally executes a set of stages when a log entry matches
a configurable [LogQL]({{< relref "../../query" >}}) stream selector.
```yaml
match:
# LogQL stream selector.
selector: <string>
# Names the pipeline. When defined, creates an additional label in
# the pipeline_duration_seconds histogram, where the value is
# concatenated with job_name using an underscore.
[pipeline_name: <string>]
# Nested set of pipeline stages only if the selector
# matches the labels of the log entries:
stages:
- [
<docker> |
<cri> |
<regex> |
<json> |
<template> |
<match> |
<timestamp> |
<output> |
<labels> |
<metrics>
]
```
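As an example, assuming (hypothetically) JSON logs that carry `app` and `level` fields, a match stage can apply an extra stage only to one application's entries:

```yaml
- json:
    expressions:
      app: app
      level: level
- labels:
    app:
- match:
    selector: '{app="nginx"}'
    stages:
      - labels:
          level:
```

Here only entries whose `app` label is `nginx` receive the additional `level` label.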
#### timestamp
The timestamp stage parses data from the extracted map and overrides the final
time value of the log that is stored by Loki. If this stage isn't present,
Promtail will associate the timestamp of the log entry with the time that
log entry was read.
```yaml
timestamp:
# Name from extracted data to use for the timestamp.
source: <string>
# Determines how to parse the time string. Can use
# pre-defined formats by name: [ANSIC UnixDate RubyDate RFC822
# RFC822Z RFC850 RFC1123 RFC1123Z RFC3339 RFC3339Nano Unix
# UnixMs UnixUs UnixNs].
format: <string>
# IANA Timezone Database string.
[location: <string>]
```
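For example, assuming (hypothetically) JSON logs with a `time` field in RFC3339Nano form, the timestamp can be promoted like this:

```yaml
- json:
    expressions:
      ts: time
- timestamp:
    source: ts
    format: RFC3339Nano
```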
#### output
The output stage takes data from the extracted map and sets the contents of the
log entry that will be stored by Loki.
```yaml
output:
# Name from extracted data to use for the log entry.
source: <string>
```
#### labels
The labels stage takes data from the extracted map and sets additional labels
on the log entry that will be sent to Loki.
```yaml
labels:
# Key is REQUIRED and the name for the label that will be created.
# Value is optional and will be the name from extracted data whose value
# will be used for the value of the label. If empty, the value will be
# inferred to be the same as the key.
[ <string>: [<string>] ... ]
```
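As a sketch, assuming logfmt-style lines that begin with `level=...` (a hypothetical format), a `level` label can be created from the extracted data:

```yaml
- regex:
    expression: '^level=(?P<level>\w+)'
- labels:
    level:
```

Since no value is given for the `level` key, the label takes its value from the `level` entry in the extracted map.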
#### metrics
The metrics stage allows for defining metrics from the extracted data.
Created metrics are not pushed to Loki and are instead exposed via Promtail's
`/metrics` endpoint. Prometheus should be configured to scrape Promtail to be
able to retrieve the metrics configured by this stage.
```yaml
# A map where the key is the name of the metric and the value is a specific
# metric type.
metrics:
[<string>: [ <counter> | <gauge> | <histogram> ] ...]
```
##### counter
Defines a counter metric whose value only goes up.
```yaml
# The metric type. Must be Counter.
type: Counter
# Describes the metric.
[description: <string>]
# Key from the extracted data map to use for the metric,
# defaulting to the metric's name if not present.
[source: <string>]
config:
# Filters down source data and only changes the metric
# if the targeted value exactly matches the provided string.
# If not present, all data will match.
[value: <string>]
# Must be either "inc" or "add" (case insensitive). If
# inc is chosen, the metric value will increase by 1 for each
# log line received that passed the filter. If add is chosen,
# the extracted value must be convertible to a positive float
# and its value will be added to the metric.
action: <string>
```
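For instance, assuming a hypothetical `level` field has already been extracted by an earlier stage, a counter of error-level lines could be defined as:

```yaml
- metrics:
    error_lines_total:
      type: Counter
      description: "count of error-level log lines"
      source: level
      config:
        value: error
        action: inc
```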
##### gauge
Defines a gauge metric whose value can go up or down.
```yaml
# The metric type. Must be Gauge.
type: Gauge
# Describes the metric.
[description: <string>]
# Key from the extracted data map to use for the metric,
# defaulting to the metric's name if not present.
[source: <string>]
config:
# Filters down source data and only changes the metric
# if the targeted value exactly matches the provided string.
# If not present, all data will match.
[value: <string>]
# Must be either "set", "inc", "dec", "add", or "sub". If
# add, set, or sub is chosen, the extracted value must be
# convertible to a positive float. inc and dec will increment
# or decrement the metric's value by 1 respectively.
action: <string>
```
##### histogram
Defines a histogram metric whose values are bucketed.
```yaml
# The metric type. Must be Histogram.
type: Histogram
# Describes the metric.
[description: <string>]
# Key from the extracted data map to use for the metric,
# defaulting to the metric's name if not present.
[source: <string>]
config:
# Filters down source data and only changes the metric
# if the targeted value exactly matches the provided string.
# If not present, all data will match.
[value: <string>]
# Must be either "inc" or "add" (case insensitive). If
# inc is chosen, the metric value will increase by 1 for each
# log line received that passed the filter. If add is chosen,
# the extracted value must be convertible to a positive float
# and its value will be added to the metric.
action: <string>
# Holds all the numbers in which to bucket the metric.
buckets:
- <int>
```
#### tenant
The tenant stage is an action stage that sets the tenant ID for the log entry
picking it from a field in the extracted data map.
```yaml
tenant:
# Exactly one of the label, source, or value options is required
# (they are mutually exclusive).
# Name of the label whose value should be set as the tenant ID.
[ label: <string> ]
# Name of the field in the extracted data whose value should be set as the tenant ID.
[ source: <string> ]
# Value to use to set the tenant ID when this stage is executed. Useful
# when this stage is included within a conditional pipeline with "match".
[ value: <string> ]
```
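As an example, the tenant stage is often nested inside a `match` stage so that only a subset of entries is rerouted; the `namespace` label and tenant ID here are hypothetical:

```yaml
- match:
    selector: '{namespace="team-a"}'
    stages:
      - tenant:
          value: team-a-tenant
```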
#### replace
The replace stage is a parsing stage that parses a log line using
a regular expression and replaces the log line.
```yaml
replace:
# The RE2 regular expression. Each named capture group will be added to extracted.
# Each capture group and named capture group will be replaced with the value given in
# `replace`
expression: <string>
# Name from extracted data to parse. If empty, uses the log message.
# The replaced value will be assigned back to the source key
[source: <string>]
# Value to which the captured group will be replaced. The captured group or the named
# captured group will be replaced with this value and the log line will be replaced with
# new replaced values. An empty value will remove the captured group from the log line.
[replace: <string>]
```
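A common use is masking secrets. Assuming (hypothetically) log lines that contain `password=<value>`, the captured group can be overwritten in place:

```yaml
- replace:
    expression: 'password=(?P<secret>\S+)'
    replace: '****'
```

Only the captured `secret` group is rewritten, so a line like `password=hunter2` keeps its `password=` prefix and becomes `password=****`.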
### journal
The `journal` block configures reading from the systemd journal from
Promtail. Requires a build of Promtail that has journal support _enabled_. If
using the AMD64 Docker image, this is enabled by default.
```yaml
# When true, log messages from the journal are passed through the
# pipeline as a JSON message with all of the journal entries' original
# fields. When false, the log message is the text content of the MESSAGE
# field from the journal entry.
[json: <boolean> | default = false]
# The oldest relative time from process start that will be read
# and sent to Loki.
[max_age: <duration> | default = 7h]
# Label map to add to every log coming out of the journal
labels:
[ <labelname>: <labelvalue> ... ]
# Path to a directory to read entries from. Defaults to system
# paths (/var/log/journal and /run/log/journal) when empty.
[path: <string>]
```
{{% admonition type="note" %}}
The priority label is available as both a value and a keyword. For example, if `priority` is `3` then the labels will be `__journal_priority` with a value `3` and `__journal_priority_keyword` with the corresponding keyword `err`.
{{% /admonition %}}
### syslog
The `syslog` block configures a syslog listener allowing users to push
logs to Promtail with the syslog protocol.
Currently supported is [IETF Syslog (RFC5424)](https://tools.ietf.org/html/rfc5424)
with and without octet counting.
The recommended deployment is to have a dedicated syslog forwarder like **syslog-ng** or **rsyslog**
in front of Promtail. The forwarder can take care of the various specifications
and transports that exist (UDP, BSD syslog, ...).
[Octet counting](https://tools.ietf.org/html/rfc6587#section-3.4.1) is recommended as the
message framing method. In a stream with [non-transparent framing](https://tools.ietf.org/html/rfc6587#section-3.4.2),
Promtail needs to wait for the next message to catch multi-line messages,
therefore delays between messages can occur.
See recommended output configurations for
[syslog-ng]({{< relref "./scraping#syslog-ng-output-configuration" >}}) and
[rsyslog]({{< relref "./scraping#rsyslog-output-configuration" >}}). Both configurations enable
IETF Syslog with octet-counting.
You may need to increase the open files limit for the Promtail process
if many clients are connected. (`ulimit -Sn`)
```yaml
# TCP address to listen on. Has the format of "host:port".
listen_address: <string>
# Configure the receiver to use TLS.
tls_config:
# Certificate and key files sent by the server (required)
cert_file: <string>
key_file: <string>
# CA certificate used to validate client certificate. Enables client certificate verification when specified.
[ ca_file: <string> ]
# The idle timeout for tcp syslog connections, default is 120 seconds.
idle_timeout: <duration>
# Whether to convert syslog structured data to labels.
# A structured data entry of [example@99999 test="yes"] would become
# the label "__syslog_message_sd_example_99999_test" with the value "yes".
label_structured_data: <bool>
# Label map to add to every log message.
labels:
[ <labelname>: <labelvalue> ... ]
# Whether Promtail should pass on the timestamp from the incoming syslog message.
# When false, or if no timestamp is present on the syslog message, Promtail
# assigns the current timestamp when the log is processed.
# Default is false.
use_incoming_timestamp: <bool>
# Sets the maximum limit to the length of syslog messages
max_message_length: <int>
```
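For illustration, a minimal syslog scrape config might look like the following sketch. The listen address, port, and the `host` target label are placeholder assumptions; the source label comes from the list of available labels below:

```yaml
scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: 0.0.0.0:1514
      label_structured_data: true
      labels:
        job: syslog
    relabel_configs:
      # Promote the parsed syslog hostname to a queryable "host" label.
      - source_labels: ['__syslog_message_hostname']
        target_label: host
```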
#### Available Labels
- `__syslog_connection_ip_address`: The remote IP address.
- `__syslog_connection_hostname`: The remote hostname.
- `__syslog_message_severity`: The [syslog severity](https://tools.ietf.org/html/rfc5424#section-6.2.1) parsed from the message. Symbolic name as per [syslog_message.go](https://github.com/influxdata/go-syslog/blob/v2.0.1/rfc5424/syslog_message.go#L184).
- `__syslog_message_facility`: The [syslog facility](https://tools.ietf.org/html/rfc5424#section-6.2.1) parsed from the message. Symbolic name as per [syslog_message.go](https://github.com/influxdata/go-syslog/blob/v2.0.1/rfc5424/syslog_message.go#L235) and `syslog(3)`.
- `__syslog_message_hostname`: The [hostname](https://tools.ietf.org/html/rfc5424#section-6.2.4) parsed from the message.
- `__syslog_message_app_name`: The [app-name field](https://tools.ietf.org/html/rfc5424#section-6.2.5) parsed from the message.
- `__syslog_message_proc_id`: The [procid field](https://tools.ietf.org/html/rfc5424#section-6.2.6) parsed from the message.
- `__syslog_message_msg_id`: The [msgid field](https://tools.ietf.org/html/rfc5424#section-6.2.7) parsed from the message.
- `__syslog_message_sd_<sd_id>[_<iana_enterprise_id>]_<sd_name>`: The [structured-data field](https://tools.ietf.org/html/rfc5424#section-6.3) parsed from the message. The data field `[custom@99770 example="1"]` becomes `__syslog_message_sd_custom_99770_example`.
### loki_push_api
The `loki_push_api` block configures Promtail to expose a [Loki push API](https://grafana.com/docs/loki/<LOKI_VERSION>/reference/loki-http-api#ingest-logs) server.
Each job configured with a `loki_push_api` will expose this API and will require a separate port.
Note that the `server` configuration is the same as [server](#server).
Promtail also exposes a second endpoint on `/promtail/api/v1/raw` which expects newline-delimited log lines.
This can be used to send NDJSON or plaintext logs.
The readiness of the loki_push_api server can be checked using the endpoint `/ready`.
```yaml
# The push server configuration options
[server: <server_config>]
# Label map to add to every log line sent to the push API
labels:
[ <labelname>: <labelvalue> ... ]
# Whether Promtail should pass on the timestamp from the incoming log.
# When false, Promtail assigns the current timestamp when the log is processed.
# Does not apply to the plaintext endpoint on `/promtail/api/v1/raw`.
[use_incoming_timestamp: <bool> | default = false]
```
See [Example Push Config](#example-push-config)
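As a minimal sketch, a `loki_push_api` scrape config might look like the following. The port numbers and the `pushserver` label value are assumptions chosen for this example:

```yaml
scrape_configs:
  - job_name: push
    loki_push_api:
      server:
        http_listen_port: 3500
        grpc_listen_port: 3600
      labels:
        pushserver: push1
      use_incoming_timestamp: true
```

Because each `loki_push_api` job runs its own push server, additional jobs would each need distinct ports.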
### windows_events
The `windows_events` block configures Promtail to scrape Windows event logs and send them to Loki.
To subscribe to a specific event stream you need to provide either an `eventlog_name` or an `xpath_query`.
Events are scraped every 3 seconds by default; this can be changed with `poll_interval`.
A bookmark path `bookmark_path` is mandatory and is used as a position file where Promtail keeps
a record of the last event processed. This file persists across Promtail restarts.
You can set `use_incoming_timestamp` if you want to keep incoming event timestamps. By default, Promtail uses the timestamp at which
the event was read from the event log.
Promtail serializes Windows events as JSON, adding `channel` and `computer` labels from the received event.
You can add additional labels with the `labels` property.
```yaml
# LCID (Locale ID) for event rendering
# - 1033 to force English language
# - 0 to use default Windows locale
[locale: <int> | default = 0]
# Name of eventlog, used only if xpath_query is empty
# Example: "Application"
[eventlog_name: <string> | default = ""]
# xpath_query can be in the defined short form like "Event/System[EventID=999]"
# or you can form an XML query. Refer to the Consuming Events article:
# https://docs.microsoft.com/en-us/windows/win32/wes/consuming-events
# The XML query is the recommended form, because it is the most flexible.
# You can create or debug an XML query by creating a Custom View in Windows Event Viewer
# and then copying the resulting XML here.
[xpath_query: <string> | default = "*"]
# Sets the bookmark location on the filesystem.
# The bookmark contains the current position of the target in XML.
# When restarting or rolling out Promtail, the target will continue to scrape events where it left off based on the bookmark position.
# The position is updated after each entry processed.
[bookmark_path: <string> | default = ""]
# The interval at which the target polls for new events. By default the target checks every 3 seconds.
[poll_interval: <duration> | default = 3s]