
jmxexporter-prometheus-grafana

Prometheus and Grafana stack

After running the demo, open Grafana at http://localhost:3000 and log in with admin/password.

  • JMX Exporter version: 1.1.0
  • Prometheus version: 2.47.2
  • Grafana version: 10.2.0

Note

JMX Exporter 1.x Dashboards

Grafana dashboards are supported for JMX Exporter version 1.x.

If you still use a JMX Exporter version < 1.x (for example 0.20), use the dashboards from the grafana-dashboards-exporter-pre-1.x folder.


List of provided dashboards:


Note

Consumer Group Lag

Starting with CP 7.5, brokers expose JMX tenant metrics for consumer lag; see the documentation.

Consequently, you can go with either the kafka-lag-exporter or the brokers' built-in tenant metrics. For the latter, you need to enable it by setting confluent.consumer.lag.emitter.enabled = true in the broker configuration; see the documentation.

This repository contains both options:

  • Dedicated Kafka lag exporter dashboard
  • Consumer lag visualizations within the consumer dashboard
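
As a sketch of the broker-side option above: the property goes in the broker configuration, and (assuming the standard Confluent Docker image convention of prefixing with KAFKA_ and replacing dots with underscores) it can also be passed as an environment variable in a compose file.

```properties
# server.properties — enable the built-in consumer lag emitter (CP 7.5+)
confluent.consumer.lag.emitter.enabled=true

# Equivalent docker-compose environment entry (Confluent image convention):
# KAFKA_CONFLUENT_CONSUMER_LAG_EMITTER_ENABLED: "true"
```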

JMX Exporter UI

[Experimental]

You can test JMX metrics using the UI and check whether they match a Prometheus ruleset file.

To run the UI:

  • ensure you have Python 3.x installed
  • install the Python dependencies:
pip install Flask
  • run the UI, then connect to localhost:5000
python shared-assets/jmx-exporter-matching-ui/app.py
  • play with the UI
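
Conceptually, the check the UI performs resembles testing an mBean name against a rule's regex pattern from the exporter's ruleset. A minimal illustrative sketch (the rule pattern below is a made-up example, not taken from the repo's ruleset files; the real matching logic lives in shared-assets/jmx-exporter-matching-ui/app.py):

```python
import re

# Example rule pattern in the JMX Exporter style: captures the mBean's
# "type" and "name" properties for a kafka.server metric.
RULE_PATTERN = re.compile(r"kafka\.server<type=(.+), name=(.+)><>Value")

def matches(mbean: str) -> bool:
    """Return True if the mBean string matches the example rule pattern."""
    return RULE_PATTERN.match(mbean) is not None

sample = "kafka.server<type=BrokerTopicMetrics, name=MessagesInPerSec><>Value"
print(matches(sample))  # True
print(matches("java.lang<type=Memory><>HeapMemoryUsage"))  # False
```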

JMX Exporter UI

Confluent Platform overview

Confluent Platform overview

Zookeeper cluster

Zookeeper cluster dashboard

Kafka cluster

Kafka cluster dashboard 0 Kafka cluster dashboard 1

Alternatively, a definition file is available that collects only metric values at the 99th percentile.

Kafka topics

Kafka topics

Kafka clients

Kafka Producer

Kafka Consumer

Alternatively, a definition file is available that collects only a limited set of client metrics: clients - reduced.

Kafka quotas

For Kafka to output quota metrics, at least one quota configuration is necessary.

A quota can be configured from the cp-demo folder using docker-compose:

docker-compose exec kafka1 kafka-configs --bootstrap-server kafka1:12091 --alter --add-config 'producer_byte_rate=10000,consumer_byte_rate=30000,request_percentage=0.2' --entity-type users --entity-name unknown --entity-type clients --entity-name unknown
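
To make the --add-config argument above easier to read, here is a small illustrative helper (not part of the repo) that parses the quota string into its individual entries; the meaning of each key is noted in the comments:

```python
def parse_quota_config(config: str) -> dict:
    """Parse a kafka-configs --add-config string into a key/value dict."""
    quotas = {}
    for entry in config.split(","):
        key, value = entry.split("=")
        quotas[key] = float(value)
    return quotas

q = parse_quota_config(
    "producer_byte_rate=10000,consumer_byte_rate=30000,request_percentage=0.2"
)
# producer_byte_rate:  max bytes/s a producer may publish before throttling
# consumer_byte_rate:  max bytes/s a consumer may fetch before throttling
# request_percentage:  request-time quota (percentage of handler thread time)
print(q["producer_byte_rate"])  # 10000.0
```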

Kafka quotas

Kafka Lag Exporter

kafkalagexporter

Kafka Transaction Coordinator

kafkalagexporter

Schema Registry cluster

Schema Registry cluster

Kafka Connect cluster

Kafka Connect cluster dashboard 0 Kafka Connect cluster dashboard 1

ksqlDB cluster

ksqlDB cluster dashboard 0 ksqlDB cluster dashboard 1

Kafka streams

Kafka streams dashboard 0

Kafka streams RocksDB

kafkastreams-rocksdb 0

Librdkafka

librdkafka consumer librdkafka producer

Oracle CDC source Connector

The demo is based on https://github.com/vdesabou/kafka-docker-playground/tree/master/connect/connect-cdc-oracle19-source

To test:

oraclecdc

Debezium CDC source Connectors

debezium

Mongo source and sink Connectors

mongo

Cluster Linking

To test, use the dev-toolkit with the clusterlinking profile:

  1. Start the dev-toolkit with
$ cd dev-toolkit
$ ./start.sh --profile clusterlinking

clusterlinking

Rest Proxy

restproxy

KRaft

To test, use the dev-toolkit with the default profile:

  1. Start the dev-toolkit with
$ cd dev-toolkit
$ ./start.sh

kraft1 kraft2

Confluent RBAC

rbac

Replicator

To test, follow these steps:

  1. Start the dev-toolkit with the replicator profile
$ cd dev-toolkit
$ ./start.sh --profile replicator

replicator replicator

Tiered Storage

To test, follow these steps:

  1. Start the dev-toolkit with the tieredstorage profile
$ cd dev-toolkit
$ ./start.sh --profile tieredstorage

tiered-storage

Confluent Audit

confluent-audit

Flink Cluster

To test, follow the instructions in the flink folder.

flink-cluster