This project is based on Confluent's Kafka JDBC connector with additional functionality, namely:

- Support for TimescaleDB databases.
- Support for multiple `createTable` statements.
- Support for schema creation and setting of the schema name format in the connector config.
- Support for the `TIMESTAMPTZ` data type in PostgreSQL databases.
This project depends on a transform plugin that transforms the Kafka record before it is written to the database. See RADAR-base / kafka-connect-transform-keyvalue for more information.
If you're using Docker, the transform plugin image is included in the Dockerfile. If you're installing manually, the `kafka-connect-transform-keyvalue` plugin must be installed to your Confluent plugin path.
This repository relies on a recent version of Docker and docker-compose, as well as an installation of Java 8 or later.
Copy `docker/sink-timescale.properties.template` to `docker/sink-timescale.properties` and enter your database connection URL, username, and password.

Now you can run a full Kafka stack using:

```shell
docker-compose up -d --build
```
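As a minimal sketch, the filled-in file might look like the following. The property names come from the upstream Confluent JDBC sink connector; the topic, host, database, and credentials are placeholder assumptions you should replace with your own values.

```properties
# docker/sink-timescale.properties -- illustrative values only
name=radar-timescale-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=my_topic
connection.url=jdbc:postgresql://timescaledb:5432/mydb
connection.user=postgres
connection.password=secret
auto.create=true
insert.mode=insert
```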
Code should be formatted using the Google Java Code Style Guide. If you want to contribute a feature or fix, browse our issues and please make a pull request.
To enable Sentry monitoring for the JDBC connector, follow these steps:
- Set a `SENTRY_DSN` environment variable that points to the desired Sentry DSN.
- (Optional) Set the `SENTRY_LOG_LEVEL` environment variable to control the minimum log level of events sent to Sentry. The default log level for Sentry is `WARN`. Possible values are `TRACE`, `DEBUG`, `INFO`, `WARN`, and `ERROR`.
For further configuration of Sentry via environment variables, see here. For instance:

```yaml
SENTRY_LOG_LEVEL: 'ERROR'
SENTRY_DSN: 'https://000000000000.ingest.de.sentry.io/000000000000'
SENTRY_ATTACHSTACKTRACE: true
SENTRY_STACKTRACE_APP_PACKAGES: io.confluent.connect.jdbc
```
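If you run the connector through the docker-compose stack, these variables can be passed in the `environment` section of the connector service. A sketch, where the service name is an assumption — match it to your own `docker-compose.yml`:

```yaml
services:
  kafka-connect:  # service name is an assumption; use the one from your compose file
    environment:
      SENTRY_DSN: 'https://000000000000.ingest.de.sentry.io/000000000000'
      SENTRY_LOG_LEVEL: 'ERROR'
```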