# Kafka Connectors Shared Logic
The configuration properties below are shared across all Kafka connectors in Brooklin.
Remember:

- All connector config properties must be prefixed with `brooklin.server.connector.<connectorName>` (see the example after this list).
- `connectorName` is an arbitrary user-supplied name specified in the Brooklin configuration (`brooklin.server.connectorNames`).
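For example, assuming a connector registered under the illustrative name `kafkaMirror`, all of its properties carry that prefix. This is only a sketch; the property key shown is a placeholder rather than an actual connector setting:

```properties
# "kafkaMirror" is an arbitrary connector name chosen for illustration.
brooklin.server.connectorNames=kafkaMirror

# Every property for this connector is prefixed with
# brooklin.server.connector.<connectorName>
brooklin.server.connector.kafkaMirror.<someConnectorProperty>=<someValue>
```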
| Property | Description | Default |
|---|---|---|
| | The maximum number of poll attempts to Kafka in case of failure | |
| | The time duration (in milliseconds) to wait between successive poll attempts to Kafka in case of failure | |
| | A flag indicating whether to auto-pause a topic partition if dispatching its data for delivery to the destination system fails | |
| | The time duration (in milliseconds) to keep a topic partition paused after encountering send errors, before attempting to auto-resume | |
| | The maximum time duration (in milliseconds) to allow between consuming data from Kafka and dispatching it for delivery to destination, before incrementing the corresponding delay metric | |
| | Kafka consumer configuration properties | (None) |
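The Kafka consumer configuration properties in the last row are passed through to the underlying Kafka consumer. The sketch below assumes a `consumer.`-scoped pass-through prefix and reuses the illustrative `kafkaMirror` connector name; neither is documented in the table above, so treat both as assumptions:

```properties
# Hypothetical pass-through of Kafka consumer properties; the "consumer." scope
# and the "kafkaMirror" connector name are assumptions used for illustration.
brooklin.server.connector.kafkaMirror.consumer.security.protocol=SSL
brooklin.server.connector.kafkaMirror.consumer.fetch.max.bytes=52428800
```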
The diagnostic endpoints below are shared across all Kafka connectors in Brooklin.
The metrics below are shared across all Kafka connectors in Brooklin.
General metrics prefix: `<connectorName>.`

- `connectorName` is the name of the connector in question as it appears in `brooklin.server.connectorNames`
| Metric Name | Description |
|---|---|
| | The number of datastreams using the connector in the entire cluster |
| | The number of datastream tasks that belong to datastreams using the connector in the entire cluster |
- Aggregate metrics cover all datastreams in a single Brooklin instance.
- Aggregate metrics prefix: `<connectorName>.<connectorTask>.aggregate.`
  - `connectorName` is the name of the connector as it appears in `brooklin.server.connectorNames`
  - `connectorTask` is the name of the connector task, i.e.
    - `kafkaConnectorTask` for `KafkaConnector`
    - `kafkaMirrorMakerConnectorTask` for `KafkaMirrorMakerConnector`
| Metric Name | Description |
|---|---|
| | The number of times polling the Kafka consumer exceeds the configured poll timeout |
| | The rate of errors encountered when data is dispatched for delivery to the destination system |
| | The rate of bytes processed and dispatched for delivery to destination |
| | The rate of Kafka record consumption |
| | The number of auto-paused topic partitions awaiting destination topic creation |
| | The number of auto-paused topic partitions due to errors encountered during dispatch for delivery |
| | The number of auto-paused topic partitions due to exceeding their maximum in-flight messages thresholds |
| | The number of topic partitions paused manually |
| | The number of Kafka topic partitions |
| | The number of times dispatching records to destination exceeds the configured processing delay threshold |
| | The number of Kafka topics |
| | The number of polls exceeding the configured maximum session timeout for the Kafka consumer |
| | The rate of rebalances seen by the Kafka consumer |
| | The number of stuck topic partitions |
- Datastream-specific metrics prefix: `<connectorName>.<connectorTask>.<datastreamName>.`
  - `connectorName` is the name of the connector as it appears in `brooklin.server.connectorNames`
  - `datastreamName` is the datastream name
  - `connectorTask` is the name of the connector task, i.e.
    - `kafkaConnectorTask` for `KafkaConnector`
    - `kafkaMirrorMakerConnectorTask` for `KafkaMirrorMakerConnector`
| Metric Name | Description |
|---|---|
| | The rate of errors encountered when data is dispatched for delivery to the destination system |
| | The distribution (histogram) of the number of records retrieved from Kafka in every poll |
| | The rate of bytes processed and dispatched for delivery to destination |
| | The rate of Kafka record consumption |
| | The number of auto-paused topic partitions awaiting destination topic creation |
| | The number of auto-paused topic partitions due to errors encountered during dispatch for delivery |
| | The number of auto-paused topic partitions due to exceeding their maximum in-flight messages thresholds |
| | The number of topic partitions paused manually |
| | The number of Kafka topic partitions |
| | The rate of polls performed using the Kafka consumer |
| | The number of times dispatching records to destination exceeds the configured processing delay threshold |
| | The number of Kafka topics |
| | The number of polls exceeding the configured maximum session timeout for the Kafka consumer |
| | The rate of rebalances seen by the Kafka consumer |
| | The number of stuck topic partitions |
| | The time duration (in milliseconds) since the last non-empty poll |
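Putting the prefixes above together, a fully qualified metric name takes one of the following forms. The `<metricName>` component is a placeholder, since the individual metric names are registered by the connector task and are not listed by name here:

```
<connectorName>.<metricName>                                   # connector-level, cluster-wide counts
<connectorName>.<connectorTask>.aggregate.<metricName>         # aggregate over all datastreams in one instance
<connectorName>.<connectorTask>.<datastreamName>.<metricName>  # scoped to a single datastream
```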
Reference source file: `com.linkedin.datastream.connectors.kafka.KafkaDatastreamStatesResponse`

| Field Name | Type | Description |
|---|---|---|
| | | Datastream name |
| | | Assigned topic partitions |
| | | Associates each auto-paused topic partition with metadata about the paused partitions |
| | | Associates each topic with a list of manually paused partitions |
| | | Associates each topic partition with the number of in-flight messages |
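Because the field names and types were not captured in the table above, the following is only a rough sketch of the shape such a response might have, with names and types inferred from the descriptions rather than taken from the Brooklin source:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only; field names and types are inferred from the
// descriptions above, not copied from KafkaDatastreamStatesResponse.
public class DatastreamStatesSketch {
  String datastream;                                  // datastream name
  Set<String> assignedTopicPartitions;                // assigned topic partitions, e.g. "myTopic-3"
  Map<String, Object> autoPausedPartitions;           // auto-paused topic partition -> pause metadata
  Map<String, List<Integer>> manualPausedPartitions;  // topic -> manually paused partition numbers
  Map<String, Long> inFlightMessageCounts;            // topic partition -> in-flight message count
}
```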
Reference source file: `com.linkedin.datastream.common.diag.KafkaPositionKey`

| Field Name | Type | Description |
|---|---|---|
| | | The Kafka topic we are consuming from |
| | | The Kafka partition we are consuming from |
| | | The task prefix of the `DatastreamTask` the connector has been assigned (which is causing this topic partition to be consumed) |
| | | The task name of the `DatastreamTask` the connector has been assigned (which is causing this topic partition to be consumed) |
| | | The time (in milliseconds since the Unix epoch) at which consumption of this topic partition started |
Reference source file: `com.linkedin.datastream.common.diag.KafkaPositionValue`

| Field Name | Type | Description |
|---|---|---|
| | | The latest offset (the offset of the last produced message) on the Kafka broker for this topic partition. If the consumer is also at this position, then it is completely caught up and has no more messages to process. |
| | | The current offset that the Kafka consumer has for this topic partition. When the consumer receives new messages for this topic partition, the received messages will have an offset greater than or equal to this value. |
| | | The time (in milliseconds since the Unix epoch) that we were assigned this topic partition. |
| | | The timestamp (in milliseconds since the Unix epoch) of the last record that we received from this topic partition. |
| | | The last time (in milliseconds since the Unix epoch) that we queried a broker for its latest offset data, either by reading metrics data provided by the consumer or by querying the broker directly. |
| | | The last time that the consumer received messages for this topic partition. |
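Similarly, the field names and types for `KafkaPositionKey` and `KafkaPositionValue` were not captured above, so the sketch below only illustrates the kind of data they describe; names and types are inferred from the descriptions, not copied from the Brooklin source:

```java
// Illustrative sketches only; field names and types are inferred from the
// descriptions above, not copied from the actual Brooklin classes.
class PositionKeySketch {
  String topic;                     // the Kafka topic being consumed
  int partition;                    // the Kafka partition being consumed
  String datastreamTaskPrefix;      // task prefix of the assigned DatastreamTask
  String datastreamTaskName;        // task name of the assigned DatastreamTask
  long consumptionStartedEpochMs;   // when consumption of this topic partition started
}

class PositionValueSketch {
  long brokerOffset;                // latest offset on the broker for this topic partition
  long consumerOffset;              // the consumer's current offset for this topic partition
  long assignedEpochMs;             // when this topic partition was assigned
  long lastRecordTimestampEpochMs;  // timestamp of the last record received
  long lastBrokerQueryEpochMs;      // last time the broker was queried for its latest offset
  long lastNonEmptyPollEpochMs;     // last time the consumer received messages for this partition
}
```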