Confluent Metrics Reporter

The Confluent Metrics Reporter collects various metrics from a Kafka cluster that Confluent Control Center system health monitoring and the Confluent Auto Data Balancer need in order to operate.

This data is produced to a topic in the configured Kafka cluster. You may choose to publish metrics to the same Kafka cluster the reporting brokers are part of or to a different one (e.g. a dedicated metrics cluster). The former is more convenient and it’s a reasonable way to get started. However, if the reporting cluster is experiencing issues, it may also affect the availability of the metrics data, which is suboptimal for the system health monitoring use case. A separate metrics cluster tends to be more resilient in such situations.

Installation

To enable the Confluent Metrics Reporter, install the confluent-rebalancer package (installed automatically if the confluent-control-center or the confluent package is installed) on each broker in the Kafka cluster from which you want to collect metrics.
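
For example, on a Debian or Ubuntu system where the Confluent package repository has already been configured, the package can typically be installed with apt (RPM-based systems would use yum instead):

sudo apt-get install confluent-rebalancer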

Configuration

Enable the reporter on each broker by updating its server.properties. For convenience, the server.properties shipped with Confluent Platform includes the configs below, commented out, under the Confluent Metrics Reporter section.

metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
confluent.metrics.reporter.bootstrap.servers=localhost:9092

# Uncomment the following if the metrics cluster has < 3 brokers
#confluent.metrics.reporter.topic.replicas=1

This configures the reporting brokers to publish the metrics to the cluster they are part of. The alternative approach of publishing to a different cluster (e.g. a dedicated metrics cluster) might look like:

metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
confluent.metrics.reporter.bootstrap.servers=broker1:9092,broker2:9092,broker3:9092

A rolling restart of the brokers is required for the config changes to be picked up. After the restart, the reporter should output messages similar to the following to standard output and server.log:

[2017-07-17 17:11:32,304] INFO KafkaConfig values:
...
metric.reporters = [io.confluent.metrics.reporter.ConfluentMetricsReporter]
...
[2017-07-17 17:11:32,611] INFO ConfluentMetricsReporterConfig values:
         confluent.metrics.reporter.bootstrap.servers = localhost:9092
         confluent.metrics.reporter.publish.ms = 15000
   ...
...
[2017-07-17 17:11:48,288] INFO Created metrics reporter topic _confluent-metrics (io.confluent.metrics.reporter.ConfluentMetricsReporter)

Once the topic is created, the metrics reporter will produce to the topic periodically (every 15 seconds by default).
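
The interval is controlled by confluent.metrics.reporter.publish.ms (documented under Configuration Options below). As an illustration, publishing every 30 seconds instead of the default 15 seconds would look like this in server.properties:

confluent.metrics.reporter.publish.ms=30000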

If the bootstrap servers are misconfigured (e.g. wrong port) or they are not available, a message like the following will be logged:

[2017-07-17 16:58:46,912] WARN Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

If confluent.metrics.reporter.topic.replicas, which defaults to 3, is greater than the number of brokers in the Kafka cluster, a message like the following will be logged:

[2017-07-17 12:12:03,616] INFO [Admin Manager on Broker 0]: Error processing create topic request for topic _confluent-metrics with arguments (numPartition
org.apache.kafka.common.errors.InvalidReplicationFactorException: replication factor: 3 larger than available brokers: 1
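
As noted in the Configuration section, the fix for a small metrics cluster (e.g. a single-broker development setup) is to lower the replication factor of the metrics topic in server.properties, for example:

confluent.metrics.reporter.topic.replicas=1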

If metric.reporters has not been set, a message like the following will be logged:

[2017-07-17 17:11:32,304] INFO KafkaConfig values:
...
   metric.reporters = []

Message size

If the total number of partitions in the Kafka cluster is large, the produced message may be larger than the maximum size the broker accepts for the metrics topic. As of 3.3.0, the topic is configured to accept messages up to 10 MB by default. In previous versions, the broker default (1 MB) was used. The following would be logged if a message is rejected due to its size:

[2017-07-19 00:34:50,664] WARN Failed to produce metrics message (io.confluent.metrics.reporter.ConfluentMetricsReporter)
org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.

The solution is to increase the max.message.bytes config for the metrics topic. For example, the following updates it to be twice the default:

./bin/kafka-topics --alter --zookeeper localhost:2181 --config max.message.bytes=20000000 --topic _confluent-metrics
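
To confirm that the override took effect, one way is to describe the topic configuration with the kafka-configs tool (the exact invocation may vary by version):

./bin/kafka-configs --zookeeper localhost:2181 --entity-type topics --entity-name _confluent-metrics --describe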

We intend to remove the need for this in a future release.

Configuration Options

Only the first config below is required, although confluent.metrics.reporter.topic.replicas should be updated if there are fewer than 3 brokers in the Kafka metrics cluster.

The other configs allow you to tune the publisher for additional performance and reliability. Configs categorized as “Importance: low” do not need to be modified in the common case. A combined example is sketched after the option descriptions below.

confluent.metrics.reporter.bootstrap.servers

Bootstrap servers for the Kafka cluster that metrics will be published to. The metrics cluster may be different from the cluster(s) whose metrics are being collected. Several production Kafka clusters can publish to a single metrics cluster, for example.

  • Type: string
  • Importance: high
confluent.metrics.reporter.topic.max.message.bytes

Maximum message size for the metrics topic.

  • Type: int
  • Default: 10485760
  • Valid Values: [0,...]
  • Importance: medium
confluent.metrics.reporter.publish.ms

The metrics reporter will publish new metrics to the metrics topic in intervals defined by this setting. This means that control center system health data lags by this duration, or that rebalancer may compute a plan based on broker data that is stale by this duration. The default is a reasonable value for production environments and it typically does not need to be changed.

  • Type: long
  • Default: 15000
  • Importance: low
confluent.metrics.reporter.topic

Topic on which metrics data will be written.

  • Type: string
  • Default: _confluent-metrics
  • Importance: low
confluent.metrics.reporter.topic.create

Create the metrics topic if it does not exist.

  • Type: boolean
  • Default: true
  • Importance: low
confluent.metrics.reporter.topic.partitions

Number of partitions in the metrics topic.

  • Type: int
  • Default: 12
  • Importance: low
confluent.metrics.reporter.topic.replicas

Number of replicas in the metrics topic. It must not be higher than the number of brokers in the Kafka cluster.

  • Type: int
  • Default: 3
  • Importance: low
confluent.metrics.reporter.topic.retention.bytes

Retention bytes for the metrics topic.

  • Type: long
  • Default: -1
  • Importance: low
confluent.metrics.reporter.topic.retention.ms

Retention time for the metrics topic.

  • Type: long
  • Default: 259200000 (3 days)
  • Importance: low
confluent.metrics.reporter.topic.roll.ms

Log rolling time for the metrics topic.

  • Type: long
  • Default: 14400000 (4 hours)
  • Importance: low
confluent.metrics.reporter.volume.metrics.refresh.ms

The minimum interval at which to fetch new volume metrics.

  • Type: long
  • Default: 15000
  • Importance: low
confluent.metrics.reporter.whitelist

Regex matching the Yammer metric MBean name or Kafka metric name to be published to the metrics topic.

By default this includes all the metrics required by Confluent Control Center and Confluent Auto Data Balancer. This should typically never be modified unless requested by Confluent.

  • Type: string
  • Default: includes all the metrics necessary for Confluent Control Center and Confluent Auto Data Balancer
  • Importance: low
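
As an illustration, a server.properties fragment that publishes to a dedicated metrics cluster and overrides a few of the options above might look like the following (the host names and values are examples, not recommendations):

metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
confluent.metrics.reporter.bootstrap.servers=metrics1:9092,metrics2:9092,metrics3:9092
# Keep metrics data for 1 day instead of the default 3 days
confluent.metrics.reporter.topic.retention.ms=86400000
# Publish every 30 seconds instead of the default 15 seconds
confluent.metrics.reporter.publish.ms=30000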

Security

When configuring the Metrics Reporter on a secure Kafka broker, the embedded producer in the Metrics Reporter (which sends metrics data to the _confluent-metrics topic) needs the correct client security configurations, prefixed with confluent.metrics.reporter.

Authentication

For SSL-related configs, refer to SSL for Kafka Clients:

confluent.metrics.reporter.security.protocol=SSL
confluent.metrics.reporter.ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
confluent.metrics.reporter.ssl.truststore.password=test1234
confluent.metrics.reporter.ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
confluent.metrics.reporter.ssl.keystore.password=test1234
confluent.metrics.reporter.ssl.key.password=test1234

For SASL-related configs, refer to SASL for Kafka Clients:

confluent.metrics.reporter.sasl.mechanism=PLAIN
confluent.metrics.reporter.security.protocol=SASL_PLAINTEXT

Pass the JAAS config file location as a JVM parameter (see SASL for Kafka Clients for more details):

-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf
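
For the PLAIN mechanism, the referenced JAAS file contains a KafkaClient section with the client credentials. A minimal sketch, with placeholder credentials, looks like:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="metricsreporter"
  password="metricsreporter-secret";
};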

Authorization

  1. The broker’s principal must have permission to create the metrics topic in the configured Kafka cluster.
  2. The broker’s principal must have permission to produce to the metrics topic.
  3. The tool’s principal must have permission to consume from the metrics topic. This would typically be the Confluent Control Center and/or the Auto Data Balancer, but it also applies to the Console Consumer if it’s used to inspect the topic.

If you have ACLs set up for Kafka, use the bin/kafka-acls command line tool to add/remove ACLs on topics, for example:

bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add \
    --allow-principal User:Alice --allow-host 198.51.100.0 \
    --operation Read --operation Write --topic _confluent-metrics
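
The broker's own principal needs the permissions from points 1 and 2 above as well. A sketch, assuming the hypothetical principal User:kafka-broker (exact operations and resource types can vary by Kafka version):

bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add \
    --allow-principal User:kafka-broker --operation Create --cluster
bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add \
    --allow-principal User:kafka-broker --operation Write --topic _confluent-metrics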

Verification

Use the console consumer tool to verify that the brokers are sending metrics data to the correct topic.

bin/kafka-console-consumer.sh --topic _confluent-metrics --bootstrap-server <bootstrap-server> --formatter io.confluent.metrics.reporter.ConfluentMetricsFormatter
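
If the cluster is secured, the console consumer needs matching client security settings; these can typically be passed in a properties file via the --consumer.config flag. A sketch, assuming a hypothetical client-ssl.properties containing the SSL settings shown earlier (without the confluent.metrics.reporter. prefix):

bin/kafka-console-consumer.sh --topic _confluent-metrics --bootstrap-server <bootstrap-server> \
    --consumer.config client-ssl.properties \
    --formatter io.confluent.metrics.reporter.ConfluentMetricsFormatter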

Logging

As mentioned previously, the metrics reporter logs to the broker’s server.log by default. For more verbose logging, add the following line to ./etc/kafka/log4j.properties:

log4j.logger.io.confluent.metrics.reporter.ConfluentMetricsReporter=DEBUG

The log output would then include additional information, which may be helpful during debugging, such as:

[2017-07-19 00:54:02,619] DEBUG Metrics reporter topic _confluent-metrics already exists (io.confluent.metrics.reporter.ConfluentMetricsReporter)
[2017-07-19 00:54:02,622] DEBUG Begin publishing metrics (io.confluent.metrics.reporter.ConfluentMetricsReporter)
...
[2017-07-19 00:54:02,772] DEBUG Produced metrics message of size 52104 with offset 316 to topic partition _confluent-metrics-6 (io.confluent.metrics.reporter.ConfluentMetricsReporter)