Configure Confluent Platform Logging

Apache Kafka® and Confluent Platform components use the Java-based logging utility Apache Log4j 2 to help you troubleshoot issues with starting your cluster and monitor the ongoing health of the cluster when it is running.

View and configure logs

You can configure the log level, how logs are written, and where a log file is stored using the log4j2.yaml file for each Confluent Platform component. For a description of the file, see the Configuration with YAML section of the Log4j 2 documentation.

By default, the log4j2.yaml files are read from the CONFLUENT_HOME/etc/{component-name} directory where CONFLUENT_HOME is the install location of Confluent Platform, and {component-name} specifies the Confluent Platform component such as schema-registry.

For example, for a Kafka broker or controller, you can find the log4j2.yaml file under CONFLUENT_HOME/etc/kafka. The Kafka Connect Log4j 2 configuration file is also located under the kafka directory and is named connect-log4j2.yaml.
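
For example, in a ZIP or TAR installation you can expect locations like the following (the exact set of files depends on which components are installed):

 CONFLUENT_HOME/etc/kafka/log4j2.yaml               # Kafka brokers and controllers
 CONFLUENT_HOME/etc/kafka/connect-log4j2.yaml       # Kafka Connect
 CONFLUENT_HOME/etc/schema-registry/log4j2.yaml     # Schema Registry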

You can make configuration changes in the existing file, or you can specify a different configuration file at component startup using the {COMPONENT}_LOG4J_OPTS environment variable.

For example, to specify a Log4j 2 configuration file at Kafka startup:

KAFKA_LOG4J_OPTS="-Dlog4j2.configurationFile=file:/path/to/log4j2.yaml" ./bin/kafka-server-start ./etc/kafka/server.properties

The following table lists the {COMPONENT}_LOG4J_OPTS environment variable for each Confluent Platform component; an example using one of these variables follows the table:

Component Environment variable
Kafka KAFKA_LOG4J_OPTS
Kafka Connect KAFKA_LOG4J_OPTS
Confluent Control Center CONTROL_CENTER_LOG4J_OPTS
Schema Registry SCHEMA_REGISTRY_LOG4J_OPTS
REST Proxy KAFKA_REST_LOG4J_OPTS
ksqlDB KSQL_LOG4J_OPTS
Replicator REPLICATOR_LOG4J_OPTS
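
For example, to start Schema Registry with a custom Log4j 2 configuration file, you might run the following from CONFLUENT_HOME (the configuration file path is illustrative):

SCHEMA_REGISTRY_LOG4J_OPTS="-Dlog4j2.configurationFile=file:/path/to/log4j2.yaml" ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties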

Example properties file

The following shows an excerpt of the Log4j 2 YAML file for Kafka.

 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
 # The ASF licenses this file to You under the Apache License, Version 2.0
 # (the "License"); you may not use this file except in compliance with
 # the License.  You may obtain a copy of the License at
 #
 #    http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

 # Unspecified loggers and loggers with additivity=true output to server.log and stdout
 # Note that INFO applies only to unspecified loggers; otherwise the log level of the child logger is used
 Configuration:
   Properties:
     Property:
       # Fallback if the system property is not set
       - name: "kafka.logs.dir"
         value: "."
       - name: "logPattern"
         value: "[%d] %p %m (%c)%n"

   # Appenders configuration
   # See: https://logging.apache.org/log4j/2.x/manual/appenders.html
   Appenders:
     Console:
       name: STDOUT
       PatternLayout:
         pattern: "${logPattern}"

     RollingFile:
       - name: KafkaAppender
         fileName: "${sys:kafka.logs.dir}/server.log"
         filePattern: "${sys:kafka.logs.dir}/server.log.%d{yyyy-MM-dd-HH}"
         PatternLayout:
           pattern: "${logPattern}"
         TimeBasedTriggeringPolicy:
           modulate: true
           interval: 1
       # State Change appender
       - name: StateChangeAppender
         fileName: "${sys:kafka.logs.dir}/state-change.log"
          filePattern: "${sys:kafka.logs.dir}/state-change.log.%d{yyyy-MM-dd-HH}"
         PatternLayout:
           pattern: "${logPattern}"
         TimeBasedTriggeringPolicy:
           modulate: true
           interval: 1
       # Request appender
       - name: RequestAppender
         fileName: "${sys:kafka.logs.dir}/kafka-request.log"
         filePattern: "${sys:kafka.logs.dir}/kafka-request.log.%d{yyyy-MM-dd-HH}"
         PatternLayout:
           pattern: "${logPattern}"
         TimeBasedTriggeringPolicy:
           modulate: true
           interval: 1
       # Cleaner appender
       - name: CleanerAppender
         fileName: "${sys:kafka.logs.dir}/log-cleaner.log"
         filePattern: "${sys:kafka.logs.dir}/log-cleaner.log.%d{yyyy-MM-dd-HH}"
         PatternLayout:
           pattern: "${logPattern}"
         TimeBasedTriggeringPolicy:
           modulate: true
           interval: 1
       # Controller appender
       - name: ControllerAppender
         fileName: "${sys:kafka.logs.dir}/controller.log"
         filePattern: "${sys:kafka.logs.dir}/controller.log.%d{yyyy-MM-dd-HH}"
         PatternLayout:
           pattern: "${logPattern}"
         TimeBasedTriggeringPolicy:
           modulate: true
           interval: 1
       # Authorizer appender
       - name: AuthorizerAppender
         fileName: "${sys:kafka.logs.dir}/kafka-authorizer.log"
         filePattern: "${sys:kafka.logs.dir}/kafka-authorizer.log.%d{yyyy-MM-dd-HH}"
         PatternLayout:
           pattern: "${logPattern}"
         TimeBasedTriggeringPolicy:
           modulate: true
           interval: 1
       # Metadata service appender
       - name: MetadataServiceAppender
         fileName: "${sys:kafka.logs.dir}/metadata-service.log"
         filePattern: "${sys:kafka.logs.dir}/metadata-service.log.%d{yyyy-MM-dd-HH}"
         PatternLayout:
           pattern: "${logPattern}"
         TimeBasedTriggeringPolicy:
           modulate: true
           interval: 1
       # AuditLog appender
       - name: AuditLogAppender
         fileName: "${sys:kafka.logs.dir}/metadata-service.log"
         filePattern: "${sys:kafka.logs.dir}/metadata-service.log.%d{yyyy-MM-dd-HH}"
         PatternLayout:
           pattern: "${logPattern}"
         TimeBasedTriggeringPolicy:
           modulate: true
           interval: 1
       # DataBalancer appender
       - name: DataBalancerAppender
         fileName: "${sys:kafka.logs.dir}/data-balancer.log"
         filePattern: "${sys:kafka.logs.dir}/data-balancer.log.%d{yyyy-MM-dd-HH}"
         PatternLayout:
           pattern: "${logPattern}"
         TimeBasedTriggeringPolicy:
           modulate: true
           interval: 1

   # Loggers configuration
   # See: https://logging.apache.org/log4j/2.x/manual/configuration.html#configuring-loggers
   Loggers:
     Root:
       level: INFO
       AppenderRef:
         - ref: STDOUT
         - ref: KafkaAppender
     Logger:
       # Kafka logger
       - name: kafka
         level: INFO
       # Kafka org.apache logger
       - name: org.apache.kafka
         level: INFO
       # Kafka request logger
       - name: kafka.request.logger
         level: WARN
         additivity: false
         AppenderRef:
           ref: RequestAppender
      # Uncomment the lines below and change the level of the kafka.network.RequestChannel$
      # logger to TRACE for additional output related to the handling of requests
 #      - name: kafka.network.Processor
 #        level: TRACE
 #        additivity: false
 #        AppenderRef:
 #          ref: RequestAppender
 #      - name: kafka.server.KafkaApis
 #        level: TRACE
 #        additivity: false
 #        AppenderRef:
 #          ref: RequestAppender
       # Kafka network RequestChannel$ logger
       - name: kafka.network.RequestChannel$
         level: WARN
         additivity: false
         AppenderRef:
           ref: RequestAppender
       # Controller logger
       - name: org.apache.kafka.controller
         level: INFO
         additivity: false
         AppenderRef:
           ref: ControllerAppender
       # LogCleaner logger
       - name: kafka.log.LogCleaner
         level: INFO
         additivity: false
         AppenderRef:
           ref: CleanerAppender


. . .

Things to note in this file:

  • The example excerpt, for a ZIP/TAR installation, writes log files under the directory given by the kafka.logs.dir property (falling back to the current directory). This is separate from the log.dirs setting in the broker properties file, for example log.dirs=/tmp/kraft-combined-logs, which specifies where Kafka stores topic data.

  • There are several appenders configured for Kafka. Loggers that route to these appenders override the root logger's output. The following table describes these appenders.

    Appender Description
    KafkaAppender Appends all log messages from the Kafka broker. A broker serving many requests will have a high log volume when this is set to INFO level.
    StateChangeAppender Appends information on state changes in the Kafka cluster and is not verbose for a healthy cluster.
    RequestAppender Appends all requests being served by the broker. A broker serving many requests will have a high log volume when this is set to INFO level.
    CleanerAppender Appends how and when log segment cleanup is occurring for topics that are using log compaction.
    ControllerAppender Appends all log messages from the Kafka controller. This may generate a high log volume when this is set to INFO level.
    AuthorizerAppender Appends items from the pluggable security authorizer. Increase the verbosity of this log to help debug authorization issues (see the example after this table).
    MetadataServiceAppender Appends information on metadata changes in the Kafka cluster.
    DataBalancerAppender Appends information on data balancing in the Kafka cluster.
    AuditLogAppender Appends audit log messages that fail to write to the Kafka topic. A broker serving many requests will have a high log volume when this is set to INFO level.
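
    For example, to capture individual authorization decisions, you might raise the level of the authorizer logger in log4j2.yaml. This sketch assumes the kafka.authorizer.logger logger name used by Apache Kafka; confirm the name against your installed configuration file:

     # Within the Configuration element of log4j2.yaml
     Loggers:
       Logger:
         - name: kafka.authorizer.logger   # assumed logger name; verify locally
           level: DEBUG                    # more verbose authorization output
           additivity: false
           AppenderRef:
             ref: AuthorizerAppender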

For the complete file, install Confluent Platform and navigate to the CONFLUENT_HOME/etc/kafka/log4j2.yaml file.

Log levels

You might choose to increase the log level if an issue occurs and you are not getting sufficient log output to identify the cause. Note that increasing the log level can slow service operation due to increased I/O load, and more verbose logging uses more disk space. If you have a size limit on the log files, you could lose older entries that might be helpful in debugging. For an example of raising a logger's level, see the sketch after the following table.

Following is a list of the supported log levels.

Level Description
OFF Turns off logging.
FATAL Severe errors that cause premature termination.
ERROR Other runtime errors or unexpected conditions.
WARN Runtime situations that are undesirable or unexpected, but not necessarily wrong.
INFO Runtime events of interest at startup and shutdown.
DEBUG Detailed diagnostic information about events.
TRACE Detailed diagnostic information about everything.
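
For example, to troubleshoot request handling, you might raise the level of the kafka.request.logger shown in the excerpt above from WARN to DEBUG; revert the change when you are done to avoid high log volume:

 # Within the Configuration element of log4j2.yaml
 Loggers:
   Logger:
     - name: kafka.request.logger
       level: DEBUG              # raised from WARN while troubleshooting
       additivity: false
       AppenderRef:
         ref: RequestAppender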

Appender types

When you configure logging, you can set a log level and how items are logged (an appender) for the root logger, and then configure a more specific log level and appender for individual loggers. Loggers that are not explicitly configured inherit their settings from the root logger.

Following is a description of some of the supported appender types.

  • ConsoleAppender - Writes output to the console, either to System.out or System.err, with System.out being the default target. A layout must be provided to format the log events.
  • FileAppender - Writes to the file named in the fileName parameter.
  • RollingFileAppender - An OutputStreamAppender that writes to the file named in the fileName parameter and rolls the file over according to the configured triggering and rollover policies. See the sketch after this list.
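
The following minimal sketch shows how these three appender types might be declared in a log4j2.yaml file; the appender names and file paths are illustrative:

 # Within the Configuration element of log4j2.yaml
 Appenders:
   # ConsoleAppender: writes to System.out by default
   Console:
     name: STDOUT
     PatternLayout:
       pattern: "[%d] %p %m (%c)%n"
   # FileAppender: writes to a single file
   File:
     name: FileLog
     fileName: "/var/log/myapp/app.log"
     PatternLayout:
       pattern: "[%d] %p %m (%c)%n"
   # RollingFileAppender: rolls the file over hourly
   RollingFile:
     name: RollingLog
     fileName: "/var/log/myapp/rolling.log"
     filePattern: "/var/log/myapp/rolling.log.%d{yyyy-MM-dd-HH}"
     PatternLayout:
       pattern: "[%d] %p %m (%c)%n"
     TimeBasedTriggeringPolicy:
       interval: 1
       modulate: true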

Following are some of the properties you can set on an appender (a properties-format sketch follows this list):

  • appender.{stdout|file}.layout The layout to use to format the log events. If no layout is supplied, the default pattern layout of %m%n is used.
  • appender.{stdout|file}.layout.ConversionPattern The conversion pattern for the layout described above, for example: [%d] %p %m (%c)%n. Used with file and console (stdout) appenders.
  • appender.file.fileName The name of the file to write to. If the specified file or its parent directories don’t exist, they will be created.
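
If you use the Log4j 2 properties format instead of YAML, these settings might look like the following sketch (file path illustrative; note that the properties format names the conversion-pattern key layout.pattern):

 # Console (stdout) appender
 appender.stdout.type = Console
 appender.stdout.name = STDOUT
 appender.stdout.layout.type = PatternLayout
 appender.stdout.layout.pattern = [%d] %p %m (%c)%n

 # File appender; the file and its parent directories are created if missing
 appender.file.type = File
 appender.file.name = file
 appender.file.fileName = /var/log/myapp/app.log
 appender.file.layout.type = PatternLayout
 appender.file.layout.pattern = [%d] %p %m (%c)%n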