Configure Confluent Platform Logging

Apache Kafka® and Confluent Platform components use the Java-based logging utility Apache Log4j to help you troubleshoot issues with starting your cluster and monitor the ongoing health of the cluster when it is running.

View and configure logs

You can configure the log level, how logs are written, and where a log file is stored using the Log4j properties file for each Confluent Platform component. These files use the standard Java properties file format.

By default, the Log4j properties files are read from the CONFLUENT_HOME/etc/{component-name} directory, where CONFLUENT_HOME is the install location of Confluent Platform and {component-name} is the Confluent Platform component, such as schema-registry.

For example, for a Kafka broker, you can find the Log4j properties file under CONFLUENT_HOME/etc/kafka. The Kafka Connect Log4j properties file is also located under the kafka directory.

You can make configuration changes in the existing file, or you can specify a different configuration file at component start-up using the {COMPONENT}_LOG4J_OPTS environment variable.

For example, to specify a Log4j properties file at Kafka startup:

KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/path/to/log4j.properties" ./bin/kafka-server-start ./etc/kafka/kraft/server.properties

The following table lists the {COMPONENT}_LOG4J_OPTS environment variable for each Confluent Platform component:

Component Environment variable
Kafka Connect KAFKA_LOG4J_OPTS
Confluent Control Center CONTROL_CENTER_LOG4J_OPTS

Example properties file

The following shows an excerpt of the Log4j properties file for Kafka.

# Unspecified loggers and loggers with additivity=true output to server.log and stdout
# Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
log4j.rootLogger=INFO, stdout, kafkaAppender

log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

 . . .

Things to note in this file:

  • The example excerpt, for a ZIP/TAR installation, uses the kafka.logs.dir variable, which by default is set to the CONFLUENT_HOME/logs/ directory, where CONFLUENT_HOME is the install location of Confluent Platform.

  • There are several appenders configured for Kafka. The loggers that write to these appenders override the root logger. The following table describes these appenders.

    Appender Description
    requestAppender Appends all requests being served by the broker. A broker serving many requests will have a high log volume when this is set to INFO level.
    controllerAppender Appends information on state changes in the Kafka cluster and is not verbose for a healthy cluster.
    cleanerAppender Appends how and when log segment cleanup is occurring for topics that are using log compaction.
    authorizerAppender Appends items from the pluggable security authorizer. Increase verbosity of this log to help debug issues with authorization.
    stateChangeAppender Tracks the state changes in the cluster. Typically not verbose in a healthy cluster.
    auditLogAppender Appends audit log messages that fail to write to the Kafka topic.
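For example, to get more detail from the authorizer log described above, raise the level of the logger that writes to authorizerAppender. The following is a minimal sketch using the logger name from the stock Kafka Log4j properties file:

```properties
# Raise the authorizer logger to DEBUG to help troubleshoot authorization failures
log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
# Stop the same entries from also flowing to the root logger's appenders
log4j.additivity.kafka.authorizer.logger=false
```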

For the complete file, install Confluent Platform and look in the CONFLUENT_HOME/etc/kafka/ directory.

Log levels

You might choose to increase the log level if an issue occurs and you are not getting sufficient log output to help you identify the cause. Note that increasing the log level can slow service operation because of the increased I/O load, and it consumes more disk space. If you have a size limit on the log files, you could lose older entries that might be helpful in debugging.

Following is a list of the supported log levels.

Level Description
OFF Turns off logging.
FATAL Severe errors that cause premature termination.
ERROR Other runtime errors or unexpected conditions.
WARN Runtime situations that are undesirable or unexpected, but not necessarily wrong.
INFO Runtime events of interest at startup and shutdown.
DEBUG Detailed diagnostic information about events.
TRACE Detailed diagnostic information about everything.

Appender types

When you configure a log, you can set a log level and how items are logged (an appender) for the root logger, and then configure a more specific log level and appender for other loggers. If not set, loggers inherit the log level and appender from the root logger.
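This inheritance can be sketched with a pair of entries. The logger name below follows the stock Kafka file, but any logger behaves the same way:

```properties
# Root logger: INFO, written to the console and the main Kafka log file.
# Any logger without its own entry inherits this level and these appenders.
log4j.rootLogger=INFO, stdout, kafkaAppender
# This logger overrides the root level and writes to its own appender
log4j.logger.kafka.controller=TRACE, controllerAppender
# additivity=false keeps these entries out of the root logger's appenders
log4j.additivity.kafka.controller=false
```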

Following is a description of some of the supported appender types.

  • org.apache.log4j.ConsoleAppender - Writes output to the console. Set the layout and layout.ConversionPattern properties when using this appender.
  • org.apache.log4j.DailyRollingFileAppender - This is the default file appender for Kafka. It provides regular time-based files. Note that this log appender does not clean up files automatically and requires manual intervention to remove old log files. For more information, see DailyRollingFileAppender.
  • org.apache.log4j.RollingFileAppender - This appender rolls to a new file when the current file reaches a maximum size, and deletes the oldest rolled files once the configured number of backups is exceeded.
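As a sketch, a daily-rolling file appender for the Kafka server log might be configured as follows (the file location is illustrative):

```properties
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
# Roll to a new file every hour; the date pattern is appended to the file name
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
```

Because DailyRollingFileAppender never deletes old files, pair it with an external cleanup job if disk space is a concern.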

Following are some of the properties you can set on an appender:

  • log4j.appender.{stdout|file}.layout The layout that log entries are written with, for example: org.apache.log4j.PatternLayout. Used with file and console (stdout) appenders.
  • log4j.appender.{stdout|file}.layout.ConversionPattern The conversion pattern for the layout defined above, for example: [%d] %p %m (%c)%n. Used with file and console (stdout) appenders.
  • log4j.appender.file.File The file to append to, for example: /var/log/kafka/server.log. Used with file appenders.
  • log4j.appender.file.DatePattern A date pattern for naming rolled files, for example: '.'yyyy-MM-dd-HH. Used with the DailyRollingFileAppender.
  • log4j.appender.file.MaxBackupIndex The number of rolled log files to keep, for example: 10. Used with the RollingFileAppender.
  • log4j.appender.file.MaxFileSize The maximum size of the current log file before it is rolled to a new file, for example: 100MB. Used with the RollingFileAppender.
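Putting these properties together, a size-based rolling appender might look like the following sketch (the path and limits are illustrative):

```properties
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/kafka/server.log
# Roll when the current file reaches 100 MB; keep at most 10 rolled files
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=[%d] %p %m (%c)%n
```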