Kafka Broker and Controller Configuration Reference for Confluent Platform

This topic provides configuration parameters for Kafka brokers and controllers when Kafka is running in KRaft mode, and for brokers when Apache Kafka® is running in ZooKeeper mode.

Keep in mind that a KRaft controller is itself a Kafka server that processes event records containing metadata about the Kafka cluster. This means that, in most cases, if you set properties on brokers, you should apply the same property settings to your KRaft controllers.

Note that starting with Confluent Platform version 7.4, KRaft mode is the default for metadata management for new Kafka clusters, and as a result, there are some configuration properties specific to KRaft controllers listed in this topic.

Each configuration entry contains a table with the following entries:

  • The type, such as string, boolean, or integer.
  • The default value.
  • Valid values, which are provided if applicable.
  • Importance, which is one of the following values:
    • high: An important configuration property; often a mandatory property that you must set.
    • medium: A moderately important configuration property that you probably do not need to set, but may need to tune depending on your use case.
    • low: A configuration property that is likely fine with the default value.
  • The update mode, which is one of the following values (see the example after this list):
    • read-only: Requires a broker restart for update.
    • per-broker: May be updated dynamically for each broker.
    • cluster-wide: May be updated dynamically as a cluster-wide default. May also be updated as a per-broker value for testing.
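
For example, you can change a dynamic property on a running cluster with the kafka-configs tool. A minimal sketch (the bootstrap address and broker ID are placeholders):

  # Set a cluster-wide default dynamically; no broker restart required
  kafka-configs --bootstrap-server localhost:9092 \
    --entity-type brokers --entity-default \
    --alter --add-config log.flush.interval.messages=100000

  # Set a per-broker value for broker 0
  kafka-configs --bootstrap-server localhost:9092 \
    --entity-type brokers --entity-name 0 \
    --alter --add-config log.flush.interval.messages=50000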

The configuration properties are grouped by importance level and listed alphabetically within each group.

advertised.listeners

Listeners to publish to ZooKeeper for clients to use, if different from the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address. Also unlike listeners, there can be duplicated ports in this property, so that one listener can be configured to advertise another listener’s address. This can be useful in some cases where external load balancers are used.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: per-broker
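
For example, a broker in a cloud or load-balanced environment might bind to all interfaces but advertise externally resolvable names. A minimal sketch, with placeholder hostnames:

  listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
  advertised.listeners=INTERNAL://broker1.internal.example.com:9092,EXTERNAL://lb.example.com:9093
  listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL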

auto.create.topics.enable

Enable automatic topic creation on the server.

Type: boolean
Default: true
Valid Values:  
Importance: high
Update Mode: cluster-wide

auto.leader.rebalance.enable

Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by leader.imbalance.check.interval.seconds. If the leader imbalance exceeds leader.imbalance.per.broker.percentage, a leader rebalance to the preferred leaders is triggered.

Type: boolean
Default: true
Valid Values:  
Importance: high
Update Mode: read-only

background.threads

The number of threads to use for various background processing tasks

Type: int
Default: 10
Valid Values: [1,…]
Importance: high
Update Mode: cluster-wide

broker.id

The broker ID for this server. If unset, a unique broker ID will be generated. To avoid conflicts between ZooKeeper-generated broker IDs and user-configured broker IDs, generated broker IDs start from reserved.broker.max.id + 1.

Type: int
Default: -1
Valid Values:  
Importance: high
Update Mode: read-only

compression.type

Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (‘gzip’, ‘snappy’, ‘lz4’, ‘zstd’). It additionally accepts ‘uncompressed’ which is equivalent to no compression; and ‘producer’ which means retain the original compression codec set by the producer.

Type: string
Default: producer
Valid Values: [uncompressed, zstd, lz4, snappy, gzip, producer]
Importance: high
Update Mode: cluster-wide

confluent.balancer.consumer.out.max.bytes.per.second

This config specifies the upper capacity limit for consumer outgoing bytes per second per leader broker. The Confluent DataBalancer will attempt to keep outgoing data throughput below this limit. Note that fetch-from-follower traffic is not accounted for in this first release.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: high
Update Mode: read-only

confluent.balancer.disk.max.load

This config specifies the maximum load for disk usage as a proportion of disk capacity. Valid values are between 0 and 1.

Type: double
Default: 0.85
Valid Values: [0.0,…,1.0]
Importance: high
Update Mode: read-only

confluent.balancer.enable

This config controls whether the balancer is enabled

Type: boolean
Default: false
Valid Values:  
Importance: high
Update Mode: cluster-wide

confluent.balancer.heal.broker.failure.threshold.ms

This config specifies how long the balancer will wait after detecting a broker failure before triggering a balancing action. -1 means that broker failures will not trigger balancing actions

Type: long
Default: 3600000 (1 hour)
Valid Values: [-1,…]
Importance: high
Update Mode: read-only

confluent.balancer.heal.uneven.load.trigger

Controls what causes the Confluent DataBalancer to start rebalance operations. Acceptable values are ANY_UNEVEN_LOAD and EMPTY_BROKER

Type: string
Default: EMPTY_BROKER
Valid Values: [ANY_UNEVEN_LOAD, EMPTY_BROKER]
Importance: high
Update Mode: cluster-wide

confluent.balancer.max.replicas

The replica capacity is the maximum number of replicas the balancer will place on a single broker.

Type: long
Default: 2147483647
Valid Values: [0,…]
Importance: high
Update Mode: read-only

confluent.balancer.network.in.max.bytes.per.second

This config specifies the upper capacity limit for network incoming bytes per second per broker. The Confluent DataBalancer will attempt to keep incoming data throughput below this limit.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: high
Update Mode: read-only

confluent.balancer.network.out.max.bytes.per.second

This config specifies the upper capacity limit for network outgoing bytes per second per broker. The Confluent DataBalancer will attempt to keep outgoing data throughput below this limit.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: high
Update Mode: read-only

confluent.balancer.producer.in.max.bytes.per.second

This config specifies the upper capacity limit for producer incoming bytes per second per broker. The Confluent DataBalancer will attempt to keep incoming data throughput below this limit.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: high
Update Mode: read-only

confluent.balancer.replication.in.max.bytes.per.second

This config specifies the upper capacity limit for replication incoming bytes per second per broker. The Confluent DataBalancer will attempt to keep incoming data throughput below this limit.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: high
Update Mode: read-only

confluent.balancer.throttle.bytes.per.second

This config specifies the upper bound for bandwidth in bytes to move replicas around for replica reassignment. A value of -1 disables throttling entirely.

Type: long
Default: 10485760
Valid Values: [-2,…]
Importance: high
Update Mode: cluster-wide

confluent.offsets.log.cleaner.delete.retention.ms

The delete retention timeout for the log cleaner compacting the __consumer_offsets topic.

Type: long
Default: 86400000 (1 day)
Valid Values: [1,…]
Importance: high
Update Mode: read-only

confluent.offsets.log.cleaner.max.compaction.lag.ms

The maximum time a message will remain ineligible for compaction in the log. Only applicable for __consumer_offsets logs that are being compacted.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: high
Update Mode: read-only

confluent.offsets.log.cleaner.min.cleanable.dirty.ratio

The minimum cleanable dirty ratio config for the __consumer_offsets topic.

Type: double
Default: 0.5
Valid Values: [0,…]
Importance: high
Update Mode: read-only

confluent.offsets.topic.placement.constraints

This configuration is a JSON object that controls the set of brokers (replicas) that will always be allowed to join the ISR, and the set of brokers (observers) that are not allowed to join the ISR. The format of the JSON is:

  {
    "version": 1,
    "replicas": [
      { "count": 2, "constraints": { "rack": "east-1" } },
      { "count": 1, "constraints": { "rack": "east-2" } }
    ],
    "observers": [
      { "count": 1, "constraints": { "rack": "west-1" } }
    ]
  }

Type: string
Default: “”
Valid Values:  
Importance: high
Update Mode: read-only

confluent.security.event.logger.authentication.enable

Enable authentication audit logs

Type: boolean
Default: false
Valid Values:  
Importance: high
Update Mode: cluster-wide

confluent.security.event.logger.enable

Whether the event logger is enabled

Type: boolean
Default: true
Valid Values:  
Importance: high
Update Mode: cluster-wide

confluent.tier.azure.block.blob.container

The Azure Block Blob Container to use for tiered storage.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.azure.block.blob.cred.file.path

The path to the credentials file used to create the Azure Block Blob client. It uses a JSON file with one of the following options:

  • connectionString for the target confluent.tier.azure.block.blob.container.
  • azureClientId, azureTenantId, and azureClientSecret for the target confluent.tier.azure.block.blob.container.

Refer to the Azure documentation for further information. If this property is not specified, the Azure Block Blob client will use the DefaultAzureCredential to locate the credentials across several well-known locations.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.azure.block.blob.endpoint

The Azure Storage Account endpoint, in the format of https://{accountName}.blob.core.windows.net.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.azure.block.blob.prefix

This prefix will be added to tiered storage objects stored in the target Azure Block Blob Container.

Type: string
Default: “”
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.gcs.bucket

The GCS bucket to use for tiered storage.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.gcs.prefix

This prefix will be added to tiered storage objects stored in GCS.

Type: string
Default: “”
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.gcs.region

The GCS region to use for tiered storage.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.local.hotset.bytes

When tiering is enabled, this configuration controls the maximum size a partition (which consists of log segments) can grow to on broker-local storage before old log segments are discarded to free up space. The log segments retained on broker-local storage are referred to as the “hotset”. Segments discarded from local storage could continue to exist in tiered storage and remain available for fetches, depending on retention configurations. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic hotset in bytes.

Type: long
Default: -1
Valid Values:  
Importance: high
Update Mode: cluster-wide

confluent.tier.local.hotset.ms

When tiering is enabled, this configuration controls the maximum time we will retain a log segment on broker-local storage before we will discard it to free up space. Segments discarded from local store could continue to exist in tiered storage and remain available for fetches depending on retention configurations. If set to -1, no time limit is applied.

Type: long
Default: 86400000 (1 day)
Valid Values:  
Importance: high
Update Mode: cluster-wide

confluent.tier.metadata.replication.factor

The replication factor for the tier metadata topic (set higher to ensure availability).

Type: short
Default: 3
Valid Values: [1,…]
Importance: high
Update Mode: read-only

confluent.tier.s3.bucket

The S3 bucket to use for tiered storage.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.s3.prefix

This prefix will be added to tiered storage objects stored in S3.

Type: string
Default: “”
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.s3.region

The S3 region to use for tiered storage.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

confluent.tier.s3.sse.algorithm

The S3 server side encryption algorithm to use to protect objects at rest. Currently supports AES256, aws:kms, and none. Defaults to AES256.

Type: string
Default: AES256
Valid Values: [AES256, aws:kms, none]
Importance: high
Update Mode: read-only
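
For example, enabling tiered storage with an S3 backend combines these properties with the confluent.tier.feature, confluent.tier.enable, and confluent.tier.backend properties described later in this topic. A minimal sketch (the bucket name and region are placeholders):

  confluent.tier.feature=true
  confluent.tier.enable=true
  confluent.tier.backend=S3
  confluent.tier.s3.bucket=my-tiered-storage-bucket
  confluent.tier.s3.region=us-west-2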

confluent.transaction.state.log.placement.constraints

This configuration is a JSON object that controls the set of brokers (replicas) that will always be allowed to join the ISR, and the set of brokers (observers) that are not allowed to join the ISR. The format of the JSON is:

  {
    "version": 1,
    "replicas": [
      { "count": 2, "constraints": { "rack": "east-1" } },
      { "count": 1, "constraints": { "rack": "east-2" } }
    ],
    "observers": [
      { "count": 1, "constraints": { "rack": "west-1" } }
    ]
  }

Type: string
Default: “”
Valid Values:  
Importance: high
Update Mode: read-only

control.plane.listener.name

Name of the listener used for communication between controller and brokers. A broker will use the control.plane.listener.name to locate the endpoint in the listeners list to listen for connections from the controller. For example, if a broker’s configuration is:

  listeners = INTERNAL://192.1.1.8:9092,EXTERNAL://10.1.1.5:9093,CONTROLLER://192.1.1.8:9094
  listener.security.protocol.map = INTERNAL:PLAINTEXT,EXTERNAL:SSL,CONTROLLER:SSL
  control.plane.listener.name = CONTROLLER

On startup, the broker will start listening on 192.1.1.8:9094 with security protocol SSL. On the controller side, when it discovers a broker’s published endpoints through ZooKeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish a connection to the broker. For example, if the broker’s published endpoints in ZooKeeper are:

  "endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"]

and the controller’s configuration is:

  listener.security.protocol.map = INTERNAL:PLAINTEXT,EXTERNAL:SSL,CONTROLLER:SSL
  control.plane.listener.name = CONTROLLER

then the controller will use broker1.example.com:9094 with security protocol SSL to connect to the broker. If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections. If explicitly configured, the value cannot be the same as the value of inter.broker.listener.name.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

controller.listener.names

A comma-separated list of the names of the listeners used by the controller. This is required if running in KRaft mode. When communicating with the controller quorum, the broker will always use the first listener in this list. Note: The ZooKeeper-based controller should not set this configuration.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

controller.quorum.election.backoff.max.ms

Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps prevent gridlocked elections

Type: int
Default: 1000 (1 second)
Valid Values:  
Importance: high
Update Mode: read-only

controller.quorum.election.timeout.ms

Maximum time in milliseconds to wait without being able to fetch from the leader before triggering a new election

Type: int
Default: 1000 (1 second)
Valid Values:  
Importance: high
Update Mode: read-only

controller.quorum.fetch.timeout.ms

Maximum time without a successful fetch from the current leader before a voter becomes a candidate and triggers an election; also the maximum time a leader can go without receiving a valid fetch or fetchSnapshot request from a majority of the quorum before resigning.

Type: int
Default: 2000 (2 seconds)
Valid Values:  
Importance: high
Update Mode: read-only

controller.quorum.voters

Map of id/endpoint information for the set of voters in a comma-separated list of {id}@{host}:{port} entries. For example: 1@localhost:9092,2@localhost:9093,3@localhost:9094

Type: list
Default: “”
Valid Values: non-empty list
Importance: high
Update Mode: read-only
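
For example, a three-node controller quorum might be configured on each controller node along these lines (node IDs and hostnames are placeholders):

  process.roles=controller
  node.id=1
  controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
  listeners=CONTROLLER://controller1.example.com:9093
  controller.listener.names=CONTROLLER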

delete.topic.enable

Enables topic deletion. Deleting a topic through the admin tool will have no effect if this config is turned off.

Type: boolean
Default: true
Valid Values:  
Importance: high
Update Mode: read-only

early.start.listeners

A comma-separated list of listener names which may be started before the authorizer has finished initialization. This is useful when the authorizer is dependent on the cluster itself for bootstrapping, as is the case for the StandardAuthorizer (which stores ACLs in the metadata log). By default, all listeners included in controller.listener.names will also be early start listeners. A listener should not appear in this list if it accepts external traffic.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

eligible.leader.replicas.enable

Enables the eligible leader replicas (ELR) feature.

Type: boolean
Default: false
Valid Values:  
Importance: high
Update Mode: read-only

leader.imbalance.check.interval.seconds

The frequency with which the partition rebalance check is triggered by the controller

Type: long
Default: 300
Valid Values: [1,…]
Importance: high
Update Mode: read-only

leader.imbalance.per.broker.percentage

The ratio of leader imbalance allowed per broker. The controller will trigger a leader rebalance if it goes above this value per broker. The value is specified as a percentage.

Type: int
Default: 10
Valid Values:  
Importance: high
Update Mode: read-only

listeners

Listener list: comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set. Listener names and port numbers must be unique unless one listener is an IPv4 address and the other listener is an IPv6 address (for the same port). Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to the default interface. Examples of legal listener lists:

  PLAINTEXT://myhost:9092,SSL://:9091
  CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093
  PLAINTEXT://127.0.0.1:9092,SSL://[::1]:9092

Type: string
Default: PLAINTEXT://:9092
Valid Values:  
Importance: high
Update Mode: per-broker

log.dir

The directory in which the log data is kept (supplemental for log.dirs property)

Type: string
Default: /tmp/kafka-logs
Valid Values:  
Importance: high
Update Mode: read-only

log.dirs

A comma-separated list of the directories where the log data is stored. If not set, the value in log.dir is used.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

log.flush.interval.messages

The number of messages accumulated on a log partition before messages are flushed to disk.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: high
Update Mode: cluster-wide

log.flush.interval.ms

The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used.

Type: long
Default: null
Valid Values:  
Importance: high
Update Mode: cluster-wide

log.flush.offset.checkpoint.interval.ms

The frequency with which we update the persistent record of the last flush which acts as the log recovery point.

Type: int
Default: 60000 (1 minute)
Valid Values: [0,…]
Importance: high
Update Mode: read-only

log.flush.scheduler.interval.ms

The frequency in ms that the log flusher checks whether any log needs to be flushed to disk

Type: long
Default: 9223372036854775807
Valid Values:  
Importance: high
Update Mode: read-only

log.flush.start.offset.checkpoint.interval.ms

The frequency with which we update the persistent record of log start offset

Type: int
Default: 60000 (1 minute)
Valid Values: [0,…]
Importance: high
Update Mode: read-only

log.retention.bytes

The maximum size of the log before deleting it

Type: long
Default: -1
Valid Values:  
Importance: high
Update Mode: cluster-wide

log.retention.hours

The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property.

Type: int
Default: 168
Valid Values:  
Importance: high
Update Mode: read-only

log.retention.minutes

The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the value in log.retention.hours is used.

Type: int
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

log.retention.ms

The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.

Type: long
Default: null
Valid Values:  
Importance: high
Update Mode: cluster-wide
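
Taken together, the three retention time properties resolve in order of precedence: log.retention.ms, then log.retention.minutes, then log.retention.hours. For example, to retain data for three days you could set only the millisecond form:

  # 3 days; takes precedence over log.retention.minutes and log.retention.hours
  log.retention.ms=259200000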

log.roll.hours

The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property

Type: int
Default: 168
Valid Values: [1,…]
Importance: high
Update Mode: read-only

log.roll.jitter.hours

The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property

Type: int
Default: 0
Valid Values: [0,…]
Importance: high
Update Mode: read-only

log.roll.jitter.ms

The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used

Type: long
Default: null
Valid Values:  
Importance: high
Update Mode: cluster-wide

log.roll.ms

The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used

Type: long
Default: null
Valid Values:  
Importance: high
Update Mode: cluster-wide

log.segment.bytes

The maximum size of a single log file

Type: int
Default: 1073741824 (1 gibibyte)
Valid Values: [14,…]
Importance: high
Update Mode: cluster-wide

log.segment.delete.delay.ms

The amount of time to wait before deleting a file from the filesystem. If the value is 0 and there is no file to delete, the system will wait 1 millisecond. A low value will cause busy waiting.

Type: long
Default: 60000 (1 minute)
Valid Values: [0,…]
Importance: high
Update Mode: cluster-wide

message.max.bytes

The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers’ fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config.

Type: int
Default: 1048588
Valid Values: [0,…]
Importance: high
Update Mode: cluster-wide

metadata.log.dir

This configuration determines where we put the metadata log for clusters in KRaft mode. If it is not set, the metadata log is placed in the first log directory from log.dirs.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

metadata.log.max.record.bytes.between.snapshots

This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new snapshot. The default value is 20971520. To generate snapshots based on the time elapsed, see the metadata.log.max.snapshot.interval.ms configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached.

Type: long
Default: 20971520
Valid Values: [1,…]
Importance: high
Update Mode: read-only

metadata.log.max.snapshot.interval.ms

This is the maximum number of milliseconds to wait to generate a snapshot if there are committed records in the log that are not included in the latest snapshot. A value of zero disables time based snapshot generation. The default value is 3600000. To generate snapshots based on the number of metadata bytes, see the metadata.log.max.record.bytes.between.snapshots configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached.

Type: long
Default: 3600000 (1 hour)
Valid Values: [0,…]
Importance: high
Update Mode: read-only

metadata.log.segment.bytes

The maximum size of a single metadata log file.

Type: int
Default: 1073741824 (1 gibibyte)
Valid Values: [12,…]
Importance: high
Update Mode: read-only

metadata.log.segment.ms

The maximum time before a new metadata log file is rolled out (in milliseconds).

Type: long
Default: 604800000 (7 days)
Valid Values:  
Importance: high
Update Mode: read-only

metadata.max.retention.bytes

The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.

Type: long
Default: 104857600 (100 mebibytes)
Valid Values:  
Importance: high
Update Mode: read-only

metadata.max.retention.ms

The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.

Type: long
Default: 604800000 (7 days)
Valid Values:  
Importance: high
Update Mode: read-only

min.insync.replicas

When a producer sets acks to “all” (or “-1”), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of “all”. This will ensure that the producer raises an exception if a majority of replicas do not receive a write.

Type: int
Default: 1
Valid Values: [1,…]
Importance: high
Update Mode: cluster-wide
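
A minimal sketch of the durability setup described above, at the broker level plus the producer side:

  # Broker defaults (min.insync.replicas can also be set per topic)
  default.replication.factor=3
  min.insync.replicas=2

  # Producer configuration
  acks=all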

node.id

The node ID associated with the roles this process is playing when process.roles is non-empty. This is required configuration when running in KRaft mode.

Type: int
Default: -1
Valid Values:  
Importance: high
Update Mode: read-only

num.io.threads

The number of threads that the server uses for processing requests, which may include disk I/O

Type: int
Default: 8
Valid Values: [1,…]
Importance: high
Update Mode: cluster-wide

num.network.threads

The number of threads that the server uses for receiving requests from the network and sending responses to the network. Note that each listener (except for the controller listener) creates its own thread pool.

Type: int
Default: 3
Valid Values: [1,…]
Importance: high
Update Mode: cluster-wide

num.recovery.threads.per.data.dir

The number of threads per data directory to be used for log recovery at startup and flushing at shutdown

Type: int
Default: 1
Valid Values: [1,…]
Importance: high
Update Mode: cluster-wide

num.replica.alter.log.dirs.threads

The number of threads that can move replicas between log directories, which may include disk I/O

Type: int
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

num.replica.fetchers

Number of fetcher threads used to replicate records from each source broker. The total number of fetchers on each broker is bound by num.replica.fetchers multiplied by the number of brokers in the cluster. Increasing this value can increase the degree of I/O parallelism in the follower and leader broker at the cost of higher CPU and memory utilization.

Type: int
Default: 1
Valid Values:  
Importance: high
Update Mode: cluster-wide

offset.metadata.max.bytes

The maximum size for a metadata entry associated with an offset commit.

Type: int
Default: 4096 (4 kibibytes)
Valid Values:  
Importance: high
Update Mode: read-only

offsets.commit.required.acks

DEPRECATED: The required acks before the commit can be accepted. In general, the default (-1) should not be overridden.

Type: short
Default: -1
Valid Values:  
Importance: high
Update Mode: read-only

offsets.commit.timeout.ms

Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout.

Type: int
Default: 5000 (5 seconds)
Valid Values: [1,…]
Importance: high
Update Mode: read-only

offsets.load.buffer.size

Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large).

Type: int
Default: 5242880
Valid Values: [1,…]
Importance: high
Update Mode: read-only

offsets.retention.check.interval.ms

Frequency at which to check for stale offsets

Type: long
Default: 600000 (10 minutes)
Valid Values: [1,…]
Importance: high
Update Mode: read-only

offsets.retention.minutes

For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention period has elapsed after the consumer group loses all its consumers (i.e. becomes empty); 2) this retention period has elapsed since the last time an offset is committed for the partition and the group is no longer subscribed to the corresponding topic. For standalone consumers (using manual assignment), offsets will be expired after this retention period has elapsed since the time of last commit. Note that when a group is deleted via the delete-group request, its committed offsets will also be deleted without extra retention period; also when a topic is deleted via the delete-topic request, upon propagated metadata update any group’s committed offsets for that topic will also be deleted without extra retention period.

Type: int
Default: 10080
Valid Values: [1,…]
Importance: high
Update Mode: read-only

offsets.topic.compression.codec

Compression codec for the offsets topic - compression may be used to achieve “atomic” commits.

Type: int
Default: 0
Valid Values:  
Importance: high
Update Mode: read-only

offsets.topic.num.partitions

The number of partitions for the offset commit topic (should not change after deployment).

Type: int
Default: 50
Valid Values: [1,…]
Importance: high
Update Mode: read-only

offsets.topic.replication.factor

The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.

Type: short
Default: 3
Valid Values: [1,…]
Importance: high
Update Mode: read-only

offsets.topic.segment.bytes

The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads.

Type: int
Default: 104857600 (100 mebibytes)
Valid Values: [1,…]
Importance: high
Update Mode: read-only

process.roles

The roles that this process plays: ‘broker’, ‘controller’, or ‘broker,controller’ if it is both. This configuration is only applicable for clusters in KRaft (Kafka Raft) mode (instead of ZooKeeper). Leave this config undefined or empty for ZooKeeper clusters.

Type: list
Default: “”
Valid Values: [broker, controller]
Importance: high
Update Mode: read-only
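
For example, a KRaft broker node might be configured along these lines (IDs and hostnames are placeholders; see also controller.quorum.voters and controller.listener.names):

  process.roles=broker
  node.id=4
  controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093
  controller.listener.names=CONTROLLER
  listeners=PLAINTEXT://broker4.example.com:9092
  listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT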

queued.max.requests

The number of queued requests allowed for the data plane before blocking the network threads

Type: int
Default: 500
Valid Values: [1,…]
Importance: high
Update Mode: read-only

replica.fetch.min.bytes

Minimum bytes expected for each fetch response. If not enough bytes are available, wait up to replica.fetch.wait.max.ms (broker config).

Type: int
Default: 1
Valid Values:  
Importance: high
Update Mode: read-only

replica.fetch.wait.max.ms

The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low-throughput topics.

Type: int
Default: 500
Valid Values:  
Importance: high
Update Mode: read-only

replica.high.watermark.checkpoint.interval.ms

The frequency with which the high watermark is saved out to disk

Type: long
Default: 5000 (5 seconds)
Valid Values:  
Importance: high
Update Mode: read-only

replica.lag.time.max.ms

If a follower hasn’t sent any fetch requests or hasn’t consumed up to the leader’s log end offset for at least this time, the leader will remove the follower from the ISR.

Type: long
Default: 30000 (30 seconds)
Valid Values:  
Importance: high
Update Mode: read-only

replica.socket.receive.buffer.bytes

The socket receive buffer for network requests to the leader for replicating data

Type: int
Default: 65536 (64 kibibytes)
Valid Values:  
Importance: high
Update Mode: read-only

replica.socket.timeout.ms

The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms

Type: int
Default: 30000 (30 seconds)
Valid Values:  
Importance: high
Update Mode: read-only

request.timeout.ms

The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

Type: int
Default: 30000 (30 seconds)
Valid Values:  
Importance: high
Update Mode: read-only

sasl.mechanism.controller.protocol

SASL mechanism used for communication with controllers. Default is GSSAPI.

Type: string
Default: GSSAPI
Valid Values:  
Importance: high
Update Mode: read-only

socket.receive.buffer.bytes

The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

Type: int
Default: 102400 (100 kibibytes)
Valid Values:  
Importance: high
Update Mode: read-only

socket.request.max.bytes

The maximum number of bytes in a socket request

Type: int
Default: 104857600 (100 mebibytes)
Valid Values: [1,…]
Importance: high
Update Mode: read-only

socket.send.buffer.bytes

The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

Type: int
Default: 102400 (100 kibibytes)
Valid Values:  
Importance: high
Update Mode: cluster-wide

transaction.max.timeout.ms

The maximum allowed timeout for transactions. If a client’s requested transaction time exceeds this, the broker will return an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which can stall consumers reading from topics included in the transaction.

Type: int
Default: 900000 (15 minutes)
Valid Values: [1,…]
Importance: high
Update Mode: read-only

transaction.state.log.load.buffer.size

Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large).

Type: int
Default: 5242880
Valid Values: [1,…]
Importance: high
Update Mode: read-only

transaction.state.log.min.isr

The minimum number of replicas that must acknowledge a write to transaction topic in order to be considered successful.

Type: int
Default: 2
Valid Values: [1,…]
Importance: high
Update Mode: read-only

transaction.state.log.num.partitions

The number of partitions for the transaction topic (should not change after deployment).

Type: int
Default: 50
Valid Values: [1,…]
Importance: high
Update Mode: read-only

transaction.state.log.replication.factor

The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.

Type: short
Default: 3
Valid Values: [1,…]
Importance: high
Update Mode: read-only
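
For production clusters, the internal offsets and transaction topics are commonly configured for durability along these lines (values are illustrative and assume at least three brokers):

  offsets.topic.replication.factor=3
  transaction.state.log.replication.factor=3
  transaction.state.log.min.isr=2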

transaction.state.log.segment.bytes

The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads

Type: int
Default: 104857600 (100 mebibytes)
Valid Values: [1,…]
Importance: high
Update Mode: read-only

transactional.id.expiration.ms

The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional ID. Transactional IDs will not expire while the transaction is still ongoing.

Type: int
Default: 604800000 (7 days)
Valid Values: [1,…]
Importance: high
Update Mode: read-only

unclean.leader.election.enable

Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss

Type: boolean
Default: false
Valid Values:  
Importance: high
Update Mode: cluster-wide

zookeeper.connect

Specifies the ZooKeeper connection string in the form hostname:port, where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down, you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3. The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string, which puts its data under some path in the global ZooKeeper namespace. For example, to give a chroot path of /chroot/path, you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.

Type: string
Default: null
Valid Values:  
Importance: high
Update Mode: read-only
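
For example, a three-node ensemble with a chroot path (hostnames are placeholders):

  zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka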

zookeeper.connection.timeout.ms

The max time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms is used

Type: int
Default: null
Valid Values:  
Importance: high
Update Mode: read-only

zookeeper.max.in.flight.requests

The maximum number of unacknowledged requests the client will send to ZooKeeper before blocking.

Type: int
Default: 10
Valid Values: [1,…]
Importance: high
Update Mode: read-only

zookeeper.metadata.migration.enable

Enable ZK to KRaft migration

Type: boolean
Default: false
Valid Values:  
Importance: high
Update Mode: read-only

zookeeper.session.timeout.ms

ZooKeeper session timeout.

Type: int
Default: 18000 (18 seconds)
Valid Values:  
Importance: high
Update Mode: read-only

zookeeper.set.acl

Set client to use secure ACLs

Type: boolean
Default: false
Valid Values:  
Importance: high
Update Mode: read-only

broker.heartbeat.interval.ms

The length of time in milliseconds between broker heartbeats. Used when running in KRaft mode.

Type: int
Default: 2000 (2 seconds)
Valid Values:  
Importance: medium
Update Mode: read-only

broker.id.generation.enable

Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed.

Type: boolean
Default: true
Valid Values:  
Importance: medium
Update Mode: read-only

broker.rack

Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: RACK1, us-east-1d

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

broker.session.timeout.ms

The length of time in milliseconds that a broker lease lasts if no heartbeats are made. Used when running in KRaft mode.

Type: int
Default: 9000 (9 seconds)
Valid Values:  
Importance: medium
Update Mode: read-only

compression.gzip.level

The compression level to use if compression.type is set to ‘gzip’.

Type: int
Default: -1
Valid Values: [1,…,9] or -1
Importance: medium
Update Mode: cluster-wide

compression.lz4.level

The compression level to use if compression.type is set to ‘lz4’.

Type: int
Default: 9
Valid Values: [1,…,17]
Importance: medium
Update Mode: cluster-wide

compression.zstd.level

The compression level to use if compression.type is set to ‘zstd’.

Type: int
Default: 3
Valid Values: [-131072,…,22]
Importance: medium
Update Mode: cluster-wide
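
For example, to trade CPU for a better compression ratio with zstd at the broker (the level value is illustrative):

  compression.type=zstd
  compression.zstd.level=10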

confluent.balancer.capacity.threshold.upper.limit

Upper limit on the capacity threshold config. If balancing fails with the original capacity thresholds defined for each resource, SBC will try to bump the threshold up to this limit (possibly in multiple stages) and try to balance again. Balancing will fail only if the capacity threshold is set to this value and the brokers still can’t be balanced.

Type: double
Default: 0.95
Valid Values: [0,…,1]
Importance: medium
Update Mode: read-only

confluent.balancer.disk.min.free.space.gb

The minimum amount of disk space, in GB, that needs to remain unused on a broker. Valid values are between 0 and disk size. The balancer will enforce the stricter bound between this config and ‘confluent.balancer.disk.max.load’.

Type: int
Default: 0
Valid Values: [0,…]
Importance: medium
Update Mode: read-only

confluent.balancer.disk.min.free.space.lower.limit.gb

The lower limit on the minimum amount of free disk space, in gigabytes, that needs to remain unused on a broker. On failing to balance, SBC will reset the min disk space config DISK_CAPACITY_MIN_FREE_SPACE_CONFIG to this value and will try to balance again. The balancer will enforce the stricter bound between this config and ‘confluent.balancer.disk.max.load’.

Type: int
Default: 0
Valid Values: [0,…]
Importance: medium
Update Mode: read-only

confluent.balancer.exclude.topic.names

This config accepts a list of topic names that will be excluded from rebalancing. For example, ‘confluent.balancer.exclude.topic.names=[topic1, topic2]’

Type: list
Default: “”
Valid Values:  
Importance: medium
Update Mode: cluster-wide

confluent.balancer.exclude.topic.prefixes

This config accepts a list of topic prefixes that will be excluded from rebalancing. For example, ‘confluent.balancer.exclude.topic.prefixes=[prefix1, prefix2]’ would exclude topics ‘prefix1-suffix1’, ‘prefix1-suffix2’, ‘prefix2-suffix3’, but not ‘abc-prefix1-xyz’ and ‘def-prefix2’

Type: list
Default: “”
Valid Values:  
Importance: medium
Update Mode: cluster-wide

confluent.group.metadata.load.threads

The number of threads that group metadata load/unload can use to concurrently load or unload metadata.

Type: int
Default: 32
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

confluent.log.cleaner.timestamp.validation.enable

When enabled, this configuration enables timestamp validation checks for created records; otherwise, the timestamp validation check is skipped. Defaults to true.

Type: boolean
Default: true
Valid Values:  
Importance: medium
Update Mode: cluster-wide

confluent.replica.fetch.backoff.max.ms

The maximum amount of time in milliseconds to wait when fetch partition fails repeatedly. If provided, the backoff will increase exponentially for each consecutive failure, up to this maximum.

Type: int
Default: 1000 (1 second)
Valid Values: [0,…]
Importance: medium
Update Mode: read-only

confluent.request.pipelining.enable

Setting this configuration to true enables the broker to process multiple in-flight produce requests per connection.

Type: boolean
Default: true
Valid Values:  
Importance: medium
Update Mode: cluster-wide

confluent.request.pipelining.max.in.flight.requests.per.connection

This configures the maximum number of in-flight requests per connection if request pipelining is enabled.

Type: int
Default: 5
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

confluent.tier.archiver.num.threads

The size of the thread pool used for tiering data to remote storage. This thread pool is also used to garbage collect data in tiered storage that has been deleted.

Type: int
Default: 2
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

confluent.tier.backend

Tiered storage backend to use

Type: string
Default: “”
Valid Values: [S3, GCS, AzureBlockBlob, mock, ]
Importance: medium
Update Mode: read-only

confluent.tier.cleaner.enable

Enables tiering and tiered cleaning of compacted topics. If disabled, tiering for those topics will be disabled and the topics will be cleaned by the local log cleaner.

Type: boolean
Default: false
Valid Values:  
Importance: medium
Update Mode: cluster-wide

confluent.tier.cleaner.feature.enable

Feature flag that enables tiered cleaning components. This must be enabled before tiered cleaning can be enabled by using the confluent.tier.cleaner.enable property. This configuration is not reversible and will apply to all non-internal metadata topics.

Type: boolean
Default: false
Valid Values:  
Importance: medium
Update Mode: read-only

confluent.tier.cleaner.min.cleanable.ratio

The minimum ratio of dirty log to total log for a tiered log to be eligible for cleaning if the conditions for confluent.tier.cleaner.min.cleanable.ratio have not been met.

Type: double
Default: 0.75
Valid Values: [0.0,…]
Importance: medium
Update Mode: cluster-wide

confluent.tier.enable

Allow tiering for topic(s). This enables tiering and fetching of data to and from the configured remote storage. When set to true, this causes all existing, non-compacted topics to also have this configuration set to true. Only topics explicitly set to false will remain false. It is not required to set confluent.tier.enable=true to enable Tiered Storage.

Type: boolean
Default: false
Valid Values:  
Importance: medium
Update Mode: cluster-wide

confluent.tier.feature

Feature flag that enables components related to tiered storage. This must be enabled before tiering can be enabled by using the confluent.tier.enable property.

Type: boolean
Default: false
Valid Values:  
Importance: medium
Update Mode: read-only

confluent.tier.fetcher.num.threads

The size of the thread pool used by the TierFetcher. Roughly corresponds to number of concurrent fetch requests that can be served from tiered storage.

Type: int
Default: 4
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

confluent.tier.max.partition.fetch.bytes.override

For tier fetches, this configuration allows overriding the consumer’s max.partition.fetch.bytes configuration. When fetching tiered data, we will use the maximum of the consumer’s configuration and this override. Setting this to a value higher than that of the consumer’s could improve batching and effective throughput of tiered fetches. The override is disabled when set to 0.

Type: int
Default: 0
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

confluent.tier.metadata.bootstrap.servers

The bootstrap servers used to read from and write to the tier metadata topic. If this is not configured, the configured inter-broker listener would be used.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

connections.max.idle.ms

Idle connections timeout: the server socket processor threads close connections that have been idle for more than this value.

Type: long
Default: 600000 (10 minutes)
Valid Values:  
Importance: medium
Update Mode: read-only

connections.max.reauth.ms

When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000

Type: long
Default: 0
Valid Values:  
Importance: medium
Update Mode: read-only

controlled.shutdown.enable

Enable controlled shutdown of the server.

Type: boolean
Default: true
Valid Values:  
Importance: medium
Update Mode: read-only

controlled.shutdown.max.retries

Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens

Type: int
Default: 3
Valid Values:  
Importance: medium
Update Mode: read-only

controlled.shutdown.retry.backoff.ms

Before each retry, the system needs time to recover from the state that caused the previous failure (controller failover, replica lag, etc.). This config determines the amount of time to wait before retrying.

Type: long
Default: 5000 (5 seconds)
Valid Values:  
Importance: medium
Update Mode: read-only

controller.quorum.append.linger.ms

The duration in milliseconds that the leader will wait for writes to accumulate before flushing them to disk.

Type: int
Default: 25
Valid Values:  
Importance: medium
Update Mode: read-only

controller.quorum.request.timeout.ms

The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

Type: int
Default: 2000 (2 seconds)
Valid Values:  
Importance: medium
Update Mode: read-only

controller.socket.timeout.ms

The socket timeout for controller-to-broker channels.

Type: int
Default: 30000 (30 seconds)
Valid Values:  
Importance: medium
Update Mode: read-only

default.replication.factor

The default replication factor for automatically created topics.

Type: int
Default: 1
Valid Values:  
Importance: medium
Update Mode: read-only

delegation.token.expiry.time.ms

The token validity time in milliseconds before the token needs to be renewed. Default value 1 day.

Type: long
Default: 86400000 (1 day)
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

delegation.token.master.key

DEPRECATED: An alias for delegation.token.secret.key, which should be used instead of this config.

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

delegation.token.max.lifetime.ms

The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value 7 days.

Type: long
Default: 604800000 (7 days)
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

delegation.token.secret.key

Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If using Kafka with KRaft, the key must also be set across all controllers. If the key is not set or set to empty string, brokers will disable the delegation token support.

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

delete.records.purgatory.purge.interval.requests

The purge interval (in number of requests) of the delete records request purgatory

Type: int
Default: 1
Valid Values:  
Importance: medium
Update Mode: read-only

fetch.max.bytes

The maximum number of bytes we will return for a fetch request. Must be at least 1024.

Type: int
Default: 57671680 (55 mebibytes)
Valid Values: [1024,…]
Importance: medium
Update Mode: cluster-wide

fetch.purgatory.purge.interval.requests

The purge interval (in number of requests) of the fetch request purgatory

Type: int
Default: 1000
Valid Values:  
Importance: medium
Update Mode: read-only

group.consumer.assignors

The server side assignors as a list of full class names. The first one in the list is considered the default assignor, to be used when the consumer does not specify an assignor.

Type: list
Default: org.apache.kafka.coordinator.group.assignor.UniformAssignor,org.apache.kafka.coordinator.group.assignor.RangeAssignor
Valid Values:  
Importance: medium
Update Mode: read-only

group.consumer.heartbeat.interval.ms

The heartbeat interval given to the members of a consumer group.

Type: int
Default: 5000 (5 seconds)
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

group.consumer.max.heartbeat.interval.ms

The maximum heartbeat interval for registered consumers.

Type: int
Default: 15000 (15 seconds)
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

group.consumer.max.session.timeout.ms

The maximum allowed session timeout for registered consumers.

Type: int
Default: 60000 (1 minute)
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

group.consumer.max.size

The maximum number of consumers that a single consumer group can accommodate. This value will only impact the new consumer coordinator. To configure the classic consumer coordinator check group.max.size instead.

Type: int
Default: 2147483647
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

group.consumer.migration.policy

The config that enables converting a non-empty classic group using the consumer embedded protocol to a non-empty consumer group using the consumer group protocol, and vice versa; conversions of empty groups in both directions are always enabled regardless of this policy.

  • bidirectional: both upgrade from classic group to consumer group and downgrade from consumer group to classic group are enabled.
  • upgrade: only upgrade from classic group to consumer group is enabled.
  • downgrade: only downgrade from consumer group to classic group is enabled.
  • disabled: neither upgrade nor downgrade is enabled.

Type: string
Default: disabled
Valid Values: (case insensitive) [DISABLED, DOWNGRADE, UPGRADE, BIDIRECTIONAL]
Importance: medium
Update Mode: read-only

group.consumer.min.heartbeat.interval.ms

The minimum heartbeat interval for registered consumers.

Type: int
Default: 5000 (5 seconds)
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

group.consumer.min.session.timeout.ms

The minimum allowed session timeout for registered consumers.

Type: int
Default: 45000 (45 seconds)
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

group.consumer.session.timeout.ms

The timeout to detect client failures when using the consumer group protocol.

Type: int
Default: 45000 (45 seconds)
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

group.coordinator.append.linger.ms

The duration in milliseconds that the coordinator will wait for writes to accumulate before flushing them to disk. Transactional writes are not accumulated.

Type: int
Default: 10
Valid Values: [0,…]
Importance: medium
Update Mode: read-only

group.coordinator.rebalance.protocols

The list of enabled rebalance protocols. Supported protocols: consumer,classic,unknown. The consumer rebalance protocol is in early access and therefore must not be used in production.

Type: list
Default: classic
Valid Values: [consumer, classic, unknown]
Importance: medium
Update Mode: read-only

group.coordinator.threads

The number of threads used by the group coordinator.

Type: int
Default: 1
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

group.initial.rebalance.delay.ms

The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins.

Type: int
Default: 3000 (3 seconds)
Valid Values:  
Importance: medium
Update Mode: read-only

group.max.session.timeout.ms

The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

Type: int
Default: 1800000 (30 minutes)
Valid Values:  
Importance: medium
Update Mode: read-only

group.max.size

The maximum number of consumers that a single consumer group can accommodate.

Type: int
Default: 2147483647
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

group.min.session.timeout.ms

The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.

Type: int
Default: 6000 (6 seconds)
Valid Values:  
Importance: medium
Update Mode: read-only

initial.broker.registration.timeout.ms

When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the broker process.

Type: int
Default: 60000 (1 minute)
Valid Values:  
Importance: medium
Update Mode: read-only

inter.broker.listener.name

Name of the listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set both this and the security.inter.broker.protocol property at the same time.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

inter.broker.protocol.version

Specifies which version of the inter-broker protocol will be used. This is typically bumped after all brokers have been upgraded to a new version. Examples of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1. Check MetadataVersion for the full list.

Type: string
Default: 3.8-IV0A
Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3.7-IV1, 3.7-IV2, 3.7-IV3, 3.7-IV4, 3.8-IV0A, 3.9-IV0A]
Importance: medium
Update Mode: read-only

log.cleaner.backoff.ms

The amount of time to sleep when there are no logs to clean

Type: long
Default: 15000 (15 seconds)
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.cleaner.dedupe.buffer.size

The total memory used for log deduplication across all cleaner threads

Type: long
Default: 134217728
Valid Values:  
Importance: medium
Update Mode: cluster-wide

log.cleaner.delete.retention.ms

The amount of time to retain tombstone message markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if it begins from offset 0, to ensure that it gets a valid snapshot of the final state (otherwise tombstone markers may be collected before the consumer completes its scan).

Type: long
Default: 86400000 (1 day)
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.cleaner.enable

Enable the log cleaner process to run on the server. This should be enabled if using any topics with cleanup.policy=compact, including the internal offsets topic. If disabled, those topics will not be compacted and will continually grow in size.

Type: boolean
Default: true
Valid Values:  
Importance: medium
Update Mode: read-only

log.cleaner.io.buffer.load.factor

Log cleaner dedupe buffer load factor, which is the percentage full the dedupe buffer can become. A higher value allows more log to be cleaned at once but leads to more hash collisions.

Type: double
Default: 0.9
Valid Values:  
Importance: medium
Update Mode: cluster-wide

log.cleaner.io.buffer.size

The total memory used for log cleaner I/O buffers across all cleaner threads

Type: int
Default: 524288
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.cleaner.io.max.bytes.per.second

The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average

Type: double
Default: 1.7976931348623157E308
Valid Values:  
Importance: medium
Update Mode: cluster-wide

log.cleaner.max.compaction.lag.ms

The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: medium
Update Mode: cluster-wide

log.cleaner.min.cleanable.ratio

The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period.

Type: double
Default: 0.5
Valid Values: [0,…,1]
Importance: medium
Update Mode: cluster-wide

log.cleaner.min.compaction.lag.ms

The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

Type: long
Default: 0
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.cleaner.threads

The number of background threads to use for log cleaning

Type: int
Default: 1
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide
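
The log.cleaner.* properties above are typically tuned together. A hypothetical sketch that devotes more threads and dedupe memory to compaction might look like this (the values are illustrative, not recommendations):

    # server.properties (illustrative values)
    log.cleaner.enable=true
    log.cleaner.threads=2
    # 256 MiB of dedupe buffer shared across all cleaner threads
    log.cleaner.dedupe.buffer.size=268435456
    # Throttle combined cleaner read/write I/O to ~100 MiB/s
    log.cleaner.io.max.bytes.per.second=104857600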

log.cleanup.policy

The default cleanup policy for segments beyond the retention window. A comma-separated list of valid policies. Valid policies are “delete” and “compact”.

Type: list
Default: delete
Valid Values: [compact, delete]
Importance: medium
Update Mode: cluster-wide
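
Because this is a list, both policies can be combined. A minimal sketch, assuming you want topics both compacted and eventually trimmed by retention:

    # server.properties (illustrative)
    # Apply both compaction and retention-based deletion by default
    log.cleanup.policy=compact,delete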

log.deletion.max.segments.per.run

The maximum number of eligible segments that can be deleted during every check.

Type: int
Default: 2147483647
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.deletion.throttler.disk.free.headroom.bytes

The headroom for the disk space available (in bytes) that will be added to confluent.backpressure.disk.free.threshold.bytes (if enabled) to determine the threshold for the minimum available disk space across all the log dirs. This configuration acts as a safety net enabling the broker to reclaim disk space quickly when the broker’s available disk space is running low. When the available disk space is below the threshold value, the broker auto-disables the effect of log.deletion.max.segments.per.run and deletes all eligible segments during periodic retention. When the available disk space is at or above the threshold, the broker auto-enables the effect of log.deletion.max.segments.per.run.

Type: long
Default: 21474836480 (20 gibibytes)
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.index.interval.bytes

The interval with which we add an entry to the offset index.

Type: int
Default: 4096 (4 kibibytes)
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.index.size.max.bytes

The maximum size in bytes of the offset index

Type: int
Default: 10485760 (10 mebibytes)
Valid Values: [4,…]
Importance: medium
Update Mode: cluster-wide

log.message.format.version

Specify the message format version the broker will use to append messages to the logs. The value should be a valid MetadataVersion. Some examples are 0.8.2, 0.9.0.0, and 0.10.0; check MetadataVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don’t understand.

Type: string
Default: 3.0-IV1
Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3.7-IV1, 3.7-IV2, 3.7-IV3, 3.7-IV4, 3.8-IV0A, 3.9-IV0A]
Importance: medium
Update Mode: read-only

log.message.timestamp.after.max.ms

This configuration sets the allowable timestamp difference between the message timestamp and the broker’s timestamp. The message timestamp can be later than or equal to the broker’s timestamp, with the maximum allowable difference determined by the value set in this configuration. If log.message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime.

Type: long
Default: 9223372036854775807
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.message.timestamp.before.max.ms

This configuration sets the allowable timestamp difference between the broker’s timestamp and the message timestamp. The message timestamp can be earlier than or equal to the broker’s timestamp, with the maximum allowable difference determined by the value set in this configuration. If log.message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime.

Type: long
Default: 9223372036854775807
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.message.timestamp.difference.max.ms

[DEPRECATED] The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling.

Type: long
Default: 9223372036854775807
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

log.message.timestamp.type

Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime.

Type: string
Default: CreateTime
Valid Values: [CreateTime, LogAppendTime]
Importance: medium
Update Mode: cluster-wide
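
Taken together, the log.message.timestamp.* properties above bound how far a CreateTime message timestamp may drift from the broker clock. A hypothetical sketch that rejects messages stamped more than one hour in the future or one day in the past (the bounds are illustrative):

    # server.properties (illustrative values)
    log.message.timestamp.type=CreateTime
    # Reject messages stamped more than 1 hour ahead of the broker clock
    log.message.timestamp.after.max.ms=3600000
    # Reject messages stamped more than 1 day behind the broker clock
    log.message.timestamp.before.max.ms=86400000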

log.preallocate

Whether to preallocate the file when creating a new segment. If you are using Kafka on Windows, you probably need to set this to true.

Type: boolean
Default: false
Valid Values:  
Importance: medium
Update Mode: cluster-wide

log.retention.check.interval.ms

The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion

Type: long
Default: 300000 (5 minutes)
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

max.connection.creation.rate

The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connection.creation.rate. The broker-wide connection rate limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections will be throttled if either the listener or the broker limit is reached, with the exception of the inter-broker listener. Connections on the inter-broker listener will be throttled only when the listener-level rate limit is reached.

Type: double
Default: 1.7976931348623157E308
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide
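
For example (values and the listener name are illustrative), a broker-wide rate cap with a more generous listener-level cap for an inter-broker listener named internal might be sketched as:

    # server.properties (illustrative values)
    max.connection.creation.rate=50
    listener.name.internal.max.connection.creation.rate=100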

max.connections

The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connections. The broker-wide limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if the broker-wide limit is reached. The least recently used connection on another listener will be closed in this case.

Type: int
Default: 2147483647
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

max.connections.per.ip

The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached.

Type: int
Default: 2147483647
Valid Values: [0,…]
Importance: medium
Update Mode: cluster-wide

max.connections.per.ip.overrides

A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is “hostName:100,127.0.0.1:200”

Type: string
Default: “”
Valid Values:  
Importance: medium
Update Mode: cluster-wide
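
A brief sketch combining the two properties above (the hostname and counts are hypothetical): setting the default per-ip limit to 0 drops all connections except those from the listed overrides:

    # server.properties (illustrative values)
    max.connections.per.ip=0
    max.connections.per.ip.overrides=trusted-host.example.com:100,127.0.0.1:200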

max.incremental.fetch.session.cache.slots

The maximum number of total incremental fetch sessions that we will maintain. FetchSessionCache is sharded into 8 shards and the limit is equally divided among all shards. Sessions are allocated to each shard in round-robin. Only entries within a shard are considered eligible for eviction.

Type: int
Default: 1000
Valid Values: [0,…]
Importance: medium
Update Mode: read-only

max.request.partition.size.limit

The maximum number of partitions that can be served in one request.

Type: int
Default: 2000
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

num.partitions

The default number of log partitions per topic

Type: int
Default: 1
Valid Values: [1,…]
Importance: medium
Update Mode: cluster-wide

password.encoder.old.secret

The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when broker starts up.

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

password.encoder.secret

The secret used for encoding dynamically configured passwords for this broker.

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only
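
When rotating the encoder secret, the two properties above are used together. A minimal sketch (the secrets are placeholders): on restart, the broker re-encodes dynamic passwords from the old secret to the new one:

    # server.properties (placeholder secrets)
    password.encoder.secret=new-secret-value
    # Set only while rotating; remove after all passwords are re-encoded
    password.encoder.old.secret=previous-secret-value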

principal.builder.class

The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal will be derived using the rules defined by ssl.principal.mapping.rules applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS.

Type: class
Default: org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
Valid Values:  
Importance: medium
Update Mode: per-broker

producer.purgatory.purge.interval.requests

The purge interval (in number of requests) of the producer request purgatory

Type: int
Default: 1000
Valid Values:  
Importance: medium
Update Mode: read-only

queued.max.request.bytes

The number of queued bytes allowed before no more requests are read

Type: long
Default: -1
Valid Values:  
Importance: medium
Update Mode: read-only

remote.fetch.max.wait.ms

The maximum amount of time the server will wait before answering the remote fetch request

Type: int
Default: 500
Valid Values: [1,…]
Importance: medium
Update Mode: cluster-wide

remote.log.manager.copy.max.bytes.per.second

The maximum number of bytes that can be copied from local storage to remote storage per second. This is a global limit for all the partitions that are being copied from local storage to remote storage. The default value is Long.MAX_VALUE, which means there is no limit on the number of bytes that can be copied per second.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: medium
Update Mode: cluster-wide

remote.log.manager.copy.quota.window.num

The number of samples to retain in memory for remote copy quota management. The default value is 11, which means there are 10 whole windows + 1 current window.

Type: int
Default: 11
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

remote.log.manager.copy.quota.window.size.seconds

The time span of each sample for remote copy quota management. The default value is 1 second.

Type: int
Default: 1
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

remote.log.manager.fetch.max.bytes.per.second

The maximum number of bytes that can be fetched from remote storage to local storage per second. This is a global limit for all the partitions that are being fetched from remote storage to local storage. The default value is Long.MAX_VALUE, which means there is no limit on the number of bytes that can be fetched per second.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: medium
Update Mode: cluster-wide

remote.log.manager.fetch.quota.window.num

The number of samples to retain in memory for remote fetch quota management. The default value is 11, which means there are 10 whole windows + 1 current window.

Type: int
Default: 11
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

remote.log.manager.fetch.quota.window.size.seconds

The time span of each sample for remote fetch quota management. The default value is 1 second.

Type: int
Default: 1
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

replica.fetch.backoff.ms

The base amount of time to wait when a fetch partition error occurs; the backoff increases exponentially for each consecutive failure.

Type: int
Default: 1000 (1 second)
Valid Values: [0,…]
Importance: medium
Update Mode: read-only

replica.fetch.max.bytes

The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).

Type: int
Default: 1048576 (1 mebibyte)
Valid Values: [0,…]
Importance: medium
Update Mode: read-only

replica.fetch.response.max.bytes

Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).

Type: int
Default: 10485760 (10 mebibytes)
Valid Values: [0,…]
Importance: medium
Update Mode: read-only

replica.selector.class

The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

reserved.broker.max.id

Max number that can be used for a broker.id

Type: int
Default: 1000
Valid Values: [0,…]
Importance: medium
Update Mode: read-only

sasl.client.callback.handler.class

The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

Type: class
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

sasl.enabled.mechanisms

The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.

Type: list
Default: GSSAPI
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.jaas.config

JAAS login context parameters for SASL connections in the format used by JAAS configuration files. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker
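
As a sketch for the PLAIN mechanism on a SASL_SSL listener (usernames and passwords are placeholders), using the PlainLoginModule that ships with Kafka:

    # server.properties (placeholder credentials)
    listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="admin" \
      password="admin-secret" \
      user_admin="admin-secret";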

sasl.kerberos.kinit.cmd

Kerberos kinit command path.

Type: string
Default: /usr/bin/kinit
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.kerberos.min.time.before.relogin

Login thread sleep time between refresh attempts.

Type: long
Default: 60000
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.kerberos.principal.to.local.rules

A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format, see the documentation on security authorization and ACLs. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.

Type: list
Default: DEFAULT
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.kerberos.service.name

The Kerberos principal name that Kafka runs as. This can be defined either in Kafka’s JAAS config or in Kafka’s config.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.kerberos.ticket.renew.jitter

Percentage of random jitter added to the renewal time.

Type: double
Default: 0.05
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.kerberos.ticket.renew.window.factor

Login thread will sleep until the specified window factor of time from last refresh to ticket’s expiry has been reached, at which time it will try to renew the ticket.

Type: double
Default: 0.8
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.login.callback.handler.class

The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

Type: class
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

sasl.login.class

The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

Type: class
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

sasl.login.refresh.buffer.seconds

The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

Type: short
Default: 300
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.login.refresh.min.period.seconds

The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

Type: short
Default: 60
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.login.refresh.window.factor

Login refresh thread will sleep until the specified window factor relative to the credential’s lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

Type: double
Default: 0.8
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.login.refresh.window.jitter

The maximum amount of random jitter relative to the credential’s lifetime that is added to the login refresh thread’s sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

Type: double
Default: 0.05
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.mechanism.inter.broker.protocol

SASL mechanism used for inter-broker communication. Default is GSSAPI.

Type: string
Default: GSSAPI
Valid Values:  
Importance: medium
Update Mode: per-broker

sasl.oauthbearer.jwks.endpoint.url

The OAuth/OIDC provider URL from which the provider’s JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a “kid” header claim value that isn’t yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a “kid” header value that isn’t in the JWKS file, the broker will reject the JWT and authentication will fail.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

sasl.oauthbearer.token.endpoint.url

The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer’s token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only
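
A hypothetical OAuth/OIDC sketch (the provider URLs are placeholders) wiring the two endpoint properties above together:

    # server.properties (placeholder URLs)
    sasl.oauthbearer.jwks.endpoint.url=https://idp.example.com/.well-known/jwks.json
    sasl.oauthbearer.token.endpoint.url=https://idp.example.com/oauth2/token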

sasl.server.authn.async.enable

Setting this configuration to true allows SASL authentication to be performed asynchronously.

Type: boolean
Default: false
Valid Values:  
Importance: medium
Update Mode: read-only

sasl.server.authn.async.max.threads

Maximum number of threads in async authentication thread pool to perform authentication asynchronously.

Type: int
Default: 1
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

sasl.server.authn.async.timeout.ms

The broker will attempt to forcibly stop authentication that runs longer than this.

Type: long
Default: 30000 (30 seconds)
Valid Values: [0,…,60000]
Importance: medium
Update Mode: read-only

sasl.server.callback.handler.class

The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler.

Type: class
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

sasl.server.max.receive.size

The maximum receive size allowed before and during initial SASL authentication. The default receive size is 512KB. GSSAPI limits requests to 64K, but we allow up to 512KB by default for custom SASL mechanisms. In practice, PLAIN, SCRAM and OAUTH mechanisms can use much smaller limits.

Type: int
Default: 524288
Valid Values:  
Importance: medium
Update Mode: read-only

security.inter.broker.protocol

Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set both this and the inter.broker.listener.name property at the same time.

Type: string
Default: PLAINTEXT
Valid Values: [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL]
Importance: medium
Update Mode: read-only

socket.connection.setup.timeout.max.ms

The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

Type: long
Default: 30000 (30 seconds)
Valid Values:  
Importance: medium
Update Mode: read-only

socket.connection.setup.timeout.ms

The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value.

Type: long
Default: 10000 (10 seconds)
Valid Values:  
Importance: medium
Update Mode: read-only

socket.listen.backlog.size

The maximum number of pending connections on the socket. On Linux, you may also need to configure the somaxconn and tcp_max_syn_backlog kernel parameters accordingly for the configuration to take effect.

Type: int
Default: 50
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

ssl.cipher.suites

A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

Type: list
Default: “”
Valid Values:  
Importance: medium
Update Mode: cluster-wide

ssl.client.auth

Configures the Kafka broker to request client authentication. The following settings are common:

  • ssl.client.auth=required If set to required, client authentication is required.
  • ssl.client.auth=requested This means client authentication is optional. Unlike required, if this option is set the client can choose not to provide authentication information about itself.
  • ssl.client.auth=none This means client authentication is not needed.

Type: string
Default: none
Valid Values: [required, requested, none]
Importance: medium
Update Mode: per-broker
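
For example, a broker requiring mutual TLS might pair this setting with a truststore for verifying client certificates (the path and password are placeholders):

    # server.properties (placeholder path and password)
    ssl.client.auth=required
    ssl.truststore.location=/var/private/ssl/broker.truststore.jks
    ssl.truststore.password=truststore-password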

ssl.enabled.protocols

The list of protocols enabled for SSL connections. The default is ‘TLSv1.2,TLSv1.3’ when running with Java 11 or newer, ‘TLSv1.2’ otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol.

Type: list
Default: TLSv1.2
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.key.password

The password of the private key in the key store file or the PEM key specified in ‘ssl.keystore.key’.

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.keymanager.algorithm

The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

Type: string
Default: SunX509
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.keystore.certificate.chain

Certificate chain in the format specified by ‘ssl.keystore.type’. Default SSL engine factory supports only PEM format with a list of X.509 certificates

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.keystore.key

Private key in the format specified by ‘ssl.keystore.type’. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using ‘ssl.key.password’

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.keystore.location

The location of the key store file. This is optional for clients and can be used for two-way authentication for clients.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.keystore.password

The store password for the key store file. This is optional for clients and only needed if ‘ssl.keystore.location’ is configured. Key store password is not supported for PEM format.

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.keystore.type

The file format of the key store file. This is optional for clients. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM].

Type: string
Default: JKS
Valid Values:  
Importance: medium
Update Mode: per-broker
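
Pulling the ssl.keystore.* and ssl.key.password properties above together, a minimal JKS keystore sketch for a broker (paths and passwords are placeholders):

    # server.properties (placeholder paths and passwords)
    ssl.keystore.type=JKS
    ssl.keystore.location=/var/private/ssl/broker.keystore.jks
    ssl.keystore.password=keystore-password
    ssl.key.password=key-password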

ssl.protocol

The SSL protocol used to generate the SSLContext. The default is ‘TLSv1.3’ when running with Java 11 or newer, ‘TLSv1.2’ otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are ‘TLSv1.2’ and ‘TLSv1.3’. ‘TLS’, ‘TLSv1.1’, ‘SSL’, ‘SSLv2’ and ‘SSLv3’ may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and ‘ssl.enabled.protocols’, clients will downgrade to ‘TLSv1.2’ if the server does not support ‘TLSv1.3’. If this config is set to ‘TLSv1.2’, clients will not use ‘TLSv1.3’ even if it is one of the values in ssl.enabled.protocols and the server only supports ‘TLSv1.3’.

Type: string
Default: TLSv1.2
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.provider

The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.trustmanager.algorithm

The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

Type: string
Default: PKIX
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.truststore.certificates

Trusted certificates in the format specified by ‘ssl.truststore.type’. Default SSL engine factory supports only PEM format with X.509 certificates.

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.truststore.location

The location of the trust store file.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.truststore.password

The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: per-broker

ssl.truststore.type

The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM].

Type: string
Default: JKS
Valid Values:  
Importance: medium
Update Mode: per-broker

transaction.metadata.load.threads

The number of threads that are used to concurrently load transaction metadata.

Type: int
Default: 32
Valid Values: [1,…]
Importance: medium
Update Mode: read-only

zookeeper.clientCnxnSocket

Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named zookeeper.clientCnxnSocket system property.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

zookeeper.ssl.client.enable

Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure system property (note the different name). Defaults to false if neither is set. When true, zookeeper.clientCnxnSocket must be set (typically to org.apache.zookeeper.ClientCnxnSocketNetty); other values to set may include zookeeper.ssl.cipher.suites, zookeeper.ssl.crl.enable, zookeeper.ssl.enabled.protocols, zookeeper.ssl.endpoint.identification.algorithm, zookeeper.ssl.keystore.location, zookeeper.ssl.keystore.password, zookeeper.ssl.keystore.type, zookeeper.ssl.ocsp.enable, zookeeper.ssl.protocol, zookeeper.ssl.truststore.location, zookeeper.ssl.truststore.password, and zookeeper.ssl.truststore.type.

Type: boolean
Default: false
Valid Values:  
Importance: medium
Update Mode: read-only
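
A minimal TLS-to-ZooKeeper sketch based on the description above (the path and password are placeholders):

    # server.properties (placeholder path and password)
    zookeeper.ssl.client.enable=true
    zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
    zookeeper.ssl.truststore.location=/var/private/ssl/zk.truststore.jks
    zookeeper.ssl.truststore.password=truststore-password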

zookeeper.ssl.keystore.location

Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.location system property (note the camelCase).

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

zookeeper.ssl.keystore.password

Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.password system property (note the camelCase). Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to ZooKeeper will fail.

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

zookeeper.ssl.keystore.type

Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the keystore.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

zookeeper.ssl.truststore.location

Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.location system property (note the camelCase).

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

zookeeper.ssl.truststore.password

Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.password system property (note the camelCase).

Type: password
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

zookeeper.ssl.truststore.type

Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the truststore.

Type: string
Default: null
Valid Values:  
Importance: medium
Update Mode: read-only

alter.config.policy.class.name

The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface.

Type: class
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

alter.log.dirs.replication.quota.window.num

The number of samples to retain in memory for alter log dirs replication quotas

Type: int
Default: 11
Valid Values: [1,…]
Importance: low
Update Mode: read-only

alter.log.dirs.replication.quota.window.size.seconds

The time span of each sample for alter log dirs replication quotas

Type: int
Default: 1
Valid Values: [1,…]
Importance: low
Update Mode: read-only

authorizer.class.name

The fully qualified name of a class that implements org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization.

Type: string
Default: “”
Valid Values: non-null string
Importance: low
Update Mode: read-only

auto.include.jmx.reporter

Deprecated. Whether to automatically include JmxReporter even if it’s not listed in metric.reporters. This configuration will be removed in Kafka 4.0; users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter.

Type: boolean
Default: true
Valid Values:  
Importance: low
Update Mode: read-only

client.quota.callback.class

The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, the <user> and <client-id> quotas that are stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied.

Type: class
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

client.quota.max.throttle.time.ms

The time period in ms up to which a client is throttled if it hits the client produce bandwidth quota.

Type: long
Default: 5000 (5 seconds)
Valid Values:  
Importance: low
Update Mode: cluster-wide

confluent.authorizer.authority.name

The DNS name of the authority that this cluster uses to authorize. This should be a name for the cluster hosting metadata topics.

Type: string
Default: “”
Valid Values:  
Importance: low
Update Mode: read-only

confluent.balancer.cpu.utilization.detector.enabled

Specifies whether the CPU optimization detector is enabled.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

confluent.balancer.disk.utilization.detector.enabled

Specifies whether the disk optimization detector is enabled.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

confluent.defer.isr.shrink.enable

Defer ISR shrinking for partitions that only have messages with acks=“all” if shrinking the ISR would cause the partition to fall under the minimum ISR.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

confluent.log.placement.constraints

This configuration is a JSON object that controls the set of brokers (replicas) which will always be allowed to join the ISR, and the set of brokers (observers) which are not allowed to join the ISR. The format of the JSON is:

    {
      "version": 1,
      "replicas": [
        { "count": 2, "constraints": { "rack": "east-1" } },
        { "count": 1, "constraints": { "rack": "east-2" } }
      ],
      "observers": [
        { "count": 1, "constraints": { "rack": "west-1" } }
      ]
    }

Type: string
Default: “”
Valid Values:  
Importance: low
Update Mode: read-only

confluent.metadata.server.cluster.registry.clusters

JSON defining the initial state of the Cluster Registry. This should not be set manually; use the Cluster Registry HTTP APIs instead.

Type: string
Default: []
Valid Values:  
Importance: low
Update Mode: cluster-wide

confluent.producer.id.quota.window.num

The number of samples to retain in memory for producer id quotas

Type: int
Default: 11
Valid Values: [1,…]
Importance: low
Update Mode: read-only

confluent.producer.id.quota.window.size.seconds

The time span of each sample for producer id quotas

Type: int
Default: 1
Valid Values: [1,…]
Importance: low
Update Mode: read-only

confluent.reporters.telemetry.auto.enable

Auto-enable telemetry on the broker. This will add the telemetry reporter to the broker’s ‘metric.reporters’ property if it is not already present. Disabling this property will prevent Self-balancing Clusters from working properly.

Type: boolean
Default: true
Valid Values:  
Importance: low
Update Mode: cluster-wide

confluent.schema.registry.url

Comma-separated list of URLs for schema registry instances that can be used to look up schemas.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.security.event.router.config

JSON configuration for routing events to topics

Type: string
Default: “”
Valid Values:  
Importance: low
Update Mode: cluster-wide

confluent.telemetry.enabled

True if telemetry data can be reported to Confluent Cloud.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: cluster-wide

confluent.telemetry.external.client.metrics.push.enabled

True if client metrics are enabled and can be reported to Confluent Cloud.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.fenced.segment.delete.delay.ms

Segments uploaded by fenced leaders may still be in the process of being uploaded when retention occurs on a newly elected leader. Storage backends like AWS S3 return success for delete operations if the object is not found, so to address this edge case the deletion of segments uploaded by fenced leaders is delayed by confluent.tier.fenced.segment.delete.delay.ms, with the assumption that the upload will be completed by the time the deletion occurs.

Type: long
Default: 600000 (10 minutes)
Valid Values: [0,…]
Importance: low
Update Mode: read-only

confluent.tier.gcs.cred.file.path

The path to the credentials file used to create the GCS client. This uses the default GCS configuration file format; please refer to GCP documentation on how to generate the credentials file. If not specified, the GCS client will be instantiated using the default service account available.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.aws.endpoint.override

Override picking an S3 endpoint. Normally this is performed automatically by the client.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.cred.file.path

The path to the credentials file used to create the S3 client. It uses a Java properties file and extracts the AWS access key from the “accessKey” property and AWS secret access key from the “secretKey” property. Please refer to AWS documentation for further information. If this property is not specified, the S3 client will use the DefaultAWSCredentialsProviderChain to locate the credentials.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.force.path.style.access

Configures the client to use path-style access for all requests. This flag is not enabled by default. The default behavior is to detect which access style to use based on the configured endpoint and the bucket being accessed. Setting this flag will result in path-style access being forced for all requests.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.security.providers

One or more comma-separated security providers to be used.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.enabled.protocols

The list of protocols enabled for SSL connections. The default is ‘TLSv1.2,TLSv1.3’ when running with Java 11 or newer, ‘TLSv1.2’ otherwise.

Type: list
Default: TLSv1.2
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.key.password

Key password when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.keyPassword system property (note the camelCase).

Type: password
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.keystore.location

Keystore location when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.keyStore system property (note the camelCase).

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.keystore.password

Keystore password when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.keyStorePassword system property (note the camelCase).

Type: password
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.keystore.type

Keystore type when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.keyStoreType system property (note the camelCase).

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.protocol

The SSL protocol used to generate the SSLContext. The default is ‘TLSv1.3’ when running with Java 11 or newer, ‘TLSv1.2’ otherwise.

Type: string
Default: TLSv1.2
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.provider

SSL provider to use for the client when connecting to AWS S3.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.truststore.location

Truststore location when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.trustStore system property (note the camelCase).

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.truststore.password

Truststore password when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.trustStorePassword system property (note the camelCase).

Type: password
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.s3.ssl.truststore.type

Truststore type when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.trustStoreType system property (note the camelCase).

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

confluent.tier.topic.delete.backoff.ms

Maximum amount of time to wait before deleting tiered objects for a deleted partition.

Type: long
Default: 21600000 (6 hours)
Valid Values: [1,…]
Importance: low
Update Mode: cluster-wide

confluent.tier.topic.delete.check.interval.ms

Frequency at which tiered objects cleanup is run for deleted topics.

Type: long
Default: 300000 (5 minutes)
Valid Values: [1,…]
Importance: low
Update Mode: cluster-wide

confluent.tier.topic.delete.max.inprogress.partitions

Maximum number of partitions deleted from remote storage in the deletion interval defined by confluent.tier.topic.delete.check.interval.ms

Type: int
Default: 100
Valid Values: [1,…]
Importance: low
Update Mode: cluster-wide

connection.failed.authentication.delay.ms

Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout.

Type: int
Default: 100
Valid Values: [0,…]
Importance: low
Update Mode: read-only

controller.quorum.retry.backoff.ms

The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value.

Type: int
Default: 20
Valid Values:  
Importance: low
Update Mode: read-only

controller.quota.window.num

The number of samples to retain in memory for controller mutation quotas

Type: int
Default: 11
Valid Values: [1,…]
Importance: low
Update Mode: read-only

controller.quota.window.size.seconds

The time span of each sample for controller mutations quotas

Type: int
Default: 1
Valid Values: [1,…]
Importance: low
Update Mode: read-only

create.topic.policy.class.name

The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface.

Type: class
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

delegation.token.expiry.check.interval.ms

Scan interval to remove expired delegation tokens.

Type: long
Default: 3600000 (1 hour)
Valid Values: [1,…]
Importance: low
Update Mode: read-only

enable.fips

Enable FIPS mode on the server. If FIPS mode is enabled, broker listener security protocols, TLS versions and cipher suites will be validated based on FIPS compliance requirement.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

follower.replication.throttled.rate

A long representing the upper bound (bytes/sec) on replication traffic for followers enumerated in the property follower.replication.throttled.replicas (for each topic). This property can only be set dynamically. It is suggested that the limit be kept above 1MB/s for accurate behaviour.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: low
Update Mode: cluster-wide

follower.replication.throttled.replicas

A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId],… or alternatively the wildcard ‘*’ can be used to throttle all replicas for this topic.

Type: string
Default: none
Valid Values: [none, *]
Importance: low
Update Mode: cluster-wide

kafka.metrics.polling.interval.secs

The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations.

Type: int
Default: 10
Valid Values: [1,…]
Importance: low
Update Mode: read-only

kafka.metrics.reporters

A list of classes to use as Yammer metrics custom reporters. The reporters should implement kafka.metrics.KafkaMetricsReporter trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends kafka.metrics.KafkaMetricsReporterMBean trait so that the registered MBean is compliant with the standard MBean convention.

Type: list
Default: “”
Valid Values:  
Importance: low
Update Mode: read-only

leader.replication.throttled.rate

A long representing the upper bound (bytes/sec) on replication traffic for leaders enumerated in the property leader.replication.throttled.replicas (for each topic). This property can only be set dynamically. It is suggested that the limit be kept above 1MB/s for accurate behaviour.

Type: long
Default: 9223372036854775807
Valid Values: [1,…]
Importance: low
Update Mode: cluster-wide

leader.replication.throttled.replicas

A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId],… or alternatively the wildcard ‘*’ can be used to throttle all replicas for this topic.

Type: string
Default: none
Valid Values: [none, *]
Importance: low
Update Mode: cluster-wide
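
Because the throttle rates above can only be set dynamically, they are applied with the kafka-configs tool rather than in server.properties. A sketch throttling leader-side replication on broker 0 to 10 MiB/s (the broker ID, bootstrap address, and rate are illustrative):

    kafka-configs --bootstrap-server localhost:9092 \
      --entity-type brokers --entity-name 0 \
      --alter --add-config leader.replication.throttled.rate=10485760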

listener.security.protocol.map

Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: INTERNAL:SSL,EXTERNAL:SSL. As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name listener.name.internal.ssl.keystore.location would be set. If the config for the listener name is not set, the config will fallback to the generic config (i.e. ssl.keystore.location). Note that in KRaft a default mapping from the listener names defined by controller.listener.names to PLAINTEXT is assumed if no explicit mapping is provided and no other security protocol is in use.

Type: string
Default: SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT
Valid Values:  
Importance: low
Update Mode: per-broker
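
As a sketch of the INTERNAL/EXTERNAL example from the description (ports and the keystore path are hypothetical), including a listener-scoped keystore override:

    # server.properties (illustrative values)
    listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
    listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
    # Listener-scoped override; falls back to ssl.keystore.location if unset
    listener.name.internal.ssl.keystore.location=/var/private/ssl/internal.keystore.jks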

log.dir.failure.timeout.ms

If the broker is unable to successfully communicate to the controller that some log directory has failed for longer than this time, the broker will fail and shut down.

Type: long
Default: 30000 (30 seconds)
Valid Values: [1,…]
Importance: low
Update Mode: read-only

log.message.downconversion.enable

This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.

Type: boolean
Default: true
Valid Values:  
Importance: low
Update Mode: cluster-wide

metadata.max.idle.interval.ms

This configuration controls how often the active controller should write no-op records to the metadata partition. If the value is 0, no-op records are not appended to the metadata partition. The default value is 500.

Type: int
Default: 500
Valid Values: [0,…]
Importance: low
Update Mode: read-only

metric.reporters

A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

Type: list
Default: “”
Valid Values:  
Importance: low
Update Mode: cluster-wide

metrics.num.samples

The number of samples maintained to compute metrics.

Type: int
Default: 2
Valid Values: [1,…]
Importance: low
Update Mode: read-only

metrics.recording.level

The highest recording level for metrics.

Type: string
Default: INFO
Valid Values:  
Importance: low
Update Mode: read-only

metrics.sample.window.ms

The window of time a metrics sample is computed over.

Type: long
Default: 30000 (30 seconds)
Valid Values: [1,…]
Importance: low
Update Mode: read-only

password.encoder.cipher.algorithm

The Cipher algorithm used for encoding dynamically configured passwords.

Type: string
Default: AES/CBC/PKCS5Padding
Valid Values:  
Importance: low
Update Mode: read-only

password.encoder.iterations

The iteration count used for encoding dynamically configured passwords.

Type: int
Default: 4096
Valid Values: [1024,…]
Importance: low
Update Mode: read-only

password.encoder.key.length

The key length used for encoding dynamically configured passwords.

Type: int
Default: 128
Valid Values: [8,…]
Importance: low
Update Mode: read-only

password.encoder.keyfactory.algorithm

The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only
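
The password.encoder.* properties work as a family. A sketch of setting them together in server.properties, assuming the related password.encoder.secret property supplies the encoding secret; all values are placeholders:

# Secret used to encode dynamically configured passwords (placeholder).
password.encoder.secret=replace-with-a-strong-secret
password.encoder.cipher.algorithm=AES/CBC/PKCS5Padding
password.encoder.keyfactory.algorithm=PBKDF2WithHmacSHA512
password.encoder.iterations=8192
password.encoder.key.length=128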

producer.id.expiration.ms

The time in ms that a topic partition leader will wait before expiring producer IDs. Producer IDs will not expire while a transaction associated with them is still ongoing. Note that producer IDs may expire sooner if the last write from the producer ID is deleted due to the topic’s retention settings. Setting this value equal to or higher than delivery.timeout.ms can help prevent expiration during retries and protect against message duplication, but the default should be reasonable for most use cases.

Type: int
Default: 86400000 (1 day)
Valid Values: [1,…]
Importance: low
Update Mode: cluster-wide

quota.window.num

The number of samples to retain in memory for client quotas.

Type: int
Default: 11
Valid Values: [1,…]
Importance: low
Update Mode: read-only

quota.window.size.seconds

The time span of each sample for client quotas.

Type: int
Default: 1
Valid Values: [1,…]
Importance: low
Update Mode: read-only
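
Together these two properties define the rolling window over which client quota usage is measured; a short sketch of the arithmetic with the defaults:

# Quota usage is averaged over roughly
# quota.window.num x quota.window.size.seconds = 11 x 1 s, about 10 seconds
# of history (the newest sample is still filling, so about 10 full windows).
quota.window.num=11
quota.window.size.seconds=1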

remote.log.index.file.cache.total.size.bytes

The total size of the space allocated to store index files fetched from remote storage in the local storage.

Type: long
Default: 1073741824 (1 gibibyte)
Valid Values: [1,…]
Importance: low
Update Mode: cluster-wide

replication.quota.window.num

The number of samples to retain in memory for replication quotas.

Type: int
Default: 11
Valid Values: [1,…]
Importance: low
Update Mode: read-only

replication.quota.window.size.seconds

The time span of each sample for replication quotas.

Type: int
Default: 1
Valid Values: [1,…]
Importance: low
Update Mode: read-only

sasl.login.connect.timeout.ms

The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

Type: int
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

sasl.login.read.timeout.ms

The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

Type: int
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

sasl.login.retry.backoff.max.ms

The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

Type: long
Default: 10000 (10 seconds)
Valid Values:  
Importance: low
Update Mode: read-only

sasl.login.retry.backoff.ms

The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

Type: long
Default: 100
Valid Values:  
Importance: low
Update Mode: read-only
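
A sketch of the resulting retry behavior with explicit values; the waits shown in the comment follow from the exponential backoff algorithm described above:

# Failed OAUTHBEARER logins are retried after waits of approximately
# 100 ms, 200 ms, 400 ms, 800 ms, ... capped at 10000 ms.
sasl.login.retry.backoff.ms=100
sasl.login.retry.backoff.max.ms=10000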

sasl.oauthbearer.clientassertion.audience

The value to be added to the audience claim (“aud”) included in the client assertion created locally.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.clientassertion.expiration

The (optional) expiration time, in minutes, for the client assertion. The default value is 5 minutes.

Type: int
Default: 5
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.clientassertion.include.jti.claim

The (optional) setting for specifying whether to include the “jti” claim in the client assertion.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.clientassertion.include.nbf.claim

The (optional) setting for specifying whether to include the “not before” (nbf) claim. If set to true, an nbf claim set to (current time - 1 minute) is included in the client assertion.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.clientassertion.issuer

The value to be added to the issuer claim (“iss”) included in the client assertion created locally.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.clientassertion.location

The location (path on disk) of a signed client assertion. The assertion is passed directly to the token endpoint.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.clientassertion.private.key

The location of the PEM-format private key used to sign the client assertion. Required for local client assertion creation.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.clientassertion.private.key.passphrase

The passphrase required to decrypt the private key (for a PKCS#8-format PEM file).

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.clientassertion.subject

The value to be added to the subject claim (“sub”) included in the client assertion created locally.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only
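
The sasl.oauthbearer.clientassertion.* properties are used together when the client assertion is created and signed locally. A minimal sketch; every value below is a placeholder for your identity provider’s requirements:

# Sign the assertion locally with this PEM private key (placeholder path).
sasl.oauthbearer.clientassertion.private.key=/etc/kafka/assertion-key.pem
sasl.oauthbearer.clientassertion.issuer=my-client-id
sasl.oauthbearer.clientassertion.subject=my-client-id
sasl.oauthbearer.clientassertion.audience=https://idp.example.com/token
sasl.oauthbearer.clientassertion.expiration=5
sasl.oauthbearer.clientassertion.include.jti.claim=true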

sasl.oauthbearer.clock.skew.seconds

The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

Type: int
Default: 30
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.expected.audience

The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The broker inspects the standard OAuth “aud” claim and, if this value is set, checks it for an exact match against one of the expected audiences. If there is no match, the broker rejects the JWT and authentication fails.

Type: list
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.expected.issuer

The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth “iss” claim and if this value is set, the broker will match it exactly against what is in the JWT’s “iss” claim. If there is no match, the broker will reject the JWT and authentication will fail.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.jwks.endpoint.refresh.ms

The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

Type: long
Default: 3600000 (1 hour)
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

Type: long
Default: 10000 (10 seconds)
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

Type: long
Default: 100
Valid Values:  
Importance: low
Update Mode: read-only
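
A sketch of broker-side JWT validation combining the properties above; the endpoint URL, issuer, and audiences are placeholders, and sasl.oauthbearer.jwks.endpoint.url is a related property assumed here:

# Fetch signing keys hourly and validate issuer and audience claims.
sasl.oauthbearer.jwks.endpoint.url=https://idp.example.com/.well-known/jwks.json
sasl.oauthbearer.jwks.endpoint.refresh.ms=3600000
sasl.oauthbearer.expected.issuer=https://idp.example.com/
sasl.oauthbearer.expected.audience=kafka-broker,kafka-cluster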

sasl.oauthbearer.scope.claim.name

The OAuth claim for the scope is often named “scope”, but this (optional) setting can provide a different name to use for the scope included in the JWT payload’s claims if the OAuth/OIDC provider uses a different name for that claim.

Type: string
Default: scope
Valid Values:  
Importance: low
Update Mode: read-only

sasl.oauthbearer.sub.claim.name

The OAuth claim for the subject is often named “sub”, but this (optional) setting can provide a different name to use for the subject included in the JWT payload’s claims if the OAuth/OIDC provider uses a different name for that claim.

Type: string
Default: sub
Valid Values:  
Importance: low
Update Mode: read-only

security.providers

A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

ssl.allow.dn.changes

Indicates whether changes to the certificate distinguished name are allowed during a dynamic reconfiguration of certificates.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

ssl.allow.san.changes

Indicates whether changes to the certificate subject alternative names are allowed during a dynamic reconfiguration of certificates.

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

ssl.endpoint.identification.algorithm

The endpoint identification algorithm to validate server hostname using server certificate.

Type: string
Default: https
Valid Values:  
Importance: low
Update Mode: per-broker

ssl.engine.factory.class

The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one.

Type: class
Default: null
Valid Values:  
Importance: low
Update Mode: per-broker
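
For example, to enable the expired-certificate logging behavior described above:

# Log the common name of expired client certificates at INFO level.
ssl.engine.factory.class=org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory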

ssl.principal.mapping.rules

A list of rules for mapping the distinguished name from the client certificate to a short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, the distinguished name of the X.500 certificate is the principal. For more details on the format, see security authorization and acls. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.

Type: string
Default: DEFAULT
Valid Values:  
Importance: low
Update Mode: read-only
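
A sketch of a mapping rule that extracts the common name (CN) from the client certificate’s distinguished name and falls back to the full DN; the DN shape in the pattern is an assumption:

# "CN=kafka-client,OU=Users,O=Example" maps to principal "kafka-client".
ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=.*$/$1/,DEFAULT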

ssl.secure.random.implementation

The SecureRandom PRNG implementation to use for SSL cryptography operations.

Type: string
Default: null
Valid Values:  
Importance: low
Update Mode: per-broker

telemetry.max.bytes

The maximum size (after compression if compression is used) of telemetry metrics pushed from a client to the broker. The default value is 1048576 (1 MB).

Type: int
Default: 1048576 (1 mebibyte)
Valid Values: [1,…]
Importance: low
Update Mode: read-only

throughput.quota.window.num

The number of samples to retain in memory for produce and fetch quotas.

Type: int
Default: 11
Valid Values: [1,…]
Importance: low
Update Mode: read-only

token.impersonation.validation

Indicates whether impersonation token validation should be enabled. If enabled, the broker will validate the incoming certificate subject against the cp_proxy claim in the impersonation token.

Type: boolean
Default: true
Valid Values:  
Importance: low
Update Mode: read-only

transaction.abort.timed.out.transaction.cleanup.interval.ms

The interval at which to roll back transactions that have timed out.

Type: int
Default: 10000 (10 seconds)
Valid Values: [1,…]
Importance: low
Update Mode: read-only

transaction.partition.verification.enable

Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition.

Type: boolean
Default: true
Valid Values:  
Importance: low
Update Mode: cluster-wide

transaction.remove.expired.transaction.cleanup.interval.ms

The interval at which to remove transactions that have expired because transactional.id.expiration.ms has passed.

Type: int
Default: 3600000 (1 hour)
Valid Values: [1,…]
Importance: low
Update Mode: read-only

zookeeper.acl.change.notification.expiration.ms

Deletes ACL change notification paths that were created before this time.

Type: int
Default: 900000 (15 minutes)
Valid Values:  
Importance: low
Update Mode: read-only

zookeeper.ssl.cipher.suites

Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word “ciphersuites”). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.

Type: list
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

zookeeper.ssl.crl.enable

Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.crl system property (note the shorter name).

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

zookeeper.ssl.enabled.protocols

Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabledProtocols system property (note the camelCase). The default value of null means the enabled protocol will be the value of the zookeeper.ssl.protocol configuration property.

Type: list
Default: null
Valid Values:  
Importance: low
Update Mode: read-only

zookeeper.ssl.endpoint.identification.algorithm

Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process. A (case-insensitive) value of “https” means ZooKeeper hostname verification is enabled, and an explicit blank value means it is disabled (disabling it is recommended only for testing purposes). An explicit value overrides any “true” or “false” value set via the zookeeper.ssl.hostnameVerification system property (note the different name and values: true implies https and false implies blank).

Type: string
Default: HTTPS
Valid Values:  
Importance: low
Update Mode: read-only

zookeeper.ssl.ocsp.enable

Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.ocsp system property (note the shorter name).

Type: boolean
Default: false
Valid Values:  
Importance: low
Update Mode: read-only

zookeeper.ssl.protocol

Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zookeeper.ssl.protocol system property.

Type: string
Default: TLSv1.2
Valid Values:  
Importance: low
Update Mode: read-only
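
A sketch of the ZooKeeper client TLS settings on the broker side; the truststore path is a placeholder, and zookeeper.ssl.client.enable and zookeeper.ssl.truststore.location are related properties assumed here:

# Enable TLS to ZooKeeper with hostname verification and TLSv1.2.
zookeeper.ssl.client.enable=true
zookeeper.ssl.protocol=TLSv1.2
zookeeper.ssl.endpoint.identification.algorithm=HTTPS
zookeeper.ssl.truststore.location=/etc/kafka/zk-truststore.jks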

Note

This website includes content developed at the Apache Software Foundation under the terms of the Apache License v2.