Kafka Broker Configurations

This topic provides the configuration parameters available for Confluent Platform. The Apache Kafka® broker configuration parameters are organized by importance, ranked from high to low.

For details on Kafka internals, see the interactive Kafka internals diagram. To learn about running Kafka without ZooKeeper, see the KRaft documentation.

  • advertised.listeners

    Listeners to publish to ZooKeeper for clients to use, if different from the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address.
    Also unlike listeners, there can be duplicated ports in this property, so that one listener can be configured to advertise another listener's address. This can be useful in some cases where external load balancers are used.
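    For example, a broker behind a cloud load balancer or NAT might bind to all interfaces while advertising a routable name (a sketch; the hostname below is a placeholder):
    listeners = PLAINTEXT://0.0.0.0:9092
    advertised.listeners = PLAINTEXT://broker1.example.com:9092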

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:per-broker
  • auto.create.topics.enable

    Enable auto creation of topics on the server

    Type:boolean
    Default:true
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • auto.leader.rebalance.enable

    Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by `leader.imbalance.check.interval.seconds`. If the leader imbalance exceeds `leader.imbalance.per.broker.percentage`, leader rebalance to the preferred leader for partitions is triggered.

    Type:boolean
    Default:true
    Valid Values:
    Importance:high
    Update Mode:read-only
  • background.threads

    The number of threads to use for various background processing tasks

    Type:int
    Default:10
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • broker.id

    The broker ID for this server. If unset, a unique broker ID will be generated. To avoid conflicts between ZooKeeper-generated broker IDs and user-configured broker IDs, generated broker IDs start from reserved.broker.max.id + 1.

    Type:int
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:read-only
  • compression.type

    Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.
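    For example, a sketch that makes the broker recompress all topics with zstd regardless of the producer's codec (at the cost of extra broker CPU):
    compression.type=zstd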

    Type:string
    Default:producer
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • confluent.balancer.disk.max.load

    This config specifies the maximum load for disk usage as a proportion of disk capacity. Valid values are between 0 and 1.

    Type:double
    Default:0.85
    Valid Values:[0.0,...,1.0]
    Importance:high
    Update Mode:read-only
  • confluent.balancer.enable

    This config controls whether the balancer is enabled

    Type:boolean
    Default:false
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • confluent.balancer.heal.broker.failure.threshold.ms

    This config specifies how long the balancer will wait after detecting a broker failure before triggering a balancing action. -1 means that broker failures will not trigger balancing actions

    Type:long
    Default:3600000 (1 hour)
    Valid Values:[-1,...]
    Importance:high
    Update Mode:read-only
  • confluent.balancer.heal.uneven.load.trigger

    Controls what causes the Confluent DataBalancer to start rebalance operations. Acceptable values are ANY_UNEVEN_LOAD and EMPTY_BROKER
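    As an illustrative sketch (values are examples, not recommendations), a cluster that should rebalance on any uneven load while throttling reassignment traffic might combine:
    confluent.balancer.enable=true
    confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
    confluent.balancer.throttle.bytes.per.second=10485760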

    Type:string
    Default:EMPTY_BROKER
    Valid Values:[ANY_UNEVEN_LOAD, EMPTY_BROKER]
    Importance:high
    Update Mode:cluster-wide
  • confluent.balancer.max.replicas

    The replica capacity is the maximum number of replicas the balancer will place on a single broker.

    Type:long
    Default:2147483647
    Valid Values:[0,...]
    Importance:high
    Update Mode:read-only
  • confluent.balancer.network.in.max.bytes.per.second

    This config specifies the upper capacity limit for network incoming bytes per second per broker. The Confluent DataBalancer will attempt to keep incoming data throughput below this limit.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • confluent.balancer.network.out.max.bytes.per.second

    This config specifies the upper capacity limit for network outgoing bytes per second per broker. The Confluent DataBalancer will attempt to keep outgoing data throughput below this limit.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • confluent.balancer.throttle.bytes.per.second

    This config specifies the upper bound for bandwidth in bytes to move replicas around for replica reassignment. A value of -1 disables throttling entirely.

    Type:long
    Default:10485760
    Valid Values:[-2,...]
    Importance:high
    Update Mode:cluster-wide
  • confluent.offsets.topic.placement.constraints

    This configuration is a JSON object that controls the set of brokers (replicas) which will always be allowed to join the ISR, and the set of brokers (observers) which are not allowed to join the ISR. The format of the JSON is:
    {
      "version": 1,
      "replicas": [
        {
          "count": 2,
          "constraints": {"rack": "east-1"}
        },
        {
          "count": 1,
          "constraints": {"rack": "east-2"}
        }
      ],
      "observers": [
        {
          "count": 1,
          "constraints": {"rack": "west-1"}
        }
      ]
    }

    Type:string
    Default:""
    Valid Values:kafka.common.TopicPlacement$TopicPlacementValidator
    Importance:high
    Update Mode:read-only
  • confluent.security.event.logger.authentication.enable

    Enable authentication audit logs

    Type:boolean
    Default:false
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.security.event.logger.enable

    Whether the event logger is enabled

    Type:boolean
    Default:true
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.azure.block.blob.container

    The Azure Block Blob Container to use for tiered storage.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.azure.block.blob.cred.file.path

    The path to the credentials file used to create the Azure Block Blob client. It uses a JSON file with one of the following options:
    - `connectionString` for the target `confluent.tier.azure.block.blob.container`.
    - `azureClientId`, `azureTenantId` and `azureClientSecret` for the target `confluent.tier.azure.block.blob.container`.
    Please refer to Azure documentation for further information. If this property is not specified, the Azure Block Blob client will use the `DefaultAzureCredential` to locate the credentials across several well-known locations.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.azure.block.blob.endpoint

    The Azure Storage Account endpoint, in the format of https://{accountName}.blob.core.windows.net.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.azure.block.blob.prefix

    This prefix will be added to tiered storage objects stored in the target Azure Block Blob Container.

    Type:string
    Default:""
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.gcs.bucket

    The GCS bucket to use for tiered storage.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.gcs.prefix

    This prefix will be added to tiered storage objects stored in GCS.

    Type:string
    Default:""
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.gcs.region

    The GCS region to use for tiered storage.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.local.hotset.bytes

    When tiering is enabled, this configuration controls the maximum size a partition (which consists of log segments) can grow to on broker-local storage before we will discard old log segments to free up space. Log segments retained on broker-local storage are referred to as the "hotset". Segments discarded from local storage could continue to exist in tiered storage and remain available for fetches depending on retention configurations. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic hotset in bytes.

    Type:long
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • confluent.tier.local.hotset.ms

    When tiering is enabled, this configuration controls the maximum time we will retain a log segment on broker-local storage before we will discard it to free up space. Segments discarded from local store could continue to exist in tiered storage and remain available for fetches depending on retention configurations. If set to -1, no time limit is applied.

    Type:long
    Default:86400000 (1 day)
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • confluent.tier.metadata.replication.factor

    The replication factor for the tier metadata topic (set higher to ensure availability).

    Type:short
    Default:3
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • confluent.tier.s3.bucket

    The S3 bucket to use for tiered storage.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.s3.prefix

    This prefix will be added to tiered storage objects stored in S3.

    Type:string
    Default:""
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.tier.s3.region

    The S3 region to use for tiered storage.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • confluent.transaction.state.log.placement.constraints

    This configuration is a JSON object that controls the set of brokers (replicas) which will always be allowed to join the ISR, and the set of brokers (observers) which are not allowed to join the ISR. The format of the JSON is:
    {
      "version": 1,
      "replicas": [
        {
          "count": 2,
          "constraints": {"rack": "east-1"}
        },
        {
          "count": 1,
          "constraints": {"rack": "east-2"}
        }
      ],
      "observers": [
        {
          "count": 1,
          "constraints": {"rack": "west-1"}
        }
      ]
    }

    Type:string
    Default:""
    Valid Values:kafka.common.TopicPlacement$TopicPlacementValidator
    Importance:high
    Update Mode:read-only
  • control.plane.listener.name

    Name of the listener used for communication between the controller and brokers. The broker will use the control.plane.listener.name to locate the endpoint in the listeners list, to listen for connections from the controller. For example, if a broker's config is:
    listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094
    listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL
    control.plane.listener.name = CONTROLLER
    On startup, the broker will start listening on "192.1.1.8:9094" with security protocol "SSL".
    On the controller side, when it discovers a broker's published endpoints through ZooKeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish a connection to the broker.
    For example, if the broker's published endpoints on ZooKeeper are:
    "endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"]
    and the controller's config is:
    listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL
    control.plane.listener.name = CONTROLLER
    then the controller will use "broker1.example.com:9094" with security protocol "SSL" to connect to the broker.
    If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.listener.names

    A comma-separated list of the names of the listeners used by the controller. This is required if running in KRaft mode. The ZK-based controller will not use this configuration.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.quorum.election.backoff.max.ms

    Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps prevent gridlocked elections

    Type:int
    Default:1000 (1 second)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.quorum.election.timeout.ms

    Maximum time in milliseconds to wait without being able to fetch from the leader before triggering a new election

    Type:int
    Default:1000 (1 second)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.quorum.fetch.timeout.ms

    Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters; maximum time without receiving a fetch from a majority of the quorum before asking around to see if there's a new epoch for the leader

    Type:int
    Default:2000 (2 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • controller.quorum.voters

    Map of id/endpoint information for the set of voters in a comma-separated list of `{id}@{host}:{port}` entries. For example: `1@localhost:9092,2@localhost:9093,3@localhost:9094`

    Type:list
    Default:""
    Valid Values:non-empty list
    Importance:high
    Update Mode:read-only
  • delete.topic.enable

    Enables topic deletion. Deleting a topic through the admin tool will have no effect if this config is turned off

    Type:boolean
    Default:true
    Valid Values:
    Importance:high
    Update Mode:read-only
  • leader.imbalance.check.interval.seconds

    The frequency with which the partition rebalance check is triggered by the controller

    Type:long
    Default:300
    Valid Values:
    Importance:high
    Update Mode:read-only
  • leader.imbalance.per.broker.percentage

    The ratio of leader imbalance allowed per broker. The controller will trigger a leader rebalance if it goes above this value per broker. The value is specified as a percentage.

    Type:int
    Default:10
    Valid Values:
    Importance:high
    Update Mode:read-only
  • listeners

    Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set.
    Listener names and port numbers must be unique.
    Specify hostname as 0.0.0.0 to bind to all interfaces.
    Leave hostname empty to bind to default interface.
    Examples of legal listener lists:
    PLAINTEXT://myhost:9092,SSL://:9091
    CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093

    Type:string
    Default:PLAINTEXT://:9092
    Valid Values:
    Importance:high
    Update Mode:per-broker
  • log.dir

    The directory in which the log data is kept (supplemental for log.dirs property)

    Type:string
    Default:/tmp/kafka-logs
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.dirs

    The directories in which the log data is kept. If not set, the value in log.dir is used

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.flush.interval.messages

    The number of messages accumulated on a log partition before messages are flushed to disk

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • log.flush.interval.ms

    The maximum time in ms that a message in any topic is kept in memory before it is flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used

    Type:long
    Default:null
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.flush.offset.checkpoint.interval.ms

    The frequency with which we update the persistent record of the last flush, which acts as the log recovery point

    Type:int
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:high
    Update Mode:read-only
  • log.flush.scheduler.interval.ms

    The frequency in ms that the log flusher checks whether any log needs to be flushed to disk

    Type:long
    Default:9223372036854775807
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.flush.start.offset.checkpoint.interval.ms

    The frequency with which we update the persistent record of log start offset

    Type:int
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:high
    Update Mode:read-only
  • log.retention.bytes

    The maximum size of the log before deleting it

    Type:long
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.retention.hours

    The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property

    Type:int
    Default:168
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.retention.minutes

    The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the value in log.retention.hours is used

    Type:int
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • log.retention.ms

    The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.
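    For example, in the following sketch data is kept for 12 hours; because log.retention.ms takes precedence, the hours setting is ignored (values are placeholders):
    log.retention.ms=43200000
    log.retention.hours=168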

    Type:long
    Default:null
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.roll.hours

    The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property

    Type:int
    Default:168
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • log.roll.jitter.hours

    The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property

    Type:int
    Default:0
    Valid Values:[0,...]
    Importance:high
    Update Mode:read-only
  • log.roll.jitter.ms

    The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used

    Type:long
    Default:null
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.roll.ms

    The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used

    Type:long
    Default:null
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • log.segment.bytes

    The maximum size of a single log file

    Type:int
    Default:1073741824 (1 gibibyte)
    Valid Values:[14,...]
    Importance:high
    Update Mode:cluster-wide
  • log.segment.delete.delay.ms

    The amount of time to wait before deleting a file from the filesystem

    Type:long
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:high
    Update Mode:cluster-wide
  • message.max.bytes

    The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config.
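    For example, a sketch that raises the cluster-wide batch limit to 2 MiB (the corresponding topic-level setting is max.message.bytes, and consumer fetch sizes may need to be raised to match):
    message.max.bytes=2097152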

    Type:int
    Default:1048588
    Valid Values:[0,...]
    Importance:high
    Update Mode:cluster-wide
  • metadata.log.dir

    This configuration determines where we put the metadata log for clusters in KRaft mode. If it is not set, the metadata log is placed in the first log directory from log.dirs.

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • metadata.log.max.record.bytes.between.snapshots

    This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new snapshot.

    Type:long
    Default:20971520
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • metadata.log.segment.bytes

    The maximum size of a single metadata log file.

    Type:int
    Default:1073741824 (1 gibibyte)
    Valid Values:[12,...]
    Importance:high
    Update Mode:read-only
  • metadata.log.segment.ms

    The maximum time before a new metadata log file is rolled out (in milliseconds).

    Type:long
    Default:604800000 (7 days)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • metadata.max.retention.bytes

    The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.

    Type:long
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:read-only
  • metadata.max.retention.ms

    The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.

    Type:long
    Default:604800000 (7 days)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • min.insync.replicas

    When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
    When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.
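    A minimal sketch of the broker side of that scenario (the producer must also set acks=all):
    default.replication.factor=3
    min.insync.replicas=2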

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • node.id

    The node ID associated with the roles this process is playing when `process.roles` is non-empty. This is required configuration when running in KRaft mode.

    Type:int
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:read-only
  • num.io.threads

    The number of threads that the server uses for processing requests, which may include disk I/O

    Type:int
    Default:8
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • num.network.threads

    The number of threads that the server uses for receiving requests from the network and sending responses to the network

    Type:int
    Default:3
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • num.recovery.threads.per.data.dir

    The number of threads per data directory to be used for log recovery at startup and flushing at shutdown

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:high
    Update Mode:cluster-wide
  • num.replica.alter.log.dirs.threads

    The number of threads that can move replicas between log directories, which may include disk I/O

    Type:int
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • num.replica.fetchers

    Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker.

    Type:int
    Default:1
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • offset.metadata.max.bytes

    The maximum size for a metadata entry associated with an offset commit

    Type:int
    Default:4096 (4 kibibytes)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • offsets.commit.required.acks

    The required acks before the commit can be accepted. In general, the default (-1) should not be overridden

    Type:short
    Default:-1
    Valid Values:
    Importance:high
    Update Mode:read-only
  • offsets.commit.timeout.ms

    Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout.

    Type:int
    Default:5000 (5 seconds)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.load.buffer.size

    Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large).

    Type:int
    Default:5242880
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.retention.check.interval.ms

    Frequency at which to check for stale offsets

    Type:long
    Default:600000 (10 minutes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.retention.minutes

    After a consumer group loses all its consumers (i.e. becomes empty) its offsets will be kept for this retention period before getting discarded. For standalone consumers (using manual assignment), offsets will be expired after the time of last commit plus this retention period.

    Type:int
    Default:10080
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.topic.compression.codec

    Compression codec for the offsets topic - compression may be used to achieve "atomic" commits

    Type:int
    Default:0
    Valid Values:
    Importance:high
    Update Mode:read-only
  • offsets.topic.num.partitions

    The number of partitions for the offset commit topic (should not change after deployment)

    Type:int
    Default:50
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.topic.replication.factor

    The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.

    Type:short
    Default:3
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • offsets.topic.segment.bytes

    The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads

    Type:int
    Default:104857600 (100 mebibytes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • process.roles

    The roles that this process plays: 'broker', 'controller', or 'broker,controller' if it is both. This configuration is only applicable for clusters in KRaft (Kafka Raft) mode (instead of ZooKeeper). Leave this config undefined or empty for ZooKeeper clusters.
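    A minimal sketch of a combined-mode KRaft node, assuming placeholder IDs, hostnames, and ports:
    process.roles=broker,controller
    node.id=1
    controller.quorum.voters=1@broker1.example.com:9093,2@broker2.example.com:9093,3@broker3.example.com:9093
    controller.listener.names=CONTROLLER
    listeners=PLAINTEXT://:9092,CONTROLLER://:9093
    listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT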

    Type:list
    Default:""
    Valid Values:[broker, controller]
    Importance:high
    Update Mode:read-only
  • queued.max.requests

    The number of queued requests allowed for the data plane before blocking the network threads

    Type:int
    Default:500
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • replica.fetch.min.bytes

    Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms (broker config).

    Type:int
    Default:1
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.fetch.wait.max.ms

    The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low-throughput topics

    Type:int
    Default:500
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.high.watermark.checkpoint.interval.ms

    The frequency with which the high watermark is saved out to disk

    Type:long
    Default:5000 (5 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.lag.time.max.ms

    If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.socket.receive.buffer.bytes

    The socket receive buffer for network requests

    Type:int
    Default:65536 (64 kibibytes)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • replica.socket.timeout.ms

    The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms

    Type:int
    Default:30000 (30 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:30000 (30 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • sasl.mechanism.controller.protocol

    SASL mechanism used for communication with controllers. Default is GSSAPI.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:high
    Update Mode:read-only
  • socket.receive.buffer.bytes

    The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

    Type:int
    Default:102400 (100 kibibytes)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • socket.request.max.bytes

    The maximum number of bytes in a socket request

    Type:int
    Default:104857600 (100 mebibytes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • socket.send.buffer.bytes

    The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

    Type:int
    Default:102400 (100 kibibytes)
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • transaction.max.timeout.ms

    The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, then the broker will return an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which can stall consumers reading from topics included in the transaction.

    Type:int
    Default:900000 (15 minutes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.load.buffer.size

    Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large).

    Type:int
    Default:5242880
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.min.isr

    Overridden min.insync.replicas config for the transaction topic.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.num.partitions

    The number of partitions for the transaction topic (should not change after deployment).

    Type:int
    Default:50
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.replication.factor

    The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.

    Type:short
    Default:3
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transaction.state.log.segment.bytes

    The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads

    Type:int
    Default:104857600 (100 mebibytes)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • transactional.id.expiration.ms

    The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. This setting also influences producer id expiration - producer ids are expired once this time has elapsed after the last write with the given producer id. Note that producer ids may expire sooner if the last write from the producer id is deleted due to the topic's retention settings.

    Type:int
    Default:604800000 (7 days)
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • unclean.leader.election.enable

    Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss

    Type:boolean
    Default:false
    Valid Values:
    Importance:high
    Update Mode:cluster-wide
  • zookeeper.connect

    Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3.
    The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
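    For example (hostnames and the chroot path are placeholders):
    zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka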

    Type:string
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • zookeeper.connection.timeout.ms

    The max time that the client waits to establish a connection to zookeeper. If not set, the value in zookeeper.session.timeout.ms is used

    Type:int
    Default:null
    Valid Values:
    Importance:high
    Update Mode:read-only
  • zookeeper.max.in.flight.requests

    The maximum number of unacknowledged requests the client will send to Zookeeper before blocking.

    Type:int
    Default:10
    Valid Values:[1,...]
    Importance:high
    Update Mode:read-only
  • zookeeper.session.timeout.ms

    Zookeeper session timeout

    Type:int
    Default:18000 (18 seconds)
    Valid Values:
    Importance:high
    Update Mode:read-only
  • zookeeper.set.acl

    Set client to use secure ACLs

    Type:boolean
    Default:false
    Valid Values:
    Importance:high
    Update Mode:read-only
  • broker.heartbeat.interval.ms

    The length of time in milliseconds between broker heartbeats. Used when running in KRaft mode.

    Type:int
    Default:2000 (2 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • broker.id.generation.enable

    Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • broker.rack

    Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: `RACK1`, `us-east-1d`

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • broker.session.timeout.ms

    The length of time in milliseconds that a broker lease lasts if no heartbeats are made. Used when running in KRaft mode.

    Type:int
    Default:9000 (9 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • confluent.balancer.exclude.topic.names

    This config accepts a list of topic names that will be excluded from rebalancing. For example, 'confluent.balancer.exclude.topic.names=[topic1, topic2]'

    Type:list
    Default:""
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • confluent.balancer.exclude.topic.prefixes

    This config accepts a list of topic prefixes that will be excluded from rebalancing. For example, 'confluent.balancer.exclude.topic.prefixes=[prefix1, prefix2]' would exclude topics 'prefix1-suffix1', 'prefix1-suffix2', 'prefix2-suffix3', but not 'abc-prefix1-xyz' and 'def-prefix2'

    Type:list
    Default:""
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • confluent.cluster.link.allow.config.providers

    Allow cluster link to use config providers to resolve the cluster link configurations.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • confluent.cluster.link.enable

    Enable cluster linking feature.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • confluent.cluster.link.fetch.response.min.bytes

    Minimum fetch response size used by cluster link fetchers if the total size is limited by 'confluent.cluster.link.fetch.response.total.bytes'.

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:medium
    Update Mode:cluster-wide
  • confluent.cluster.link.fetch.response.total.bytes

    Maximum amount of data fetched by all cluster link fetchers in a broker. If total 'replica.fetch.response.max.bytes' for all fetchers on the broker exceeds this value, all cluster link fetchers reduce their response size to meet this limit. Minimum value for each fetcher can be configured using 'confluent.cluster.link.fetch.response.min.bytes'.

    Type:int
    Default:2147483647
    Valid Values:[1,...]
    Importance:medium
    Update Mode:cluster-wide
  • confluent.cluster.link.io.max.bytes.per.second

    A long value representing the upper bound (bytes/sec) on throughput for cluster link replication. It is suggested that the limit be kept above 1MB/s for accurate behaviour.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:medium
    Update Mode:cluster-wide
  • confluent.cluster.link.metadata.topic.create.retry.delay.ms

    The retry delay in milliseconds when the attempt to create the cluster linking metadata topic fails

    Type:long
    Default:1000 (1 second)
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • confluent.cluster.link.metadata.topic.enable

    Whether the cluster link metadata topic should be created and/or used.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • confluent.cluster.link.metadata.topic.min.isr

    The minimum number of in sync replicas for the cluster linking metadata topic

    Type:short
    Default:2
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • confluent.cluster.link.metadata.topic.partitions

    Number of partitions for the cluster linking metadata topic

    Type:int
    Default:50
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • confluent.cluster.link.metadata.topic.replication.factor

    Replication factor for the cluster linking metadata topic

    Type:short
    Default:3
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • confluent.tier.archiver.num.threads

    The size of the thread pool used for tiering data to remote storage. This thread pool is also used to garbage collect data in tiered storage that has been deleted.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • confluent.tier.backend

    Tiered storage backend to use

    Type:string
    Default:""
    Valid Values:[S3, GCS, AzureBlockBlob, mock, ]
    Importance:medium
    Update Mode:read-only
  • confluent.tier.enable

    Allow tiering for topic(s). This enables tiering and fetching of data to and from the configured remote storage.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • confluent.tier.feature

    Feature flag that enables components related to tiered storage. This must be enabled before tiering can be enabled using the `confluent.tier.enable` property.
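    As an illustrative sketch (bucket name and region are placeholders), enabling tiered storage to S3 combines this feature flag with the backend settings described elsewhere in this topic:
    confluent.tier.feature=true
    confluent.tier.enable=true
    confluent.tier.backend=S3
    confluent.tier.s3.bucket=my-tiered-storage-bucket
    confluent.tier.s3.region=us-east-1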

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • confluent.tier.fetcher.num.threads

    The size of the thread pool used by the TierFetcher. Roughly corresponds to number of concurrent fetch requests that can be served from tiered storage.

    Type:int
    Default:4
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • confluent.tier.max.partition.fetch.bytes.override

    For tier fetches, this configuration allows overriding the consumer's `max.partition.fetch.bytes` configuration. When fetching tiered data, we will use the maximum of the consumer's configuration and this override. Setting this to a value higher than that of the consumer's could improve batching and effective throughput of tiered fetches. The override is disabled when set to 0.

    Type:int
    Default:0
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • confluent.tier.metadata.bootstrap.servers

    The bootstrap servers used to read from and write to the tier metadata topic. If this is not configured, the configured inter-broker listener would be used.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • connections.max.idle.ms

    Idle connections timeout: the server socket processor threads close the connections that idle more than this

    Type:long
    Default:600000 (10 minutes)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • connections.max.reauth.ms

    When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000

    Type:long
    Default:0
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controlled.shutdown.enable

    Enable controlled shutdown of the server

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controlled.shutdown.max.retries

    Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens

    Type:int
    Default:3
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controlled.shutdown.retry.backoff.ms

    Before each retry, the system needs time to recover from the state that caused the previous failure (Controller fail over, replica lag etc). This config determines the amount of time to wait before retrying.

    Type:long
    Default:5000 (5 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controller.quorum.append.linger.ms

    The duration in milliseconds that the leader will wait for writes to accumulate before flushing them to disk.

    Type:int
    Default:25
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controller.quorum.request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:2000 (2 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • controller.socket.timeout.ms

    The socket timeout for controller-to-broker channels

    Type:int
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • default.replication.factor

    The default replication factor for automatically created topics

    Type:int
    Default:1
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • delegation.token.expiry.time.ms

    The token validity time in milliseconds before the token needs to be renewed. The default value is 1 day.

    Type:long
    Default:86400000 (1 day)
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • delegation.token.master.key

    DEPRECATED: An alias for delegation.token.secret.key, which should be used instead of this config.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • delegation.token.max.lifetime.ms

    The token has a maximum lifetime beyond which it cannot be renewed anymore. The default value is 7 days.

    Type:long
    Default:604800000 (7 days)
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • delegation.token.secret.key

    Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If the key is not set or set to empty string, brokers will disable the delegation token support.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • delete.records.purgatory.purge.interval.requests

    The purge interval (in number of requests) of the delete records request purgatory

    Type:int
    Default:1
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • fetch.max.bytes

    The maximum number of bytes we will return for a fetch request. Must be at least 1024.

    Type:int
    Default:57671680 (55 mebibytes)
    Valid Values:[1024,...]
    Importance:medium
    Update Mode:cluster-wide
  • fetch.purgatory.purge.interval.requests

    The purge interval (in number of requests) of the fetch request purgatory

    Type:int
    Default:1000
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • group.initial.rebalance.delay.ms

    The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins.

    Type:int
    Default:3000 (3 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • group.max.session.timeout.ms

    The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

    Type:int
    Default:1800000 (30 minutes)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • group.max.size

    The maximum number of consumers that a single consumer group can accommodate.

    Type:int
    Default:2147483647
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • group.min.session.timeout.ms

    The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.

    Type:int
    Default:6000 (6 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • initial.broker.registration.timeout.ms

    When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the broker process.

    Type:int
    Default:60000 (1 minute)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • inter.broker.listener.name

    Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and security.inter.broker.protocol properties at the same time.
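    For example, a broker with separate internal and external listeners might route replication traffic over the internal one (listener names, hosts, and protocols are placeholders):
    listeners=INTERNAL://:9092,EXTERNAL://broker1.example.com:9093
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
    inter.broker.listener.name=INTERNAL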

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • inter.broker.protocol.version

    Specify which version of the inter-broker protocol will be used.
    This is typically bumped after all brokers have been upgraded to a new version.
    Some examples of valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1. Check ApiVersion for the full list.

    Type:string
    Default:3.1-IV0
    Valid Values:[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0]
    Importance:medium
    Update Mode:read-only
  • log.cleaner.backoff.ms

    The amount of time to sleep when there are no logs to clean

    Type:long
    Default:15000 (15 seconds)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.dedupe.buffer.size

    The total memory used for log deduplication across all cleaner threads

    Type:long
    Default:134217728
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.delete.retention.ms

    Controls how long delete records and transaction markers are retained after they are eligible for deletion. This is used to ensure that consumers which are concurrently reading the log have an opportunity to read these records before they are removed. For example, `read_committed` consumers rely on reading transaction markers in order to detect the boundaries of each transaction. By delaying deletion, it is unlikely for a consumer to read part of a transaction before the corresponding marker is removed. Note that when the value is 0, there will be no delay before these records are removed. This should be reserved for special situations which already protect against concurrent reads while cleaning is ongoing.

    Type:long
    Default:86400000 (1 day)
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.enable

    Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • log.cleaner.io.buffer.load.factor

    Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions

    Type:double
    Default:0.9
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.io.buffer.size

    The total memory used for log cleaner I/O buffers across all cleaner threads

    Type:int
    Default:524288
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.io.max.bytes.per.second

    The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average

    Type:double
    Default:1.7976931348623157E308
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.max.compaction.lag.ms

    The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.

    Type:long
    Default:9223372036854775807
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.min.cleanable.ratio

    The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period.

    Type:double
    Default:0.5
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.min.compaction.lag.ms

    The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

    Type:long
    Default:0
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.cleaner.threads

    The number of background threads to use for log cleaning

    Type:int
    Default:1
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.cleanup.policy

    The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact"
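    For example, a sketch that both compacts and deletes segments by default:
    log.cleanup.policy=compact,delete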

    Type:list
    Default:delete
    Valid Values:[compact, delete]
    Importance:medium
    Update Mode:cluster-wide
  • log.deletion.max.segments.per.run

    The maximum number of eligible segments that can be deleted during each check

    Type:int
    Default:2147483647
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.index.interval.bytes

    The interval with which we add an entry to the offset index

    Type:int
    Default:4096 (4 kibibytes)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.index.size.max.bytes

    The maximum size in bytes of the offset index

    Type:int
    Default:10485760 (10 mebibytes)
    Valid Values:[4,...]
    Importance:medium
    Update Mode:cluster-wide
  • log.message.format.version

    Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0; check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.

    Type:string
    Default:3.0-IV1
    Valid Values:[0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0]
    Importance:medium
    Update Mode:read-only
  • log.message.timestamp.difference.max.ms

    The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling.

    Type:long
    Default:9223372036854775807
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.message.timestamp.type

    Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`

    Type:string
    Default:CreateTime
    Valid Values:[CreateTime, LogAppendTime]
    Importance:medium
    Update Mode:cluster-wide
  • log.preallocate

    Whether to preallocate the file when creating a new segment. If you are using Kafka on Windows, you probably need to set this to true.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • log.retention.check.interval.ms

    The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • max.connection.creation.rate

    The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connection.creation.rate. The broker-wide connection rate limit should be configured based on broker capacity, while listener limits should be configured based on application requirements. New connections will be throttled if either the listener or the broker limit is reached, with the exception of the inter-broker listener. Connections on the inter-broker listener will be throttled only when the listener-level rate limit is reached.

    Type:int
    Default:2147483647
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
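
    As a sketch only, the broker-wide limit and a listener-level limit can be combined in the broker properties; the listener name EXTERNAL and the numeric rates below are illustrative assumptions, not recommended values:

    max.connection.creation.rate=50
    listener.name.external.max.connection.creation.rate=20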
  • max.connections

    The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connections. The broker-wide limit should be configured based on broker capacity, while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if the broker-wide limit is reached. The least recently used connection on another listener will be closed in this case.

    Type:int
    Default:2147483647
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
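
    A minimal sketch combining the broker-wide connection limit with a listener-level limit and the per-ip settings described in the next entries; the listener name EXTERNAL, the hostname, and the counts are illustrative assumptions:

    max.connections=10000
    listener.name.external.max.connections=8000
    max.connections.per.ip=500
    max.connections.per.ip.overrides=trusted-gateway.example.com:2000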
  • max.connections.per.ip

    The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached.

    Type:int
    Default:2147483647
    Valid Values:[0,...]
    Importance:medium
    Update Mode:cluster-wide
  • max.connections.per.ip.overrides

    A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName:100,127.0.0.1:200"

    Type:string
    Default:""
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • max.incremental.fetch.session.cache.slots

    The maximum number of incremental fetch sessions that we will maintain.

    Type:int
    Default:1000
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • num.partitions

    The default number of log partitions per topic

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:medium
    Update Mode:cluster-wide
  • password.encoder.old.secret

    The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when broker starts up.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • password.encoder.secret

    The secret used for encoding dynamically configured passwords for this broker.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
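
    For illustration, rotating the encoder secret across a broker restart pairs the two properties above; both secret strings below are placeholders:

    password.encoder.secret=new-broker-secret
    password.encoder.old.secret=previous-broker-secret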
  • principal.builder.class

    The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal will be derived using the rules defined by ssl.principal.mapping.rules applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS.

    Type:class
    Default:org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • producer.purgatory.purge.interval.requests

    The purge interval (in number of requests) of the producer request purgatory

    Type:int
    Default:1000
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • queued.max.request.bytes

    The number of queued bytes allowed before no more requests are read

    Type:long
    Default:-1
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • replica.fetch.backoff.ms

    The amount of time to sleep when fetch partition error occurs.

    Type:int
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • replica.fetch.max.bytes

    The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum; if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).

    Type:int
    Default:1048576 (1 mebibyte)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • replica.fetch.response.max.bytes

    Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).

    Type:int
    Default:10485760 (10 mebibytes)
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • replica.selector.class

    The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • reserved.broker.max.id

    Max number that can be used for a broker.id

    Type:int
    Default:1000
    Valid Values:[0,...]
    Importance:medium
    Update Mode:read-only
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.enabled.mechanisms

    The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.

    Type:list
    Default:GSSAPI
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.jaas.config

    JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
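
    A hedged sketch of a broker-side JAAS entry for a SASL_SSL listener using the PLAIN mechanism; the usernames and passwords are placeholders, and a complete SASL setup also requires listener and mechanism configuration not shown here:

    listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="admin" \
      password="admin-secret" \
      user_admin="admin-secret" \
      user_alice="alice-secret";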
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.principal.to.local.rules

    A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see security authorization and acls. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.

    Type:list
    Default:DEFAULT
    Valid Values:
    Importance:medium
    Update Mode:per-broker
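
    As an illustrative example only, the following rule strips the realm from principals in a hypothetical EXAMPLE.COM realm and falls back to the default mapping for everything else:

    sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//,DEFAULT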
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.mechanism.inter.broker.protocol

    SASL mechanism used for inter-broker communication. Default is GSSAPI.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to log in based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
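
    A minimal sketch of the OAUTHBEARER endpoint settings named above, assuming a hypothetical identity provider at idp.example.com; this is a fragment, not a complete SASL/OAUTHBEARER setup, and the URLs are placeholders:

    sasl.enabled.mechanisms=OAUTHBEARER
    sasl.oauthbearer.token.endpoint.url=https://idp.example.com/oauth2/token
    sasl.oauthbearer.jwks.endpoint.url=https://idp.example.com/oauth2/jwks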
  • sasl.server.callback.handler.class

    The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • sasl.server.max.receive.size

    The maximum receive size allowed before and during initial SASL authentication. The default receive size is 512KB. GSSAPI limits requests to 64K, but we allow up to 512KB by default for custom SASL mechanisms. In practice, PLAIN, SCRAM and OAUTH mechanisms can use much smaller limits.

    Type:int
    Default:524288
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • security.inter.broker.protocol

    Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and inter.broker.listener.name properties at the same time.

    Type:string
    Default:PLAINTEXT
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • socket.listen.backlog.size

    The maximum number of pending connections on the socket. On Linux, you may also need to configure the `somaxconn` and `tcp_max_syn_backlog` kernel parameters accordingly for the configuration to take effect.

    Type:int
    Default:50
    Valid Values:[1,...]
    Importance:medium
    Update Mode:read-only
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:""
    Valid Values:
    Importance:medium
    Update Mode:cluster-wide
  • ssl.client.auth

    Configures the Kafka broker to request client authentication. The following settings are common:

    • ssl.client.auth=required Client authentication is required.
    • ssl.client.auth=requested Client authentication is optional. Unlike required, with this option the client can choose not to provide authentication information about itself.
    • ssl.client.auth=none Client authentication is not needed.

    Type:string
    Default:none
    Valid Values:[required, requested, none]
    Importance:medium
    Update Mode:per-broker
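
    For illustration, requiring mutual TLS combines this setting with the keystore and truststore properties described later in this list; the paths and passwords are placeholders:

    ssl.client.auth=required
    ssl.keystore.location=/var/private/ssl/broker.keystore.jks
    ssl.keystore.password=keystore-secret
    ssl.truststore.location=/var/private/ssl/broker.truststore.jks
    ssl.truststore.password=truststore-secret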
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.

    Type:list
    Default:TLSv1.2
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. This is required for clients only if two-way authentication is configured.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.location

    The location of the key store file. This is optional for clients and can be used for two-way authentication for the client.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.password

    The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.keystore.type

    The file format of the key store file. This is optional for client.

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.2
    Valid Values:
    Importance:medium
    Update Mode:per-broker
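
    As a sketch of how ssl.protocol and ssl.enabled.protocols interact, a broker that should accept only TLSv1.3 connections (assuming Java 11 or newer) could be configured as:

    ssl.protocol=TLSv1.3
    ssl.enabled.protocols=TLSv1.3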
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • ssl.truststore.type

    The file format of the trust store file.

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
    Update Mode:per-broker
  • zookeeper.clientCnxnSocket

    Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named zookeeper.clientCnxnSocket system property.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.client.enable

    Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure system property (note the different name). Defaults to false if neither is set; when true, zookeeper.clientCnxnSocket must be set (typically to org.apache.zookeeper.ClientCnxnSocketNetty); other values to set may include zookeeper.ssl.cipher.suites, zookeeper.ssl.crl.enable, zookeeper.ssl.enabled.protocols, zookeeper.ssl.endpoint.identification.algorithm, zookeeper.ssl.keystore.location, zookeeper.ssl.keystore.password, zookeeper.ssl.keystore.type, zookeeper.ssl.ocsp.enable, zookeeper.ssl.protocol, zookeeper.ssl.truststore.location, zookeeper.ssl.truststore.password, zookeeper.ssl.truststore.type

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
    Update Mode:read-only
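
    A minimal sketch of TLS connectivity to ZooKeeper using the properties named in this entry; the truststore path and password are placeholders:

    zookeeper.ssl.client.enable=true
    zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
    zookeeper.ssl.truststore.location=/var/private/ssl/zookeeper.truststore.jks
    zookeeper.ssl.truststore.password=zookeeper-truststore-secret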
  • zookeeper.ssl.keystore.location

    Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.location system property (note the camelCase).

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.keystore.password

    Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.password system property (note the camelCase). Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to ZooKeeper will fail.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.keystore.type

    Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the keystore.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.truststore.location

    Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.location system property (note the camelCase).

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.truststore.password

    Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.password system property (note the camelCase).

    Type:password
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • zookeeper.ssl.truststore.type

    Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the truststore.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
    Update Mode:read-only
  • alter.config.policy.class.name

    The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface.

    Type:class
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • alter.log.dirs.replication.quota.window.num

    The number of samples to retain in memory for alter log dirs replication quotas

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • alter.log.dirs.replication.quota.window.size.seconds

    The time span of each sample for alter log dirs replication quotas

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • authorizer.class.name

    The fully qualified name of a class that implements org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization.

    Type:string
    Default:""
    Valid Values:
    Importance:low
    Update Mode:read-only
  • client.quota.callback.class

    The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, <user>, <client-id>, <user> or <client-id> quotas stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied.

    Type:class
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.authorizer.authority.name

    The DNS name of the authority that this cluster uses to authorize. This should be a name for the cluster hosting metadata topics.

    Type:string
    Default:""
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.cluster.link.replication.quota.mode

    The mode for cluster link quota that applies to 'confluent.cluster.link.io.max.bytes.per.second'. The mode indicates which inbound traffic is counted towards the limit. Valid values are 'CLUSTER_LINK_ONLY' and 'TOTAL_INBOUND'.

    Type:string
    Default:CLUSTER_LINK_ONLY
    Valid Values:
    Importance:low
    Update Mode:cluster-wide
  • confluent.cluster.link.replication.quota.window.num

    The number of samples to retain in memory for cluster link replication quotas

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • confluent.cluster.link.replication.quota.window.size.seconds

    The time span of each sample for cluster link replication quotas

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • confluent.defer.isr.shrink.enable

    Defer ISR shrinking for partitions that only have messages with acks = "all" if shrinking ISR would make partition fall under min ISR.

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.log.placement.constraints

    This configuration is a JSON object that controls the set of brokers (replicas) that will always be allowed to join the ISR, and the set of brokers (observers) that are not allowed to join the ISR. The format of the JSON is:
    {
      "version": 1,
      "replicas": [
        {
          "count": 2,
          "constraints": {"rack": "east-1"}
        },
        {
          "count": 1,
          "constraints": {"rack": "east-2"}
        }
      ],
      "observers": [
        {
          "count": 1,
          "constraints": {"rack": "west-1"}
        }
      ]
    }

    Type:string
    Default:""
    Valid Values:kafka.common.TopicPlacement$TopicPlacementValidator
    Importance:low
    Update Mode:read-only
  • confluent.metadata.server.cluster.registry.clusters

    JSON defining the initial state of the Cluster Registry. This should not be set manually; instead, the Cluster Registry HTTP APIs should be used.

    Type:string
    Default:[]
    Valid Values:
    Importance:low
    Update Mode:cluster-wide
  • confluent.reporters.telemetry.auto.enable

    Auto-enable telemetry on the broker. This will add the telemetry reporter to the broker's 'metric.reporters' property if it is not already present. Disabling this property will prevent Self-balancing Clusters from working properly.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
    Update Mode:cluster-wide
  • confluent.security.event.router.config

    JSON configuration for routing events to topics

    Type:string
    Default:""
    Valid Values:
    Importance:low
    Update Mode:cluster-wide
  • confluent.telemetry.enabled

    True if telemetry data can be reported to Confluent Cloud

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
    Update Mode:cluster-wide
  • confluent.tier.fenced.segment.delete.delay.ms

    Segments uploaded by fenced leaders may still be being uploaded when retention occurs on a newly elected leader. Storage backends like AWS S3 return success for delete operations if the object is not found, so to address this edge case the deletion of segments uploaded by fenced leaders is delayed by confluent.tier.fenced.segment.delete.delay.ms with the assumption that the upload will be completed by the time the deletion occurs.

    Type:long
    Default:600000 (10 minutes)
    Valid Values:[0,...]
    Importance:low
    Update Mode:read-only
  • confluent.tier.gcs.cred.file.path

    The path to the credentials file used to create the GCS client. This uses the default GCS configuration file format; please refer to GCP documentation on how to generate the credentials file. If not specified, the GCS client will be instantiated using the default service account available.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.aws.endpoint.override

    Override picking an S3 endpoint. Normally this is performed automatically by the client.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.cred.file.path

    The path to the credentials file used to create the S3 client. It uses a Java properties file and extracts the AWS access key from the "accessKey" property and AWS secret access key from the "secretKey" property. Please refer to AWS documentation for further information. If this property is not specified, the S3 client will use the `DefaultAWSCredentialsProviderChain` to locate the credentials.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.force.path.style.access

    Configures the client to use path-style access for all requests. This flag is not enabled by default. The default behavior is to detect which access style to use based on the configured endpoint and the bucket being accessed. Setting this flag will result in path-style access being forced for all requests.

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.

    Type:list
    Default:TLSv1.2
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.ssl.key.password

    Key password when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.keyPassword system property (note the camelCase).

    Type:password
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.ssl.keystore.location

    Keystore location when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.keyStore system property (note the camelCase).

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.ssl.keystore.password

    Keystore password when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.keyStorePassword system property (note the camelCase).

    Type:password
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.ssl.keystore.type

    Keystore type when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.keyStoreType system property (note the camelCase).

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise.

    Type:string
    Default:TLSv1.2
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.ssl.truststore.location

    Truststore location when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.trustStore system property (note the camelCase).

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.ssl.truststore.password

    Truststore password when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.trustStorePassword system property (note the camelCase).

    Type:password
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.s3.ssl.truststore.type

    Truststore type when using TLS connectivity to AWS S3. Overrides any explicit value set via the javax.net.ssl.trustStoreType system property (note the camelCase).

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • confluent.tier.topic.delete.backoff.ms

    Maximum amount of time to wait before deleting tiered objects for a deleted partition.

    Type:long
    Default:21600000 (6 hours)
    Valid Values:[1,...]
    Importance:low
    Update Mode:cluster-wide
  • confluent.tier.topic.delete.check.interval.ms

    Frequency at which tiered objects cleanup is run for deleted topics.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[1,...]
    Importance:low
    Update Mode:cluster-wide
  • confluent.tier.topic.delete.max.inprogress.partitions

    Maximum number of partitions deleted from remote storage in the deletion interval defined by `confluent.tier.topic.delete.check.interval.ms`

    Type:int
    Default:100
    Valid Values:[1,...]
    Importance:low
    Update Mode:cluster-wide
  • connection.failed.authentication.delay.ms

    Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout.

    Type:int
    Default:100
    Valid Values:[0,...]
    Importance:low
    Update Mode:read-only
  • controller.quorum.retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.

    Type:int
    Default:20
    Valid Values:
    Importance:low
    Update Mode:read-only
  • controller.quota.window.num

    The number of samples to retain in memory for controller mutation quotas

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • controller.quota.window.size.seconds

    The time span of each sample for controller mutations quotas

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • create.topic.policy.class.name

    The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface.

    Type:class
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • delegation.token.expiry.check.interval.ms

    Scan interval to remove expired delegation tokens.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • enable.fips

    Enable FIPS mode on the server. If FIPS mode is enabled, broker listener security protocols, TLS versions and cipher suites will be validated based on FIPS compliance requirement.

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
    Update Mode:read-only
  • follower.replication.throttled.rate

    A long representing the upper bound (bytes/sec) on inbound replication traffic for follower replicas enumerated in the property follower.replication.throttled.replicas (for each topic). It is suggested that the limit be kept above 1MB/s for accurate behavior.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:low
    Update Mode:cluster-wide
  • follower.replication.throttled.replicas

    Enables throttling for log replication on follower replicas present on this broker. Valid values are 'none' for no throttling to occur and '*' for all replicas to be throttled.

    Type:string
    Default:none
    Valid Values:[none, *]
    Importance:low
    Update Mode:cluster-wide
  • kafka.metrics.polling.interval.secs

    The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations.

    Type:int
    Default:10
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • kafka.metrics.reporters

    A list of classes to use as Yammer metrics custom reporters. The reporters should implement kafka.metrics.KafkaMetricsReporter trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends kafka.metrics.KafkaMetricsReporterMBean trait so that the registered MBean is compliant with the standard MBean convention.

    Type:list
    Default:""
    Valid Values:
    Importance:low
    Update Mode:read-only
  • leader.replication.throttled.rate

    A long representing the upper bound (bytes/sec) on outbound replication traffic for leader replicas enumerated in the property leader.replication.throttled.replicas (for each topic). It is suggested that the limit be kept above 1MB/s for accurate behavior.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:low
    Update Mode:cluster-wide
  • leader.replication.throttled.replicas

    Enables throttling for log replication on leader replicas present on this broker. Valid values are 'none' for no throttling to occur and '*' for all replicas to be throttled.

    Type:string
    Default:none
    Valid Values:[none, *]
    Importance:low
    Update Mode:cluster-wide
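
    For illustration, throttling replication on this broker to roughly 10 MB/s in each direction, for all replicas, could use the two rate and two replicas properties above; the rate is an arbitrary example, and these values are typically applied as dynamic, cluster-wide updates:

    leader.replication.throttled.rate=10485760
    leader.replication.throttled.replicas=*
    follower.replication.throttled.rate=10485760
    follower.replication.throttled.replicas=*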
  • listener.security.protocol.map

    Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: `INTERNAL:SSL,EXTERNAL:SSL`. As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name listener.name.internal.ssl.keystore.location would be set. If the config for the listener name is not set, the config will fall back to the generic config (i.e. ssl.keystore.location). Note that in KRaft a default mapping from the listener names defined by controller.listener.names to PLAINTEXT is assumed if no explicit mapping is provided and no other security protocol is in use.

    Type:string
    Default:PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    Valid Values:
    Importance:low
    Update Mode:per-broker
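
    A hedged sketch of the separation described above, with listener names INTERNAL and EXTERNAL both mapped to SSL and a listener-specific keystore for INTERNAL; the ports and path are illustrative assumptions:

    listeners=INTERNAL://:9093,EXTERNAL://:9094
    listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
    inter.broker.listener.name=INTERNAL
    listener.name.internal.ssl.keystore.location=/var/private/ssl/internal.keystore.jks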
  • log.message.downconversion.enable

    This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
    Update Mode:cluster-wide
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

    Type:list
    Default:""
    Valid Values:
    Importance:low
    Update Mode:cluster-wide
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • metrics.recording.level

    The highest recording level for metrics.

    Type:string
    Default:INFO
    Valid Values:
    Importance:low
    Update Mode:read-only
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • password.encoder.cipher.algorithm

    The Cipher algorithm used for encoding dynamically configured passwords.

    Type:string
    Default:AES/CBC/PKCS5Padding
    Valid Values:
    Importance:low
    Update Mode:read-only
  • password.encoder.iterations

    The iteration count used for encoding dynamically configured passwords.

    Type:int
    Default:4096
    Valid Values:[1024,...]
    Importance:low
    Update Mode:read-only
  • password.encoder.key.length

    The key length used for encoding dynamically configured passwords.

    Type:int
    Default:128
    Valid Values:[8,...]
    Importance:low
    Update Mode:read-only
  • password.encoder.keyfactory.algorithm

    The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • quota.window.num

    The number of samples to retain in memory for client quotas

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • quota.window.size.seconds

    The time span of each sample for client quotas

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • replication.quota.window.num

    The number of samples to retain in memory for replication quotas

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • replication.quota.window.size.seconds

    The time span of each sample for replication quotas

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
    Update Mode:read-only
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
    Update Mode:read-only
  • security.providers

    A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
    Update Mode:per-broker
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory

    Type:class
    Default:null
    Valid Values:
    Importance:low
    Update Mode:per-broker
  • ssl.principal.mapping.rules

    A list of rules for mapping from distinguished name from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, distinguished name of the X.500 certificate will be the principal. For more details on the format please see security authorization and acls. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.

    Type:string
    Default:DEFAULT
    Valid Values:
    Importance:low
    Update Mode:read-only
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
    Update Mode:per-broker
  • transaction.abort.timed.out.transaction.cleanup.interval.ms

    The interval at which to roll back transactions that have timed out

    Type:int
    Default:10000 (10 seconds)
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • transaction.remove.expired.transaction.cleanup.interval.ms

    The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing

    Type:int
    Default:3600000 (1 hour)
    Valid Values:[1,...]
    Importance:low
    Update Mode:read-only
  • zookeeper.acl.change.notification.expiration.ms

    Deletes ACL change notification paths that were created before this time.

    Type:int
    Default:900000 (15 minutes)
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.cipher.suites

    Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.

    Type:list
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.crl.enable

    Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.crl system property (note the shorter name).

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.enabled.protocols

    Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabledProtocols system property (note the camelCase). The default value of null means the enabled protocol will be the value of the zookeeper.ssl.protocol configuration property.

    Type:list
    Default:null
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.endpoint.identification.algorithm

    Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" meaning ZooKeeper hostname verification is enabled and an explicit blank value meaning it is disabled (disabling it is only recommended for testing purposes). An explicit value overrides any "true" or "false" value set via the zookeeper.ssl.hostnameVerification system property (note the different name and values; true implies https and false implies blank).

    Type:string
    Default:HTTPS
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.ocsp.enable

    Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.ocsp system property (note the shorter name).

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.ssl.protocol

    Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zookeeper.ssl.protocol system property.

    Type:string
    Default:TLSv1.2
    Valid Values:
    Importance:low
    Update Mode:read-only
  • zookeeper.sync.time.ms

    How far a ZK follower can be behind a ZK leader

    Type:int
    Default:2000 (2 seconds)
    Valid Values:
    Importance:low
    Update Mode:read-only

Note

This website includes content developed at the Apache Software Foundation under the terms of the Apache License v2.