Configuration Reference for Topics in Confluent Cloud

This page provides a reference of Apache Kafka® topic configurations in Confluent Cloud. Each definition includes default values, relevant minimum and maximum values, and whether parameters are editable. To edit topic settings, see Edit topics.

cleanup.policy

This configuration designates the retention policy to use on log segments. You cannot change cleanup.policy directly from delete to compact, delete. To set cleanup.policy to compact, delete, you must first change the value from delete to compact, and then change it to compact, delete, as shown in the example below.

Note

When switching the cleanup policy from delete to compact, records without keys are considered invalid and are deleted.

  • Default: delete
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes
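
For example, the two-step change can be made with the kafka-configs script from the Kafka command line tools. This is a minimal sketch: the angle-bracket placeholders stand in for your own connection properties and topic name, and the square brackets around compact,delete keep kafka-configs from treating the comma as a separator between configs.

# Step 1: switch the policy from delete to compact.
bin/kafka-configs --bootstrap-server <hostname:port> --command-config <config_file> --entity-type topics --entity-name <topic_name> --alter --add-config cleanup.policy=compact

# Step 2: switch the policy from compact to compact, delete.
bin/kafka-configs --bootstrap-server <hostname:port> --command-config <config_file> --entity-type topics --entity-name <topic_name> --alter --add-config cleanup.policy=[compact,delete]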

compression.type

Specify the final compression type for a given topic.

  • Default: producer
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

default.replication.factor

The default replication factor for automatically created topics.

  • Default: 3
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

delete.retention.ms

The amount of time, in milliseconds, to retain delete tombstone markers for log-compacted topics. Maximum value: 63113904000

  • Default: 86400000
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

file.delete.delay.ms

The time, in milliseconds, to wait before deleting a file from the filesystem.

  • Default: 60000
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

flush.messages

This setting allows specifying an interval at which we will force an fsync of data written to the log.

  • Default: 9223372036854775807
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

flush.ms

This setting allows specifying a time interval at which we will force an fsync of data written to the log.

  • Default: 9223372036854775807
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

index.interval.bytes

This setting controls how frequently Kafka adds an index entry to its offset index.

  • Default: 4096
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

max.message.bytes

The largest record batch size allowed by Kafka (after compression, if compression is enabled). The maximum value for this parameter differs by Kafka cluster type.

  • Default: 2097164
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes
  • Dedicated and Enterprise Kafka clusters maximum value: 20971520
  • Basic and Standard Kafka clusters maximum value: 8388608

max.compaction.lag.ms

The maximum time a message will remain ineligible for compaction in the log. Minimum value: 21600000 ms (6 hours) for Dedicated clusters and 604800000 ms (7 days) for Basic, Standard, and Enterprise clusters.

  • Default: 9223372036854775807
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

message.downconversion.enable

This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.

  • Default: true
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

message.timestamp.after.max.ms

The maximum allowable difference by which the message timestamp can be later than the broker’s timestamp.

  • Default: 9223372036854775807
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

message.timestamp.before.max.ms

The maximum allowable difference by which the message timestamp can be earlier than the broker’s timestamp.

  • Default: 9223372036854775807
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

message.timestamp.difference.max.ms

The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message.

  • Default: 9223372036854775807
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

message.timestamp.type

Define whether the timestamp in the message is message create time or log append time.

  • Default: CreateTime
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

min.cleanable.dirty.ratio

This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled).

  • Default: 0.5
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

min.compaction.lag.ms

The minimum time a message will remain uncompacted in the log.

  • Default: 0
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

min.insync.replicas

This configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. In Confluent Cloud, you can only set min.insync.replicas to 1 or 2; see the creation example below.

  • Default: 2
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes
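
As an illustration, min.insync.replicas can also be set when a topic is created. The following sketch uses the kafka-topics script; the angle-bracket placeholders and the partition count shown are assumptions you would adjust for your own topic.

# Create a topic with min.insync.replicas=2 (replication factor is fixed at 3 in Confluent Cloud).
bin/kafka-topics --bootstrap-server <hostname:port> --command-config <config_file> --create --topic <topic_name> --partitions 6 --replication-factor 3 --config min.insync.replicas=2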

num.partitions

You can change the number of partitions for an existing topic (num.partitions) for all cluster types on a per-topic basis. You can only increase (not decrease) the num.partitions value after you create a topic, and you must make the increase using the kafka-topics script or the API.

Limits vary based on Kafka cluster type. For more information, see Kafka Cluster Types in Confluent Cloud.

  • Default: 6
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

To change the number of partitions, you can use the kafka-topics script that is part of the Kafka command line tools (installed with Confluent Platform) with the following command.

bin/kafka-topics --bootstrap-server <hostname:port> --command-config <config_file> --alter --topic <topic_name> --partitions <number_partitions>

Alternatively, you can use the Kafka REST APIs to change the number of partitions for an existing topic (num.partitions). You will need the REST endpoint and the cluster ID for your cluster to make Kafka REST calls. To find this information with Cloud Console, see Find the REST endpoint address and cluster ID. For more on how to use the REST APIs, see Kafka REST API Quick Start for Confluent Cloud Developers.
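
As a hedged sketch, the partition count can be increased with a PATCH request against the Kafka REST API v3. The REST endpoint, cluster ID, topic name, API key and secret, and target partition count below are placeholders, and you should confirm the exact path and payload against the Kafka REST API reference for your cluster.

# Increase the partition count of an existing topic to 12 (placeholder endpoint, IDs, and credentials).
curl -X PATCH "https://<rest-endpoint>/kafka/v3/clusters/<cluster_id>/topics/<topic_name>" -H "Content-Type: application/json" -u "<api_key>:<api_secret>" -d '{"partitions_count": 12}'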

You can also use the Terraform Provider for Confluent to edit this topic setting.

For more details, sign in to the Confluent Support Portal and search for “How to increase the partition count for a Confluent Cloud hosted topic.”

preallocate

True if we should preallocate the file on disk when creating a new log segment.

  • Default: false
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

retention.bytes

This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the “delete” retention policy.

  • Default: -1
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

retention.ms

This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the “delete” retention policy. Set the value to -1 for Infinite Storage, as shown in the example below.

  • Default: 604800000
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes
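
As a brief sketch, retention.ms can be changed on an existing topic with the kafka-configs script; the angle-bracket placeholders stand in for your own connection properties and topic name, and the value -1 enables Infinite Storage as described above.

# Set retention.ms to -1 for Infinite Storage.
bin/kafka-configs --bootstrap-server <hostname:port> --command-config <config_file> --entity-type topics --entity-name <topic_name> --alter --add-config retention.ms=-1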

segment.bytes

This configuration controls the segment file size for the log. Minimum: 52428800 (50 mebibytes), Maximum: 1073741824 (1 gibibyte)

  • Default: 104857600
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

segment.index.bytes

This configuration controls the size of the index that maps offsets to file positions.

  • Default: 10485760
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

segment.jitter.ms

The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling.

  • Default: 0
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

segment.ms

This configuration controls the period of time after which Kafka will force the log to roll, even if the segment file is not full, to ensure that retention can delete or compact old data. Minimum: 14400000 (4 hours). You can set segment.ms to a value as low as 600000 (10 minutes), but the minimum of 14400000 (4 hours) is still enforced.

  • Default: 604800000
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

unclean.leader.election.enable

Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.

  • Default: false
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

Important

You can change editable settings after topic creation, but the limits that apply at topic creation still apply.