Manage Kafka Cluster and Topic Configuration Settings in Confluent Cloud¶
This topic describes the default Apache Kafka® cluster and topic configuration settings in Confluent Cloud as well as the topic settings that can be edited. For a complete description of all Kafka configurations, see Confluent Platform Configuration Reference.
When editing topic and cluster settings, remember the following:
- You cannot edit cluster settings on Basic and Standard clusters in Confluent Cloud, but you can edit certain topic configurations after a topic has been created. For a list of editable topic settings, see Custom topic settings for all cluster types.
- You can change some configuration settings on Dedicated clusters using the Kafka CLI or REST API. See Change cluster settings for Dedicated clusters.
- You cannot access internal Kafka topics, such as __consumer_offsets, and therefore cannot edit them. Topics that are not accessible do not count toward partition limits or partition billing charges. Topics created by managed connectors and ksqlDB are accessible.
Access cluster settings in the Confluent Cloud Console¶
You can access settings for your clusters with the Cloud Console.
To do so:
Sign in to your Confluent account.
Select an environment and choose a cluster.
In the navigation menu, select Cluster Overview > Cluster settings. The Cluster settings page displays.
Note
You may also see Networking and Security tabs depending on the type of cluster and how it is configured.
Custom topic settings for all cluster types¶
The following table lists default parameter values for custom topics. The table also includes minimum and maximum values where they are relevant, and whether the parameters are editable. To edit topic settings, see Edit a Topic.
Important
The following limitations apply to changing parameter values after a topic has been created:
- The num.partitions value can only be increased, not decreased, after a topic is created, and you must make the increase using the CLI or API.
- All other editable settings can be changed after topic creation, but the limits that applied at topic creation still apply.
Parameter Name | Default | Editable | More Info |
---|---|---|---|
cleanup.policy | delete | Yes | See footnote [*] |
compression.type | producer | No | |
connection.failed.authentication.delay.ms | 5000 | No | |
default.replication.factor | 3 | No | |
delete.retention.ms | 86400000 | Yes | Max: 63113904000 |
file.delete.delay.ms | 60000 | No | |
flush.messages | 9223372036854775807 | No | |
flush.ms | 9223372036854775807 | No | |
group.max.session.timeout.ms | 1200000 | No | |
index.interval.bytes | 4096 | No | |
max.message.bytes (Dedicated) | 2097164 | Yes | Max: 20971520 |
max.message.bytes (Standard, Basic) | 2097164 | Yes | Max: 8388608 |
max.compaction.lag.ms | 9223372036854775807 | Yes | Min: 604800000 (7 days) |
message.downconversion.enable | true | No | |
message.format.version | 3.0-IV1 | No | |
message.timestamp.difference.max.ms | 9223372036854775807 | Yes | |
message.timestamp.type | CreateTime | Yes | |
min.cleanable.dirty.ratio | 0.5 | No | |
min.compaction.lag.ms | 0 | Yes | |
min.insync.replicas | 2 | Yes | |
num.partitions | 6 | Yes, see Custom topic settings for all cluster types supported by Kafka REST API and Terraform Provider | Limits vary, see: Confluent Cloud Cluster Types |
offsets.retention.minutes | 10080 | No | |
preallocate | false | No | |
retention.bytes | -1 | Yes | |
retention.ms | 604800000 | Yes | Set to -1 for infinite storage |
segment.bytes | 104857600 | Yes | Min: 52428800 Max: 1073741824 (1 gibibyte) |
segment.index.bytes | 10485760 | No | |
segment.jitter.ms | 0 | No | |
segment.ms | 604800000 | Yes | Min: 14400000 (4 hours) [†] |
unclean.leader.election.enable | false | No | |
[*] | You cannot directly change cleanup.policy from delete to compact, delete. To set cleanup.policy to compact, delete, you must first change from delete to compact, and then change from compact to compact, delete. |
[†] | You can set segment.ms to as low as 600000 (10 minutes), but the minimum of 14400000 (4 hours) is still enforced. |
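The millisecond values in this table are easier to sanity-check with shell arithmetic. For example, the retention.ms default of 604800000 is exactly 7 days, and the delete.retention.ms default of 86400000 is 1 day:

```shell
# retention.ms default: 7 days expressed in milliseconds
echo $(( 7 * 24 * 60 * 60 * 1000 ))   # 604800000

# delete.retention.ms default: 1 day expressed in milliseconds
echo $(( 24 * 60 * 60 * 1000 ))       # 86400000
```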
Custom topic settings for all cluster types supported by Kafka REST API and Terraform Provider¶
The following table lists default parameter values for custom topics. The table also includes minimum and maximum values where they are relevant, and whether the parameters are editable. To edit topic settings, see Edit a Topic.
Important
The following limitations apply to changing parameter values after a topic has been created:
- Editable settings can be changed after topic creation, but the limits that applied at topic creation still apply.
Parameter Name | Default | Editable | More Info |
---|---|---|---|
cleanup.policy | delete | Yes | See footnote [‡] |
delete.retention.ms | 86400000 | Yes | Max: 63113904000 |
max.message.bytes (Dedicated) | 2097164 | Yes | Max: 20971520 |
max.message.bytes (Standard, Basic) | 2097164 | Yes | Max: 8388608 |
max.compaction.lag.ms | 9223372036854775807 | Yes | Min: 604800000 (7 days) |
message.timestamp.difference.max.ms | 9223372036854775807 | Yes | |
message.timestamp.type | CreateTime | Yes | |
min.compaction.lag.ms | 0 | Yes | |
min.insync.replicas | 2 | Yes | |
retention.bytes | -1 | Yes | |
retention.ms | 604800000 | Yes | Set to -1 for infinite storage |
segment.bytes | 104857600 | Yes | Min: 52428800 Max: 1073741824 (1 gibibyte) |
segment.ms | 604800000 | Yes | Min: 14400000 (4 hours) [§] |
[‡] | You cannot directly change cleanup.policy from delete to compact, delete. To set cleanup.policy to compact, delete, you must first change from delete to compact, and then change from compact to compact, delete. |
[§] | You can set segment.ms to as low as 600000 (10 minutes), but the minimum of 14400000 (4 hours) is still enforced. |
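As the footnote describes, moving cleanup.policy to compact, delete takes two steps. A sketch using the Confluent CLI (the topic name my-topic is a placeholder, and the commands assume you have already logged in and selected an environment and cluster):

```shell
# Step 1: change cleanup.policy from delete to compact
confluent kafka topic update my-topic --config "cleanup.policy=compact"

# Step 2: change cleanup.policy from compact to compact,delete
confluent kafka topic update my-topic --config "cleanup.policy=compact,delete"
```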
Increase partitions¶
You can change the number of partitions for an existing topic (num.partitions) for all cluster types on a per-topic basis.
To change the number of partitions, use the kafka-topics script that is part of the Kafka command line tools (installed with Confluent Platform) with the following command:
bin/kafka-topics --bootstrap-server <hostname:port> --command-config <config_file> --alter --topic <topic_name> --partitions <number_partitions>
Alternatively, you can use the Kafka REST APIs to change the number of partitions for an existing topic (num.partitions). You will need the REST endpoint and the cluster ID for your cluster to make Kafka REST calls. If you don’t know where to find these, see Find the REST endpoint address and cluster ID to access them in the Cloud Console. For more on how to use the REST APIs, see Apache Kafka® API Quick Start for Confluent Cloud Developers.
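A hedged sketch of such a REST call, assuming the Kafka REST v3 topic-update operation and its partitions_count field; substitute your own REST endpoint, cluster ID, topic name, and credentials:

```shell
curl --location --request PATCH 'https://<REST endpoint>/kafka/v3/clusters/<cluster-id>/topics/<topic_name>' \
--header 'Authorization: Basic <base64-encoded-key-and-secret>' \
--header 'Content-Type: application/json' \
--data-raw '{
  "partitions_count": <number_partitions>
}'
```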
You can also use the Confluent CLI or the Terraform Provider for Confluent to edit this topic setting.
For more details, sign in to the Confluent Support Portal and search for “How to increase the partition count for a Confluent Cloud hosted topic.”
Change cluster settings for Dedicated clusters¶
The following table lists editable cluster settings for Dedicated clusters and their default parameter values.
Parameter Name | Default | Editable | More Info |
---|---|---|---|
auto.create.topics.enable | false | Yes | |
ssl.cipher.suites | “” | Yes | |
num.partitions | 6 | Yes | Limits vary, see: Confluent Cloud Cluster Types |
log.cleaner.max.compaction.lag.ms | 9223372036854775807 | Yes | Min: 604800000 |
log.retention.ms | 604800000 | Yes | Set to -1 for infinite storage |
To modify these settings, you can use the kafka-configs script that is part of the Kafka command line tools (installed with Confluent Platform). To use this script, you will need the bootstrap server for your cluster. See Access cluster settings in the Confluent Cloud Console for how to retrieve the bootstrap server.
Alternatively, you can use the Kafka REST APIs to change these settings. You will need the REST endpoint and the cluster ID for your cluster to make Kafka REST calls. If you don’t know where to find these, see Find the REST endpoint address and cluster ID to access them in the Cloud Console. For more on how to use the REST APIs, see Apache Kafka® API Quick Start for Confluent Cloud Developers.
Changes to the settings are applied to your Confluent Cloud cluster without additional action on your part and persist until the setting is explicitly changed again.
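You can inspect the currently applied cluster-wide values before and after a change. A sketch using the --describe flag of kafka-configs (the bootstrap server and auth config file are placeholders):

```shell
bin/kafka-configs --bootstrap-server <bootstrap> --command-config <auth-configs> --entity-type brokers --entity-default --describe
```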
Important
These settings apply only to Dedicated clusters and cannot be modified on Basic and Standard clusters.
Enable automatic topic creation¶
Automatic topic creation (auto.create.topics.enable) is disabled (false) by default to help prevent unexpected costs. The following commands show how to enable it. For more on this property, see broker configurations.
bin/kafka-configs --bootstrap-server <bootstrap> --command-config <config-properties> --entity-type brokers --entity-default --alter --add-config auto.create.topics.enable=true
curl --location --request PUT 'https://<REST endpoint>/kafka/v3/clusters/<cluster-id>/broker-configs/auto.create.topics.enable' \
--header 'Authorization: Basic <base64-encoded-key-and-secret>' \
--header 'Content-Type: application/json' \
--data-raw '{
"value": "true"
}'
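To confirm the change took effect, a sketch of reading the value back over REST, assuming a GET on the same broker-configs path used above; the endpoint, cluster ID, and credentials are placeholders:

```shell
curl --request GET 'https://<REST endpoint>/kafka/v3/clusters/<cluster-id>/broker-configs/auto.create.topics.enable' \
--header 'Authorization: Basic <base64-encoded-key-and-secret>'
```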
Restrict cipher suites¶
The following commands show how to restrict the allowed TLS/SSL cipher suites (ssl.cipher.suites). For more on this property, see broker configurations.
bin/kafka-configs --bootstrap-server <bootstrap> --command-config <config-properties> --entity-type brokers --entity-default --alter --add-config 'ssl.cipher.suites=[<cipher-1>,<cipher-2>]'
curl --location --request PUT 'https://<REST endpoint>/kafka/v3/clusters/<cluster-id>/broker-configs/ssl.cipher.suites' \
--header 'Authorization: Basic <base64-encoded-key-and-secret>' \
--header 'Content-Type: application/json' \
--data-raw '{
"value": "<comma-separated-cipher-list>"
}'
The list of valid ciphers includes:
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
Change the default number of partitions for new topics¶
The following commands show how to set the default number of partitions (num.partitions) for newly created topics. For more on this property, see num.partitions in the table in the previous section.
bin/kafka-configs --bootstrap-server <bootstrap> --command-config <auth-configs> --entity-type brokers --entity-default --alter --add-config num.partitions=<int>
curl --location --request PUT 'https://<REST endpoint>/kafka/v3/clusters/<cluster-id>/broker-configs/num.partitions' \
--header 'Authorization: Basic <base64-encoded-key-and-secret>' \
--header 'Content-Type: application/json' \
--data-raw '{
"value": "<int>"
}'
To change the number of partitions once a topic has been created, see Custom topic settings for all cluster types supported by Kafka REST API and Terraform Provider.
Change maximum compaction lag time¶
The following commands show how to set the default maximum compaction lag (log.cleaner.max.compaction.lag.ms) for new topics. For more on this property, see topic configurations and max.compaction.lag.ms in the table in the previous section.
bin/kafka-configs --bootstrap-server <bootstrap> --command-config <auth-configs> --entity-type brokers --entity-default --alter --add-config log.cleaner.max.compaction.lag.ms=<int>
curl --location --request PUT 'https://<REST endpoint>/kafka/v3/clusters/<cluster-id>/broker-configs/log.cleaner.max.compaction.lag.ms' \
--header 'Authorization: Basic <base64-encoded-key-and-secret>' \
--header 'Content-Type: application/json' \
--data-raw '{
"value": "<int>"
}'
Change log retention time¶
The following commands show how to set the default log retention time (log.retention.ms) for new topics. For more on this property, see topic configurations and retention.ms in the table in the previous section.
bin/kafka-configs --bootstrap-server <bootstrap> --command-config <auth-configs> --entity-type brokers --entity-default --alter --add-config log.retention.ms=<int>
curl --location --request PUT 'https://<REST endpoint>/kafka/v3/clusters/<cluster-id>/broker-configs/log.retention.ms' \
--header 'Authorization: Basic <base64-encoded-key-and-secret>' \
--header 'Content-Type: application/json' \
--data-raw '{
"value": "<int>"
}'
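Millisecond values like this are easy to get wrong by a factor of ten, so shell arithmetic is a useful check. For example, a hypothetical 30-day retention target (an illustration, not a recommendation) works out to:

```shell
# 30 days expressed in milliseconds, a candidate value for log.retention.ms
echo $(( 30 * 24 * 60 * 60 * 1000 ))   # 2592000000
```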