Work With Topics in Confluent Cloud¶
This page provides a reference of Apache Kafka® topic configurations as well as steps to create, edit, and delete Kafka topics in Confluent Cloud using the Cloud Console or the Confluent CLI. You can also list, create or delete topics with REST APIs.
The topics are grouped by cluster within each environment.
Important
When private networking is enabled, some Cloud Console components, including topic management, use cluster endpoints that are not publicly reachable. You must configure your network to route requests for these components over the private connection. For details, see Use Confluent Cloud with Private Networking.
Create topics¶
The following steps describe how to create a topic using the Cloud Console or Confluent CLI.
To see a list of the default, maximum, and minimum Kafka configuration settings in Confluent Cloud, see Manage Kafka Cluster Configuration Settings in Confluent Cloud.
Follow these steps to create a topic with the Cloud Console:
If you have more than one environment, select an environment.
Select a cluster.
Click Topics in the navigation menu. The Topics page appears.
If there aren’t any topics created yet, click Create topic. Otherwise, click Add a topic.
Specify your topic details and click Create with defaults. For advanced topic settings, click Customize settings.
Note
To create a topic with infinite storage, on the New Topic page, click Show advanced settings and choose Infinite for Retention time. This sets retention.ms to -1.
Follow these steps to create a topic with the Confluent CLI:
Sign in to your Confluent Cloud account with the Confluent CLI. When prompted, enter a valid email and password for your Confluent Cloud account.
confluent login
Use the confluent kafka topic create command to create a topic. The following example creates a topic named users in the cluster lkc-someID:
confluent kafka topic create users --cluster lkc-someID
The output is similar to the following:
Created topic "users".
Note
You can create a topic with infinite storage by specifying -1 for retention.ms:
confluent kafka topic create users --cluster lkc-someID --config retention.ms=-1
See the full list of options in the command reference for confluent kafka topic create.
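You can also combine flags to set partitions and configuration overrides in a single create command. The following is an illustrative sketch; the topic name orders and the values shown are placeholders, not recommendations:
# Create a topic with 12 partitions and a 3-day retention period
confluent kafka topic create orders --cluster lkc-someID --partitions 12 --config retention.ms=259200000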
To learn more about using the Confluent CLI with Confluent Cloud, see Connect the Confluent CLI to Confluent Cloud.
Edit topics¶
When editing topic settings, remember the following:
- You can edit certain topic configurations after a topic has been created. For a list of editable topic settings, see Kafka topic configurations for all Confluent Cloud cluster types.
- You cannot access internal Kafka topics, and therefore they cannot be edited. For example, the internal topic __consumer_offsets is not accessible. Also, topics that are not accessible do not count toward partition limits or partition billing charges. Topics created by managed Connectors and ksqlDB are accessible.
The following steps describe how to edit a topic using the Cloud Console or Confluent CLI. Not all topic parameters can be edited. See topic parameters for a list of parameters.
Follow these steps to update a topic with the Cloud Console:
If you have more than one environment, select an environment.
Select a cluster.
Click Topics in the navigation menu. The Topics page appears.
Select the topic name link for the topic you want to modify.
Select the Configuration tab, and then click Edit settings.
Make your changes and click Save changes. By default, only the most commonly modified settings are shown. For advanced settings, click Switch to expert mode.
Important
- You can't use Cloud Console to modify parameters that have a lock symbol next to them.
- Some parameters can be modified only after topic creation. For more information, see Kafka topic configurations for all Confluent Cloud cluster types.
Follow these steps to update a topic with the Confluent CLI:
Sign in to your Confluent Cloud account with the Confluent CLI. When prompted, enter a valid email and password for your Confluent Cloud account:
confluent login
Use the confluent kafka topic update command to change a topic. The following example changes the retention of a topic named users in the cluster lkc-someID:
confluent kafka topic update users --cluster lkc-someID --config "retention.ms=172800000"
The output is similar to the following:
Updated the following configs for topic "users":
         Name          |   Value
-----------------------+------------
  retention.ms         | 172800000
See the full list of options in the command reference for confluent kafka topic update.
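To verify the change, you can describe the topic, which prints its current configuration (the topic and cluster names carry over from the example above):
confluent kafka topic describe users --cluster lkc-someID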
To learn more about using the Confluent CLI with Confluent Cloud, see Connect the Confluent CLI to Confluent Cloud.
Delete topics¶
When you request to delete a topic, the topic is marked for deletion. The topic is not deleted immediately unless it contains no data, such as a newly created topic. In the interim, you cannot recreate a topic with the same name until the original topic and its data are finished being deleted.
When a topic is deleted, it cannot be restored.
Follow these steps to delete a topic using the Cloud Console:
- If you have more than one environment, select an environment.
- Select a cluster.
- Click Topics in the navigation menu. The Topics page appears.
- Choose the topic name link for the topic you want to delete, and then select the Configuration tab.
- Click Delete topic.
- Confirm the topic deletion by typing the topic name, and then click Continue.
Follow these steps to delete a topic with the Confluent CLI:
Sign in to your Confluent Cloud account with the Confluent CLI. When prompted, enter a valid email and password for your Confluent Cloud account.
confluent login
Use the confluent kafka topic delete command to delete a topic. The following example requests the deletion of a topic named users in the cluster lkc-someID:
confluent kafka topic delete users --cluster lkc-someID
The output is similar to the following:
Deleted topic "users".
See the full list of options in the command reference for confluent kafka topic delete.
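To confirm that the topic no longer appears, you can list the topics in the cluster (the cluster ID carries over from the example above):
confluent kafka topic list --cluster lkc-someID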
To learn more about using the Confluent CLI with Confluent Cloud, see Connect the Confluent CLI to Confluent Cloud.
Kafka topic configurations for all Confluent Cloud cluster types¶
The following reference lists default parameter values for custom topics. Each definition includes minimum and maximum values where they are relevant, and whether the parameters are editable. To edit topic settings, see Edit topics.
cleanup.policy¶
This config designates the retention policy to use on log segments. You cannot directly change cleanup.policy from delete to compact, delete. To set cleanup.policy to compact, delete, you must first change from delete to compact, and then change to compact, delete.
- Default: delete
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
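The following sketch illustrates the two-step change described above using the Confluent CLI. The topic name my-topic and the cluster ID are placeholders; also note that the --config flag treats commas as separators between key=value entries, so verify how your CLI version accepts the combined compact,delete value before relying on the second command:
# Step 1: change the policy from delete to compact
confluent kafka topic update my-topic --cluster lkc-someID --config cleanup.policy=compact
# Step 2: change the policy from compact to the combined compact,delete
confluent kafka topic update my-topic --cluster lkc-someID --config "cleanup.policy=compact,delete"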
compression.type¶
Specify the final compression type for a given topic.
- Default: producer
- Editable: No
- Kafka REST API and Terraform Provider Support: No
connection.failed.authentication.delay.ms¶
Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure.
- Default: 5000
- Editable: No
- Kafka REST API and Terraform Provider Support: No
default.replication.factor¶
The default replication factors for automatically created topics.
- Default: 3
- Editable: No
- Kafka REST API and Terraform Provider Support: No
delete.retention.ms¶
The amount of time to retain delete tombstone markers for log compacted topics. Maximum value: 63113904000
- Default: 86400000
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
file.delete.delay.ms¶
The time to wait before deleting a file from the filesystem
- Default: 60000
- Editable: No
- Kafka REST API and Terraform Provider Support: No
flush.messages¶
This setting allows specifying an interval at which we will force an fsync of data written to the log.
- Default: 9223372036854775807
- Editable: No
- Kafka REST API and Terraform Provider Support: No
flush.ms¶
This setting allows specifying a time interval at which we will force an fsync of data written to the log.
- Default: 9223372036854775807
- Editable: No
- Kafka REST API and Terraform Provider Support: No
group.max.session.timeout.ms¶
The maximum allowed session timeout for registered consumers.
- Default: 1200000
- Editable: No
- Kafka REST API and Terraform Provider Support: No
index.interval.bytes¶
This setting controls how frequently Kafka adds an index entry to its offset index.
- Default: 4096
- Editable: No
- Kafka REST API and Terraform Provider Support: No
max.message.bytes¶
The largest record batch size allowed by Kafka (after compression if compression is enabled). The maximum for this parameter differs based on Kafka cluster type.
- Default: 2097164
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
- Dedicated Kafka clusters maximum value: 20971520
- Basic, Standard, and Enterprise Kafka clusters maximum value: 8388608
max.compaction.lag.ms¶
The maximum time a message will remain ineligible for compaction in the log. Minimum value: 604800000 (7 days)
- Default: 9223372036854775807
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
message.downconversion.enable¶
This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.
- Default: true
- Editable: No
- Kafka REST API and Terraform Provider Support: No
message.timestamp.difference.max.ms¶
The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message.
- Default: 9223372036854775807
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
message.timestamp.type¶
Define whether the timestamp in the message is message create time or log append time.
- Default: CreateTime
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
min.cleanable.dirty.ratio¶
This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled).
- Default: 0.5
- Editable: No
- Kafka REST API and Terraform Provider Support: No
min.compaction.lag.ms¶
The minimum time a message will remain uncompacted in the log.
- Default: 0
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
min.insync.replicas¶
This configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. You can only set min.insync.replicas to 1 or 2 in Confluent Cloud.
- Default: 2
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
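For example, to change this setting on an existing topic (the topic and cluster names are the placeholders used earlier on this page):
# Allow writes to succeed with a single in-sync replica acknowledgment
confluent kafka topic update users --cluster lkc-someID --config min.insync.replicas=1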
num.partitions¶
You can change the number of partitions for an existing topic (num.partitions
) for all cluster types on a per-
topic basis. You can only increase (not decrease) the num.partitions
value after you create a topic,
and you must make the increase using the CLI or API.
Limits vary based on Kafka cluster type. For more information, see Kafka Cluster Types in Confluent Cloud.
- Default: 6
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
To change the number of partitions, you can use the kafka-topics script that is part of the Kafka command line tools (installed with Confluent Platform) with the following command:
bin/kafka-topics --bootstrap-server <hostname:port> --command-config <config_file> --alter --topic <topic_name> --partitions <number_partitions>
Alternatively, you can use the Kafka REST APIs to change the number of partitions for an existing topic (num.partitions). You will need the REST endpoint and the cluster ID for your cluster to make Kafka REST calls. To find this information with Cloud Console, see Find the REST endpoint address and cluster ID. For more on how to use the REST APIs, see Kafka REST API Quick Start for Confluent Cloud Developers.
You can also use the Confluent CLI or the Terraform Provider for Confluent to edit this topic setting.
For more details, sign in to the Confluent Support Portal and search for "How to increase the partition count for a Confluent Cloud hosted topic."
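As a hedged sketch of what a partition increase through the Kafka REST API might look like, assuming the v3 topics PATCH endpoint with a partitions_count request body (the endpoint, cluster ID, topic name, and API key are placeholders; confirm the exact request shape against the Kafka REST API reference):
# Hypothetical sketch: increase the partition count of topic "users" to 12
curl -X PATCH "https://<rest-endpoint>/kafka/v3/clusters/<cluster-id>/topics/users" \
  -H "Content-Type: application/json" \
  -u "<api-key>:<api-secret>" \
  -d '{"partitions_count": 12}'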
offsets.retention.minutes¶
For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention period has elapsed after the consumer group loses all its consumers (i.e. becomes empty); 2) this retention period has elapsed since the last time an offset is committed for the partition and the group is no longer subscribed to the corresponding topic.
- Default: 10080
- Editable: No
- Kafka REST API and Terraform Provider Support: No
preallocate¶
True if we should preallocate the file on disk when creating a new log segment.
- Default: false
- Editable: No
- Kafka REST API and Terraform Provider Support: No
retention.bytes¶
This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the “delete” retention policy.
- Default: -1
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
retention.ms¶
This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the “delete” retention policy. Set to -1 for infinite storage.
- Default: 604800000
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
segment.bytes¶
This configuration controls the segment file size for the log. Minimum: 52428800, Maximum: 1073741824 (1 gibibyte)
- Default: 104857600
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
segment.index.bytes¶
This configuration controls the size of the index that maps offsets to file positions.
- Default: 10485760
- Editable: No
- Kafka REST API and Terraform Provider Support: No
segment.jitter.ms¶
The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling.
- Default: 0
- Editable: No
- Kafka REST API and Terraform Provider Support: No
segment.ms¶
This configuration controls the period of time after which Kafka will force the log to roll even if the segment file is not full to ensure that retention can delete or compact old data. Min: 14000000 (4 hours). You can set segment.ms as low as 600000 (10 minutes), but the minimum of 14000000 (4 hours) is still enforced.
- Default: 604800000
- Editable: Yes
- Kafka REST API and Terraform Provider Support: Yes
unclean.leader.election.enable¶
Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.
- Default: false
- Editable: No
- Kafka REST API and Terraform Provider Support: No
Important
You can change editable settings after topic creation, but the limits that apply at topic creation still apply.