Work With Topics in Confluent Cloud

This page provides a reference for Apache Kafka® topic configurations as well as steps to create, edit, and delete Kafka topics in Confluent Cloud using the Cloud Console or the Confluent CLI. You can also list, create, or delete topics with REST APIs.

The topics are grouped by cluster within each environment.

Important

When private networking is enabled, some Cloud Console components, including topic management, use cluster endpoints that are not publicly reachable. You must configure your network to route requests for these components over the private connection. For details, see Use Confluent Cloud with Private Networking.

Create topics

The following steps describe how to create a topic using the Cloud Console or Confluent CLI.

To see a list of the default, maximum, and minimum Kafka configuration settings in Confluent Cloud, see Manage Kafka Cluster Configuration Settings in Confluent Cloud.

Follow these steps to create a topic with the Cloud Console:

  1. Sign in to Confluent Cloud.

  2. If you have more than one environment, select an environment.

  3. Select a cluster.

  4. Select Topics in the navigation menu. The Topics page appears.

  5. If there aren’t any topics created yet, select Create topic. Otherwise, select Add a topic.

  6. Specify your topic details and select Create with defaults. For advanced topic settings, select Customize settings.

Note

To create a topic with Infinite Storage, on the New Topic page, select Show advanced settings and choose Infinite for Retention time. This sets retention.ms to -1.
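
If you prefer the Confluent CLI, the following is a minimal sketch of an equivalent command; the topic name my-topic and the partition count are placeholder values, and the sketch assumes you have already run confluent login.

# Select the target environment and cluster, then create the topic.
# retention.ms=-1 enables Infinite Storage, matching the note above.
confluent environment use <environment-id>
confluent kafka cluster use <cluster-id>
confluent kafka topic create my-topic --partitions 6 --config retention.ms=-1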

Edit topics

When editing topic settings, remember the following:

  • You can edit certain topic configurations after a topic has been created. For a list of editable topic settings, see Kafka topic configurations for all Confluent Cloud cluster types.
  • Internal Kafka topics, such as __consumer_offsets, are not accessible and therefore cannot be edited. Topics that are not accessible do not count toward partition limits or partition billing charges. Topics created by managed connectors and ksqlDB are accessible.

The following steps describe how to edit a topic using the Cloud Console or Confluent CLI. Not all topic parameters can be edited; for the list of editable parameters, see Kafka topic configurations for all Confluent Cloud cluster types.

Follow these steps to update a topic with the Cloud Console:

  1. Sign in to Confluent Cloud.

  2. If you have more than one environment, select an environment.

  3. Select a cluster.

  4. Select Topics from the navigation menu. The Topics page appears.

  5. Select the topic you want to modify.

  6. Select the Configuration tab, and then select Edit settings.

  7. Make your changes and select Save changes. By default, only the most commonly modified settings are shown. For advanced settings, select Switch to expert mode.

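You can make the same kind of change with the Confluent CLI. The following is a minimal sketch; my-topic and the retention value are placeholders, and only settings marked editable in the reference below can be changed this way.

# Update an editable setting on an existing topic, then verify the result.
confluent kafka topic update my-topic --config retention.ms=259200000
confluent kafka topic describe my-topic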

Delete topics

When you request to delete a topic, the topic is marked for deletion. Unless the topic contains no data (for example, a newly created topic), it is not deleted immediately. In the interim, you cannot recreate a topic with the same name until the original topic and its data have finished being deleted.

When a topic is deleted, it cannot be restored.

Follow these steps to delete a topic using the Cloud Console:

  1. Sign in to Confluent Cloud.
  2. If you have more than one environment, select an environment.
  3. Select a cluster.
  4. Select Topics in the navigation menu. The Topics page appears.
  5. Choose the topic name link for the topic you want to delete, and then select the Configuration tab.
  6. Select Delete topic.
  7. Confirm the topic deletion by typing the topic name and selecting Continue.
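
To delete a topic with the Confluent CLI instead, the following is a minimal sketch; my-topic is a placeholder. Recent CLI versions prompt you to confirm the deletion; whether your version supports a --force flag to skip the prompt is an assumption you can verify with confluent kafka topic delete --help.

# Delete the topic. A deleted topic cannot be restored.
confluent kafka topic delete my-topic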

Kafka topic configurations for all Confluent Cloud cluster types

The following reference lists default parameter values for custom topics. Each definition includes minimum and maximum values where relevant, and whether the parameter is editable. To edit topic settings, see Edit topics.

cleanup.policy

This configuration designates the retention policy to use on log segments. You cannot change cleanup.policy directly from delete to compact, delete. To set cleanup.policy to compact, delete, first change the policy from delete to compact, and then change it from compact to compact, delete, as in the sketch after the list below.

  • Default: delete
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes
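
The following is a minimal sketch of the two-step change with the Confluent CLI; my-topic is a placeholder, and how your CLI version parses the comma inside the quoted value is an assumption you should verify.

# Step 1: change the policy from delete to compact.
confluent kafka topic update my-topic --config cleanup.policy=compact
# Step 2: change the policy from compact to compact,delete.
confluent kafka topic update my-topic --config "cleanup.policy=compact,delete"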

compression.type

Specify the final compression type for a given topic.

  • Default: producer
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

connection.failed.authentication.delay.ms

Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure.

  • Default: 5000
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

default.replication.factor

The default replication factor for automatically created topics.

  • Default: 3
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

delete.retention.ms

The amount of time to retain delete tombstone markers for log compacted topics. Maximum value: 63113904000

  • Default: 86400000
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

file.delete.delay.ms

The time to wait before deleting a file from the filesystem.

  • Default: 60000
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

flush.messages

This setting allows specifying an interval at which we will force an fsync of data written to the log.

  • Default: 9223372036854775807
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

flush.ms

This setting allows specifying a time interval at which we will force an fsync of data written to the log.

  • Default: 9223372036854775807
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

group.max.session.timeout.ms

The maximum allowed session timeout for registered consumers.

  • Default: 1200000
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

index.interval.bytes

This setting controls how frequently Kafka adds an index entry to its offset index.

  • Default: 4096
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

max.message.bytes

The largest record batch size allowed by Kafka (after compression if compression is enabled). The maximum value for this parameter differs by Kafka cluster type.

  • Default: 2097164
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes
  • Dedicated Kafka clusters maximum value: 20971520
  • Basic, Standard, and Enterprise Kafka clusters maximum value: 8388608

max.compaction.lag.ms

The maximum time a message will remain ineligible for compaction in the log. Minimum value: 21600000 (six hours).

  • Default: 9223372036854775807
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

message.downconversion.enable

This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests.

  • Default: true
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

message.timestamp.after.max.ms

The maximum allowable difference by which the message timestamp can follow the broker’s timestamp.

  • Default: 9223372036854775807
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

message.timestamp.before.max.ms

The maximum allowable difference by which the message timestamp can precede the broker’s timestamp.

  • Default: 9223372036854775807
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

message.timestamp.difference.max.ms

The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message.

  • Default: 9223372036854775807
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

message.timestamp.type

Define whether the timestamp in the message is message create time or log append time.

  • Default: CreateTime
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

min.cleanable.dirty.ratio

This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled).

  • Default: 0.5
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

min.compaction.lag.ms

The minimum time a message will remain uncompacted in the log.

  • Default: 0
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

min.insync.replicas

This configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. In Confluent Cloud, you can only set min.insync.replicas to 1 or 2.

  • Default: 2
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

num.partitions

You can change the number of partitions for an existing topic (num.partitions) for all cluster types on a per-topic basis. You can only increase (not decrease) the num.partitions value after you create a topic, and you must make the increase using the kafka-topics script or the API.

Limits vary based on Kafka cluster type. For more information, see Kafka Cluster Types in Confluent Cloud.

  • Default: 6
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

To change the number of partitions, you can use the kafka-topics script that is part of the Kafka command-line tools (installed with Confluent Platform) with the following command:

bin/kafka-topics --bootstrap-server <hostname:port> --command-config <config_file> --alter --topic <topic_name> --partitions <number_partitions>

Alternatively, you can use the Kafka REST APIs to change the number of partitions for an existing topic (num.partitions). You will need the REST endpoint and the cluster ID for your cluster to make Kafka REST calls. To find this information with Cloud Console, see Find the REST endpoint address and cluster ID. For more on how to use the REST APIs, see Kafka REST API Quick Start for Confluent Cloud Developers.
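
The following is a minimal sketch of such a call with curl; the REST endpoint, cluster ID, topic name, API key and secret, and the new partition count are all placeholders, and you should verify the exact request shape against the Kafka REST API reference.

# Increase the partition count of an existing topic (Kafka REST v3).
curl -X PATCH "https://<rest-endpoint>/kafka/v3/clusters/<cluster-id>/topics/<topic_name>" \
  -H "Content-Type: application/json" \
  -u "<api-key>:<api-secret>" \
  -d '{"partitions_count": 12}'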

You can also use the Terraform Provider for Confluent to edit this topic setting.

For more details, sign in to the Confluent Support Portal and search for “How to increase the partition count for a Confluent Cloud hosted topic.”

offsets.retention.minutes

For subscribed consumers, the committed offset of a specific partition is expired and discarded when: 1) this retention period has elapsed after the consumer group loses all its consumers (that is, becomes empty); or 2) this retention period has elapsed since the last time an offset was committed for the partition and the group is no longer subscribed to the corresponding topic.

  • Default: 10080
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

preallocate

True if we should preallocate the file on disk when creating a new log segment.

  • Default: false
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

retention.bytes

This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the “delete” retention policy.

  • Default: -1
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

retention.ms

This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the “delete” retention policy. Set to -1 for Infinite Storage, as in the example after the list below.

  • Default: 604800000
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes
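
For example, a minimal Confluent CLI sketch for enabling Infinite Storage on an existing topic (my-topic is a placeholder):

# Set retention.ms to -1 to retain data indefinitely.
confluent kafka topic update my-topic --config retention.ms=-1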

segment.bytes

This configuration controls the segment file size for the log. Minimum: 52428800, Maximum: 1073741824 (1 gibibyte)

  • Default: 104857600
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

segment.index.bytes

This configuration controls the size of the index that maps offsets to file positions.

  • Default: 10485760
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

segment.jitter.ms

The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling.

  • Default: 0
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

segment.ms

This configuration controls the period of time after which Kafka will force the log to roll, even if the segment file is not full, to ensure that retention can delete or compact old data. Minimum: 14400000 (4 hours). You can request a value as low as 600000 (10 minutes), but the minimum of 14400000 (4 hours) is still enforced.

  • Default: 604800000
  • Editable: Yes
  • Kafka REST API and Terraform Provider Support: Yes

unclean.leader.election.enable

Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.

  • Default: false
  • Editable: No
  • Kafka REST API and Terraform Provider Support: No

Important

You can change editable settings after topic creation, but the limits that apply at topic creation still apply.