FAQ for Confluent Cloud Topics

Find answers to frequently asked questions about topics in Confluent Cloud.

Can I maintain unlimited retention using log compacted topics with Confluent Cloud?

Yes. You can set retention per topic in Confluent Cloud, including unlimited retention with log compaction. You are only limited by the amount of total storage for your cluster. For more information, see Configuration Reference for Topics in Confluent Cloud.

Are there topic or partition limits?

Yes, these are described in Kafka Cluster Types in Confluent Cloud. If you try to create more partitions than you are allowed, you will see this error:

"You may not create more than the maximum number of partitions"

How can I change the replication factor for an existing topic in Confluent Cloud?

Confluent Cloud does not support changing the replication factor for an existing topic. Instead, you must create a new topic with the desired replication factor and migrate data.

How do I list all topics in a Confluent Cloud Kafka cluster?

Use the Confluent CLI with confluent kafka topic list or view topics in the Cloud Console.

Can I limit the throughput or rate of messages on a topic?

Confluent Cloud does not provide built-in per-topic throttling, but producers can be configured to control their send rate using producer configuration settings or application-level rate limiting.
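One common application-level approach is a token bucket that paces calls to the producer. The sketch below is a minimal, hypothetical example in plain Python; the `TokenBucket` class, the "orders" topic, and the 100-messages-per-second limit are illustrative, not a Confluent API:

```python
import time

class TokenBucket:
    """Allow roughly `rate` events per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill tokens based on elapsed time, never exceeding capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

# Hypothetical usage with a producer (the producer calls are sketched, not run):
# bucket = TokenBucket(rate=100, capacity=10)   # ~100 messages/second
# for record in records:
#     bucket.acquire()
#     producer.produce("orders", value=record)
```

Batching-related producer settings (for example, linger and batch size) also shape throughput, but a bucket like this gives an explicit per-message cap.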

Is it possible to migrate topics from self-managed Kafka to Confluent Cloud?

Yes, you can use Cluster Linking or MirrorMaker for migration.

Is there a way to bulk delete topics via CLI or API?

Topics must be deleted individually using the CLI or API. Bulk deletion is not currently supported.

Is there a way to schedule automatic topic creation or deletion?

There is no built-in scheduler for topic creation or deletion. You must script it externally using the CLI or API.

What limits exist for the number of topics or partitions per cluster?

Default limits exist on topics and partitions. For information about limits, see Kafka Cluster Types in Confluent Cloud. For limit increases, contact Confluent Support.

How do I resolve a 403 Forbidden error when trying to access a topic using the Java client?

This is likely due to missing ACLs. Ensure your service account has appropriate READ and DESCRIBE permissions on the topic. For more information, see ACL Overview.

What does the error “unknown topic or partition” mean when consuming from a topic?

This error usually means the topic does not exist or the client lacks the required permissions. To resolve it, verify that the topic exists and review your ACLs.

How can I validate that data is flowing from my producer to a topic?

You can use the message viewer in the Cloud Console or the kafka-console-consumer CLI to validate that messages are arriving.
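On the producer side, you can also confirm delivery programmatically with a delivery callback, a standard pattern in the Python confluent-kafka client. The sketch below defines only the callback (which here returns a string so the result is easy to inspect; real callbacks typically log instead); the producer wiring is commented out because it needs a reachable cluster, and the "orders" topic is illustrative:

```python
def delivery_report(err, msg) -> str:
    """Invoked once per message by producer.poll()/flush() with the delivery result."""
    if err is not None:
        return f"delivery failed: {err}"
    # msg exposes topic(), partition(), and offset() for the delivered record.
    return f"delivered to {msg.topic()} [{msg.partition()}] @ offset {msg.offset()}"

# Typical wiring with confluent_kafka (requires a reachable cluster):
# from confluent_kafka import Producer
# producer = Producer({"bootstrap.servers": "<broker>"})
# producer.produce("orders", value=b"hello", on_delivery=delivery_report)
# producer.flush()
```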

How can I find the exact timestamp of a Kafka message?

Use kafka-console-consumer with --property print.timestamp=true or inspect metadata with the API.
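With print.timestamp=true, the console consumer prefixes each record with its timestamp in epoch milliseconds (for example, CreateTime:1700000000000). Converting that value to a readable UTC datetime is straightforward; this small helper is illustrative:

```python
from datetime import datetime, timezone

def kafka_ts_to_utc(epoch_ms: int) -> str:
    """Convert a Kafka record timestamp (milliseconds since the epoch) to ISO-8601 UTC."""
    return datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc).isoformat()

print(kafka_ts_to_utc(1700000000000))  # 2023-11-14T22:13:20+00:00
```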

How do I validate that Kafka Connect has permissions to read from a topic?

Verify ACLs for the service account used by the connector and ensure it has READ and DESCRIBE permissions. For more information, see Manage Service Accounts for Connectors in Confluent Cloud.

Is there a way to backup data from a topic periodically?

Use the S3 Sink Connector or Kafka consumer jobs to write data to persistent storage.
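If you write your own consumer-based backup job, the storage side reduces to serializing consumed records into durable files. The helper below sketches one possible JSON Lines layout (the field names and "orders" topic are illustrative, not a prescribed format); the consuming loop is commented out because it needs a live cluster:

```python
import json

def records_to_jsonl(records) -> str:
    """Serialize (topic, partition, offset, value) tuples to JSON Lines for archival."""
    lines = []
    for topic, partition, offset, value in records:
        lines.append(json.dumps({
            "topic": topic,
            "partition": partition,
            "offset": offset,
            "value": value,
        }))
    return "\n".join(lines)

# Sketch of the consuming side with confluent_kafka (requires a reachable cluster):
# from confluent_kafka import Consumer
# consumer = Consumer({"bootstrap.servers": "<broker>", "group.id": "backup"})
# consumer.subscribe(["orders"])
# msg = consumer.poll(1.0)
# batch.append((msg.topic(), msg.partition(), msg.offset(), msg.value().decode()))
```

Keeping partition and offset alongside each value makes the archive replayable and lets you detect gaps later.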

Is Stream Sharing available for Dedicated Kafka clusters?

Yes, Stream Sharing is available for Dedicated Kafka clusters, but not all private networking options are supported. For more information, see Share Data with Stream Sharing from Confluent Cloud.