FAQ for Confluent Cloud Topics¶
Find answers to frequently asked questions about Confluent topics.
Can I maintain unlimited retention using log compacted topics with Confluent Cloud?¶
Yes. You can set retention per topic in Confluent Cloud, including unlimited retention with log compaction. You are only limited by the amount of total storage for your cluster. For more information, see Configuration Reference for Topics in Confluent Cloud.
Are there topic or partition limits?¶
Yes, these are described in Kafka Cluster Types in Confluent Cloud. If you try to create more partitions than you are allowed, you will see this error:
"You may not create more than the maximum number of partitions"
How can I change the replication factor for an existing topic in Confluent Cloud?¶
Confluent Cloud does not support changing the replication factor for an existing topic. Instead, you must create a new topic with the desired replication factor and migrate data.
How do I list all topics in a Confluent Cloud Kafka cluster?¶
Use the Confluent CLI command confluent kafka topic list, or view topics in the Cloud Console.
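For example, with the Confluent CLI (the cluster ID lkc-123456 below is a placeholder):

    # List topics in the currently selected cluster
    confluent kafka topic list

    # Or target a specific cluster by ID
    confluent kafka topic list --cluster lkc-123456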
Can I limit the throughput or rate of messages on a topic?¶
There’s no built-in throttling on topics, but producers can be configured to control rate using producer configuration settings.
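As a rough sketch, assuming you only want to smooth bursty producers rather than enforce a hard limit, batching settings such as linger.ms and batch.size reduce the request rate; here they are passed to the console producer (the bootstrap server, topic name, and client.properties file holding your API credentials are placeholders):

    # Batch more records per request so the producer sends less often
    kafka-console-producer \
      --bootstrap-server pkc-xxxxx.region.provider.confluent.cloud:9092 \
      --producer.config client.properties \
      --topic orders \
      --producer-property linger.ms=100 \
      --producer-property batch.size=65536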
Is it possible to migrate topics from self-managed Kafka to Confluent Cloud?¶
Yes, you can use Cluster Linking or MirrorMaker for migration.
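For example, once a cluster link from your self-managed source cluster has been set up (see the Cluster Linking documentation for creating the link), each topic is migrated by creating a mirror topic over that link; the topic and link names below are placeholders:

    # Mirror the source topic "orders" over an existing cluster link
    confluent kafka mirror create orders --link my-migration-link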
Is there a way to bulk delete topics via CLI or API?¶
Topics must be deleted individually using the CLI or API. Bulk deletion is not currently supported.
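As a sketch of how you might script bulk deletion yourself (this assumes jq is installed, that the CLI's JSON output exposes a name field, and that the test- prefix is the filter you want; review the list before deleting anything):

    # Delete every topic whose name starts with "test-" (placeholder filter)
    for t in $(confluent kafka topic list -o json | jq -r '.[].name' | grep '^test-'); do
      confluent kafka topic delete "$t" --force
    done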
Is there a way to schedule automatic topic creation or deletion?¶
There is no built-in scheduler for topic creation or deletion. You must script it via API or CLI externally.
What limits exist for the number of topics or partitions per cluster?¶
Default limits exist on topics and partitions. For information about limits, see Kafka Cluster Types in Confluent Cloud. For limit increases, contact Confluent Support.
How do I resolve a 403 Forbidden error when trying to access a topic using the Java client?¶
This is likely due to missing ACLs. Ensure your service account has appropriate READ and DESCRIBE permissions on the topic. For more information, see ACL Overview.
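For example, granting READ and DESCRIBE to a service account with the Confluent CLI (the service account ID, topic, and consumer group names are placeholders; flag spelling for the operations list can vary between CLI versions):

    # Allow the service account to read and describe the topic
    confluent kafka acl create --allow --service-account sa-abc123 \
      --operations read,describe --topic orders

    # Consumers also need READ on their consumer group
    confluent kafka acl create --allow --service-account sa-abc123 \
      --operations read --consumer-group my-consumer-group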
What does the error “unknown topic or partition” mean when consuming from a topic?¶
This error usually means the topic does not exist or the client does not have the correct permissions. To fix this, check whether the topic exists and review your ACLs.
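For example, a quick check with the Confluent CLI (the topic name orders is a placeholder):

    # Confirm the topic exists and view its configuration
    confluent kafka topic describe orders

    # Review the ACLs that apply to the topic
    confluent kafka acl list --topic orders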
How can I validate that data is flowing from my producer to a topic?¶
You can use the message browser in the Cloud Console or the kafka-console-consumer CLI to validate ingestion.
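For example, a quick consumption check with the Confluent CLI (the topic name is a placeholder):

    # Read from the start of the topic to confirm records are arriving
    confluent kafka topic consume orders --from-beginning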
How can I find the exact timestamp of a Kafka message?¶
Use kafka-console-consumer with --property print.timestamp=true, or inspect the message metadata with the API.
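For example, with kafka-console-consumer against Confluent Cloud (the bootstrap server and topic are placeholders; client.properties holds your API key and secret):

    # Print each record's timestamp ahead of its value
    kafka-console-consumer \
      --bootstrap-server pkc-xxxxx.region.provider.confluent.cloud:9092 \
      --consumer.config client.properties \
      --topic orders --from-beginning \
      --property print.timestamp=true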
How do I validate that Kafka Connect has permissions to read from a topic?¶
Verify ACLs for the service account used by the connector and ensure it has READ and DESCRIBE permissions. For more information, see Manage Service Accounts for Connectors in Confluent Cloud.
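For example, to review what the connector's service account is currently allowed to do (the service account ID is a placeholder):

    # List the ACLs granted to the connector's service account
    confluent kafka acl list --service-account sa-abc123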
Is there a way to backup data from a topic periodically?¶
Use the S3 Sink Connector or Kafka consumer jobs to write data to persistent storage.
How do I connect Confluent Cloud for Flink SQL to a Confluent Cloud Kafka topic?¶
Connect using Apache Flink® connectors with proper Kafka properties and Schema Registry integration. For more information, see Stream Processing with Confluent Cloud for Apache Flink.
Is Stream Sharing available for Dedicated Kafka clusters?¶
Yes, Stream Sharing is available for Dedicated Kafka clusters, but not all private networking options are supported. For more information, see Share Data with Stream Sharing from Confluent Cloud.
How do I change the cleanup policy for an existing topic?¶
You cannot directly change cleanup.policy from delete to compact,delete. You must first change from delete to compact, and then change to compact,delete. For more information, see Configuration Reference for Topics in Confluent Cloud.
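A sketch of the two-step change with the Confluent CLI (the topic name orders is a placeholder):

    # Step 1: delete -> compact
    confluent kafka topic update orders --config "cleanup.policy=compact"

    # Step 2: compact -> compact,delete
    confluent kafka topic update orders --config "cleanup.policy=compact,delete"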
What happens when I switch a topic’s cleanup policy from delete to compact?¶
Records without keys are considered invalid and are deleted when you switch the cleanup policy from delete to compact. For more information, see Configuration Reference for Topics in Confluent Cloud.
Can I change the number of partitions for an existing topic?¶
Yes, you can increase (but not decrease) the number of partitions for an existing topic. For procedures, see num.partitions.
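For example, using the standard kafka-topics tool against Confluent Cloud (the bootstrap server, topic name, and target count are placeholders; client.properties holds your credentials):

    # Increase the partition count; the new value must be larger than the current one
    kafka-topics \
      --bootstrap-server pkc-xxxxx.region.provider.confluent.cloud:9092 \
      --command-config client.properties \
      --alter --topic orders --partitions 12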
What are the maximum message size limits for different cluster types?¶
The max.message.bytes limit varies by cluster type: Dedicated and Enterprise clusters support up to 20 MB (20,971,520 bytes), while Basic and Standard clusters support up to 8 MB (8,388,608 bytes). For more information, see Configuration Reference for Topics in Confluent Cloud.
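For example, to raise the limit on an existing topic with the Confluent CLI (the topic name and value are placeholders; the value must not exceed your cluster type's maximum):

    # Raise max.message.bytes to 8 MB
    confluent kafka topic update orders --config "max.message.bytes=8388608"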
Why can’t I see metrics for all my topics in the Cloud Console?¶
If you have more than 1,000 topics, Cloud Console may not display metrics for all topics. For complete monitoring of all topics, use the Metrics API. For more information, see Confluent Cloud Metrics.
Why doesn’t message browser show the maximum number of results I selected?¶
The message browser may not always display results equal to the maximum selected due to filtering selections and data ordering in the topic. Change from Latest to From beginning or select a specific time or offset to see different result sets. For more information, see Use Message Browser in Confluent Cloud.
How much does using message browser cost?¶
You pay for data consumed from and produced to your cluster by message browser. In terms of cost, message browser is like any other client you might deploy. For more information, see Use Message Browser in Confluent Cloud.
Can I filter message browser results?¶
Yes, you can filter across available columns, but you can only filter results that are currently displayed. To add more results, increase the maximum results, adjust the start time, or add partitions. For more information, see Use Message Browser in Confluent Cloud.
What schema strategies does the message browser support?¶
The message browser displays messages with associated schemas and only uses TopicNameStrategy schemas to deserialize messages. Where multiple schema contexts are available, you can select between them with a dropdown. For more information, see Use Message Browser in Confluent Cloud.
What are the prerequisites for enabling Stream Sharing?¶
An administrator must select a Stream Governance package, enable Stream Sharing for the organization, and use Confluent Cloud Schema Registry for schema-enabled topics (self-managed Schema Registry cannot share schema-enabled topics). For procedures, see Share Data with Stream Sharing from Confluent Cloud.
What cluster types support Stream Sharing?¶
Basic, Standard, and Dedicated clusters support Stream Sharing. Enterprise and Freight clusters do not support Stream Sharing. Note that not all private networking options are supported for Dedicated clusters. For more information, see Limitations.
What are the data throughput limits for Stream Sharing?¶
There’s a limit of 10 MB per second per share. Contact Confluent Support if you need this limit adjusted. For more information, see Limitations.
How long do Stream Sharing invitations remain valid?¶
By default, you have one week to access the link to shared data or redeem the access token, although the provider can invalidate the link at any time. For more information, see Share Data with Stream Sharing from Confluent Cloud.
Can I redeem a Stream Sharing invitation multiple times?¶
No, you can only redeem a stream share once. After receiving credentials, you cannot redeem the same stream share again. Once redeemed, it remains active indefinitely until deleted or access is revoked. For more information, see Share Data with Stream Sharing from Confluent Cloud.
What happens if I don’t receive a Stream Sharing invitation email?¶
Check your junk or spam folders. If you still can’t find it, contact the data provider. The provider can resend invitations as often as needed until accepted. For more information, see Share Data with Stream Sharing from Confluent Cloud.
What networking requirements exist for private endpoint Stream Sharing?¶
Both data provider and recipient must use the same cloud provider. You must set up network connectivity to the Confluent Cloud network before accessing the topic. Supported options include AWS PrivateLink, Azure Private Link, and Google Cloud Private Service Connect. For more information, see Limitations.
What permissions do Stream Sharing consumers receive?¶
Consumers receive read-only access through internal RBAC roles: StreamShareRead (for topics and groups) and StreamShareSchemaRegistryRead (for Schema Registry subjects). For more information, see Share Data with Stream Sharing from Confluent Cloud.
Why can’t I access topic management features with private networking enabled?¶
When private networking is enabled, some Cloud Console components (including topic management) use cluster endpoints that aren’t publicly reachable. You must configure your network to route requests over the private connection. For details, see Use the Confluent Cloud Console with Private Networking.