Cluster Linking Quick Start on Confluent Cloud

In this quick how-to, you will create a multi-region or multi-cloud architecture with just a few commands.


Get the latest version of the Confluent CLI

To start off, you’ll need the latest version of the CLI.

Create source and destination clusters

Next, you’ll need two Confluent Cloud clusters. Data will flow from a “source” cluster to a “destination” cluster, which you’ll want to put in a different region or cloud.


The source cluster can be a Basic, Standard, or Dedicated cluster with Internet networking.

To learn more about supported cluster types and combinations, see Supported cluster types and supported cluster combinations for private networking.

If you don’t already have a cluster you want to use, you can spin one up from the Confluent Cloud Console or directly from the Confluent CLI with this command:

confluent kafka cluster create ClusterLinkingSource --type basic --cloud aws --region us-west-2

Your destination cluster must be a Dedicated cluster.

If you don’t already have a dedicated cluster you want to use as the destination, you can create one from the Confluent Cloud Console or directly from the Confluent CLI with this command:

confluent kafka cluster create ClusterLinkingDestination --type dedicated --cloud aws --region us-east-1 --cku 1 --availability single-zone


  • To effectively demo Cluster Linking, put the source and destination clusters in different regions or clouds, giving you a multi-region or multi-cloud setup. (For example, you might have a source cluster in Northern California, and a destination cluster in Northern Virginia.)
  • A Confluent Cloud cluster has an hourly charge, plus charges for any data into, out of, or stored on the cluster, so running this tutorial will accrue some charges. If you are only using the clusters for this demo, make sure to delete them once you have finished this walkthrough, as covered in Teardown.

Replicate data across regions

Now that you have clusters coast-to-coast, you can geo-replicate some data.

Save cluster IDs and source endpoint

You will use your cluster details in each of the next few commands (when you create a cluster link and test out mirror topics), so put these in a handy place. Save your source cluster ID, your source cluster Endpoint, and your destination cluster ID as local variables in your terminal window.

You can get the IDs of your cluster(s) with confluent kafka cluster list, and the source cluster’s Endpoint with confluent kafka cluster describe <source_id>. If you’ve just created the clusters, the output of the confluent kafka cluster create commands already includes full descriptions with all of this information.

Copy each of the following commands and substitute in your cluster IDs and endpoints to save them as local variables.
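The variable-saving commands are not shown above, so here is a minimal sketch with hypothetical values (the `lkc-…` ID and `pkc-…` endpoint below are placeholders; substitute your own from `confluent kafka cluster list` and `confluent kafka cluster describe`):

```shell
# Hypothetical example values -- replace with your own cluster details.
source_id="lkc-12345"
destination_id="lkc-67890"
source_endpoint="SASL_SSL://pkc-abcde.us-west-2.aws.confluent.cloud:9092"
```

The rest of this walkthrough refers to these as $source_id, $destination_id, and $source_endpoint.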

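The next section assumes a cluster link named my-link already exists (it is the name used by the mirror and teardown commands in this guide), but the creation step itself is not shown above. Here is a hedged sketch, assuming the current confluent kafka link create flags (older CLI versions used slightly different flag names); the API key and secret must belong to the source cluster, and the placeholders are left for you to fill in:

```shell
confluent kafka link create my-link \
  --cluster $destination_id \
  --source-cluster $source_id \
  --source-bootstrap-server $source_endpoint \
  --source-api-key <source-api-key> \
  --source-api-secret <source-api-secret>
```

The link lives on the destination cluster, which is why the command targets $destination_id.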

Create source and mirror topics

Now that you’ve got a link running, try it out!

The next tasks can be accomplished through the Confluent CLI or the Confluent Cloud Console. This tutorial goes into more detail for the Confluent CLI steps, as they are a bit less intuitive, but feel free to use either method.

In Confluent, data is stored in topics. To move data across clusters, start with a topic on the source cluster, then use your cluster link to create a copy of it (a “mirror topic”) on the destination cluster.

Mirror topics reflect all data from their source topics. Consumers can read from mirror topics, giving them a local copy of all events contained in the topic. Mirror topics sync their source topic configurations, and stay up to date; so you don’t need to set up or change any configs on your mirror topics.



Mirror topics are read-only, so don’t try to produce to one!

Specify the API key to use on the source cluster for producing and consuming data from the CLI:

confluent api-key use <source-api-key> --resource $source_id
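If you don’t yet have an API key for the source cluster, create one first (the same step shown later in this guide for the destination cluster):

```shell
confluent api-key create --resource $source_id
```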

You’ll need a source topic. Make a one-partition topic called “topic-to-link” on your source cluster:

confluent kafka topic create topic-to-link --cluster $source_id --partitions 1

Then, put some data into it:

seq 1 10 | confluent kafka topic produce topic-to-link --cluster $source_id

This produced the numbers 1 through 10 to your source cluster (for example, AWS in Northern California).

You can mirror that data to your destination region (for example, Google Cloud in Northern Virginia) in one command:

confluent kafka mirror create topic-to-link --cluster $destination_id --link my-link

You just geo-replicated data!
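To confirm the mirror topic was created and check its status, you can list and describe mirrors over the link. A sketch, assuming the current CLI’s mirror subcommands (the list command also appears later in Teardown):

```shell
# List all mirror topics on the destination cluster
confluent kafka mirror list --cluster $destination_id

# Inspect one mirror topic's status over the link
confluent kafka mirror describe topic-to-link --link my-link --cluster $destination_id
```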


Replication factors are not synced to mirror topics. The replication factor defaults to 3 for all topics and is not configurable, so you do not specify replication factor flags when creating topics on Confluent Cloud. To learn more, see Mirror topic configurations not synced.

Consume from the mirror topic

Now, make use of your geo-replicated data by consuming it from the destination cluster.

You may need to first create an API key and secret for the CLI to use with the destination cluster:

confluent api-key create --resource <destination-cluster-id>
confluent api-key use <destination-api-key> --resource <destination-cluster-id>

Now, read from the mirror topic. Here is the command and the output you will see:

confluent kafka topic consume topic-to-link --cluster $destination_id --from-beginning
Starting Kafka Consumer. Use Ctrl-C to exit.
1
2
3
4
5
6
7
8
9
10


You can quit the consumer at any time by pressing Ctrl-C.

Congrats! You created a multi-region or multi-cloud real-time streaming architecture.

Go exploring

In the latest version of Confluent Cloud, you can:

  • Create a cluster link
  • Create source and mirror topics
  • View source and mirror topics, and monitor messages coming into them
  • List and inspect existing cluster links in all environments
  • Drill down on a cluster link to view stats on its activity
  • Refer to an embedded cheat sheet on how to create cluster links

Log on to the Confluent Cloud Console, and view the clusters you created from there.


View source and mirror topics

Try producing more messages to topic-to-link on your source cluster and consuming them from its mirror on your destination cluster.

You can create as many mirror topics as you want using the same cluster link.

You should be able to view the messages on both the source and destination (mirror) topics. To navigate to topic messages, select a cluster, click Topics, and select the Messages tab.



Choose “Jump to offset” or “Jump to timestamp”, type 1, and select 1 / Partition 0. This shows the messages on partition 0, starting from offset 1.


Teardown

When you are ready to quit the demo, don’t forget to tear down the resources so as not to incur hourly charges.

  1. Delete topic-to-link on both clusters with these commands:

    confluent kafka topic delete topic-to-link --cluster $destination_id
    confluent kafka topic delete topic-to-link --cluster $source_id
  2. Delete any other mirror topics.

    If you created more mirror topics, you’ll need to delete those, too.

    You can see a list of all of the mirror topics on your destination cluster with:

    confluent kafka mirror list --cluster $destination_id

Then, use this command to delete each mirror topic:

    confluent kafka topic delete <topic-name> --cluster $destination_id
  3. Delete the cluster link(s).

    Once all of the mirror topics are gone, you can delete the cluster link on your destination cluster:

    confluent kafka link delete my-link --cluster $destination_id

    If you created more cluster links, you can see all of the cluster links going to your destination cluster with this command:

    confluent kafka link list --cluster $destination_id

    Delete any additional cluster links.

  4. Delete any clusters you no longer need.

    If you were using existing Confluent Cloud clusters that you want to continue to use, then you’re done!

    If you spun new ones up for this demo, you can delete them with the command confluent kafka cluster delete <cluster-id>.


    Be careful; once you delete a cluster, you can’t get it back.

    If you were following along with the demo and created new clusters, just use $destination_id and $source_id:

    confluent kafka cluster delete $destination_id
    confluent kafka cluster delete $source_id