Cluster Linking with Private Networking on Confluent Cloud¶
Looking for Confluent Platform Cluster Linking docs? You are currently viewing Confluent Cloud documentation. If you are looking for Confluent Platform docs, check out Cluster Linking on Confluent Platform.
Supported Cluster Combinations¶
Cluster Linking is fundamentally a networking feature: it copies data over the network. As such, Cluster Linking requires that at least one of the clusters involved has connectivity to the other cluster. Therefore, the networking situation of each cluster determines whether the two clusters can be linked, and whether the destination cluster or the source cluster must initiate the connection. By default, the destination cluster initiates the connection. A special mode called “source-initiated links” allows the source cluster to initiate the cluster link’s connection instead.
The following tables show which networking combinations are possible, and whether a source-initiated link is required.
Tip
When using the Confluent Cloud Console to create cluster links, only linkable clusters are shown in the drop-down options. Clusters that cannot be linked are filtered out.
Confluent Cloud Source and Destination Clusters¶
Source cluster | Destination cluster | Possible? | Notes |
---|---|---|---|
Confluent Cloud - A Basic or Standard cluster, or a Dedicated [1] cluster with Internet networking | Confluent Cloud - Any Dedicated cluster | Yes | |
Confluent Cloud - A cluster with private networking | Confluent Cloud - A Dedicated cluster in the same Confluent Cloud network | Yes | |
Confluent Cloud - A Dedicated cluster with Transit Gateway networking | Confluent Cloud - A Dedicated cluster with Transit Gateway networking | Yes | |
Confluent Cloud - A cluster with private networking | Confluent Cloud - A cluster with private networking in a different Confluent Cloud network | No (except the Transit Gateway case above) | |
Confluent Cloud - A cluster with private networking | Confluent Cloud - A Dedicated cluster with public networking | Yes (in Preview) [2] | A source-initiated link is required. |
[1] Basic, Standard, and Dedicated cluster types are described in Supported Cluster Types.
[2] Preview scenarios are for testing and evaluation purposes or to give feedback to Confluent. They are not suitable for production use cases and do not fall into any Confluent Support SLAs.
Confluent Platform and Confluent Cloud¶
Source cluster | Destination cluster | Possible? | Notes |
---|---|---|---|
Confluent Platform 7.1.0 or later | Confluent Cloud - Any Dedicated cluster | Yes | |
Confluent Platform 5.4 to 7.0 with public endpoints on all brokers | Confluent Cloud - Any Dedicated cluster | Yes | |
Confluent Platform 5.4 to 7.0 without public endpoints | Confluent Cloud - A Dedicated cluster with VPC Peering, VNet Peering, or Transit Gateway | Yes | |
Confluent Cloud - A Basic or Standard cluster, or a Dedicated cluster with Internet networking | Confluent Platform 7.0.0 or later | Yes | |
Confluent Cloud - A cluster with private networking | Confluent Platform 7.0.0 or later | Yes | |
Confluent Cloud and Apache Kafka®¶
Source cluster | Destination cluster | Possible? | Notes |
---|---|---|---|
Kafka 2.4 or later with public endpoints on all brokers | Confluent Cloud - Any Dedicated cluster | Yes | |
Kafka 2.4 or later without public endpoints | Confluent Cloud - A Dedicated cluster with VPC Peering, VNet Peering, or Transit Gateway | Yes | |
Diagrams of Supported Combinations for Private Networking¶
Confluent Cloud to Confluent Cloud¶

Confluent Cloud to Confluent Platform/Apache Kafka®¶

How to use Cluster Links with Private Networking¶
Using the Confluent Cloud Console for Cluster Linking with Private Networking¶
To view, create, and modify cluster links and mirror topics in the Confluent Cloud Console, the cluster link’s destination cluster must be accessible from your browser. If your destination cluster is a Confluent Cloud cluster with private networking, you must have completed the setup described in Access Confluent Cloud Console with Private Networking.
If your browser cannot access the destination cluster, you will not see any cluster links on that cluster, nor an indication of which topics are mirror topics. You will not be able to create a cluster link with that cluster as its destination cluster.
If your source cluster is also a Confluent Cloud cluster with private networking, then some features of the Confluent Cloud Console also require your browser to be able to reach your source cluster:
- Creating a “Confluent Cloud to Confluent Cloud” cluster link as an OrgAdmin, which creates ACLs and an API key on the source cluster.
- Viewing the source cluster topics’ throughput metric on a cluster link
- Using the drop-down menu of source cluster topic names when creating a mirror topic. (If your browser cannot reach the source cluster, topic names must be entered manually.)
When creating a cluster link in the Confluent Cloud Console, rest assured that the drop-down menus automatically filter for source clusters and destination clusters that can be linked. If you see a drop-down option for the cluster, then a cluster link is possible and generally available. Preview cases are excluded from the drop-downs, and require using the Confluent CLI or the Confluent Cloud REST API to create a cluster link.
Running Cluster Linking API and CLI commands with Private Networking¶
Cluster Linking commands require that the location where you are running them has access to the destination cluster of the cluster link:
- Creating, updating, listing, describing, and deleting cluster links
- Creating, listing, and describing mirror topics
For example, when running Confluent CLI commands, the shell must have access to the destination cluster. If your destination cluster has private networking, one way to achieve this is to SSH into a virtual machine that has network connectivity (by means of PrivateLink, Transit Gateway, or Peering) to the destination cluster.
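For example, from a host that can reach the destination cluster, you might inspect an existing link with commands like the following. This is a sketch: the cluster ID (lkc-dest123) and link name (my-link) are placeholders, and flags can vary slightly between Confluent CLI versions.

# List cluster links on the destination cluster
confluent kafka link list --cluster lkc-dest123

# Describe one link and list its mirror topics
confluent kafka link describe my-link --cluster lkc-dest123
confluent kafka mirror list --link my-link --cluster lkc-dest123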
Managing cluster links with Private Networking¶
Aside from the above mentioned requirements, the user experience when using a cluster link with private networking is the same as when using a cluster link with public networking. The same commands work in the same ways. That means you can follow the same Cluster Linking tutorials using the same steps, regardless of whether your clusters use public or private networking.
Exception: When Cluster Linking from a cluster with private networking to a cluster with public networking, a source-initiated link is required. This introduces different configurations and an extra step.
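For orientation only, the two halves of a source-initiated link look roughly like the following. This is a sketch under assumptions: the cluster IDs (lkc-dest123, lkc-src456), the link name, and the bootstrap placeholder are hypothetical, authentication flags are omitted, and the exact Confluent CLI flags and --config syntax vary by version, so follow the source-initiated link documentation for the real procedure.

# On the public destination cluster: create the half of the link that accepts an inbound connection
confluent kafka link create my-link --cluster lkc-dest123 \
  --source-cluster lkc-src456 \
  --config link.mode=DESTINATION,connection.mode=INBOUND

# On the private source cluster: create the half that initiates the outbound connection
confluent kafka link create my-link --cluster lkc-src456 \
  --destination-cluster lkc-dest123 \
  --destination-bootstrap-server <destination-bootstrap-endpoint> \
  --config link.mode=SOURCE,connection.mode=OUTBOUND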
Cluster Linking between AWS Transit Gateway attached Confluent Cloud clusters¶
Confluent provides Cluster Linking between AWS Transit Gateway attached Confluent Cloud clusters as a fully managed solution for geo-replication, multi-region architectures, high availability and disaster recovery, data sharing, or aggregation.
This section describes how to use Cluster Linking to sync data between two private Confluent Cloud clusters in different AWS regions that are each attached to an AWS Transit Gateway. You can provision new Confluent Cloud clusters or use existing AWS Transit Gateway attached or AWS virtual private cloud (VPC) Peered Confluent Cloud clusters.
Limitations¶
This is limited to Confluent Cloud clusters that use AWS Transit Gateway as their networking type.
Tip
AWS VPC Peered clusters can be seamlessly converted to AWS Transit Gateway clusters with a Confluent support ticket.

The Confluent Cloud clusters can be in the same or in different Confluent Cloud Environments or Organizations. The Transit Gateways can be in the same or different AWS Accounts. Connecting clusters from different organizations is useful for data sharing between organizations.
The clusters must be provisioned with different CIDRs. The address ranges cannot overlap.
The CIDRs for both clusters must come from the RFC 1918 private address space or the RFC 6598 shared address space (100.64.0.0/10):
- 10.0.0.0/8
- 100.64.0.0/10
- 172.16.0.0/12
- 192.168.0.0/16
Neither cluster’s CIDR can be 198.18.0.0/15, even though that is otherwise a valid Confluent Cloud CIDR.
This configuration does not support combinations with other networking types, such as PrivateLink, or with other cloud providers, such as Google Cloud Platform or Microsoft Azure.
Setup¶
Step 1: Create the networks and clusters¶
Determine:
- the two regions to use
- the two non-overlapping /16 CIDRs for the two Confluent Cloud clusters to use
- the AWS account(s) to use
This decision will depend on your architecture and business requirements.
Note that:
- You can use a single region, but most use cases will involve two (or more) different regions.
- The cluster link can span different AWS accounts or Confluent Cloud accounts, but most use cases will involve one AWS account and one Confluent Cloud account.
Provision two AWS Transit Gateways, one in each region, and create two resource shares, one for each Transit Gateway, as described in Use AWS Transit Gateway with Confluent Cloud under Networking.
Open two support tickets to Confluent, each requesting Confluent to provision a new Transit Gateway enabled Confluent Cloud network, as described in Use AWS Transit Gateway with Confluent Cloud.
You will need to specify the network ID and the ARN of the resource share containing that region’s Transit Gateway. It is possible to seamlessly convert an existing AWS VPC Peered Confluent Cloud cluster in that region to a Transit Gateway attached cluster; if you choose that option, include the name and cluster ID of the existing cluster in the support ticket. Turnaround time is generally three to five business days.
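If you are scripting the AWS side, the Transit Gateways and resource shares from the previous steps can be created with the AWS CLI. A minimal sketch; the region, names, ARN, and the account ID used as the principal are placeholders (the principal is assumed to be the AWS account Confluent specifies for your Confluent Cloud network).

# Create a Transit Gateway in Region 1 (repeat in Region 2)
aws ec2 create-transit-gateway --description "confluent-tgw-region-1" --region us-west-2

# Share that Transit Gateway through AWS RAM so it can be attached to the Confluent Cloud network
aws ram create-resource-share \
  --name confluent-tgw-share-region-1 \
  --resource-arns arn:aws:ec2:us-west-2:111111111111:transit-gateway/tgw-0abc1234567890def \
  --principals 222222222222 \
  --region us-west-2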
Connect a “Command VPC” from which to issue commands, and create topics on the Confluent Cloud cluster in “Region 1”.
Create a new VPC in Region 1, from which to run commands against your Confluent Cloud cluster. For purposes of this example, call this the “Command VPC”.
Attach the Command VPC to your Transit Gateway. (Make sure you have a route in the Transit Gateway’s route table that points to the Command VPC for the Command VPC’s CIDR range.)
In the Command VPC’s route table, create the following routes if they do not already exist:

Destination CIDR | Route to: |
---|---|
Command VPC CIDR range | local |
Confluent Cloud CIDR range in this region | Transit Gateway in this region |
Confluent Cloud CIDR range in the other region | Transit Gateway in this region |
0.0.0.0/0 | An Internet Gateway (create one if needed) [4] |

[4] 0.0.0.0/0 enables this VPC to reach out to the internet to install the Confluent CLI.
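These attachments and routes can also be created with the AWS CLI. A minimal sketch; the VPC, subnet, route table, Transit Gateway, and Internet Gateway IDs and the CIDRs are placeholders.

# Attach the Command VPC to this region's Transit Gateway
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0abc1234567890def \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0

# Route both Confluent Cloud CIDRs from the Command VPC to the Transit Gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.1.0.0/16 --transit-gateway-id tgw-0abc1234567890def
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.2.0.0/16 --transit-gateway-id tgw-0abc1234567890def

# Default route to an Internet Gateway so the instance can download the Confluent CLI
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0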
Create and launch an EC2 instance in that VPC.
SSH into the EC2 instance. (If needed for this step, create an Elastic IP and assign it to this EC2 instance.)
Log on to the Confluent CLI with confluent login.
Select your Confluent environment with confluent environment use <environment-ID>. (You can list your environments with confluent environment list.)
Select your Confluent cluster in this region with confluent kafka cluster use <cluster-id>. (You can list your clusters with confluent kafka cluster list.)
List the topics in your Confluent Cloud cluster with confluent kafka topic list. If this command fails, your Command EC2 instance may not be able to reach your Confluent Cloud cluster. Your networking may not be correctly set up. Make sure you followed the steps above. See the Troubleshooting section if needed.
Create a new topic with confluent kafka topic create my-topic --partitions 1. If you have more Apache Kafka® clients, you can spin them up in VPCs attached to the Transit Gateway, and produce to and consume from the cluster.
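For a quick end-to-end check from the Command EC2 instance, you can also produce and consume a few records with the Confluent CLI. A minimal sketch; it assumes an API key for this cluster has already been created and selected.

# Produce a few records interactively (press Ctrl-C or Ctrl-D when done)
confluent kafka topic produce my-topic

# Read the records back from the beginning of the topic
confluent kafka topic consume my-topic --from-beginning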
Step 2: Create a cluster link¶
After you have two Transit Gateway-attached Confluent Cloud clusters, you can set up Cluster Linking between the two clusters to copy data and metadata from one to the other.
Peer the Transit Gateways to create connectivity between the two regions. This can be done cross-account, if needed.
Set up routes and propagation so that the two clusters have connectivity to each other (an AWS CLI sketch follows the route list below).
In each Transit Gateway’s Route Table, these routes are required:
To the local Confluent Cloud cluster:
- CIDR: this region’s Confluent Cloud cluster CIDR
- Type: Propagated
- Destination: a Confluent VPC [5]

To the other region’s Confluent Cloud cluster:
- CIDR: the other region’s Confluent Cloud cluster CIDR
- Type: Static
- Destination: Peering (the other region’s Transit Gateway, which was peered)

To the Command VPC in this region [6]:
- CIDR: CIDR range of your Command VPC
- Type: Propagated
- Destination: Command VPC

To the Command VPC in the other region (if applicable):
- CIDR: CIDR range of that Command VPC
- Type: Propagated
- Destination: Command VPC
[5] You can find the VPC ID in the Confluent Cloud Console (under this cluster’s Network).
[6] Region 1 needs a Command VPC for now, but in a DR scenario, you would need to be able to access your Confluent Cloud clusters from any region.
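As referenced above, the peering and the static routes can be scripted with the AWS CLI. A minimal sketch; the Transit Gateway, attachment, and route table IDs, the account ID, the regions, and the CIDR are placeholders.

# From Region 1: request a peering attachment to Region 2's Transit Gateway
aws ec2 create-transit-gateway-peering-attachment \
  --transit-gateway-id tgw-0aaa111122223333a \
  --peer-transit-gateway-id tgw-0bbb444455556666b \
  --peer-account-id 111111111111 \
  --peer-region us-east-1 \
  --region us-west-2

# From Region 2: accept the peering attachment
aws ec2 accept-transit-gateway-peering-attachment \
  --transit-gateway-attachment-id tgw-attach-0ccc777788889999c \
  --region us-east-1

# In each Transit Gateway route table, add a static route for the other region's
# Confluent Cloud CIDR that points at the peering attachment (shown here for Region 1)
aws ec2 create-transit-gateway-route \
  --destination-cidr-block 10.2.0.0/16 \
  --transit-gateway-route-table-id tgw-rtb-0ddd000011112222d \
  --transit-gateway-attachment-id tgw-attach-0ccc777788889999c \
  --region us-west-2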
Check that the Command VPC from Region 1 can access the Confluent Cloud cluster in Region 2. This ensures that your networking is set up properly.
- List the cluster’s topics with confluent kafka topic list --cluster <region-2-cluster-id>.
- If this command succeeds, then you’ve verified that Confluent Cloud has inter-region connectivity.
Set up privileges on the cluster in Region 1 for the cluster link, and create a cluster link on Region 2’s cluster.
You will use the bootstrap server of the cluster in Region 1. Because you have peered the Transit Gateways and set up routing between the two clusters, the cluster in Region 2 will be able to resolve the private CIDR of the cluster in Region 1.
As long as your Command VPC can reach the destination cluster (Region 2 in this example), it does not matter which region the Command VPC is in. (It is okay to run the command from Region 1.)
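A rough Confluent CLI outline of this step follows. Treat it as a sketch: the link name, cluster IDs, bootstrap endpoint, and credentials are placeholders, the API key is assumed to have (or be granted) the privileges the link needs on the Region 1 cluster, and flag names differ somewhat between CLI versions.

# Create an API key the cluster link will use to read from the Region 1 (source) cluster
confluent api-key create --resource <region-1-cluster-id>

# On the Region 2 (destination) cluster, create a cluster link that points at Region 1
confluent kafka link create my-link \
  --cluster <region-2-cluster-id> \
  --source-cluster <region-1-cluster-id> \
  --source-bootstrap-server <region-1-bootstrap-endpoint> \
  --source-api-key <api-key> --source-api-secret <api-secret>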
Create mirror topics using the cluster link, and produce and consume the data. This will bring data from Region 1 to Region 2. (A command sketch follows the list below.)
- Follow the Quick Start instructions or the more in-depth Data sharing tutorial instructions.
- It is okay to run these commands from Region 1, even if they are against Region 2’s cluster, as long as your Command VPC has connectivity to Region 2’s cluster. Successfully consuming messages from Region 2 proves the inter-region replication is working, no matter where your Command VPC itself lives.
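As referenced above, a minimal sketch of creating and checking one mirror topic; the topic name, link name, and cluster ID are placeholders.

# On the Region 2 (destination) cluster, mirror the Region 1 topic over the cluster link
confluent kafka mirror create my-topic --link my-link --cluster <region-2-cluster-id>

# Check the mirror topic's status and lag
confluent kafka mirror describe my-topic --link my-link --cluster <region-2-cluster-id>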
Now, you can spin up more Kafka producers and consumers in Region 1, and Kafka consumers in Region 2. You can also create a Command VPC in Region 2, so that you can issue confluent commands should Region 1 experience an outage.
You can also set up a cluster link in the opposite direction. To do so, you only need to repeat steps 4 through 6 above; you do not need to set up additional networking to create additional cluster links. The networking only needs to be set up once.
Cluster Linking between two Confluent Cloud clusters in the same region¶
If you want to create a cluster link between two AWS Transit Gateway Confluent Cloud clusters in the same AWS region, the requirements may differ depending on the networking setup. Here are some scenarios and factors to consider:
- If both Confluent Cloud clusters are in the same Confluent Cloud network, the additional configuration described in the previous sections is not necessary. Only one Transit Gateway is required, without any Transit Gateway peering or changes to its route table.
- If the Confluent Cloud clusters are in different Confluent Cloud networks, but both are attached to the same Transit Gateway, then the only requirement is that they use the same Transit Gateway Route Table. The routes from each CIDR to each Confluent VPC must be in the same route table. No Transit Gateway Peering is required.
- If the Confluent Cloud clusters are attached to different Transit Gateways, then the configuration described above is required. The steps for two Transit Gateways in the same region are the same as for two Transit Gateways in different regions.
Management Responsibilities¶
Every cluster link runs as a continuous service managed by Confluent Cloud. Keeping a cluster link running is a shared responsibility between Confluent and its customers:
- Confluent is responsible for the Cluster Linking service.
- The customer is responsible for the network that facilitates the connection between the two clusters.
To operate a cluster link between two AWS Transit Gateway Confluent Cloud clusters, Confluent requires that the AWS networking be configured as laid out in the sections on this page.
Troubleshooting Transit Gateway Cluster Linking¶
This section covers failing CLI commands and API calls, and Transit Gateway connectivity issues.
For more troubleshooting help, see Troubleshooting Cluster Linking on Confluent Cloud.
My CLI commands / API calls are failing¶
CLI commands often fail because they are not being run from a location that has connectivity to your Confluent Cloud cluster. Verify that the host running the CLI (for example, a Command EC2 instance) has connectivity to the Confluent Cloud cluster.
You can test connectivity to the cluster with:
confluent kafka cluster describe <destination-cluster-id>
Get the hostname from the REST Endpoint value (drop the https:// prefix and the port), and test the connection:
telnet <hostname> 443
Tip
Note the space in front of 443 instead of a colon (:).
Example success showing connectivity:
telnet pkc-z3000.us-west-2.aws.confluent.cloud 443
Trying 10.18.72.172...
Connected to pkc-z3000.us-west-2.aws.confluent.cloud.
Escape character is '^]'.
Example failure (the command will hang indefinitely):
telnet pkc-z3000.us-west-2.aws.confluent.cloud 443
...
If this process ends in failure, you are not running the CLI commands from a location that has connectivity to your Confluent Cloud cluster. Work with your networking team to ensure your instance is attached to your Transit Gateway and has a route through it to the Confluent Cloud cluster.
Troubleshooting Transit Gateway Connectivity¶
Ensure the Transit Gateways, Peering, and Route Tables are properly configured.
Assuming a Transit Gateway in Region A and one in Region B, with CIDRs CIDR-A and CIDR-B:
Transit Gateway Region A route table
Destination CIDR | Route to: |
---|---|
CIDR-A | Confluent VPC A (Confluent is responsible for setting this) |
CIDR-B | Transit Gateway B via Peering connection |
Transit Gateway Region B route table
Destination CIDR | Route to: |
---|---|
CIDR-A | Transit Gateway A via Peering connection |
CIDR-B | Confluent VPC B (Confluent is responsible for setting this) |
Note that each Transit Gateway has routes set for both CIDRs; this is a common source of problems. If the Transit Gateways are not both set up with routes for both CIDRs, the clusters will not have connectivity to each other.
You can test if the cross-region connectivity works like this:
Attach a Test VPC in Region A to A’s transit gateway.
Launch an EC2 instance in the Test VPC (or you can use the “Command VPC” if you set one up per the previous steps).
Route the EC2 instance’s traffic through the Transit Gateway in Region A:
Test VPC Route Table:
Destination CIDR | Route to: |
---|---|
CIDR-Test-VPC | local |
CIDR-A | Transit Gateway A |
CIDR-B | Transit Gateway A (also) |

Transit Gateway Region A route table: add CIDR-Test-VPC: Test VPC attachment.

Transit Gateway Region B route table: add CIDR-Test-VPC: TGW A via Peering connection (needed for bidirectional connectivity).
SSH into the EC2 instance. (If needed, assign an Elastic IP to the EC2 instance.)
Check connectivity to Cluster A:
telnet <Cluster-A-bootstrap-endpoint> 9092
If successful, you should see output similar to the following:
Trying 10.18.72.172...
Connected to pkc-z3000.us-west-2.aws.confluent.cloud.
Escape character is '^]'.
Check connectivity across the peering connection to Cluster B:
telnet <Cluster-B-bootstrap-endpoint> 9092
If successful, you should see output similar to the following:
Trying 10.18.72.172...
Connected to pkc-z3000.us-west-2.aws.confluent.cloud.
Escape character is '^]'.
Confluent Cloud Billing Considerations¶
There are cost differences associated with private vs. public networking. These are detailed under Cluster Linking in the Billing documentation. Examples are provided there for public networking, with more details about private networking to follow soon.