Configure and Manage Cluster Links on Confluent Cloud

You can create, view, manage, and delete cluster links using the unified Confluent CLI and the Confluent Cloud Cluster Linking (v3) REST API.

Requirements to Create a Cluster Link

To create a cluster link, you will need:

  • Access to the Destination cluster. Cluster links are created on the destination cluster. For the specific security authorization needed, see Manage Security for Cluster Linking on Confluent Cloud.
  • The name you wish to give the cluster link.
  • The Source cluster’s bootstrap server and cluster ID.
    • If the Source cluster is a Confluent Cloud cluster, you can get those using these commands:
      • confluent kafka cluster list to get the Source cluster’s ID. For example, lkc-12345
      • confluent kafka cluster describe <source-cluster-id> to get the bootstrap server. It will be the value of the row called Endpoint. For example, SASL_SSL://pkc-12345.us-west-2.aws.confluent.cloud:9092
    • If the Source cluster is a Confluent Platform or Apache Kafka® cluster, your system administrator can give you your bootstrap server. You can get your cluster ID by running confluent cluster describe or zookeeper-shell <bootstrap-server> get /cluster/id 2> /dev/null. For example, 0gXf5xg8Sp2G4FlxNriNaA.
  • Authentication credentials for the cluster link to use with the Source cluster. These can be updated, allowing you to rotate the credentials used by your cluster link. See the section on configuring security credentials for more details.
  • (Optional) Any additional configuration parameters you wish to add to the cluster link. See the list of possible configurations below.

You will then pass this information into the CLI or API command to create a cluster link.
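
For a Confluent Cloud Source cluster, a quick way to gather these details is with the CLI (the cluster ID below is illustrative):

confluent kafka cluster list                 # find the Source cluster's ID, for example lkc-12345
confluent kafka cluster describe lkc-12345   # the value of the Endpoint row is the bootstrap server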

Note

You cannot update the cluster link’s name, Source cluster ID, or Destination cluster ID. These values are only set at creation.

Managing Cluster Links with the CLI

To create or configure a cluster link with the Confluent CLI, you will need to create a config file with the configurations you wish to set. Each configuration should be on one line in the file, in the format key=value, where key is the name of the config and value is the value you wish to set on the cluster link. In the following commands, the location of this file is referred to as <config-file>.
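
For example, a <config-file> that enables consumer offset sync and tunes the sync intervals might contain:

consumer.offset.sync.enable=true
consumer.offset.sync.ms=25384
topic.config.sync.ms=38254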

Create a cluster link

To create a cluster link with the confluent CLI, use this command on the Destination cluster:

confluent kafka link create <link-name> \
  --source-cluster <source-cluster-id> \
  --source-bootstrap-server <source-bootstrap-server> \
  --config-file <config-file> \
  --cluster <destination-cluster-id>

Tip

--source-cluster-id was replaced with --source-cluster in version 3 of the confluent CLI, as described in the command reference for confluent kafka link create.

Update a cluster link

To update the configuration for an existing cluster link with the Confluent CLI, use this command on the Destination cluster:

confluent kafka link update <link-name> --config-file my-update-configs.txt --cluster <destination-cluster-id>

Tip

  • When updating the configuration for an existing cluster link, pass in only those configs that change. Be especially mindful when you are using a --config-file (where it would be easy to pass in a full set of configs) that it contains only the configs you want to update. For example, my-update-configs.txt might include:

    consumer.offset.sync.ms=25384
    topic.config.sync.ms=38254
    
  • You can change several aspects of a cluster link configuration, but you cannot change its source cluster (source cluster ID), prefix, or the link name.

  • Examples of creating and configuring cluster links with the CLI are shown in the Cluster Linking Quick Start, and the tutorials for Share Data Across Clusters, Regions, and Clouds using Confluent Cloud and Cluster Linking Disaster Recovery and Failover on Confluent Cloud.

List all cluster links going to the destination cluster

To see a list of all cluster links going to a Destination cluster, use the command:

confluent kafka link list --cluster <destination-cluster-id>

View the configuration for a given cluster link

To view the configuration of a specific cluster link, use the command:

confluent kafka link describe <link-name> --cluster <destination-cluster-id>

Delete a cluster link

To delete a cluster link, use the command:

confluent kafka link delete <link-name> --cluster <destination-cluster-id>

Tip

Using the command confluent kafka cluster use <destination-cluster-id> sets the Destination cluster as the active cluster, so you won't need to specify --cluster <destination-cluster-id> in subsequent commands.
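
For example (the destination cluster ID below is illustrative):

confluent kafka cluster use lkc-54321   # make the Destination cluster the active cluster
confluent kafka link list               # the --cluster flag is no longer needed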

Important

  • When deleting a cluster link, first check that all mirror topics are in the STOPPED state. If any are in the PENDING_STOPPED state, deleting a cluster link can cause irrecoverable errors on those mirror topics due to a temporary limitation.
  • When a cluster link is deleted, the history of any STOPPED topics is also deleted. If you need the Last Source Fetch Offset or the Status Time of your promoted or failed-over mirror topics, make sure to save those before you delete the cluster link.
  • You cannot delete a cluster link that still has mirror topics on it (the delete operation will fail).
  • If you are using Confluent for Kubernetes (CFK), and you delete your cluster link resource, any mirror topics still attached to that cluster link will be forcibly converted to regular topics by use of the failover API. To learn more, see Modify a mirror topic in Cluster Linking using Confluent for Kubernetes.
  • Cluster links are automatically deleted if the destination cluster is deleted.

Managing Cluster Links with the REST API

REST API calls and examples are provided in the reference documentation for the Confluent Cloud Cluster Linking (v3) REST API.

Tip

To learn how to authenticate with the REST API, see the REST API documentation.

To use the Confluent Cloud REST API to configure cluster links, you can pass in configs in JSON format. Each config will take this shape:

{
  "name": "<config-name>",
  "value": <config-value>
}

Since this is JSON, if your config value is a string, surround it in double quotes. Then, pass in the name/value pairs as an array to the configs argument of the JSON payload.

For example, this JSON enables consumer offset sync for all consumer groups and ACL sync for all ACLs.

{
  "configs": [
    {
        "name": "consumer.offset.sync.enable",
        "value": "true"
    },
    {
        "name": "consumer.offset.sync.filters",
        "value": "{\"groupFilters\": [{\"name\": \"*\",\"patternType\": \"LITERAL\",\"filterType\": \"INCLUDE\"}]}"
    },
    {
        "name": "acl.sync.enable",
        "value": "true"
    },
    {
        "name": "acl.filters",
        "value": "{ \"aclFilters\": [ { \"resourceFilter\": { \"resourceType\": \"any\", \"patternType\": \"any\" }, \"accessFilter\": { \"operation\": \"any\", \"permissionType\": \"any\" } } ] }"
    }
  ]
}

Creating a Cluster Link through the REST API

See Create a cluster link in the Confluent Cloud REST API documentation.

To create a cluster link using the REST API, the JSON payload needs:

  • an entry called source_cluster_id set to your Source cluster’s ID

  • a config in the configs array called bootstrap.servers set to your Source cluster’s bootstrap server

  • your cluster link’s security configuration listed as configs in the configs array

    Tip

    The security configuration must include the API key and secret associated with the cluster resource against which you are running the REST API commands, not the Cloud user API key. For more information on creating a resource-specific key, see Create a resource-specific API key.

  • (optional) additional configurations in the configs array

The following is an example JSON payload for a request to create a cluster link. To additionally sync consumer offsets for all consumer groups and sync all ACLs, add the configs shown in the earlier example to the configs array:

{
  "source_cluster_id": "<source-cluster-id>",
  "configs": [
    {
      "name": "security.protocol",
      "value": "SASL_SSL"
    },
    {
      "name": "bootstrap.servers",
      "value": "<source-bootstrap-server>"
    },
    {
      "name": "sasl.mechanism",
      "value": "PLAIN"
    },
    {
      "name": "sasl.jaas.config",
      "value": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<source-api-key>\" password=\"<source-api-secret>\";"
    }
  ]
}

Include this JSON payload in a POST request to:

<REST-Endpoint>/kafka/v3/clusters/<cluster-ID>/links/?link_name=<link-name>

Where you replace the following:

  • <REST-Endpoint> is your cluster’s REST URL. You can find this with confluent kafka cluster describe <cluster-ID>. It will look like: https://pkc-XXXXX.us-west1.gcp.confluent.cloud:443
  • <cluster-ID> is your destination cluster’s ID. You can find this with confluent kafka cluster list. It will look like lkc-XXXXX.
  • <link-name> is the name of your cluster link. You can name it whatever you choose.
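
One way to send this request is with curl, authenticating with the resource-specific API key and secret for the destination cluster over HTTP Basic auth. This is a sketch: all values below are placeholders, and create-link.json is a file holding the JSON payload described above.

curl -X POST \
  -u "<cluster-api-key>:<cluster-api-secret>" \
  -H "Content-Type: application/json" \
  -d @create-link.json \
  "<REST-Endpoint>/kafka/v3/clusters/<cluster-ID>/links/?link_name=<link-name>"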

Updating and Viewing Cluster Link Configs with the REST API

Update an existing cluster link

See Alter the config under a cluster link in the Confluent Cloud REST API documentation.

To update an existing link, you will send the JSON payload described above as a PUT request to <REST-Endpoint>/clusters/<cluster-ID>/links/<link-name>/configs:alter.
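
For example, with curl (again a sketch with placeholder values, where update-link.json holds the configs payload):

curl -X PUT \
  -u "<cluster-api-key>:<cluster-api-secret>" \
  -H "Content-Type: application/json" \
  -d @update-link.json \
  "<REST-Endpoint>/clusters/<cluster-ID>/links/<link-name>/configs:alter"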

Get the value of a cluster link configuration

See Describe a cluster link in the Confluent Cloud REST API documentation.

To get the value of your cluster link’s configurations using the REST API, use one of these options:

  • GET <REST-Endpoint>/clusters/<cluster-ID>/links/<link-name>/configs to get the full list of configs.
  • GET <REST-Endpoint>/clusters/<cluster-ID>/links/<link-name>/<config-name> to get the value of a specific config.
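
For example, to fetch the full config list with curl (placeholder values):

curl -u "<cluster-api-key>:<cluster-api-secret>" \
  "<REST-Endpoint>/clusters/<cluster-ID>/links/<link-name>/configs"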

List all cluster links going to the destination cluster

See List all cluster links in the destination cluster in the Confluent Cloud REST API documentation.

To view a list of all cluster links going to a Destination cluster, send a GET request to <REST-Endpoint>/clusters/<cluster-ID>/links.

Delete a cluster link

See Delete the cluster link in the Confluent Cloud REST API documentation.

To delete a cluster link, send a DELETE request to <REST-Endpoint>/clusters/<cluster-ID>/links/<link-name>.
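
For example, with curl (placeholder values):

curl -X DELETE \
  -u "<cluster-api-key>:<cluster-api-secret>" \
  "<REST-Endpoint>/clusters/<cluster-ID>/links/<link-name>"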

Important

  • When deleting a cluster link, first check that all mirror topics are in the STOPPED state. If any are in the PENDING_STOPPED state, deleting a cluster link can cause irrecoverable errors on those mirror topics due to a temporary limitation.
  • When a cluster link is deleted, the history of any STOPPED topics is also deleted. If you need the Last Source Fetch Offset or the Status Time of your promoted or failed-over mirror topics, make sure to save those before you delete the cluster link.
  • You can’t directly call the “delete cluster link” command on a cluster link that still has mirror topics on it (it will fail).
  • If you are using Confluent for Kubernetes (CFK), and you delete your cluster link resource, any mirror topics still attached to that cluster link will be forcibly converted to regular topics by use of the failover API. To learn more, see Modify a mirror topic in Cluster Linking using Confluent for Kubernetes.
  • Cluster links are automatically deleted if the destination cluster is deleted.

Configuring Cluster Link Behavior

These properties are available to control the behavior of a cluster link that goes to a Confluent Cloud destination cluster.

If you disable a feature that has filters (ACL sync, consumer offset sync, auto create mirror topics) after having it enabled initially, then any existing filters will be cleared (deleted) from the cluster link.

acl.filters

JSON string that lists the ACLs to migrate. Define the ACLs in a file, acl.filters.json, and pass the file name as an argument to --acl-filters-json-file.

  • Type: string
  • Default: “”

Note

  • Populate acl.filters by passing a JSON file on the command line that specifies the ACLs.
  • Do not include ACLs that are managed independently on the destination cluster in the filter. This prevents cluster link migration from deleting ACLs that were added specifically on the destination and should not be deleted. See also, ACL Syncing.
acl.sync.enable

Whether or not to migrate ACLs. See also, Syncing ACLs from Source to Destination Cluster and ACL Syncing.

  • Type: boolean
  • Default: false
acl.sync.ms

How often to refresh the ACLs, in milliseconds (if ACL migration is enabled). The default is 5000 milliseconds (5 seconds).

  • Type: int
  • Default: 5000
auto.create.mirror.topics.enable

Whether or not to auto-create mirror topics based on topics on the source cluster. When set to true, mirror topics are auto-created. Setting this option to false disables mirror topic creation and clears any existing filters. For details on this option, see Auto-create Mirror Topics.

auto.create.mirror.topics.filters

A JSON object with one property, topicFilters, that contains an array of filters indicating which topics should be mirrored. For details on this option, see Auto-create Mirror Topics.
cluster.link.prefix

A prefix that is applied to the names of the mirror topics. The same prefix is applied to consumer groups when consumer.group.prefix is set to true. To learn more, see “Prefixing Mirror Topics and Consumer Group Names” in Mirror Topics.

Note

The prefix cannot be changed after the cluster link is created.

  • Type: string
  • Default: null
consumer.group.prefix.enable

When set to true, the cluster link prefix (cluster.link.prefix) is also applied to the names of consumer groups. The cluster link prefix must be specified for the consumer group prefix to be applied. To learn more, see “Prefixing Mirror Topics and Consumer Group Names” in Mirror Topics.

  • Type: boolean
  • Default: false
consumer.offset.group.filters

JSON to denote the list of consumer groups to be migrated. To learn more, see Migrating Consumer Groups from Source to Destination Cluster.

  • Type: string
  • Default: “”
consumer.offset.sync.enable

Whether or not to migrate consumer offsets from the source cluster.

  • Type: boolean
  • Default: false
consumer.offset.sync.ms

How often to sync consumer offsets, in milliseconds, if enabled.

  • Type: int
  • Default: 30000
topic.config.sync.ms

How often to refresh the topic configs, in milliseconds.

  • Type: int
  • Default: 5000
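
As an illustration, a cluster link config file combining several of these properties might look like the following (the values are illustrative, and the group filter JSON mirrors the earlier REST example):

cluster.link.prefix=west-
consumer.group.prefix.enable=true
auto.create.mirror.topics.enable=true
consumer.offset.sync.enable=true
consumer.offset.sync.ms=30000
consumer.offset.group.filters={"groupFilters": [{"name": "*","patternType": "LITERAL","filterType": "INCLUDE"}]}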

Syncing ACLs from Source to Destination Cluster

Cluster Linking can sync some or all of the ACLs on the source cluster to the destination cluster. This is critical for Disaster Recovery (DR) architectures, so that when your client applications failover to the DR cluster, they have the same access that they had on their original cluster. When you create the cluster link, you can specify which ACLs to sync, filtering down by the name of the resource, principal (service account), operation, and/or permission. You can update this setting on the cluster link at any time. The cluster link will copy the initial set of matching ACLs, and then continuously sync any changes, additions, or deletions of matching ACLs from the source cluster to the destination cluster.

Note

  • In Confluent Cloud, ACL sync is only supported between two Confluent Cloud clusters that belong to the same Confluent Cloud organization. ACL sync is not supported between two Confluent Cloud clusters in different organizations, or between a Confluent Platform and a Confluent Cloud cluster. Do not include in the sync filter ACLs that are managed independently on the destination cluster. Before configuring and deploying with ACL sync, see ACL Syncing limitations in the overview for full details.
  • You can also migrate consumer group offsets from the source to destination cluster.

Prerequisites

You must have the appropriate authorizations:

  • DESCRIBE Cluster ACLs (DescribeAcls API) on the source cluster
  • ALTER Cluster ACLs (CreateAcls/DeleteAcls APIs) on the destination cluster

Configurations for ACL Sync

To enable ACL sync on a cluster link, specify these properties:

acl.sync.enable=true

This turns on ACL sync. Updating this config to false will turn off ACL sync.

If you set this up and run Cluster Linking, then later disable it, the filters will be cleared (deleted) from the cluster link.

acl.filters
A JSON object with one property, aclFilters, that contains an array of filters selecting which ACLs to sync. Examples are below.
acl.sync.ms (optional)
How frequently to sync ACLs. The default is 5000 (5 seconds). Minimum is 1000.

You can configure ACLs at the time you create the cluster link, or as an update to an existing configuration.

Here is an example of setting up a cluster link with ACL migration when you create the cluster link. Note that the link configurations (including ACL migration properties) are defined in a file, link-configs.properties, rather than specified directly on the command line.

confluent kafka link create from-aws-basic \
    --source-cluster lkc-12345 \
    --source-bootstrap-server SASL_SSL://pkc-abcde.us-west1.gcp.confluent.cloud:9092 \
    --source-api-key 1L1NK2RUL3TH3M4LL \
    --source-api-secret ******* \
    --config-file link-configs.properties

Here is an example of what you might have in link-configs.properties:

acl.sync.enable=true
acl.sync.ms=1000
acl.filters={ "aclFilters": [ { "resourceFilter": { "resourceType": "any", "patternType": "any" }, "accessFilter": { "operation": "any", "permissionType": "any" } } ] }

The following sections provide examples of how you might define the actual ACLs in acl.filters.

Examples

The following examples show how to configure the JSON for various types of ACL migration, with granularity on a topic, a resource, or a mixed set.

Migrate all ACLs from source to destination cluster

To migrate all ACLs from the source to the destination cluster, provide the following for acl.filters.

acl.filters={                   \
  "aclFilters": [               \
    {                           \
      "resourceFilter": {       \
        "resourceType": "any",  \
        "patternType": "any"    \
      },                        \
      "accessFilter": {         \
        "operation": "any",     \
        "permissionType": "any" \
      }                         \
    }                           \
  ]                             \
}

Each field in the JSON has the following options:

  • resourceType: Can be ANY (meaning any kind of resource) or topic, cluster, group, or transactionalID, which are the resources specified in Use Access Control Lists (ACLs) for Confluent Cloud.
  • patternType: Can be ANY, LITERAL, PREFIXED, or MATCH.
  • name: Name of resource.
    • If left empty, this will default to the * wildcard, which will match all names of the specified resourceType.
    • If patternType is "LITERAL" then any resources with this exact name will be included.
    • If patternType is "PREFIXED", then any resources with a name that has this as a prefix will be included.
    • If patternType is "MATCH", then any resources that match a given pattern will be included. For example, a literal, wildcard (*), or prefixed pattern of the same name and type.
  • principal: (optional) Name of the principal specified on the ACL. If left empty, this will default to the * wildcard, which will match all principals with the specified operation and permissionType.
    • For example, a service account with ID 12345 has this principal: User:sa-12345
  • operation: Can either be any or one of those specified in Operations under the “Operation” column.
  • permissionType: Can be any, allow, or deny.

Sync all ACLs specific to a topic

To sync all ACLs specific to a single topic (in this case, the topic pineapple), provide the following configurations in acl.filters:

acl.filters={                     \
  "aclFilters": [                 \
    {                             \
      "resourceFilter": {         \
        "resourceType": "topic",  \
        "patternType": "literal", \
        "name": "pineapple”       \
      },                          \
      "accessFilter": {           \
        "operation": "any",       \
        "permissionType": "any"   \
      }                           \
    }                             \
   ]                              \
 }

Sync ACLs specific to a principal or permission

To sync all ACLs specific to a service account with ID 12345, and also all ACLs that deny access to the cluster, provide the following configurations in acl.filters:

acl.filters={                         \
  "aclFilters": [                     \
    {                                 \
      "resourceFilter": {             \
        "resourceType": "any",        \
        "patternType": "any"          \
      },                              \
      "accessFilter": {               \
        "principal": "User:sa-12345", \
        "operation": "any",           \
        "permissionType": "any"       \
      }                               \
    },                                \
    {                                 \
      "resourceFilter": {             \
        "resourceType": "any",        \
        "patternType": "any"          \
      },                              \
      "accessFilter": {               \
        "operation": "any",           \
        "permissionType": "deny"      \
      }                               \
     }                                \
   ]                                  \
 }

Configuring Security Credentials

When you create a cluster link, you must give the cluster link credentials it can use to authenticate to the Source cluster. Read the Cluster Linking security overview for more information about Cluster Linking security requirements, including the required ACLs.

Configuring Security Credentials for Confluent Cloud Source Clusters

If your Source cluster is a Confluent Cloud cluster, your cluster link will need these properties configured:

  • security.protocol set to SASL_SSL
  • sasl.mechanism set to PLAIN
  • sasl.jaas.config set to org.apache.kafka.common.security.plain.PlainLoginModule required username="<source-api-key>" password="<source-api-secret>";
    • Replace <source-api-key> and <source-api-secret> with the API key to use on your Source cluster.
    • Be careful not to forget the " characters and the ending ;
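
Collected in a config file, these properties look like the following (substitute your own Source cluster API key and secret):

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<source-api-key>" password="<source-api-secret>";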

Configuring Security Credentials for Confluent Platform and Kafka Source Clusters

Since cluster links consume from the source cluster using the same mechanism as a regular Kafka consumer, the cluster link’s authentication configuration is identical to the one needed by Kafka consumers.

Note

For Confluent Cloud cluster links, if your source cluster uses SSL (also known as TLS) and you need to pass a keystore and truststore, you must do so in-line, rather than with a keystore and truststore location. To learn more, see the security section on Mutual TLS (mTLS).

The following properties are supported by cluster links going to Confluent Cloud destination clusters:

  • sasl.client.callback.handler.class
  • sasl.jaas.config
  • sasl.kerberos.kinit.cmd
  • sasl.kerberos.min.time.before.relogin
  • sasl.kerberos.service.name
  • sasl.kerberos.ticket.renew.jitter
  • sasl.kerberos.ticket.renew.window.factor
  • sasl.login.callback.handler.class
  • sasl.login.class
  • sasl.login.refresh.buffer.seconds
  • sasl.login.refresh.min.period.seconds
  • sasl.login.refresh.window.factor
  • sasl.login.refresh.window.jitter
  • sasl.mechanism
  • security.protocol
  • ssl.cipher.suites
  • ssl.enabled.protocols
  • ssl.endpoint.identification.algorithm
  • ssl.engine.factory.class
  • ssl.key.password
  • ssl.keymanager.algorithm
  • ssl.keystore.location
  • ssl.keystore.password
  • ssl.keystore.type
  • ssl.keystore.certificate.chain
  • ssl.protocol
  • ssl.provider
  • ssl.truststore.certificates

Suggested Reading

  • Unified Confluent CLI reference
  • Confluent Cloud Cluster Linking (v3) REST API
  • Manage Security for Cluster Linking on Confluent Cloud
  • Tutorials
