Configure an Azure Private Link connection to Confluent Cloud

This topic describes how to configure Azure Private Link to connect to your Confluent Cloud cluster.

The following diagram summarizes the Azure Private Link architecture with the customer VNet/subscription and the Confluent Cloud VNet/subscription.

[Image: Overview of Azure Private Link]

Prerequisite
A Dedicated Kafka cluster in Azure with Azure Private Link enabled. For more information about how to create a Dedicated cluster, see Create a Cluster in Confluent Cloud.

Follow this procedure to configure Azure Private Link for a Dedicated cluster in Azure.

  1. Register your Azure subscription with Confluent Cloud using the Confluent Cloud UI.
  2. Set up the Private Endpoint(s) to Confluent Cloud Private Link Service Alias(es) in your Azure subscription using the Azure portal.
  3. Set up Availability Zone mapped DNS records to use Azure Private Endpoints using the Azure portal.
  4. Validate connectivity to Confluent Cloud.

Requirements

  1. To use Azure Private Link with Confluent Cloud, your VNet must allow outbound internet connections so that DNS resolution, Schema Registry, and the Confluent Cloud CLI work. A quick way to verify these outbound paths is sketched after this list.
    1. DNS requests to the public DNS authority, which then traverse to the private DNS zone, must be allowed.
    2. Confluent Cloud Schema Registry is accessible only over the internet.
    3. The Confluent Cloud CLI requires internet access to authenticate with the Confluent Cloud control plane.
  2. Confluent Cloud web UI components, such as topic management, need additional configuration to function because they use cluster endpoints. To use all features of the Confluent Cloud web UI with Azure Private Link, follow this procedure.
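
If you want to confirm these outbound paths before continuing, the following is a minimal sketch you can run from a VM inside the VNet. It only checks that public DNS resolution and HTTPS egress work in general; confluent.cloud is used here simply as a reachable public Confluent domain, not as your cluster endpoint.

    # Confirm that public DNS resolution works from inside the VNet.
    nslookup confluent.cloud

    # Confirm that outbound HTTPS works (needed for Schema Registry and the
    # Confluent Cloud CLI, which are reached over the internet).
    curl -sS -o /dev/null -w 'HTTPS egress status: %{http_code}\n' https://confluent.cloud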

Warning

For limitations of the Azure Private Link feature, see Limitations.

Register your Azure subscription with Confluent Cloud

To make an Azure Private Link connection to a cluster in Confluent Cloud, you must register the Azure subscription ID you want to use. This security measure lets Confluent ensure that only your organization can initiate Azure Private Link connections to the cluster. Confluent Cloud does not accept Azure Private Link connections from a VNet that is not contained in a registered Azure subscription.

  1. Navigate to the Cluster Settings page, click the Networking tab, and click Add Subscription.
  2. Provide the Azure subscription ID for the subscription that contains the VNets you want to make the Private Link connection from, and click Save. You can find the subscription ID on your Azure subscription page in the Azure portal. Your Azure Private Link connection status will transition from “Pending” to “Active” in the Confluent Cloud web UI. You still need to configure the Private Endpoints in your VNet before you can connect to the cluster.

Set up Private Endpoints for Azure Private Link in your Azure subscription

After the connection status is “Active” in the Confluent Cloud UI, you must configure Private Endpoints in your VNet from the Azure portal to make the Private Link connection to your Confluent Cloud cluster.

Note

Confluent recommends using a Terraform configuration to set up Private Link endpoints; it automates the manual steps described below. A minimal Azure CLI sketch of the same steps is also shown after the list below.

Prerequisite

In the Confluent Cloud UI you will find the following information for your Confluent Cloud cluster under the Cluster Settings section. This information is needed to configure Azure Private Link for a Dedicated cluster in Azure.

  • Kafka Bootstrap (in the General tab)
  • DNS Domain Name (in the Networking tab)
  • Zonal DNS Subdomain Name(s) (in the Networking tab)
  • Service Alias(es) (in the Networking tab)

Create the following Private Endpoints through the Azure Private Link Center:

  1. For Confluent Cloud single availability zone clusters, create a single Private Endpoint to the Confluent Cloud Service Alias. For Confluent Cloud multi availability zone clusters, create a Private Endpoint to each of the Confluent Cloud zonal Service Aliases.
  2. Create a Private Endpoint for Confluent Cloud by clicking Create Private Endpoint.
  3. Fill in the subscription, resource group, name, and region for the Private Endpoint and click Next. The selected subscription must be the same as the one registered with Confluent Cloud.
  4. Select the Connect to an Azure resource by resource ID or alias option, paste in the Confluent Cloud Service Alias, and click Next. You can find the Confluent Cloud Service Alias(es) in the Networking tab under Cluster settings in the Confluent Cloud UI.
  5. Fill in the virtual network and subnet where the Private Endpoint is to be created.
  6. Click Review + create. Review the details and click Create to create the Private Endpoint.
  7. Wait for the Azure deployment to complete, go to the Private Endpoint resource, and verify that the Private Endpoint connection status is Approved.
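
If you prefer to script these steps instead of clicking through the portal, the following is a minimal Azure CLI sketch. It assumes your Azure CLI version accepts the Confluent Cloud Service Alias through --private-connection-resource-id (the CLI equivalent of the portal's "by resource ID or alias" option); the resource group, VNet, subnet, and connection names are placeholders.

    # Hedged sketch only; the portal steps above remain the documented path.
    # Repeat once per zonal Service Alias for a multi availability zone cluster.
    # --manual-request may not be required if your subscription is auto-approved;
    # adjust to match how the connection is approved in your environment.
    az network private-endpoint create \
      --resource-group my-confluent-rg \
      --name confluent-private-endpoint-az1 \
      --vnet-name my-vnet \
      --subnet my-subnet \
      --connection-name confluent-az1 \
      --private-connection-resource-id "<Confluent Cloud Service Alias>" \
      --manual-request true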

Set up DNS records to use Azure Private Endpoints

DNS changes must be made to ensure connectivity passes through Azure Private Link in the supported pattern. Any DNS provider that can ensure DNS is routed as follows is acceptable. Azure Private DNS Zone (used in this example) is one option.

Update DNS using Azure Private DNS Zone in the Azure console (an equivalent Azure CLI sketch follows these steps):

  1. Create the Private DNS Zone.

    1. Search for the Private DNS Zone resource in Azure portal.

    2. Click Add.

    3. Copy the DNS Domain name from the Networking tab under Cluster Settings in the Confluent Cloud UI and use it as the name for the Private DNS Zone.

      For example:

      4kgzg.centralus.azure.confluent.cloud
      

      Note

      Notice that there is no “glb” in the DNS Domain Name.

    4. Fill in the subscription, resource group, and name, and click Review + create.

    5. Wait for the Azure deployment to complete.

  2. Create DNS records.

    1. Go to the Private DNS Zone resource as created above.
    2. Click + Record Set.
    3. Create the following record set for Confluent Cloud single availability zone clusters. The IP address of the Private Endpoint can be found under its associated network interface.
      1. Set the name to “*”, the type to “A”, and the TTL to “1 Minute”, and add the IP address of the single Private Endpoint created above.
    4. Create the following record sets for Confluent Cloud multi availability zone clusters. The IP address of each Private Endpoint can be found under its associated network interface.
      1. Set the name to “*”, the type to “A”, and the TTL to “1 Minute”, and add the IP addresses of all three Private Endpoints created above.
      2. Set the name to “*.az1”, the type to “A”, and the TTL to “1 Minute”, and add the IP address of the az1 Private Endpoint created above.
      3. Set the name to “*.az2”, the type to “A”, and the TTL to “1 Minute”, and add the IP address of the az2 Private Endpoint created above.
      4. Set the name to “*.az3”, the type to “A”, and the TTL to “1 Minute”, and add the IP address of the az3 Private Endpoint created above.
  3. Attach the Private DNS Zone to the VNet(s) where clients or applications are present.

    1. Go to the Private DNS Zone resource and click Virtual network links under Settings.
    2. Click Add.
    3. Fill in the link name, subscription, and virtual network.
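
As an alternative to the portal, here is a minimal Azure CLI sketch of the same DNS setup for a multi availability zone cluster. The resource group, VNet, zone name, and IP addresses are placeholders; substitute your own DNS Domain Name (without “glb”) and your Private Endpoint IP addresses.

    # Create the Private DNS Zone named after the cluster's DNS Domain Name.
    az network private-dns zone create \
      --resource-group my-confluent-rg \
      --name 4kgzg.centralus.azure.confluent.cloud

    # Wildcard record: "*" points at all three Private Endpoint IPs.
    for ip in 10.0.1.4 10.0.1.5 10.0.1.6; do
      az network private-dns record-set a add-record \
        --resource-group my-confluent-rg \
        --zone-name 4kgzg.centralus.azure.confluent.cloud \
        --record-set-name '*' \
        --ipv4-address "$ip"
    done

    # Zonal record: "*.az1" points at the az1 endpoint; repeat for az2 and az3.
    az network private-dns record-set a add-record \
      --resource-group my-confluent-rg \
      --zone-name 4kgzg.centralus.azure.confluent.cloud \
      --record-set-name '*.az1' \
      --ipv4-address 10.0.1.4

    # The portal steps above use a 1 minute TTL; adjust the record set TTL to
    # match if you want the shorter value.

    # Link the zone to the VNet where your clients run.
    az network private-dns link vnet create \
      --resource-group my-confluent-rg \
      --zone-name 4kgzg.centralus.azure.confluent.cloud \
      --name confluent-dns-link \
      --virtual-network my-vnet \
      --registration-enabled false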

Validate Connectivity to Confluent Cloud

  1. From an instance within the VNet (or anywhere the DNS from the previous step is set up), run the following to validate that Kafka connectivity through Azure Private Link is working correctly.

    1. Set a variable with the cluster bootstrap URL.

      export BOOTSTRAP=$ConfluentCloudBootstrap
      

      For example:

      export BOOTSTRAP=lkc-222v1o-4kgzg.centralus.azure.glb.confluent.cloud
      
    2. Test connectivity to the cluster.

      openssl s_client -connect $BOOTSTRAP:9092 -servername $BOOTSTRAP -verify_hostname $BOOTSTRAP </dev/null 2>/dev/null | grep -E 'Verify return code|BEGIN CERTIFICATE' | xargs
      
    3. If the return output is -----BEGIN CERTIFICATE----- Verify return code: 0 (ok), connectivity to the bootstrap is confirmed.

    Note

    You might need to update the network security tools and firewalls to allow connectivity. If you have issues connecting after following these steps, confirm which network security systems your organization uses and whether their configurations need to be changed.

  2. Next, verify connectivity with the Confluent Cloud CLI.

    1. Log in to the Confluent Cloud CLI with your Confluent Cloud credentials.

      ccloud login
      
    2. List the clusters in your organization.

      ccloud kafka cluster list
      
    3. Select the cluster with Azure Private Link you wish to test.

      ccloud kafka cluster use ...
      

      For example:

      ccloud kafka cluster use lkc-222v1o
      
    4. Create a cluster API key to authenticate with the cluster.

      ccloud api-key create --resource ... --description ...
      

      For example:

      ccloud api-key create --resource lkc-222v1o --description "connectivity test"
      
    5. Select the API key you just created.

      ccloud api-key use ... --resource ...
      

      For example:

      ccloud api-key use R4XPKKUPLYZSHOAT --resource lkc-222v1o
      
    6. Create a test topic.

      ccloud kafka topic create test
      
    7. Start consuming events from the test topic.

      ccloud kafka topic consume test
      
    8. Open another terminal tab or window.

    9. Start a producer.

      ccloud kafka topic produce test
      
    10. Type anything into the producer tab and press Enter; press Ctrl+D or Ctrl+C to stop the producer.

    11. The consumer tab prints whatever was typed in the producer tab.

  3. You’re done! The cluster is ready for use. An optional kafkacat connectivity check is sketched below.
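
As an optional extra check, you can exercise the same path with kafkacat (also packaged as kcat). This is a sketch only; it assumes the API key and secret created above have been exported as CLOUD_KEY and CLOUD_SECRET, and that $BOOTSTRAP is still set from the openssl test.

    # List cluster metadata over the Private Link endpoint with kafkacat.
    kafkacat -b "$BOOTSTRAP:9092" -L \
      -X security.protocol=SASL_SSL \
      -X sasl.mechanisms=PLAIN \
      -X "sasl.username=$CLOUD_KEY" \
      -X "sasl.password=$CLOUD_SECRET"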

Note

DNS resolution of the bootstrap and broker hostnames for a Confluent Cloud cluster with Private Link is a two-step process:

  1. The bootstrap and broker hostnames have “glb” as a subdomain in their domain name (for example <cluster-subdomain-name>.eastus2.azure.glb.confluent.cloud). In the first step, Confluent Cloud Global DNS Resolver returns a CNAME for bootstrap and broker hostnames which doesn’t include the “glb” subdomain (for example <cluster-subdomain-name>.eastus2.azure.confluent.cloud).
  2. In the second step, the CNAME without the “glb” subdomain is resolved to private endpoint(s) IP address(es) using the Private DNS Zone that you configure by using the previous steps.

Some DNS systems, such as the Windows DNS service, cannot recursively resolve this two-step lookup within a single DNS node. In such situations, use two DNS systems: the first DNS system sets up separate forwarding rules for the domain with the “glb” subdomain and the domain without it, and forwards both to the second DNS system. The second DNS system resolves recursively by forwarding the “glb” name resolution request to the Confluent Cloud Global DNS Resolver and the resulting “non-glb” name resolution to the cloud DNS that hosts the Private DNS Zone, as shown previously. Alternatively, you can host the “non-glb” DNS records locally in the second DNS system. A quick dig check of this resolution chain is sketched below.

[Image: DNS resolution for Confluent Cloud clusters with Azure Private Link]
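
A quick way to observe this two-step resolution from a client VM is with dig (or nslookup); the sketch below reuses $BOOTSTRAP from the connectivity test above.

    # The "glb" bootstrap name should return a CNAME without "glb", and that
    # name should then resolve to your Private Endpoint IP address(es) through
    # the Private DNS Zone. The exact names depend on your cluster.
    dig +noall +answer "$BOOTSTRAP"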

Limitations

Warning

  1. Cross-region Azure Private Link connections are not supported.
  2. Azure Private Link is only available for use with Dedicated clusters.
  3. Existing Confluent Cloud clusters cannot be converted to use Azure Private Link.
  4. Fully-managed ksqlDB is not available for use with Azure Private Link clusters.
  5. Fully-managed Confluent Cloud connectors can connect to source(s) or sink(s) using a public IP. Source(s) or sink(s) in the customer network with private IP are not supported.
  6. Azure Private Link connections cannot be shared across multiple Confluent Cloud clusters. Separate Azure Private Link connections must be made to each Confluent Cloud cluster.
  7. Availability zone selection for placement of Confluent Cloud cluster and Azure Private Link service is not supported.
  8. For requirements of the Azure Private Link feature, see Requirements.
