Confluent Documentation

Use Azure Private Link with Confluent Cloud

Azure Private Link allows for one-way secure connection access from your VNet to Confluent Cloud with an added protection against data exfiltration. This networking option is popular for its unique combination of security and simplicity of setup.

The following diagram summarizes the Azure Private Link architecture between the VNet or subscription and the Confluent Cloud cluster.

Azure Private Link architecture between customer VNet or subscription and Confluent Cloud cluster

To set up to use Azure Private Link with Confluent Cloud:

  1. Register your Azure subscription with Confluent Cloud.
  2. Create an Azure Private Link connection to Confluent Cloud.
  3. Set up DNS records to use Azure Private Endpoints.
  4. Validate connectivity to Confluent Cloud.

Requirements and considerations

  • Have a Confluent Cloud network (CCN) of type Private Link in Azure available. If a network does not exist, see Confluent Cloud Network on Azure.
  • To use an Azure Private Link service with Confluent Cloud, your VNet must allow outbound internet connections for Confluent Cloud Schema Registry, ksqlDB, and Confluent CLI to work.
    • DNS requests that traverse from the public DNS authority to the private DNS zone must be allowed.
    • Confluent Cloud Schema Registry is accessible over the internet.
    • Provisioning new ksqlDB instances requires internet access. After ksqlDB instances are up and running, they are fully accessible over Azure Private Link connections.
    • Confluent CLI requires internet access to authenticate with the Confluent Cloud control plane.
  • Confluent Cloud Console components, such as topic management, need additional configuration to function as they use cluster endpoints. To use all features of the Confluent Cloud Console with Azure Private Link, see Use Confluent Cloud with Private Networking.
  • Cross-region Azure Private Link connections are not supported.
  • Azure Private Link is only available for use with Dedicated clusters.
  • Existing Confluent Cloud clusters cannot be converted to use Azure Private Link.
  • Availability zone selection for placement of the Confluent Cloud cluster and Azure Private Link service is not supported.

Connectors

Fully-managed Confluent Cloud connectors can connect to data sources or sinks using a public IP address. Sources or sinks in the customer network with private IP addresses are not supported.

Register your Azure subscription with Confluent Cloud

Register your Azure subscription with the Confluent Cloud network for automatic approval of private endpoint connections to the Confluent Cloud network. If required, you can register multiple subscriptions.

  1. Open the Confluent Cloud Console. In the Network Management tab, click the Confluent Cloud network that you want to add the connection to.
  2. Click + Private Link Access.
  3. Enter the Azure subscription ID for the account containing the VNets you want to make the Azure Private Link connection from. You can find the subscription ID on your Azure subscription page in the Azure Portal.
  4. Click Save.

HTTP POST request

POST https://api.confluent.cloud/networking/v1/private-link-accesses

Authentication

See Authentication.

Request specification

In the request specification, include the Confluent Cloud network ID, subscription, and environment and, optionally, a display name. Update the attributes below with the correct values.

{
   "spec":{
      "display_name":"Azure-PL-CCN-1",
      "cloud":{
         "kind":"AzurePrivateLinkAccess",
         "subscription":"00000000-0000-0000-0000-000000000000"
      },
      "environment":{
         "id":"env-abc123"
      },
      "network":{
         "id":"n-000000"
      }
   }
}
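As a sketch, the request above can be sent with any plain HTTP client. The following Python example assembles the body from the attributes listed earlier and issues the POST with HTTP basic auth; the function names and the CONFLUENT_CLOUD_API_KEY/CONFLUENT_CLOUD_API_SECRET environment variable names are illustrative, not part of the API.

```python
import base64
import json
import os
import urllib.request

API_URL = "https://api.confluent.cloud/networking/v1/private-link-accesses"

def build_private_link_access(display_name, subscription_id, environment_id, network_id):
    """Assemble the PrivateLinkAccess request body shown above."""
    return {
        "spec": {
            "display_name": display_name,
            "cloud": {
                "kind": "AzurePrivateLinkAccess",
                "subscription": subscription_id,
            },
            "environment": {"id": environment_id},
            "network": {"id": network_id},
        }
    }

def post_private_link_access(body):
    """POST the body with HTTP basic auth using a Cloud API key and secret."""
    key = os.environ["CONFLUENT_CLOUD_API_KEY"]        # illustrative variable name
    secret = os.environ["CONFLUENT_CLOUD_API_SECRET"]  # illustrative variable name
    token = base64.b64encode(f"{key}:{secret}".encode()).decode()
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(request)
```

Replace the placeholder IDs with your own environment, network, and subscription values before sending the request.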

Your Azure Private Link connection status will transition from “Pending” to “Active” in the Confluent Cloud Console. You still need to configure the Private Endpoints in your VNet before you can connect to the cluster.

Note the Private Link Service Endpoint; you will use it to create the Azure Private Link connection from your VNet to the Confluent Cloud cluster. This URL is also provided later in the setup workflow.

Create an Azure Private Link connection to Confluent Cloud

After the connection status is “Active” in the Confluent Cloud Console, configure Private Endpoints in your VNet from the Azure Portal to make the Azure Private Link connection to your Confluent Cloud cluster.

Note

Confluent recommends using a Terraform configuration for setting up Private Link endpoints. This configuration automates the manual steps described below.

For Confluent Cloud single availability zone clusters, create a single Private Endpoint to the Confluent Cloud Service Alias for the Kafka cluster zone.

For Confluent Cloud multi-availability zone clusters, create a Private Endpoint to each of the Confluent Cloud zonal Service Aliases.

To set up the VNet Endpoint for Azure Private Link in your Azure account:

  1. In the Confluent Cloud Console in Cluster Overview, gather the following information:

    • In Cluster Settings:

      • Bootstrap server endpoint
      • For single availability zone clusters, the availability zone of the Kafka cluster (Zones in the Cloud details section)
    • In Networking > Details:

      • DNS domain

      • Zonal DNS subdomain

      • Service Aliases

        For single availability zone clusters, you only need the Service Alias of the Kafka cluster zone you retrieved in Cluster Settings above.

  2. In the Azure Private Link Center, click Create Private Endpoint.

  3. Fill in the subscription, resource group, name, and region for the virtual endpoint and click Next. The selected subscription must be the same as the one registered with Confluent Cloud.

  4. Select the Connect to an Azure resource by resource ID or alias option, paste in the Confluent Cloud Service Alias and click Next. You can find the Confluent Cloud Service Aliases in the Networking tab under Cluster settings in the Confluent Cloud Console.

  5. Fill in virtual network and subnet where the Private Endpoint is to be created.

  6. Click Review + create. Review the details and click Create to create the Private Endpoint.

  7. Wait for the Azure deployment to complete, then go to the Private Endpoint resource and verify that the Private Endpoint connection status is Approved.

Set up DNS records to use Azure Private Endpoints

You must update your DNS records to ensure connectivity passes through Azure Private Link in the supported pattern. Any DNS provider that can ensure DNS is routed as follows is acceptable. Azure Private DNS Zone (used in this example) is one option.

DNS resolution options

For Azure Private Link Confluent Cloud networks, you can use the public or private DNS resolution:

  • The private DNS resolution is the recommended option and guarantees fully private DNS resolution.
  • The public DNS resolution is useful when you want to ensure that Confluent deployments are homogeneous and conform to DNS configurations for your networks.

DNS resolution is selected when you create a Confluent Cloud network, and it cannot be modified after creating the Confluent Cloud network. See Create a Confluent Cloud network.

Public DNS resolution

The public (also known as chased private in Confluent Cloud) DNS resolution is used for the bootstrap server and broker hostnames of a Confluent Cloud cluster that is using Azure Private Link. When public resolution is used, the clusters in this network require both public and private DNS to resolve cluster endpoints.

Only the Confluent Global DNS Resolver (GLB) endpoints are advertised.

The public DNS resolution performs the following two-step process:

  1. The Confluent Cloud Global DNS Resolver removes the glb subdomain and returns a CNAME for your bootstrap and broker hostnames.

    Example: $lkc-id-$nid.$region.$cloud.glb.confluent.cloud

    CNAME returned: $lkc-id.$nid.$region.$cloud.confluent.cloud

  2. The CNAME resolves to your VNet private endpoints based on the Private DNS Zone configuration.

Warning

Some DNS systems, such as the Windows DNS service, cannot recursively resolve the two-step process described above within a single DNS node. To avoid this issue, use private DNS resolution.
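Step 1 of the two-step resolution above can be illustrated with a small helper that rewrites a GLB hostname into the CNAME the Global DNS Resolver returns. This is only a sketch of the name transformation; in practice the rewrite is performed by DNS, not by application code, and the helper assumes the hostname follows the $lkc-id-$nid.$region.$cloud.glb.confluent.cloud pattern shown above.

```python
def strip_glb(hostname):
    """Rewrite a GLB hostname to the CNAME returned by the Global DNS Resolver.

    The glb label is dropped, and the first label's trailing "-<nid>"
    suffix becomes its own "<nid>" label, matching the pattern:
    $lkc-id-$nid.$region.$cloud.glb.confluent.cloud
      -> $lkc-id.$nid.$region.$cloud.confluent.cloud
    """
    labels = hostname.split(".")
    # Drop the "glb" label; the remaining name resolves via the
    # Private DNS Zone to the VNet private endpoints (step 2).
    rest = [label for label in labels[1:] if label != "glb"]
    lkc_id, _, nid = labels[0].rpartition("-")
    return ".".join([lkc_id, nid] + rest)
```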

Private DNS resolution

When the private DNS resolution is used, the clusters in this network only require private DNS to resolve cluster endpoints. Only non-GLB endpoints are advertised.

Tip

To identify the CNAME DNS zone records to correctly map to zonal endpoints for Confluent Cloud, you can run the DNS helper script.

Configure DNS zones

DNS entries need to be created for Private Link irrespective of the DNS resolution option you selected when creating the Confluent Cloud network.

To update DNS resolution using Azure Private DNS Zone in the Azure console:

  1. Create the Private DNS Zone.

    1. Search for the Private DNS Zone resource in the Azure Portal.

    2. Click Add.

    3. Copy the DNS Domain name from the Networking tab under Cluster Settings in the Confluent Cloud Console and use it as the name for the Private DNS Zone.

      For example:

      4kgzg.centralus.azure.confluent.cloud
      

      Note that there is no glb in the DNS Domain name.

      If the Confluent Cloud DNS Domain name includes the logical cluster ID (which starts with lkc-), omit the logical cluster ID when specifying the Private DNS Zone name. For example, the DNS Domain name shown as lkc-123abc-4kgzg.centralus.azure.confluent.cloud in Confluent Cloud becomes 4kgzg.centralus.azure.confluent.cloud when used as the Private DNS Zone name.

    4. Fill in the subscription, resource group, and name, and click Review + create.

    5. Wait for the Azure deployment to complete.

  2. Create DNS records.

    1. Go to the Private DNS Zone resource as created above.
    2. Click + Record Set.
    3. Create the following record set for Confluent Cloud single availability zone clusters. The IP address of the Private Endpoint can be found under its associated network interface.
      1. Select name as “*”, type as “A”, TTL as “1 Minute” and add IP address of the single virtual endpoint as created above.
    4. Create the following record sets for Confluent Cloud multi-availability zone clusters. The IP address of the Private Endpoint can be found under its associated network interface.
      1. Select name as “*”, type as “A”, TTL as “1 Minute” and add IP addresses of all three virtual endpoints as created above.
      2. Select name as “*.az1”, type as “A”, TTL as “1 Minute” and add IP address of the az1 virtual endpoint as created above.
      3. Select name as “*.az2”, type as “A”, TTL as “1 Minute” and add IP address of the az2 virtual endpoint as created above.
      4. Select name as “*.az3”, type as “A”, TTL as “1 Minute” and add IP address of the az3 virtual endpoint as created above.
  3. Attach the Private DNS Zone to the VNets where clients or applications are present.

  4. Go to the Private DNS Zone resource and click Virtual network links under settings.

    1. Click Add.
    2. Fill in link name, subscription and virtual network.
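The record sets described above can be summarized programmatically. The following sketch emits the name/type/TTL/IP tuples to create for either cluster type; the zone labels and IP addresses in the docstring are hypothetical examples, and the real IPs come from each Private Endpoint's associated network interface.

```python
def private_dns_records(endpoint_ips_by_zone):
    """Return the A records to create in the Private DNS Zone.

    endpoint_ips_by_zone: mapping of zone label -> Private Endpoint IP,
    for example {"az1": "10.0.1.4", "az2": "10.0.2.4", "az3": "10.0.3.4"}
    for a multi-availability zone cluster, or a single entry for a
    single availability zone cluster.
    """
    ips = list(endpoint_ips_by_zone.values())
    # Wildcard record covering the bootstrap and all broker hostnames.
    records = [("*", "A", "1 Minute", ips)]
    if len(endpoint_ips_by_zone) > 1:
        # One zonal wildcard per availability zone, e.g. "*.az1".
        for zone, ip in endpoint_ips_by_zone.items():
            records.append((f"*.{zone}", "A", "1 Minute", [ip]))
    return records
```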

Validate connectivity to Confluent Cloud

  1. From an instance within the VNet, or anywhere the DNS is set up, run the following to validate Kafka connectivity through Azure Private Link is working correctly.

    1. Set an environment variable with the cluster bootstrap URL.

      export BOOTSTRAP=<bootstrap-server-url>
      

      The bootstrap URL displayed in the Confluent Cloud Console includes the port (9092). The BOOTSTRAP value should be the full hostname without the port, so that you can run the openssl s_client -connect <host>:<port> command with the required values.

      For example:

      export BOOTSTRAP=lkc-222v1o-4kgzg.centralus.azure.glb.confluent.cloud
      
    2. Test connectivity to your cluster by running the openssl s_client -connect <host>:<port> command, specifying the $BOOTSTRAP environment variable for the <host> value and 9092 for the <port> value.

      openssl s_client -connect $BOOTSTRAP:9092 -servername $BOOTSTRAP -verify_hostname $BOOTSTRAP </dev/null 2>/dev/null | grep -E 'Verify return code|BEGIN CERTIFICATE' | xargs
      

      To run the openssl s_client -connect command, the -connect option requires that you specify the host and the port number. For details, see the -connect option in the openssl s_client documentation.

    3. If the output returned is -----BEGIN CERTIFICATE----- Verify return code: 0 (ok), then connectivity to the bootstrap is confirmed.

      Note

      You might need to update the network security tools and firewalls to allow connectivity. If you have issues connecting after following these steps, confirm which network security systems your organization uses and whether their configurations need to be changed. If you still have issues, run the debug connectivity script and provide the output to Confluent Support for assistance with your Azure Private Link setup.

    4. Log in to Confluent Cloud with the Confluent CLI using your Confluent Cloud credentials.

      confluent login
      
    5. List the clusters in your organization.

      confluent kafka cluster list
      
    6. Select the cluster with Azure Private Link you wish to test.

      confluent kafka cluster use ...
      

      For example:

      confluent kafka cluster use lkc-222v1o
      
    7. Create a cluster API key to authenticate with the cluster.

      confluent api-key create --resource ... --description ...
      

      For example:

      confluent api-key create --resource lkc-222v1o --description "connectivity test"
      
    8. Select the API key you just created.

      confluent api-key use ... --resource ...
      

      For example:

      confluent api-key use R4XPKKUPLYZSHOAT --resource lkc-222v1o
      
    9. Create a test topic.

      confluent kafka topic create test
      
    10. Start consuming events from the test topic.

      confluent kafka topic consume test
      
    11. Open another terminal tab or window.

    12. Start a producer.

      confluent kafka topic produce test
      

      Type anything into the produce tab and hit Enter; press Ctrl+D or Ctrl+C to stop the producer.

    13. The tab running consume will print what was typed in the tab running produce.

      The cluster is ready for use.
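The first two validation steps (deriving the host from the Console's bootstrap value and probing it with openssl) can be scripted. The sketch below only splits the bootstrap value and builds the probe command string; the function names are illustrative, and running the resulting command assumes openssl is on your PATH.

```python
import shlex

def split_bootstrap(bootstrap_with_port):
    """Split the Console's bootstrap value ("host:9092") into host and port."""
    host, _, port = bootstrap_with_port.rpartition(":")
    return host, int(port)

def openssl_check_command(host, port=9092):
    """Build the openssl probe used in the validation steps above."""
    quoted = shlex.quote(host)
    return (
        f"openssl s_client -connect {quoted}:{port} "
        f"-servername {quoted} -verify_hostname {quoted}"
    )
```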

Related content

  • For an overview of Azure Private Link, see Setting Up Secure Networking in Confluent with Azure Private Link.

    To get started with Azure Private Link in Confluent Cloud, use the currently supported steps described in this document.
