Use Azure Private Link for Dedicated Clusters on Confluent Cloud

Azure Private Link provides one-way, secure connectivity from your VNet to Confluent Cloud, with added protection against data exfiltration. This networking option is popular because it combines security with simplicity of setup.

The following diagram summarizes the Azure Private Link architecture between the VNet or subscription and the Confluent Cloud cluster.

[Diagram: Azure Private Link architecture between the customer VNet or subscription and the Confluent Cloud cluster]

To set up Azure Private Link with Confluent Cloud:

  1. Identify a Confluent Cloud network you want to use, or set up a new Confluent Cloud network.
  2. Add a Private Link Access in Confluent Cloud.
  3. Provision Private Link endpoints in Azure.
  4. Set up DNS records in Azure.
  5. Validate connectivity.
  6. Troubleshoot broker connectivity issues if necessary.

The following tutorial video walks you through the process of configuring a private link and covers common mistakes and gotchas when setting one up with Confluent Cloud. Although the video uses AWS PrivateLink as its example, the concepts are still relevant and useful for setting up Private Link in Azure.

Requirements and considerations

Review the following requirements and considerations before you set up a Private Link in Azure with Confluent Cloud:

  • Have a Confluent Cloud network of type Private Link in Azure. If a network does not exist, see Create Confluent Cloud Network on Azure.

  • To use an Azure Private Link service with Confluent Cloud, your VNet must allow outbound Internet connections for Confluent Cloud Schema Registry, ksqlDB, and Confluent CLI to work.

    • DNS requests must be able to reach the public DNS authority and then traverse to the private DNS zone.
    • Confluent Cloud Schema Registry is accessible over the Internet.
    • Provisioning new ksqlDB instances requires Internet access. After ksqlDB instances are up and running, they are fully accessible over Azure Private Link connections.
    • Confluent CLI requires Internet access to authenticate with the Confluent Cloud control plane.
  • Confluent Cloud Console components, such as topic management, need additional configuration to function, because they use cluster endpoints. To use all features of the Confluent Cloud Console with Azure Private Link, see Use the Confluent Cloud Console with Private Networking.

  • Cross-region Azure Private Link is not supported through the Confluent Cloud Console.

    Contact Confluent Support to see if your regions are supported and to request configuration.

  • The Azure Private Link described in this document is only available for use with Dedicated clusters.

    For use with Enterprise clusters, see Use Azure Private Link for Serverless Products on Confluent Cloud.

  • Existing Confluent Cloud clusters cannot be converted to use Azure Private Link.

  • Confluent Cloud selects the availability zones for the Confluent Cloud cluster and Azure Private Link service. You cannot select availability zones.

Connectors

Fully-managed Confluent Cloud connectors can connect to data sources or sinks using a public IP address.

Fully-managed Confluent Cloud connectors can use Egress Private Link Endpoints to connect to the sources or sinks in the customer network with private IP addresses. For information about configuring an Egress Private Link Endpoint, see Use Azure Egress Private Link Endpoints for Dedicated Clusters on Confluent Cloud.

Add a Private Link Access in Confluent Cloud

Azure Private Link consists of two parts: the Private Link Service exposed by Confluent Cloud and private endpoints that you configure within your Azure subscription.

To make an Azure Private Link connection to a cluster in Confluent Cloud, first register your Azure subscription with the Confluent Cloud network so that private endpoint connections are approved automatically. This adds a Private Link Access in Confluent Cloud. If required, you can register multiple subscriptions.

  1. In the Network Management tab of the desired Confluent Cloud environment, click the For dedicated cluster tab.
  2. Click the Confluent Cloud network to which you want to add the connection.
  3. In the Ingress connections tab, click + Private Link Access.
  4. Specify the following values in the fields:
    • Name: The name for this private link access.
    • Azure subscription ID: The Azure subscription ID for the account containing the VNets you want to make the Azure Private Link connection from. The subscription ID can be found on the Subscriptions page of the Azure portal.
  5. Click Add.

HTTP POST request

POST https://api.confluent.cloud/networking/v1/private-link-accesses

Authentication

See Authentication.

Request specification

In the request specification, include the Confluent Cloud network ID, subscription, and environment, and, optionally, the display name. Update the attributes below with the correct values.

{
   "spec":{
      "display_name":"Azure-PL-CCN-1",
      "cloud":{
         "kind":"AzurePrivateLinkAccess",
         "subscription":"00000000-0000-0000-0000-000000000000"
      },
      "environment":{
         "id":"env-abc123"
      },
      "network":{
         "id":"n-000000"
      }
   }
}
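
If you prefer to call the API from a terminal, the request above can be sent with curl, authenticating with a Confluent Cloud API key and secret over HTTP basic authentication. The key, secret, and file name below are placeholders; this is a minimal sketch rather than an API reference.

# Minimal sketch: submit the request specification above with curl.
# CLOUD_API_KEY, CLOUD_API_SECRET, and private-link-access.json are placeholders.
curl --request POST \
  --url https://api.confluent.cloud/networking/v1/private-link-accesses \
  --user "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  --header "Content-Type: application/json" \
  --data @private-link-access.json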

Use the confluent network private-link access create Confluent CLI command to create an Azure private link access:

confluent network private-link access create <private-link-access-name> <flags>

The following command-specific flags are supported:

  • --network: Required. Confluent Cloud network ID.
  • --cloud: Required. The cloud provider. Set to azure.
  • --cloud-account: Required. The Azure subscription ID for the account containing the VNets you want to connect from using Azure Private Link.

You can specify additional optional CLI flags described in the Confluent CLI command reference, such as --environment.

The following is an example Confluent CLI command to create a private link access:

confluent network private-link access create my-private-link-access \
  --network n-123456 \
  --cloud azure \
  --cloud-account 1234abcd-12ab-34cd-1234-123456abcdef
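
To confirm that the Private Link Access was created, you can list the private link accesses in your environment. Whether this subcommand is available depends on your Confluent CLI version, so treat it as a sketch and check the Confluent CLI command reference:

confluent network private-link access list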

Use the confluent_private_link_access Confluent Terraform Provider resource to create a Private Link Access.

An example snippet of Terraform configuration:

resource "confluent_environment" "development" {
  display_name = "Development"
}

resource "confluent_network" "azure-private-link" {
  display_name     = "Azure Private Link Network"
  cloud            = "AZURE"
  region           = "centralus"
  connection_types = ["PRIVATELINK"]
  environment {
    id = confluent_environment.development.id
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "confluent_private_link_access" "azure" {
  display_name = "Azure Private Link Access"
  azure {
    subscription = "1234abcd-12ab-34cd-1234-123456abcdef"
  }
  environment {
    id = confluent_environment.development.id
  }
  network {
    id = confluent_network.azure-private-link.id
  }

  lifecycle {
    prevent_destroy = true
  }
}

See the following Terraform configuration examples for creating a Private Link connection:

  • Private Link with ACLs
  • Private Link with RBAC

After your Azure Private Link connection status transitions to Ready in the Confluent Cloud Console, you must configure the private endpoints in your VNet before you can connect to the cluster.

Provision Private Link endpoints in Azure

When the Private Link Access becomes ready in the Confluent Cloud Console, configure private endpoints in your VNet from the Azure portal to make the Azure Private Link connection to your Confluent Cloud cluster.

For Confluent Cloud single availability zone clusters, create a single private endpoint to the Confluent Cloud Service Alias for the Kafka cluster zone.

For Confluent Cloud multi-availability zone clusters, create three private endpoints, one endpoint to each of the Confluent Cloud zonal Service Aliases.

To set up the VNet endpoint for Azure Private Link in your Azure account:

  1. In the Confluent Cloud Console, gather the following information in Cluster Overview.

    • In Cluster Settings:

      • Bootstrap server endpoint in the Endpoints section

      • Zones in the Cloud details section

        The availability zones of the Kafka cluster.

    • In Networking > Details:

      • DNS domain

      • DNS subdomain

      • Service aliases

        For single availability zone clusters, you only need the Service Alias of the Kafka cluster zone (Zones) that you retrieved in Cluster Settings > Cloud details above.

  2. In the Azure Private Link Center, click Create private endpoint.

    Specify the required fields as described in Create a private endpoint.

    • Selected Subscription must be the subscription that you registered with Confluent Cloud in Add a Private Link Access in Confluent Cloud. You can verify the subscription ID on the Subscriptions page of the Azure portal.
    • Name: Specify an instance name.
    • Network Interface Name: Use the default given.
    • Region: Select the same region as the Confluent Cloud Private Link Access.
  3. Click Next: Resource.

    • Connection method: Select Connect to an Azure resource by resource ID or alias.

    • Resource ID or alias: Specify the Confluent Cloud Service Alias, and click Next.

      You can find the Confluent Cloud Service Aliases in the Confluent Cloud Console, on the Networking tab under Cluster settings.

      For a Confluent Cloud single availability zone cluster, use the Service Alias for the Kafka cluster availability zone.


      For a Confluent Cloud multi-availability zone cluster, use the Confluent Cloud zonal Service Alias for each private endpoint. Note that you need three private endpoints.

  4. Click Next: Virtual Network.

    • Virtual network: The virtual network where the private endpoint is to be created.

    • Subnet: The subnet where the private endpoint is to be created.

      Select a subnet in the same region as your Confluent Cloud network, as shown on the Networking tab in the Confluent Cloud Console.

    • Network policy for private endpoints: Select the organization-approved or mandated policy. The default is Disabled.

  5. Click Review + create. Review the details and click Create to create the private endpoint.

  6. Wait for the Azure deployment to complete, go to the private endpoint resource, and verify the private endpoint connection status is Approved.
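
If you prefer to verify the approval status from a terminal, you can query the private endpoint with the Azure CLI. The resource group and endpoint names below are placeholders, and the exact property path can vary; this is a sketch, not an authoritative reference.

# Sketch: check the connection state of a private endpoint (placeholder names).
# If the list is empty, inspect manualPrivateLinkServiceConnections instead.
az network private-endpoint show \
  --resource-group my-resource-group \
  --name confluent-endpoint-az1 \
  --query "privateLinkServiceConnections[].privateLinkServiceConnectionState.status" \
  --output tsv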

Set up DNS records in Azure

You must update your DNS records to ensure that connectivity passes through Azure Private Link in the supported pattern. Any DNS provider that routes DNS as described below is acceptable; Azure Private DNS Zone (used in this example) is one option.

DNS resolution options

For Azure Private Link Confluent Cloud networks, you can use public or private DNS resolution:

  • Private DNS resolution is the recommended option and guarantees fully private DNS resolution.
  • Public DNS resolution is useful when you want to ensure that Confluent deployments are homogeneous and conform to the DNS configurations for your networks.

DNS resolution is selected when you create a Confluent Cloud network, and it cannot be changed after the network is created. See Create a Confluent Cloud network.

Public DNS resolution

Public DNS resolution (also known as chased private in Confluent Cloud) is used for the bootstrap server and broker hostnames of a Confluent Cloud cluster that uses Azure Private Link. When public resolution is used, clusters in this network require both public and private DNS to resolve cluster endpoints.

Only the Confluent Global DNS Resolver (GLB) endpoints are advertised.

Public DNS resolution is a two-step process (see the dig sketch after these steps):

  1. The Confluent Cloud Global DNS Resolver removes the glb subdomain and returns a CNAME for your bootstrap and broker hostnames.

    Example: $lkc-id-$nid.$region.$cloud.glb.confluent.cloud

    CNAME returned: $lkc-id.$nid.$region.$cloud.confluent.cloud

  2. The CNAME resolves to your VNet private endpoints based on the Private DNS Zone configuration.
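
The following dig sketch illustrates the chain from a VM inside your VNet. The hostnames and IP address are hypothetical placeholders; substitute your own bootstrap endpoint.

# Hypothetical hostnames and IP, shown only to illustrate the two-step resolution.
dig +noall +answer lkc-123abc-4kgzg.centralus.azure.glb.confluent.cloud
# lkc-123abc-4kgzg.centralus.azure.glb.confluent.cloud. 60 IN CNAME lkc-123abc.4kgzg.centralus.azure.confluent.cloud.
# lkc-123abc.4kgzg.centralus.azure.confluent.cloud.     60 IN A     10.0.0.2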

Warning

Some DNS systems, such as the Windows DNS service, cannot recursively resolve this two-step process within a single DNS node. To avoid the issue, use private DNS resolution.

Private DNS resolution

When private DNS resolution is used, clusters in this network require only private DNS to resolve cluster endpoints. Only non-GLB endpoints are advertised.

Create a DNS zone and DNS records

You must create DNS entries for Private Link regardless of the DNS resolution option you selected when creating the Confluent Cloud network. These DNS records map the Confluent Cloud DNS names to the Azure private endpoint IP addresses. The following procedure uses the Azure portal; a scripted Azure CLI sketch of the same steps follows the procedure.

To update DNS resolution using Azure Private DNS Zone:

  1. Browse to the Azure portal.

  2. Create a Private DNS zone for the Private Link.

    1. Search for the Private DNS zone resource in the Azure portal.

    2. Click + Create.

    3. Specify the required fields as described in Create a DNS zone.

      • Name: Use the Confluent Cloud DNS domain value from the Networking tab under Cluster Overview in the Confluent Cloud Console.

        Note that there is no glb in the DNS Domain name.

        If the Confluent Cloud DNS Domain name includes the logical cluster ID (which starts with lkc-), omit the logical cluster ID when specifying the Private DNS Zone name. For example, a DNS Domain name shown as lkc-123abc-4kgzg.centralus.azure.confluent.cloud in Confluent Cloud should be converted to 4kgzg.centralus.azure.confluent.cloud when used as the Private DNS Zone name.

    4. Click Review + create.

    5. Wait for the Azure deployment to complete.

  3. (Optional) To identify the correct mapping of DNS zone records to zonal endpoints for Confluent Cloud, you can run the DNS helper script from your VM instance within the VNet.

    You specify the Azure resource group name and one or more private endpoints as arguments. You can find these values on the networking detail page in the Confluent Cloud Console.

    The output contains the Azure domain names, the record types, and the DNS values that you can input when you create DNS records. For example:

    ./dns-endpoints.sh my-resource-group private-endpoint-1 private-endpoint-2 private-endpoint-3
    
    *              A    10.0.0.2 10.0.0.3 10.0.0.4
    *.az2          A    10.0.0.2
    *.az1          A    10.0.0.3
    *.az3          A    10.0.0.4
    
  4. Add DNS records.

    Create required DNS records as described in Manage DNS records and record sets by using the Azure portal.

    1. Go to the Private DNS Zone resource as created above.

    2. Click + Record set.

    3. Create the record set for bootstrap endpoint resolution.

      The bootstrap DNS record should contain the IP addresses of all three zonal endpoints, so that it resolves to each of the endpoints that you created in Provision Private Link endpoints in Azure.

      • Name: *
      • Type: A
      • TTL: 1
      • TTL unit: Minute
      • IP address: The IP addresses of the Azure private endpoints.

      The IP address of an Azure private endpoint can be found under its associated Network interface.

    4. Create three zonal DNS records, one zonal DNS record per availability zone of the Confluent Cloud network.

      The IP address of the private endpoint can be found under its associated network interface.

      • Zonal DNS record #1
        • Name: *.az1
        • Type: A
        • TTL: 1
        • TTL unit: Minute
        • IP address: IP address of the az1 private endpoint as created above.
      • Zonal DNS record #2
        • Name: *.az2
        • Type: A
        • TTL: 1
        • TTL unit: Minute
        • IP address: IP address of the az2 private endpoint as created above.
      • Zonal DNS record #3
        • Name: *.az3
        • Type: A
        • TTL: 1
        • TTL unit: Minute
        • IP address: IP address of the az3 private endpoint as created above.

    Note

    In Confluent Cloud with private linking, Kafka broker names you retrieve from the metadata are not static. Do not hardcode the broker names in DNS records.

  5. Attach the Private DNS Zone to the VNets where clients or applications are present.

    1. Go to the Private DNS Zone resource and click Virtual network links under Settings.
    2. Click Add.
    3. Fill in Link name, Subscription, and Virtual network.
    4. Click OK.
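
The portal steps above can also be scripted. The following Azure CLI sketch mirrors steps 2, 4, and 5 using the placeholder zone name, endpoint IPs, and virtual network from the examples in this document; adjust the values to match your environment.

# Sketch of the equivalent Azure CLI steps (placeholder names and IPs).
# TTLs are left at the CLI default here; the portal steps above use a one-minute TTL.
RG=my-resource-group
ZONE=4kgzg.centralus.azure.confluent.cloud   # DNS domain without the logical cluster ID

# Create the Private DNS zone.
az network private-dns zone create --resource-group $RG --name $ZONE

# Wildcard bootstrap record containing all zonal endpoint IPs.
for IP in 10.0.0.2 10.0.0.3 10.0.0.4; do
  az network private-dns record-set a add-record --resource-group $RG --zone-name $ZONE \
    --record-set-name '*' --ipv4-address $IP
done

# One zonal record per availability zone (mapping taken from the helper script output above).
az network private-dns record-set a add-record --resource-group $RG --zone-name $ZONE \
  --record-set-name '*.az1' --ipv4-address 10.0.0.3
az network private-dns record-set a add-record --resource-group $RG --zone-name $ZONE \
  --record-set-name '*.az2' --ipv4-address 10.0.0.2
az network private-dns record-set a add-record --resource-group $RG --zone-name $ZONE \
  --record-set-name '*.az3' --ipv4-address 10.0.0.4

# Link the zone to the VNet where your clients run.
az network private-dns link vnet create --resource-group $RG --zone-name $ZONE \
  --name confluent-dns-link --virtual-network my-vnet --registration-enabled false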

Your cluster is now ready for use. You can validate DNS resolution and connectivity with the sketch below. If you encounter any problems, see Troubleshoot connectivity issues.
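
The following sketch checks DNS resolution and TLS connectivity from a VM inside the linked VNet. The bootstrap hostname is a hypothetical placeholder; use the value shown in Cluster Settings > Endpoints.

# Confirm that the bootstrap hostname resolves to the private endpoint IPs.
nslookup lkc-123abc-4kgzg.centralus.azure.glb.confluent.cloud
# Confirm that a TLS session can be established on the Kafka bootstrap port.
openssl s_client -connect lkc-123abc-4kgzg.centralus.azure.glb.confluent.cloud:9092 </dev/null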

Related content

  • For an overview of Azure Private Link, see Setting Up Secure Networking in Confluent with Azure Private Link.

    To get started with Azure Private Link in Confluent Cloud, use the currently supported steps described in this document.
