Use Azure Private Link for Serverless Products on Confluent Cloud

Confluent Cloud supports private connectivity for serverless products, such as Enterprise Kafka clusters and Confluent Cloud for Apache Flink®. With Private Link Attachment, your Enterprise cluster or Flink resources are accessible only from tenant-specific private endpoints; public access is blocked.

Note

Private Link Attachment also enables access to the Flink API over private networking, so you can issue Flink SQL queries and retrieve results through the associated Private Link connection. All data movement between Flink queries and Kafka clusters configured with private networking occurs over a secure private path within Confluent Cloud.

Confluent Cloud uses the following private networking resources for Enterprise clusters. These resources are regional and do not have a mapping to specific availability zones.

Private Link Attachment

The Private Link Attachment (PrivateLinkAttachment) resource represents a reservation to establish a Private Link connection from your Virtual Network (VNet) to regional services in a Confluent Cloud environment.

A Private Link Attachment belongs to an Environment in the Confluent resource hierarchy.

This resource is referred to as a gateway in the Confluent Cloud Console.

Private Link Attachment Connection

A Private Link Attachment Connection (PrivateLinkAttachmentConnection) is a registration of VNet private endpoints that are allowed to connect to Confluent Cloud. A Private Link Attachment Connection belongs to a specific Private Link Attachment.

This resource is referred to as an access point in the Confluent Cloud Console.

You can use the Confluent Cloud Console, the Confluent REST API, the Confluent CLI, or Terraform to establish Private Link connectivity for serverless products, such as Enterprise Kafka clusters or Flink.

The high-level workflow is:

  1. In Confluent Cloud, create a Private Link Attachment.

  2. In Azure, create a private endpoint to be associated with the Private Link Attachment service.

    If you are using the Confluent Cloud Console, this step is merged into the next step and appears as the first step of connection creation.

  3. In Confluent Cloud, create a Private Link Attachment Connection.

  4. Set up DNS resolution.

  5. Create a Kafka client in your VNet using the bootstrap endpoint of your Enterprise Kafka cluster. The Kafka client can run in a virtual machine or similar compute infrastructure.

  6. Validate that produce and consume traffic succeeds.

    Once you create a Private Link Attachment resource and establish a Private Link, you can securely send and receive traffic through the Private Link between your VNet and Confluent Cloud.
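For steps 5 and 6, the Kafka client in your VNet connects over the private path using the Enterprise cluster's bootstrap endpoint. The following is a minimal sketch of the client configuration in Python; the bootstrap endpoint and credential values are placeholder assumptions, and the dict would be passed to `confluent_kafka.Producer` or `Consumer`:

```python
# Placeholder values (assumptions) -- use your cluster's private bootstrap
# endpoint and an API key scoped to that cluster.
config = {
    "bootstrap.servers": "lkc-abc123-xxxxx.centralus.azure.private.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<cluster-api-key>",
    "sasl.password": "<cluster-api-secret>",
}

# For example:
#   from confluent_kafka import Producer
#   producer = Producer(config)
#   producer.produce("test-topic", value=b"hello")
#   producer.flush()
print(sorted(config))
```

If DNS resolution is set up correctly, the bootstrap hostname resolves to your tenant-specific private endpoint IP rather than a public address.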

Requirements and considerations

  • You can connect to only one Confluent Cloud environment from a single VNet or on-premises network unless you use multiple DNS servers. Multiple connections through a centralized DNS resolver do not work because domain names overlap across Confluent Cloud environments.

    For the workaround to configure cross-environment queries in Flink, see Configure cross-environment queries.

  • You can connect to only one region in a specific environment from a single VNet or from an on-premises network.

  • For the regions supported for Private Link Attachment on Azure, see Cloud Providers and Regions for Confluent Cloud.

  • Confluent Cloud Console components, like topic management and Flink workspaces, require additional configuration to function because they use cluster endpoints.

    For information about using Flink with Azure Private Link, see Private Networking with Confluent Cloud for Apache Flink.

    To use all features of the Confluent Cloud Console with Azure Private Link, see Use the Confluent Cloud Console with Private Networking.

Create a Private Link Attachment

The Private Link Attachment resource, referred to as a gateway in the Confluent Cloud Console, represents a reservation to establish a Private Link connection from regional services in your Virtual Network (VNet) to a Confluent Cloud environment.

When you create a Private Link Attachment in an environment and in a region, the Private Link Attachment resource provides connectivity to all Enterprise Kafka clusters within the environment for the specific cloud region.

In the Confluent Cloud Console, Private Link Attachment resources are labeled as gateways.

  1. In the Confluent Cloud Console, select an environment for the Private Link Attachment.

  2. In the Network management tab in the environment, click For serverless products.

  3. Click + Add gateway configuration.

  4. Click the PrivateLink card to select the type of gateway configuration.

  5. On the From your VPC or VNet to Confluent Cloud pane, click + Create configuration.

  6. On the Configure gateway sliding panel, enter the following information.

    • Gateway name
    • Cloud provider: Click Microsoft Azure.
    • Region
  7. Click Submit.

  8. You can continue to create an access point for the Ingress Private Link connection.

    Alternatively, you can create an access point at a later time by navigating to this gateway in the Network management tab.

The Private Link Attachment is provisioned and moves to the Waiting for connection state.

A Private Link Attachment can be in one of the following states:

  • WAITING FOR CONNECTION: The Private Link Attachment is waiting for a connection to be created.
  • READY: Azure Private Link connectivity is ready to be used.
  • EXPIRED: A valid connection has not been provisioned within the allotted time. A new Private Link Attachment must be provisioned.
  1. Send a request to create a Private Link Attachment resource:

    REST request

    POST https://api.confluent.cloud/networking/v1/private-link-attachments
    

    REST request body

    {
      "spec":
      {
        "display_name": "<name of this resource>",
        "cloud": "<provider type>",
        "region": "<region>",
        "environment":
        {
          "id": "<environment id>"
        }
      }
    }
    

    In the REST response, status.phase should be set to PROVISIONING.
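The request above can also be scripted. The following is a minimal sketch using only the Python standard library; the display name, region, and environment ID are placeholders taken from the examples in this document, and the API key and secret are assumed to be a Confluent Cloud API key pair with the appropriate permissions:

```python
import base64
import json
import urllib.request

API_URL = "https://api.confluent.cloud/networking/v1/private-link-attachments"

def build_payload(display_name, region, environment_id, cloud="AZURE"):
    """Build the request body for creating a Private Link Attachment."""
    return {
        "spec": {
            "display_name": display_name,
            "cloud": cloud,
            "region": region,
            "environment": {"id": environment_id},
        }
    }

def create_attachment(api_key, api_secret, payload):
    """POST the payload with HTTP basic auth (Cloud API key and secret)."""
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Placeholder values; substitute your own before calling create_attachment.
payload = build_payload("prod-platt", "centralus", "env-1234nw")
```

In the returned JSON, check `status.phase` for `PROVISIONING` as described above.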

  2. Check the status of the new Private Link Attachment:

    REST request

    GET https://api.confluent.cloud/networking/v1/private-link-attachments/<platt-id>
    

    REST response example

    {
      "status":
      {
        "phase": "WAITING_FOR_CONNECTIONS",
        "error_code": "",
        "error_message": "",
        "cloud":
        {
          "kind": "AzurePrivateLinkAttachmentStatus",
          "private_link_service":{
              "private_link_service_alias": "<pls-plt-abcdef-az1.f5aedb5a-5830-4ca6-9285-e5c81ffca2cb.centralus.azure.privatelinkservice>",
              "private_link_service_resource_id": "</subscriptions/12345678-9012-3456-7890-123456789012/resourceGroups/rg-abcdef/providers/Microsoft.Network/privateLinkServices/pls-plt-abcdef>"
            }
        }
      }
    }
    

    status.phase is WAITING_FOR_CONNECTIONS because no Private Link Attachment Connection has been associated with this Private Link Attachment resource yet.

    The status.cloud object has information about the private_link_service_alias and private_link_service_resource_id that you must connect your Private Link Attachment endpoint to.
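The two values you need from this response can be extracted programmatically. A minimal sketch, using a response body that mirrors the example above:

```python
import json

def extract_service_info(response):
    """Return (phase, alias, resource_id) from a private-link-attachments GET response."""
    status = response["status"]
    pls = status["cloud"]["private_link_service"]
    return (
        status["phase"],
        pls["private_link_service_alias"],
        pls["private_link_service_resource_id"],
    )

# Example response, matching the structure shown above.
example = json.loads("""
{
  "status": {
    "phase": "WAITING_FOR_CONNECTIONS",
    "cloud": {
      "kind": "AzurePrivateLinkAttachmentStatus",
      "private_link_service": {
        "private_link_service_alias": "pls-plt-abcdef-az1.f5aedb5a-5830-4ca6-9285-e5c81ffca2cb.centralus.azure.privatelinkservice",
        "private_link_service_resource_id": "/subscriptions/12345678-9012-3456-7890-123456789012/resourceGroups/rg-abcdef/providers/Microsoft.Network/privateLinkServices/pls-plt-abcdef"
      }
    }
  }
}
""")

phase, alias, resource_id = extract_service_info(example)
```

Pass either the alias or the resource ID to the Azure private endpoint you create in the next section.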

Use the confluent network private-link attachment create Confluent CLI command to create an Azure private link attachment:

confluent network private-link attachment create <attachment-name> <flags>

The following command-specific flags are supported:

  • --cloud: Required. Cloud provider type. Specify azure.
  • --region: Required. The Azure region where the resources are accessed using the private link attachment.

You can specify additional optional CLI flags described in the Confluent CLI command reference, such as --environment.

The following is an example Confluent CLI command to create a private link attachment:

confluent network private-link attachment create my-private-link-attachment \
  --cloud azure \
  --region centralus

Use the confluent_private_link_attachment Confluent Terraform Provider resource to create a Private Link Attachment.

An example snippet of Terraform configuration for a Private Link Attachment:

resource "confluent_private_link_attachment" "main" {
  cloud = "AZURE"
  region = "centralus"
  display_name = "prod-platt"
  environment {
    id = "env-1234nw"
  }
}

output "private_link_attachment" {
  value = confluent_private_link_attachment.main
}

See the Terraform configuration example for creating a Private Link Attachment with ACLs.

Create an Azure private endpoint¶

In Azure, create an endpoint that is associated with the Private Link Service ID of the Private Link Attachment you created in Create a Private Link Attachment.

For details on creating a private endpoint in Azure, see Create a Private Endpoint.

  1. On the Private Endpoint page in Azure portal, click + Create.

  2. In the Basics pane, specify the following:

    • Subscription: The subscription name that you selected when you created the VNet.
    • Resource group: The same resource group that you selected when you created the VNet.
    • Name: The name for the private endpoint.
    • Network interface name: A network interface name.
    • Region: The region for the private endpoint.
  3. Click Next: Resource.

  4. In the Resource pane, specify the following:

    • Connection method: Select Connect to an Azure resource by resource ID or alias.

    • Resource ID or alias: Paste in the Confluent Cloud Resource ID or Service Alias.

      This is the alias or ID created in the previous section, Create a Private Link Attachment.

      ../_images/azure-resource-id.png

      You can also use the value of the Private Link Service ID from your Network overview of the PrivateLink Attachment gateway in the Confluent Cloud Console.

  5. Click Next: Virtual Network.

  6. In the Virtual Network pane, specify the following:

    • Virtual network: Select the VNet where the private endpoint is to be created.
    • Subnet: Select the subnet where the private endpoint is to be created.
    • Network policy for private endpoints: Select the organization-approved or mandated policy. The default is Disabled.
    • Private IP configuration: Select Dynamically allocate IP address.
  7. Click Next: DNS and accept the default values.

  8. Click Next: Tags and, optionally, add tags.

  9. Click Next: Review + create. Review the details and click Create to create the private endpoint.

  10. Wait for the Azure deployment to complete.

Create an endpoint using the following Azure CLI command:

az network private-endpoint create \
  --connection-name <connection name> \
  --name <endpoint name> \
  --private-connection-resource-id <resource id> \
  --resource-group <resource group name> \
  --subnet <subnet for the endpoint>
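If you automate endpoint creation, the same az command can be assembled programmatically before running it. A minimal Python sketch; all argument values are placeholders, and the resource ID mirrors the example from the earlier REST response:

```python
import subprocess

def az_private_endpoint_cmd(name, connection_name, resource_id, resource_group, subnet):
    """Assemble the az CLI command shown above as an argument list."""
    return [
        "az", "network", "private-endpoint", "create",
        "--connection-name", connection_name,
        "--name", name,
        "--private-connection-resource-id", resource_id,
        "--resource-group", resource_group,
        "--subnet", subnet,
    ]

cmd = az_private_endpoint_cmd(
    name="pe-platt-example",
    connection_name="platt-connection",
    resource_id=(
        "/subscriptions/12345678-9012-3456-7890-123456789012/resourceGroups/"
        "rg-abcdef/providers/Microsoft.Network/privateLinkServices/pls-plt-abcdef"
    ),
    resource_group="rg-abcdef",
    subnet="subnet-1",
)
# subprocess.run(cmd, check=True)  # uncomment to execute with real values
```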

Create a Private Link Attachment Connection¶

Create a Private Link Attachment Connection resource in Confluent Cloud. A Private Link Attachment Connection represents a private endpoint in your VNet.

In the Confluent Cloud Console, the Private Link Attachment Connection resources are labeled and referred to as access points.

  1. In the Network Management tab of the desired Confluent Cloud environment, click the For serverless products tab.

    Make sure the Private Link Attachment is in the same region as the private endpoint.

  2. Click the gateway to which you want to add the Private Link Endpoint.

    Make sure the gateway is in the same region as the private endpoint.

  3. In the Access points tab, click Create access point.

    The sliding panel shows the service ID in Private Link Service ID and the service alias in Private Link Service Alias; use either value to create a private endpoint in Azure.

  4. Specify the Private Endpoint ID.

    The private endpoint ID is the Azure resource ID of the private endpoint that was created in Create an Azure private endpoint.

  5. Specify the access point name.

  6. Click Create access point to create the Private Link Endpoint.

    The Private Link Attachment and Private Link Attachment Connection move to the READY state once the private endpoint connection is accepted.

  1. Send a request to create a Private Link Attachment Connection resource:

    REST request

    POST https://api.confluent.cloud/networking/v1/private-link-attachment-connections
    

    REST request body

    {
      "spec":
      {
        "display_name": "<PrivateLinkAttachmentEndpoint name>",
        "cloud":
        {
          "kind": "AzurePrivateLinkAttachmentConnection",
          "private_endpoint_id": "<Private Endpoint ID>"
        },
        "environment":
        {
          "id": "<Environment ID>"
        },
        "private_link_attachment":
        {
          "id": "<PrivateLinkAttachment>"
        }
      }
    }
    

    REST response example

    {
      "api_version": "networking/v1",
      "kind": "PrivateLinkAttachmentConnection",
      "id": "plattc-xyzuvw",
      "status": {
        "phase": "PROVISIONING",
        "error_code": "",
        "error_message": ""
      }
    }
    

    status.phase is PROVISIONING because a private endpoint connection has not yet been accepted.

  2. Check the status of the new Private Link Attachment Connection:

    REST request

    GET https://api.confluent.cloud/networking/v1/private-link-attachment-connections/<plattc-id>
    

    REST response example

    {
      "api_version": "networking/v1",
      "kind": "PrivateLinkAttachmentConnection",
      "id": "plattc-xyzuvw",
      "status": {
        "phase": "READY",
        "error_code": "",
        "error_message": "",
        "cloud": {
          "kind": "AzurePrivateLinkAttachmentConnectionStatus",
          "phase": "READY",
          "private_link_service_alias": "pls-plt-abcdef-az1.f5aedb5a-5830-4ca6-9285-e5c81ffca2cb.centralus.azure.privatelinkservice",
          "private_link_service_resource_id": "/subscriptions/12345678-9012-3456-7890-123456789012/resourceGroups/rg-abcdef/providers/Microsoft.Network/privateLinkServices/pls-plt-abcdef",
          "private_endpoint_id": "/subscriptions/12345678-9012-3456-7890-123456789012/resourceGroups/rg-abcdef/providers/Microsoft.Network/privateEndpoints/pe-platt-abcdef"
        }
      }
    }
    
    • status.phase is READY because the private endpoint connection has been accepted.
    • status.cloud has an object of kind AzurePrivateLinkAttachmentConnectionStatus.
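Because the connection only reaches READY after the private endpoint connection is accepted, automation typically polls the GET endpoint until a terminal phase is reached. A minimal sketch of the polling logic, with the HTTP call factored out so it can be plugged in (fetch_status is a stand-in for the GET request above, and the terminal phases are the ones listed in this document):

```python
import time

TERMINAL_PHASES = {"READY", "EXPIRED"}

def wait_until_ready(fetch_status, timeout_s=600, interval_s=10, sleep=time.sleep):
    """Poll fetch_status() until the resource reaches a terminal phase."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = fetch_status()["status"]["phase"]
        if phase in TERMINAL_PHASES:
            return phase
        sleep(interval_s)
    raise TimeoutError("resource did not reach a terminal phase in time")

# Simulated sequence of responses, mirroring the examples above.
responses = iter([
    {"status": {"phase": "PROVISIONING"}},
    {"status": {"phase": "READY"}},
])
final = wait_until_ready(lambda: next(responses), sleep=lambda s: None)
```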

Use the confluent network private-link attachment connection create Confluent CLI command to create an Azure private link attachment connection:

confluent network private-link attachment connection create <connection-name> <flags>

The following command-specific flags are supported:

  • --cloud: Required. The cloud provider. Set to azure.
  • --endpoint: Required. ID of an Azure private endpoint that is connected to the Azure private link service.
  • --attachment: Required. Private link attachment ID.

You can specify additional optional CLI flags described in the Confluent CLI command reference, such as --environment.

The following is an example Confluent CLI command to create a private link attachment connection:

confluent network private-link attachment connection create azure-private-link-attachment-connection \
  --cloud azure \
  --endpoint /subscriptions/12345678-9012-3456-7890-123456789012/resourceGroups/rg-abcdef/providers/Microsoft.Network/privateEndpoints/pe-platt-abcdef \
  --attachment platt-123456

Use the confluent_private_link_attachment_connection Confluent Terraform resource to create a Private Link Attachment Connection.

An example snippet of Terraform configuration for Private Link Attachment Connection:

resource "confluent_private_link_attachment_connection" "azure" {
  display_name = "prod-azure-central-us-az1-connection"
  environment {
    id = "env-12345"
  }
  azure {
    private_endpoint_resource_id = "/subscriptions/123aaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa/resourceGroups/testvpc/providers/Microsoft.Network/privateEndpoints/pe-platt-abcdef-az1"
  }
  private_link_attachment {
    id = "platt-abcdef"
  }
}

output "private_link_attachment_connection" {
  value = confluent_private_link_attachment_connection.azure
}

Go to the private endpoint resource in Azure Portal and verify that the private endpoint connection status is Approved.

Set up DNS records in Azure¶

Set up a DNS resolution and a DNS record using the Azure private DNS zone in the Azure console. This section focuses on the settings related to Confluent Cloud. For details, see Create an Azure private DNS zone.

  1. Create a private DNS zone.

    1. In the Confluent Cloud Console, copy the DNS Domain name in Private Link Attachment in the Network management tab, and use it as the name for the private DNS zone. It is in the form, <region>.azure.private.confluent.cloud. For example, centralus.azure.private.confluent.cloud.
    2. In Private Zones in the Azure portal, click + Create.
    3. In the Basics pane, enter or select the following values:
      • Subscription: Pre-filled with the subscription name that you selected when you created the VNet.
      • Resource group: Select the resource group that you selected when you created the VNet.
      • Name: Specify the domain name retrieved from the Confluent Cloud Console in the first step. It is in the format of <region>.azure.private.confluent.cloud, for example, centralus.azure.private.confluent.cloud.
    4. Click Next: Tags and, optionally, add tags.
    5. Click Next: Review + create. Review the details and click Create to create the DNS zone.
    6. Wait for the Azure deployment to complete.
  2. To create DNS records, go to the private DNS zone resource you created in the previous step, and click + Record Set for the Confluent Cloud clusters.

    Note

    In Confluent Cloud with private linking, Kafka broker names you retrieve from the metadata are not static. Do not hardcode the broker names in DNS records.

    • Name: *

    • Type: A

    • TTL and TTL unit: 1 Minute

    • IP address: The IP address of the private endpoint can be found under its associated network interface under Settings for the private endpoint.

      ../_images/azure-dns-ip.png
  3. Attach the private DNS zone to the VNets where clients or applications are present.

    1. Go to the private DNS zone resource and click Virtual network links under Settings.
    2. Click + Add.
    3. Specify the required values and click OK to create a virtual network link.
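Once the zone and wildcard record are in place, you can sanity-check DNS resolution from a client in a linked VNet. A minimal sketch; the hostname in the comment is a placeholder, and any name under the private zone should resolve to the private endpoint IP through the wildcard record:

```python
import socket

def resolve(hostname):
    """Resolve a hostname to its first IPv4 address, or None if it does not resolve."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# From a VM in a linked VNet, a name under the private zone should
# resolve to the private endpoint IP, for example:
# resolve("lkc-123.centralus.azure.private.confluent.cloud")
```

If the name resolves to a public IP or does not resolve at all, recheck the zone name, the wildcard A record, and the virtual network link.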

Connectivity scenarios¶

Below are examples of a few connectivity scenarios that are supported for Enterprise clusters in Confluent Cloud.

Scenario: Access one environment from one VNet¶

../_images/azure-platt-1env-1vnet.png

The following resources are configured:

  • PLATT-prod as a Private Link Attachment for accessing Kafka clusters in the env-prod environment
  • PLATTC-123 as a Private Link Attachment Connection for the private-endpoint-1 private endpoint in VNet-1
  • ProdApp as a Kafka client bootstrapped with lkc-123.centralus.azure.private.confluent.cloud
  • Private DNS Zone 1 with the regional wildcard *.centralus.azure.private.confluent.cloud

The following steps are performed:

  1. ProdApp attempts to access lkc-123 in the env-prod environment. A DNS query for lkc-123.centralus.azure.private.confluent.cloud resolves against Private DNS Zone 1 and returns private-endpoint-1.
  2. ProdApp sends traffic to private-endpoint-1.
  3. private-endpoint-1 forwards traffic to PLATT-prod, and lkc-123 can be accessed since PLATTC-123 is associated with private-endpoint-1.

Scenario: Access one environment from many VNets¶

../_images/azure-platt-1env-manyvnets.png

The following resources are configured:

  • PLATT-abc as a Private Link Attachment for accessing Kafka clusters in the env-prod environment
  • PLATTC-123 as a Private Link Attachment Connection for the private-endpoint-1 private endpoint in VNet-1
  • PLATTC-456 for the private-endpoint-2 private endpoint in VNet-2
  • ProdApp-1 as a Kafka client bootstrapped with lkc-123.eastus.azure.private.confluent.cloud
  • ProdApp-2 as a Kafka client bootstrapped with lkc-456.eastus.azure.private.confluent.cloud
  • Private DNS Zone 1 with the regional wildcard *.eastus.azure.private.confluent.cloud
  • Private DNS Zone 2 with the regional wildcard *.eastus.azure.private.confluent.cloud

The following steps are performed:

  1. ProdApp-1 attempts to access lkc-123 in the env-prod environment. A DNS query for lkc-123.eastus.azure.private.confluent.cloud resolves against Private DNS Zone 1 and returns private-endpoint-1.
  2. ProdApp-1 sends traffic to private-endpoint-1.
  3. private-endpoint-1 forwards traffic to PLATT-abc, and lkc-123 can be accessed since PLATTC-123 is associated with private-endpoint-1.
  4. ProdApp-2 attempts to access lkc-456 in the env-prod environment. A DNS query for lkc-456.eastus.azure.private.confluent.cloud resolves against Private DNS Zone 2 and returns private-endpoint-2.
  5. ProdApp-2 sends traffic to private-endpoint-2.
  6. private-endpoint-2 forwards traffic to PLATT-abc, and lkc-456 can be accessed since PLATTC-456 is associated with private-endpoint-2.

Scenario: Access one environment from an on-premises network¶

../_images/azure-platt-1env-onprem.png

The following resources are configured:

  • PLATT-abc as a Private Link Attachment for accessing Kafka clusters in the env-abc environment
  • PLATTC-123 as a Private Link Attachment Connection for the private-endpoint-1 endpoint in VNet-1
  • On-Prem-1 as a Kafka client bootstrapped with lkc-123.westus.azure.private.confluent.cloud
  • ProdApp-1 as a Kafka client bootstrapped with lkc-123.westus.azure.private.confluent.cloud
  • Private DNS Zone Forward as a DNS forwarding rule with the regional wildcard *.westus.azure.private.confluent.cloud
  • Private DNS Zone 1 with the regional wildcard *.westus.azure.private.confluent.cloud

The following steps are performed:

  1. On-Prem-1 attempts to access lkc-123 in the env-abc environment. A DNS query for lkc-123.westus.azure.private.confluent.cloud forwards to Private DNS Zone 1 and returns private-endpoint-1.
  2. On-Prem-1 sends traffic to private-endpoint-1 over Azure ExpressRoute.
  3. private-endpoint-1 forwards traffic to PLATT-abc and lkc-123 can be accessed since PLATTC-123 is associated with private-endpoint-1.
