Confluent Documentation

Use Azure Private Link¶

Azure Private Link provides one-way, secure access from your VNet to Confluent Cloud, with added protection against data exfiltration. This networking option is popular for its combination of security and simplicity of setup.

The following diagram summarizes the Azure Private Link architecture between the VNet or subscription and the Confluent Cloud cluster.

Azure Private Link architecture between customer VNet or subscription and Confluent Cloud cluster

Note

For an overview of Azure Private Link and illustrated steps on getting started with Azure Private Link in Confluent Cloud, see Setting Up Secure Networking in Confluent with Azure Private Link.

Prerequisites¶

  • A Confluent Cloud network (CCN) of type PRIVATELINK in Azure. If a network does not exist, follow the procedure below in Create a Confluent Cloud network in Azure.
  • To use an Azure Private Link service with Confluent Cloud, your VNet must allow outbound internet connections for DNS resolution, Confluent Cloud Schema Registry, ksqlDB, and Confluent CLI to work.
    • DNS requests to the public authority that traverse to the private DNS zone are required.
    • Confluent Cloud Schema Registry is accessible over the internet.
    • Provisioning new ksqlDB instances requires Internet access. After ksqlDB instances are up and running, they are fully accessible over Azure Private Link connections.
    • Confluent CLI requires internet access to authenticate with the Confluent Cloud control plane.
  • Confluent Cloud Console components, like topic management, need additional configuration to function as they use cluster endpoints. To use all features of the Confluent Cloud Console with Azure Private Link, see Access Confluent Cloud Console with Private Networking.

Warning

For limitations of Azure Private Link, see Limitations below.

Create a Confluent Cloud network in Azure¶

To create a Dedicated cluster with Azure Private Link, you must first create a Confluent Cloud network in the required cloud and region.

Note

You can create multiple clusters within one Confluent Cloud network. For details on default service quotas, see Network.

  1. In the Confluent Cloud Console, go to the Network management page for your environment.
  2. Click Create your first network if this is the first network in your environment, or click + Add Network if your environment has existing networks.
  3. Select Azure as the Cloud Provider and the desired geographic region.
  4. Select the Private Link connectivity type and click Continue.
  5. Specify a Network Name, review your configuration, and click Create Network.

Here is an example REST API request:

HTTP POST request

POST https://api.confluent.cloud/networking/v1/networks

Authentication

See Authentication.

Request specification

In the request specification, include values for cloud, region, environment, connection type, and, optionally, add the display name, CIDR, and zones for the Confluent Cloud network. Update the attributes below with the correct values.

{
   "spec":{
      "display_name":"Azure-PL-CCN-1",
      "cloud":"AZURE",
      "region":"centralus",
      "connection_types":[
         "PRIVATELINK"
      ],
      "zones":[
         "1",
         "2",
         "3"
      ],
      "environment":{
         "id":"env-00000"
      }
   }
}
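
For example, this request could be submitted with curl. The following is a sketch only: CLOUD_API_KEY and CLOUD_API_SECRET are placeholders for a Confluent Cloud API key pair, and the payload mirrors the specification above.

```shell
# Sketch: submit the network spec with curl.
# CLOUD_API_KEY/CLOUD_API_SECRET are placeholders for a Confluent Cloud API key pair.
PAYLOAD='{"spec":{"display_name":"Azure-PL-CCN-1","cloud":"AZURE","region":"centralus","connection_types":["PRIVATELINK"],"zones":["1","2","3"],"environment":{"id":"env-00000"}}}'

# Validate the JSON locally before sending:
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Uncomment to send the request against your account:
# curl -s -X POST https://api.confluent.cloud/networking/v1/networks \
#   -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
#   -H 'Content-Type: application/json' \
#   -d "$PAYLOAD"
```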

Creating a Confluent Cloud network typically takes 15 to 20 minutes. Note the Confluent Cloud network ID from the response; you specify it in the following commands.

After successfully provisioning the Confluent Cloud network, you can add Dedicated clusters within your Confluent Cloud network by using either of the following procedures:

  • Confluent Cloud Console: Create a Cluster in Confluent Cloud
  • Cluster Management API: Create a cluster

Register your Azure subscription with Confluent Cloud¶

Register your Azure subscription with the Confluent Cloud network for automatic approval of private endpoint connections to the Confluent Cloud network. If required, you can register multiple subscriptions.

  1. In the Confluent Cloud Console, go to your network resource in the Network Management tab and click + Private Link Access.
  2. Enter the Azure subscription ID for the account containing the VNets you want to make the Azure Private Link connection from. The Azure subscription number can be found on your Azure subscription page of the Azure Portal.
  3. Click Save.

HTTP POST request

POST https://api.confluent.cloud/networking/v1/private-link-accesses

Authentication

See Authentication.

Request specification

In the request specification, include Confluent Cloud network ID, subscription, environment, and, optionally, add the display name. Update the attributes below with the correct values.

{
   "spec":{
      "display_name":"Azure-PL-CCN-1",
      "cloud":{
         "kind":"AzurePrivateLinkAccess",
         "subscription":"00000000-0000-0000-0000-000000000000"
      },
      "environment":{
         "id":"env-00000"
      },
      "network":{
         "id":"n-000000"
      }
   }
}
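
As with network creation, this request could be sketched with curl (CLOUD_API_KEY and CLOUD_API_SECRET are again placeholders for a Confluent Cloud API key pair):

```shell
# Sketch: register an Azure subscription for Private Link access.
# CLOUD_API_KEY/CLOUD_API_SECRET are placeholders for a Confluent Cloud API key pair.
PAYLOAD='{"spec":{"display_name":"Azure-PL-CCN-1","cloud":{"kind":"AzurePrivateLinkAccess","subscription":"00000000-0000-0000-0000-000000000000"},"environment":{"id":"env-00000"},"network":{"id":"n-000000"}}}'

# Validate the JSON locally before sending:
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Uncomment to send:
# curl -s -X POST https://api.confluent.cloud/networking/v1/private-link-accesses \
#   -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
#   -H 'Content-Type: application/json' \
#   -d "$PAYLOAD"
```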

Your Azure Private Link connection status will transition from “Pending” to “Active” in the Confluent Cloud Console. You still need to configure the Private Endpoints in your VNet before you can connect to the cluster.

Note the Private Link Service Endpoint; you use it to create the Azure Private Link connection from your VNet to the Confluent Cloud cluster. This URL is also provided later in the process.

Create an Azure Private Link connection to Confluent Cloud¶

Follow this procedure to create an Azure Private Link connection to a Confluent Cloud cluster on Azure using the Confluent Cloud Console or REST APIs.

Set up the VNet Endpoint for Azure Private Link in your Azure account¶

After the connection status is “Active” in the Confluent Cloud Console, you must configure Private Endpoints in your VNet from the Azure Portal to make the Azure Private Link connection to your Confluent Cloud cluster.

Note

Confluent recommends using a Terraform configuration for setting up Private Link endpoints. This configuration automates the manual steps described below.

Prerequisites

In the Confluent Cloud Console, you will find the following information for your Confluent Cloud cluster under the Cluster Settings section. This information is needed to configure Azure Private Link for a Dedicated cluster in Azure.

  • Kafka Bootstrap (in the General tab)
  • DNS domain Name (in the Networking card)
  • Zonal DNS Subdomain Names (in the Networking card)
  • Service Aliases (in the Networking card)

Create the following Private Endpoints through the Azure Private Link Center:

  1. For Confluent Cloud single availability zone clusters, create a single Private Endpoint to the Confluent Cloud Service Alias. For Confluent Cloud multi-availability zone clusters, create a Private Endpoint to each of the Confluent Cloud zonal Service Aliases.
  2. Create a Private Endpoint for Confluent Cloud by clicking Create Private Endpoint.
  3. Fill in the subscription, resource group, name, and region for the Private Endpoint and click Next. The selected subscription must be the same as the one registered with Confluent Cloud.
  4. Select the Connect to an Azure resource by resource ID or alias option, paste in the Confluent Cloud Service Alias and click Next. You can find the Confluent Cloud Service Aliases in the Networking tab under Cluster settings in the Confluent Cloud Console.
  5. Fill in virtual network and subnet where the Private Endpoint is to be created.
  6. Click Review + create. Review the details and click Create to create the Private Endpoint.
  7. Wait for the Azure deployment to complete, go to the Private Endpoint resource and verify Private Endpoint connection status is Approved.

Set up DNS records to use Azure Private Endpoints¶

DNS changes must be made to ensure connectivity passes through Azure Private Link in the supported pattern. Any DNS provider that can ensure DNS is routed as follows is acceptable. Azure Private DNS Zone (used in this example) is one option.

Note

Run the DNS helper script to identify the DNS Zone records for Private Endpoints.

Update DNS using Azure Private DNS Zone in the Azure console:

  1. Create the Private DNS Zone.

    1. Search for the Private DNS Zone resource in the Azure Portal.

    2. Click Add.

    3. Copy the DNS Domain name from the Networking tab under Cluster Settings in the Confluent Cloud Console and use it as the name for the Private DNS Zone.

      For example:

      4kgzg.centralus.azure.confluent.cloud
      

      Note

      Notice there is no glb in the DNS Domain name.

    4. Fill in subscription, resource group, and name, and click Review + create.
    5. Wait for the Azure deployment to complete.
  2. Create DNS records.

    1. Go to the Private DNS Zone resource as created above.
    2. Click + Record Set.
    3. For Confluent Cloud single availability zone clusters, create the following record set. The IP address of the Private Endpoint can be found under its associated network interface.
      1. Select name as “*”, type as “A”, TTL as “1 Minute” and add the IP address of the single Private Endpoint created above.
    4. For Confluent Cloud multi-availability zone clusters, create the following record sets. The IP address of each Private Endpoint can be found under its associated network interface.
      1. Select name as “*”, type as “A”, TTL as “1 Minute” and add the IP addresses of all three Private Endpoints created above.
      2. Select name as “*.az1”, type as “A”, TTL as “1 Minute” and add the IP address of the az1 Private Endpoint created above.
      3. Select name as “*.az2”, type as “A”, TTL as “1 Minute” and add the IP address of the az2 Private Endpoint created above.
      4. Select name as “*.az3”, type as “A”, TTL as “1 Minute” and add the IP address of the az3 Private Endpoint created above.
  3. Attach the Private DNS Zone to the VNets where clients or applications are present.

    1. Go to the Private DNS Zone resource and click Virtual network links under Settings.
    2. Click Add.
    3. Fill in link name, subscription, and virtual network.
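
The portal steps above could also be scripted with the Azure CLI. The following is a sketch only: the resource group, VNet name, and endpoint IP address are hypothetical, and the zone name is the example DNS Domain name used earlier on this page.

```shell
# Sketch of the Private DNS Zone setup with the Azure CLI.
# RG, VNET, and the endpoint IP (10.0.0.4) are hypothetical values;
# ZONE is the example DNS Domain name from this page (note: no "glb" label).
ZONE="4kgzg.centralus.azure.confluent.cloud"
RG="my-resource-group"
VNET="my-vnet"

echo "zone: $ZONE"

# Uncomment to run against your Azure subscription:
# az network private-dns zone create -g "$RG" -n "$ZONE"
# az network private-dns record-set a add-record -g "$RG" -z "$ZONE" \
#   --record-set-name '*' --ipv4-address 10.0.0.4
# az network private-dns link vnet create -g "$RG" -z "$ZONE" \
#   -n confluent-link --virtual-network "$VNET" --registration-enabled false

# Verify from a VM inside the linked VNet; the wildcard record should resolve
# to the Private Endpoint IP:
# dig +short "lkc-222v1o-$ZONE"
```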

Validate connectivity to Confluent Cloud¶

  1. From an instance within the VNet, or anywhere the DNS is set up, run the following to validate that Kafka connectivity through Azure Private Link works correctly.

    1. Set an environment variable with the cluster bootstrap URL.

      export BOOTSTRAP=<bootstrap-server-url>
      

      The Bootstrap URL displayed in the Confluent Cloud Console includes the port (9092). The BOOTSTRAP value should include the full hostname but not the port, so that you can run the openssl s_client -connect <host>:<port> command with the required values.

      For example:

      export BOOTSTRAP=lkc-222v1o-4kgzg.centralus.azure.glb.confluent.cloud
      
    2. Test connectivity to your cluster by running the openssl s_client -connect <host>:<port> command, specifying the $BOOTSTRAP environment variable for the <host> value and 9092 for the <port> value.

      openssl s_client -connect $BOOTSTRAP:9092 -servername $BOOTSTRAP -verify_hostname $BOOTSTRAP </dev/null 2>/dev/null | grep -E 'Verify return code|BEGIN CERTIFICATE' | xargs
      

      To run the openssl s_client -connect command, the -connect option requires that you specify the host and the port number. For details, see the -connect option in the openssl s_client documentation.

    3. If the output returned is -----BEGIN CERTIFICATE----- Verify return code: 0 (ok), then connectivity to the bootstrap is confirmed.

      Note

      You might need to update the network security tools and firewalls to allow connectivity. If you have issues connecting after following these steps, confirm which network security systems your organization uses and whether their configurations need to be changed. If you still have issues, run the debug connectivity script and provide the output to Confluent Support for assistance with your Azure Private Link setup.

    4. Log in to the Confluent Cloud CLI with your Confluent Cloud credentials.

      confluent login
      
    5. List the clusters in your organization.

      confluent kafka cluster list
      
    6. Select the cluster with Azure Private Link you wish to test.

      confluent kafka cluster use ...
      

      For example:

      confluent kafka cluster use lkc-222v1o
      
    7. Create a cluster API key to authenticate with the cluster.

      confluent api-key create --resource ... --description ...
      

      For example:

      confluent api-key create --resource lkc-222v1o --description "connectivity test"
      
    8. Select the API key you just created.

      confluent api-key use ... --resource ...
      

      For example:

      confluent api-key use R4XPKKUPLYZSHOAT --resource lkc-222v1o
      
    9. Create a test topic.

      confluent kafka topic create test
      
    10. Start consuming events from the test topic.

      confluent kafka topic consume test
      
    11. Open another terminal tab or window.

    12. Start a producer.

      confluent kafka topic produce test
      

      Type anything into the produce tab and hit Enter; press Ctrl+D or Ctrl+C to stop the producer.

    13. The tab running consume will print what was typed in the tab running produce.

You’re done! The cluster is ready for use.

Note

The bootstrap and broker hostname DNS resolution for Confluent Cloud Cluster with Private Link is a two-step process:

  1. The bootstrap and broker hostnames have glb as a subdomain in their domain name (for example, <cluster-subdomain-name>.eastus2.azure.glb.confluent.cloud). In the first step, Confluent Cloud Global DNS Resolver returns a CNAME for bootstrap and broker hostnames which doesn’t include the glb subdomain (for example, <cluster-subdomain-name>.eastus2.azure.confluent.cloud).
  2. In the second step, the CNAME without the “glb” subdomain is resolved to private endpoints IP addresses using the Private DNS Zone that you configure by using the previous steps.

Some DNS systems, such as the Windows DNS service, cannot recursively resolve this two-step resolution within a single DNS node. In such situations, use two DNS systems. The first sets up separate forwarding rules for the domain with the “glb” subdomain and the domain without it, and forwards both to the second DNS system. The second DNS system resolves recursively: it forwards “glb” name resolution requests to the Confluent Cloud Global DNS Resolver, and “non-glb” name resolution to the cloud DNS that hosts the Private DNS Zone, as shown previously. Alternatively, you can host the “non-glb” DNS records locally in the second DNS system.
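
The two steps can be illustrated with the example bootstrap hostname from this page: the first step returns a CNAME that is the same hostname with the “glb” label removed, and the second step resolves that name against the Private DNS Zone.

```shell
# Illustration of the two-step resolution using the example hostname from this
# page. Step 1 returns a CNAME without the "glb" label; step 2 resolves that
# name against the Private DNS Zone.
GLB_HOST="lkc-222v1o-4kgzg.centralus.azure.glb.confluent.cloud"
NON_GLB_HOST=$(printf '%s' "$GLB_HOST" | sed 's/\.glb\./\./')
echo "$NON_GLB_HOST"   # lkc-222v1o-4kgzg.centralus.azure.confluent.cloud
```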

Diagram: DNS resolution for Confluent Cloud clusters with Azure Private Link

Limitations¶

  • Cross-region Azure Private Link connections are not supported.
  • Azure Private Link is only available for use with Dedicated clusters.
  • Existing Confluent Cloud clusters cannot be converted to use Azure Private Link.
  • Fully-managed Confluent Cloud connectors can connect to data sources or sinks using a public IP address. Sources or sinks in the customer network with private IP addresses are not supported.
  • Availability zone selection for placement of Confluent Cloud cluster and Azure Private Link service is not supported.
  • See also: Prerequisites.


Copyright © Confluent, Inc. 2014- . Apache, Apache Kafka, Kafka, and associated open source project names are trademarks of the Apache Software Foundation
