Confluent Documentation


AWS PrivateLink

AWS PrivateLink provides one-way, secure access from your VPC to Confluent Cloud, with added protection against data exfiltration. This networking option is popular for its combination of security and simplicity.

The following diagram summarizes the AWS PrivateLink architecture with the customer VPC/account and the Confluent Cloud VPC/account.

AWS PrivateLink architecture between customer VPC/account and Confluent Cloud cluster

Prerequisites

  • A Confluent Cloud network of type PRIVATELINK in AWS. If you do not have a Confluent Cloud network, expand the following section and follow the procedure to create one:

    Create a Confluent Cloud network in AWS

    To create a Confluent Cloud network of type PRIVATELINK, follow the procedure below for either the Confluent Cloud Console or the REST API.

    Note

    You can create multiple Dedicated Kafka clusters within one Confluent Cloud network. For details on default service quotas, see Network.

    1. In the Confluent Cloud Console, go to the Network management page for your environment.
    2. Click Create your first network if this is the first network in your environment, or click + Add Network if your environment has existing networks.
    3. Select AWS as the Cloud Provider and the desired geographic region.
    4. Select the PrivateLink connectivity type and click Continue.
    5. Specify a Network Name, review your configuration, and click Create Network.

    Here is an example REST API request:

    HTTP POST request

    POST https://api.confluent.cloud/networking/v1/networks
    

    Authentication

    See Authentication.

    Request specification

    In the request specification, include values for cloud, region, environment, connection type, and, optionally, add the display name, CIDR, and zones for the Confluent Cloud network. Update the attributes below with the correct values.

    {
      "spec": {
        "display_name": "AWS-PL-CCN-1",
        "cloud": "AWS",
        "region": "us-west-2",
        "connection_types": [
          "PRIVATELINK"
        ],
        "zones": [
          "usw2-az1",
          "usw2-az2",
          "usw2-az3"
        ],
        "environment": {
          "id": "env-000000"
        }
      }
    }
    

    In most cases, it takes 15 to 20 minutes to create a Confluent Cloud network. Note the Confluent Cloud network ID from the response; you will specify it in later commands.

    After successfully provisioning the Confluent Cloud network, you can add Dedicated Kafka clusters within your Confluent Cloud network by using either of the following procedures:

    • Confluent Cloud Console: Create a Cluster in Confluent Cloud
    • Cluster Management API: Create a cluster
  • Your VPC must allow outbound internet connections for DNS resolution, Confluent Cloud Schema Registry, and Confluent CLI.

    • Confluent Cloud Schema Registry is accessible over the internet.
    • Provisioning new ksqlDB clusters requires internet access. After ksqlDB clusters are up and running, they are fully accessible over AWS PrivateLink connections.
    • Confluent CLI requires internet access to authenticate with the Confluent Cloud control plane.
  • Confluent Cloud Console components, like topic management, require additional configuration to function because they use cluster endpoints. To use all features of the Confluent Cloud Console with AWS PrivateLink, see Access Confluent Cloud Console with Private Networking.

Warning

For limitations of the AWS PrivateLink feature, see Limitations below.

  1. Open the Confluent Cloud Console to the Environments page at https://confluent.cloud/environments, select your environment, and then click Network management.

  2. Click Create your first network if this is the first network in your environment, or click + Create Network if your environment has existing networks.

  3. Select AWS as the Cloud Provider and the desired geographic region.

  4. Select PrivateLink as the connectivity type.

  5. Select three zones for your network placement, and then click Continue.

  6. (Optional) Select Private DNS resolution to resolve your cluster endpoints using a private DNS zone. Otherwise, the default DNS resolution is used. For details, see DNS resolution options below.

    Important

    After your network is provisioned, you cannot change the DNS resolution option.

  7. Enter a Network name, review your Network configuration, optionally click Review payment method, and then click Create Network.

Here is an example REST API request:

HTTP POST request

POST https://api.confluent.cloud/networking/v1/networks

Authentication

See Authentication.

Request specification

In the request specification, include values for cloud, region, environment, connection type, and, optionally, add the display name, CIDR, and zones for the Confluent Cloud network. Update the attributes below with the correct values.

{
  "spec": {
    "display_name": "prod-aws-us-east1",
    "cloud": "AWS",
    "region": "us-east-1",
    "connection_types": [
      "PRIVATELINK"
    ],
    "cidr": "10.200.0.0/16",
    "zones": [
      "use1-az1",
      "use1-az2",
      "use1-az3"
    ],
    "dns_config": {
      "resolution": "CHASED_PRIVATE"
    },
    "environment": {
      "id": "env-00000"
    }
  }
}

In most cases, it takes 15 to 20 minutes to create a Confluent Cloud network. Note the Confluent Cloud network ID from the response; you will specify it in later commands.

After your Confluent Cloud network is provisioned, you can add Dedicated Kafka clusters within your Confluent Cloud network by using either of the following procedures:

  • Confluent Cloud Console: Create a Cluster in Confluent Cloud
  • Cluster Management API: Create a cluster
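As a sketch, the request above can be submitted from the command line. The API key and secret variables and the file name network.json are placeholders; the endpoint and spec fields are the ones shown above:

```shell
# Save a minimal network spec (fields from the example above), validate
# it locally, then submit it to the Networking API.
cat > network.json <<'EOF'
{
  "spec": {
    "display_name": "prod-aws-us-east1",
    "cloud": "AWS",
    "region": "us-east-1",
    "connection_types": ["PRIVATELINK"],
    "environment": { "id": "env-00000" }
  }
}
EOF
python3 -m json.tool network.json > /dev/null && echo "request body OK"

# Submit (uncomment and supply a Cloud API key pair with networking
# permissions; CLOUD_API_KEY and CLOUD_API_SECRET are placeholders):
# curl -s -X POST https://api.confluent.cloud/networking/v1/networks \
#   -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
#   -H "Content-Type: application/json" \
#   -d @network.json
```

The response includes the Confluent Cloud network ID used in later steps.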

Register your AWS account with Confluent Cloud

To make an AWS PrivateLink connection to a cluster in Confluent Cloud, you must register the AWS account ID you want to use. This security measure ensures that only your organization can initiate AWS PrivateLink connections to the cluster. Confluent Cloud does not accept AWS PrivateLink connections from a VPC that is not in a registered AWS account.

You can register multiple AWS accounts to the same Confluent Cloud cluster, and AWS PrivateLink connections can be made from multiple VPCs in each registered AWS account.

  • If a VPC exists in a different AWS account, you need to create a separate PrivateLink Access on your Confluent Cloud network.
  1. In the Confluent Cloud Console, go to your network resource in the Network Management tab and click + PrivateLink Access.
  2. Enter the 12-digit AWS Account Number for the account containing the VPCs you want to make the AWS PrivateLink connection from.
  3. Note the VPC Endpoint service name; you will use it to create an AWS PrivateLink connection from your VPC to the Confluent Cloud cluster. This value is also provided later.
  4. Click Save.

HTTP POST request

POST https://api.confluent.cloud/networking/v1/private-link-accesses

Authentication

See Authentication.

Request specification

In the request specification, include values for the Confluent Cloud network ID, account, environment, and, optionally, add the display name. Update the attributes below with the correct values.

{
  "spec": {
    "display_name": "AWS-PL-CCN-1",
    "cloud": {
      "kind": "AwsPrivateLinkAccess",
      "account": "000000000000"
    },
    "environment": {
      "id": "env-000000"
    },
    "network": {
      "id": "n-00000"
    }
  }
}
Your AWS PrivateLink connection status will transition from “Pending” to “Active” in the Confluent Cloud Console. You still need to configure the Private Endpoints in your VPC before you can connect to the cluster.

Create an AWS PrivateLink connection to Confluent Cloud

Follow this procedure to create an AWS PrivateLink connection to a Confluent Cloud cluster on AWS using the Confluent Cloud Console or REST APIs.

Set up the VPC endpoint for AWS PrivateLink in your AWS account

After the connection status is “Active” in the Confluent Cloud Console, configure your VPC private endpoints using the AWS VPC dashboard to make the AWS PrivateLink connection to your Confluent Cloud cluster.

Note

Confluent recommends using a Terraform configuration for setting up Private Link endpoints. This configuration automates the manual steps described below.

Prerequisites

In the Confluent Cloud Console, find the following information for your Confluent Cloud cluster under Cluster Settings, and for your Confluent Cloud network under the Confluent Cloud Network overview.

  • Kafka Bootstrap (in the General tab)
  • Availability Zone IDs (in the Networking tab)
  • VPC Service Endpoint Name (in the Networking tab)
  • DNS Domain Name (in the Networking tab)
  • Zonal DNS Subdomain Names (in the Networking tab)

Steps

  1. Verify subnet availability.

    The Confluent Cloud VPC and cluster are created in specific zones that, for optimal usage, should match the zones of the VPC you want to make the AWS PrivateLink connections from. Your VPC must have subnets in these zones so that IP addresses can be allocated from them; subnets in other zones are also allowed. Use AWS Availability Zone IDs to identify the zones. You can find the specific Availability Zones for your Confluent Cloud cluster in the Confluent Cloud Console.

    Note

    Because Availability Zone names (for example, us-west-2a) are inconsistent across AWS accounts, Availability Zone IDs (for example, usw2-az1) are used.

  2. Verify that DNS hostnames and DNS resolution are enabled.

    1. Open the AWS Management Console and go to the VPC Dashboard at https://console.aws.amazon.com/vpc/home.
    2. In the navigation menu under VIRTUAL PRIVATE CLOUD, click Your VPCs. The Your VPCs page appears.
    3. Select your VPC, click Actions, and Edit DNS hostnames. The Edit DNS hostnames page appears.
    4. Verify that DNS hostnames is enabled.
    5. Click Actions and then select Edit DNS resolution. The Edit DNS resolution page appears.
    6. Verify that DNS resolution is enabled.
  3. Create the VPC endpoint.

    1. Open the AWS Management Console and go to the VPC Dashboard at https://console.aws.amazon.com/vpc/home.

    2. In the navigation menu under VIRTUAL PRIVATE CLOUD, click Endpoints. The Endpoints page appears.

    3. Click Create endpoint. The Create endpoint page appears.

    4. Under Service category, select Other endpoint services.

    5. Under Service settings, enter your Confluent Cloud VPC Service Endpoint Name as the Service name. You can find this value in the Confluent Cloud Console.

    6. Click Verify service. If you get an error, ensure that your account is allowed to create PrivateLink connections.

    7. Under VPC, select the VPC in which to create your endpoint.

    8. Click Create endpoint.

      Your VPC endpoint is created and displayed. Copy the VPC Endpoint ID for later use.

    9. Note the availability zones for your Confluent Cloud cluster from the Networking tab in the Confluent Cloud Console, and select the service in these zones. Ensure the desired subnet is selected for each zone. If you omit any zone displayed in the Confluent Cloud Console, connectivity to brokers in the omitted zones can fail, which can result in an unusable cluster.

      Note

      Confluent Cloud single availability zone clusters need service and subnet selection in one zone, whereas Confluent Cloud multi-availability zone clusters need service and subnet selection in three zones.

    10. Select or create a security group for the VPC Endpoints.

      • Add an inbound rule for each of ports 80, 443, and 9092 from your desired source (your VPC CIDR), with the Protocol set to TCP for all three rules.
      • Port 80 is not required, but is available as a redirect to https/443, if desired.
    11. Wait for acceptance by Confluent Cloud. This should happen almost immediately (less than a minute). After it is accepted, the endpoint will transition from “Pending” to “Active”.
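The security group rules above can also be scripted with the AWS CLI. A sketch that only prints the commands; the security group ID and CIDR are placeholder examples, and you would remove the echo to actually apply them:

```shell
# Print one authorize-security-group-ingress call per required port.
# SG_ID and VPC_CIDR are placeholders for your endpoint security group
# and your VPC CIDR range.
SG_ID="sg-0123456789abcdef0"
VPC_CIDR="10.0.0.0/16"
for PORT in 80 443 9092; do
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr "$VPC_CIDR"
done
```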

Set up DNS records to use AWS VPC endpoints

You must update your DNS records to ensure connectivity through AWS PrivateLink in the supported pattern. You can use any DNS service that routes DNS requests as described below; this example uses AWS Route 53.

DNS resolution options

For AWS PrivateLink used with Confluent Cloud networks, you can use the default DNS resolution or enable private DNS resolution.

Default DNS resolution

The default DNS resolution, which is partially public, is used for the bootstrap server and broker hostnames of a Confluent Cloud cluster that is using AWS PrivateLink. The default DNS resolution uses the following two-step process:

  1. The Confluent Cloud Global DNS Resolver removes the glb subdomain and returns a CNAME for your bootstrap and broker hostnames.

    Example: $lkc-id-$nid.$region.$cloud.glb.confluent.cloud

    CNAME returned: $lkc-id.$nid.$region.$cloud.confluent.cloud

  2. Using the Private DNS Zone that you configured in the previous steps, the CNAME is resolved to private endpoints.

Private DNS resolution (enabled)

If you enable the Private DNS resolution option, your private DNS zone provides internal DNS resolution for your private networks without requiring external resolution to the Confluent Global DNS Resolver (GLB).

Tip

To identify the CNAME DNS zone records that correctly map to zonal endpoints for Confluent Cloud, you can run the DNS helper shell script.

Important

In the AWS Management Console, the Enable DNS name setting under Additional settings (which appears only after you select the VPC) must be disabled when you create a private endpoint. By default, this setting is enabled.

Configure DNS zones

To update DNS resolution using AWS Route53 in the AWS Management Console:

  1. Create the Private Hosted Zone.

    1. Click Create Hosted Zone.
    2. Paste the Confluent Cloud DNS domain into Domain Name. You can find it in the Confluent Cloud Console.
    3. Change Type to Private Hosted Zone for Amazon VPC.
    4. Select the VPC ID where you added the VPC Endpoint.
    5. Click Create.
  2. Set up DNS records for Confluent Cloud single availability zone clusters as follows:

    1. Using the Create Record button, create the following record, mapping to the VPC Endpoint DNS name from the previous setup:

      *.$domain CNAME “The lone zonal VPC Endpoint” TTL 60
      

      For example:

      *.l92v4.us-west-2.aws.confluent.cloud CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2c.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      
  3. Set up DNS records for Confluent Cloud multi-availability zone clusters as follows:

    1. Using the Create Record button, create the following record, mapping to the VPC Endpoint DNS name from the previous setup:

      *.$domain CNAME “All Zones VPC Endpoint” TTL 60
      

      For example:

      *.l92v4.us-west-2.aws.confluent.cloud CNAME vpce-09f9f4e9a86682eed-9gxp2f7v.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      

      The CNAME is used to ensure AWS Route53 health checks are used in the case of AWS outages.

    2. Create one record per zone (repeat for all zones) in the following form:

      *.$zoneid.$domain CNAME “Zonal VPC Endpoint” TTL 60
      

      For example:

      *.usw2-az3.l92v4.us-west-2.aws.confluent.cloud. CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2a.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      *.usw2-az2.l92v4.us-west-2.aws.confluent.cloud. CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2c.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      *.usw2-az1.l92v4.us-west-2.aws.confluent.cloud. CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2b.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
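If you manage Route 53 from the command line instead of the console, the same records can be created with a change batch. A minimal sketch, assuming the private hosted zone already exists; the hosted zone ID is a placeholder, the record name and value are the usw2-az1 example above, and only one zonal record is shown (repeat per zone):

```shell
# Build a Route 53 change batch for one zonal wildcard CNAME record
# (values from the example above), validate it, then submit it.
cat > change-batch.json <<'EOF'
{
  "Comment": "Zonal CNAME for Confluent Cloud PrivateLink (repeat per zone)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "*.usw2-az1.l92v4.us-west-2.aws.confluent.cloud",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [
          { "Value": "vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2b.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com" }
        ]
      }
    }
  ]
}
EOF
python3 -m json.tool change-batch.json > /dev/null && echo "change batch OK"

# Submit (uncomment with your private hosted zone ID; placeholder shown):
# aws route53 change-resource-record-sets \
#   --hosted-zone-id Z0000000EXAMPLE \
#   --change-batch file://change-batch.json
```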
      

Validate connectivity to Confluent Cloud

  1. From an instance within the VPC (or anywhere the DNS from the previous step is set up), run the following steps to validate that Kafka connectivity through AWS PrivateLink works correctly.

    1. Set an environment variable with the cluster bootstrap URL.

      export BOOTSTRAP=<bootstrap-server-url>
      

      The bootstrap URL displayed in the Confluent Cloud Console includes the port (9092). Set BOOTSTRAP to the full hostname only, without the port, so that you can supply the host and port separately to the openssl s_client -connect <host>:<port> command in the next step.

      For example:

      # Default DNS resolution
      export BOOTSTRAP=lkc-2v531-lg1y3.us-west-1.aws.glb.confluent.cloud
      
      # Private DNS resolution
      
      export BOOTSTRAP=lkc-2v531.domz6wj0p.us-west-1.aws.confluent.cloud
      
    2. Test connectivity to your cluster by running the openssl s_client -connect <host>:<port> command, specifying the $BOOTSTRAP environment variable for the <host> value and 9092 for the <port> value.

      openssl s_client -connect $BOOTSTRAP:9092 -servername $BOOTSTRAP -verify_hostname $BOOTSTRAP </dev/null 2>/dev/null | grep -E 'Verify return code|BEGIN CERTIFICATE' | xargs
      

      To run the openssl s_client -connect command, the -connect option requires that you specify the host and the port number. For details, see the openssl s_client documentation.

    3. If the output is -----BEGIN CERTIFICATE----- Verify return code: 0 (ok), connectivity to the bootstrap server is confirmed.

      Note

      You might need to update the network security tools and firewalls to allow connectivity. If you have issues connecting after following these steps, confirm which network security systems your organization uses and whether their configurations need to be changed. If you still have issues, run the debug connectivity script and provide the output to Confluent Support for assistance with your PrivateLink setup.
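When troubleshooting, it can also help to confirm what the bootstrap name actually resolves to before examining firewalls. A quick sanity check; the hostname below is the placeholder private DNS example from above, so substitute your own:

```shell
# Placeholder bootstrap hostname - substitute your cluster's value.
BOOTSTRAP=lkc-2v531.domz6wj0p.us-west-1.aws.confluent.cloud

# From an instance inside the VPC, the hostname should resolve to the private
# IPs of the interface endpoint ENIs. A public or missing answer means the
# Route 53 records from the previous section are not visible to this resolver.
if getent hosts "$BOOTSTRAP"; then
  echo "resolution ok"
else
  echo "DNS lookup failed - check the private hosted zone and its VPC association"
fi
```

If the lookup fails, verify that the private hosted zone is associated with the VPC you are testing from before changing any security-group or firewall rules.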

  2. Next, verify connectivity using the Confluent CLI.

    1. Sign in to Confluent CLI with your Confluent Cloud credentials.

      confluent login
      
    2. List the clusters in your organization.

      confluent kafka cluster list
      
    3. Select the cluster with AWS PrivateLink you wish to test.

      confluent kafka cluster use ...
      

      For example:

      confluent kafka cluster use lkc-a1b2c
      
    4. Create a cluster API key to authenticate with the cluster.

      confluent api-key create --resource ... --description ...
      

      For example:

      confluent api-key create --resource lkc-a1b2c --description "connectivity test"
      
    5. Select the API key you just created.

      confluent api-key use ... --resource ...
      

      For example:

      confluent api-key use WQDMCIQWLJDGYR5Q --resource lkc-a1b2c
      
    6. Create a test topic.

      confluent kafka topic create test
      
    7. Start consuming events from the test topic.

      confluent kafka topic consume test
      
    8. Open another terminal tab or window.

    9. Start a producer.

      confluent kafka topic produce test
      
    10. Type anything into the produce tab and hit Enter; press Ctrl+D or Ctrl+C to stop the producer.

    11. The tab running consume will print what was typed in the tab running produce.

You’re done! The cluster is ready for use.

Limitations¶

  • AWS PrivateLink is only available for use with Dedicated clusters.
  • Existing Confluent Cloud networks cannot be converted to use AWS PrivateLink.
  • After a Confluent Cloud network is provisioned, you cannot change its DNS resolution option (default or private).
  • Cross-region AWS PrivateLink connections are not supported.
  • See also: Prerequisites.

Connectors¶

Fully-managed Confluent Cloud connectors can connect to sources or sinks using public IP addresses. Sources or sinks in the customer network with private IP addresses are not supported.

Availability zones¶

Japan (Osaka) region ap-northeast-3 and Jakarta region ap-southeast-3 are not supported for AWS PrivateLink clusters in Confluent Cloud. For these regions, you can use VPC peering for clusters, or use AWS PrivateLink with clusters in different regions.

All AWS availability zones, except use1-az3, are supported in the us-east-1 region.

Single availability-zone clusters¶

Each Confluent Cloud single zone cluster that uses AWS PrivateLink access is provisioned with service endpoints in one availability zone. The availability zone is selected based on Confluent Cloud placement policies.

To ensure connectivity over AWS PrivateLink connections, provision subnets in your VPC that include, at a minimum, the availability zone in which the AWS PrivateLink access is provisioned.
