Configure an AWS PrivateLink connection to Confluent Cloud

Overview

Prerequisites
A Dedicated Kafka cluster in AWS with AWS PrivateLink enabled. For more information about creating a Dedicated cluster, see Create a Cluster in Confluent Cloud.

Follow this procedure to configure AWS PrivateLink for a Dedicated cluster in AWS.

  1. Register your AWS account with Confluent Cloud using the Confluent Cloud UI.
  2. Set up the VPC Endpoint(s) to Confluent Cloud PrivateLink Service in your AWS account using the AWS portal.
  3. Set up Availability Zone mapped DNS records to use AWS VPC Endpoints using the AWS portal.
  4. Validate connectivity to Confluent Cloud.

Requirements

  1. To use AWS PrivateLink with Confluent Cloud, your VPC must allow outbound internet connections so that DNS resolution, Schema Registry, and the Confluent Cloud CLI work.
    1. DNS requests to the public authority, which traverse to the private hosted zone, must be allowed.
    2. Confluent Cloud Schema Registry is accessible only over the internet.
    3. The Confluent Cloud CLI requires internet access to authenticate with the Confluent Cloud control plane.
  2. Confluent Cloud web UI components, such as topic management and ksqlDB, need additional configuration to function because they use cluster endpoints. To use all features of the Confluent Cloud web UI with AWS PrivateLink, follow this procedure.
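
A quick way to confirm these requirements from an instance in your VPC is to test outbound DNS resolution and HTTPS access. The following is a minimal sketch; confluent.cloud is the public Confluent Cloud domain:

  # Public DNS resolution must work from within the VPC.
  dig +short confluent.cloud

  # Outbound HTTPS must be allowed (used for CLI authentication and
  # Schema Registry access).
  curl -sI https://confluent.cloud | head -n 1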

Warning

For limitations of the AWS PrivateLink feature, see Limitations.

Register your AWS account with Confluent Cloud

To make an AWS PrivateLink connection to a cluster in Confluent Cloud, you must register the AWS account ID you wish to use. This is a security measure that lets Confluent ensure only your organization can initiate AWS PrivateLink connections to the cluster. Confluent Cloud will not accept AWS PrivateLink connections from a VPC that is not in a registered AWS account.

  1. Navigate to the Cluster Settings page, click the Networking tab, and click Add Account.
  2. Provide the 12-digit AWS Account Number for the account containing the VPCs you want to make the PrivateLink connection from and click Save. Your AWS PrivateLink connection status will transition from “Pending” to “Active” in the Confluent Cloud web UI. You still need to configure the VPC endpoint in your VPC before you can connect to the cluster.

Set up the VPC Endpoint for AWS PrivateLink in your AWS account

After the connection status is “Active” in the Confluent Cloud web UI, you must configure VPC Endpoint(s) in your VPC to make the PrivateLink connection(s) to your Confluent Cloud cluster.

Note

Confluent recommends using a Terraform config for setting up the VPC Endpoint. This config automates the manual steps described below.

Prerequisites

In the Confluent Cloud UI, you can find the following information for your Confluent Cloud cluster under the Cluster Settings section.

  • Kafka Bootstrap (in the General tab)
  • Availability Zone ID(s) (in the Networking tab)
  • VPC Service Endpoint Name (in the Networking tab)
  • DNS Domain Name (in the Networking tab)
  • Zonal DNS Subdomain Name(s) (in the Networking tab)
  1. Verify subnet availability

    The Confluent Cloud VPC and cluster are created in specific zone(s) that, for optimal usage, should match the zone(s) of the VPC you want to make the AWS PrivateLink connection(s) from. Your VPC must have subnets in these zones so that IP addresses can be allocated from them; subnets in other zones are also allowed. Use AWS Availability Zone IDs for this mapping. You can find the specific Availability Zone(s) for your Confluent Cloud cluster in the UI.

    Ensure the VPC settings enableDnsHostnames and enableDnsSupport are set to true.
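
    As a minimal sketch, you can check and enable these attributes with the AWS CLI; the VPC ID below is a placeholder:

      # Check the current values of both DNS attributes (placeholder VPC ID).
      aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
      aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames

      # Enable each attribute if it is not already true.
      aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support '{"Value": true}'
      aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value": true}'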

    Note

    Availability Zone names (like us-west-2a) are not consistent across AWS accounts, so Availability Zone IDs (like usw2-az1) are used instead.

  2. Create the VPC Endpoint

    In the AWS VPC Console:

    1. Select Endpoints from the left-hand list of tabs.

    2. Click Create Endpoint.

    3. Select Find service by name.

    4. Paste in the Confluent Cloud VPC Service Endpoint Name. You can find this in the Confluent Cloud UI.

    5. Click Verify. If you get an error, ensure that your account is allowed to create PrivateLink connections.

    6. Select the VPC in which to create the endpoint. Keep a note of the VPC Endpoint ID for use later.

    7. Note the availability zone(s) for your Confluent Cloud cluster from the Networking tab in the Confluent Cloud UI. Select the service in these zone(s). Ensure the desired subnet is selected for each zone. Failure to add all zone(s) as displayed in the Confluent Cloud UI can cause connectivity issues to brokers in the omitted zones, which can result in an unusable cluster.

      Note

      Confluent Cloud single availability zone clusters need service and subnet selection in one zone, whereas Confluent Cloud multi availability zone clusters need service and subnet selection in three zones.

    8. Select or create a security group for the VPC Endpoints. Add three inbound rules, one for each of ports 80, 443, and 9092, from your desired source (for example, your VPC CIDR). The protocol should be TCP for all three rules. Note: port 80 is not required, but is available as a redirect-only to HTTPS/443 if desired.

    9. Wait for acceptance by Confluent Cloud. This should happen almost immediately (less than 1 minute). After it is accepted, the endpoint will transition from “Pending” to “Active”.
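
    The console steps above can also be scripted. The following is a minimal sketch using the AWS CLI; the VPC ID, subnet IDs, and security group ID are placeholders, and the service name must be the VPC Service Endpoint Name from the Confluent Cloud UI (the value below reuses the example from this page):

      # Create an interface VPC endpoint for the Confluent Cloud
      # PrivateLink service (all IDs are placeholders).
      aws ec2 create-vpc-endpoint \
        --vpc-id vpc-0123456789abcdef0 \
        --vpc-endpoint-type Interface \
        --service-name com.amazonaws.vpce.us-west-2.vpce-svc-04689782e9d70ee9e \
        --subnet-ids subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333 \
        --security-group-ids sg-0123456789abcdef0

      # Allow inbound TCP on ports 80, 443, and 9092 from the VPC CIDR.
      for port in 80 443 9092; do
        aws ec2 authorize-security-group-ingress \
          --group-id sg-0123456789abcdef0 \
          --protocol tcp --port "$port" --cidr 10.0.0.0/16
      done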

Set up DNS records to use AWS VPC Endpoints

DNS changes must be made to ensure connectivity passes through AWS PrivateLink in the supported pattern. Any DNS provider that can route DNS as described below is acceptable; AWS Route53, which is used in this example, is not required.

Note

Run the DNS helper script to determine the DNS zone records for a specific VPC Endpoint.

Update DNS using AWS Route53 in the AWS console:

  1. Create the Private Hosted Zone.

    1. Click Create Hosted Zone.
    2. Paste Confluent Cloud DNS into Domain Name. This can be found in the Confluent Cloud UI.
    3. Change Type to Private Hosted Zone for Amazon VPC.
    4. Select the VPC ID where you added the VPC Endpoint.
    5. Click Create.
  2. Set up DNS records for Confluent Cloud single availability zone clusters as follows:

    1. Using the Create Record Set button, create the following record from the VPC Endpoint DNS name map obtained in the previous step, in the form:

      *.$domain CNAME “The lone zonal VPC Endpoint” TTL 60
      

      For example:

      *.l92v4.us-west-2.aws.confluent.cloud CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2c.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      
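      If you manage the zone with the AWS CLI instead of the console, the same record can be created with a change batch; this is a sketch in which the hosted zone ID is a placeholder, and the name and value reuse the example above:

      aws route53 change-resource-record-sets \
        --hosted-zone-id Z0123456789EXAMPLE \
        --change-batch '{
          "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
              "Name": "*.l92v4.us-west-2.aws.confluent.cloud",
              "Type": "CNAME",
              "TTL": 60,
              "ResourceRecords": [{"Value": "vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2c.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com"}]
            }
          }]
        }'
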
  3. Set up DNS records for Confluent Cloud multi availability zone clusters as follows:

    1. Using the Create Record Set button, create the following record from the VPC Endpoint DNS name map obtained in the previous step, in the form:

      *.$domain CNAME “All Zones VPC Endpoint” TTL 60
      

      For example:

      *.l92v4.us-west-2.aws.confluent.cloud CNAME vpce-09f9f4e9a86682eed-9gxp2f7v.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      

      The CNAME ensures that AWS Route53 health checks are used in the case of AWS outages.

    2. Create one record per zone (repeat for all zones), in the form:

      *.$zoneid.$domain CNAME “Zonal VPC Endpoint” TTL 60
      

      For example:

      *.usw2-az3.l92v4.us-west-2.aws.confluent.cloud. CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2a.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      *.usw2-az2.l92v4.us-west-2.aws.confluent.cloud. CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2c.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      *.usw2-az1.l92v4.us-west-2.aws.confluent.cloud. CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2b.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      
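    After the records are created, you can verify them from an instance in the VPC. The hostnames below reuse the example values above, so substitute your own domain and zone IDs:

      # Each zonal wildcard should resolve through its zonal VPC endpoint.
      dig +short test.usw2-az1.l92v4.us-west-2.aws.confluent.cloud
      dig +short test.usw2-az2.l92v4.us-west-2.aws.confluent.cloud
      dig +short test.usw2-az3.l92v4.us-west-2.aws.confluent.cloud
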

Validate Connectivity to Confluent Cloud

  1. From an instance within the VPC (or anywhere the DNS from the previous step is set up), run the following to validate that Kafka connectivity through AWS PrivateLink works correctly.

    1. Set a variable with the cluster bootstrap URL.

      % export BOOTSTRAP=$ConfluentCloudBootstrap
      

      For example:

      % export BOOTSTRAP=lkc-nkodz-0l6je.us-west-2.aws.confluent.cloud
      
    2. Test connectivity to the cluster.

      % openssl s_client -connect $BOOTSTRAP:9092 -servername $BOOTSTRAP -verify_hostname $BOOTSTRAP </dev/null 2>/dev/null | grep -E 'Verify return code|BEGIN CERTIFICATE' | xargs
      
    3. If the output is -----BEGIN CERTIFICATE----- Verify return code: 0 (ok), connectivity to the bootstrap is confirmed.

    Note

    You might need to update the network security tools and firewalls to allow connectivity. If you have issues connecting after following these steps, confirm which network security systems your organization uses and whether their configurations need to be changed. If you still have issues, run the debug connectivity script and provide the output to Confluent Cloud Support for assistance with your PrivateLink setup.
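
    Before contacting support, two quick checks (a sketch using standard tools, with the BOOTSTRAP variable set as above) are that the bootstrap name resolves to the private IPs of your VPC endpoint and that TCP port 9092 is reachable:

      # The bootstrap should resolve via CNAME to private endpoint IPs.
      dig +short $BOOTSTRAP

      # Confirm a TCP connection to port 9092 can be established.
      nc -zvw5 $BOOTSTRAP 9092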

  2. Next, verify connectivity with the Confluent Cloud CLI.

    1. Log in to the Confluent Cloud CLI with your Confluent Cloud credentials.

      ccloud login
      
    2. List the clusters in your organization.

      ccloud kafka cluster list
      
    3. Select the cluster with AWS PrivateLink you wish to test.

      ccloud kafka cluster use ...
      

      For example:

      ccloud kafka cluster use lkc-a1b2c
      
    4. Create a cluster API key to authenticate with the cluster.

      ccloud api-key create --resource ... --description ...
      

      For example:

      ccloud api-key create --resource lkc-a1b2c --description "connectivity test"
      
    5. Select the API key you just created.

      ccloud api-key use ... --resource ...
      

      For example:

      ccloud api-key use WQDMCIQWLJDGYR5Q --resource lkc-a1b2c
      
    6. Create a test topic.

      ccloud kafka topic create test
      
    7. Start consuming events from the test topic.

      ccloud kafka topic consume test
      
    8. Open another terminal tab or window.

    9. Start a producer.

      ccloud kafka topic produce test
      
    10. Type anything into the produce tab and press Enter; press Ctrl+D or Ctrl+C to stop the producer.

    11. The tab running consume will print what was typed in the tab running produce.
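
    12. Optionally, clean up by deleting the test topic when you are finished.

      ccloud kafka topic delete test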

  3. You’re done! The cluster is ready for use.

Limitations

Warning

  1. The following regions are not yet supported for AWS PrivateLink clusters in Confluent Cloud: eu-west-2 (London), eu-west-3 (Paris), sa-east-1 (São Paulo), af-south-1 (Cape Town), and me-south-1 (Bahrain). Instead, you can use VPC peering for clusters in these regions, or use AWS PrivateLink with clusters in other regions.

  2. The following is the list of supported and unsupported availability zones in the AWS us-east-1 region for AWS PrivateLink clusters in Confluent Cloud:

    • Supported availability zones: use1-az1, use1-az4, and use1-az5
    • Unsupported availability zones: use1-az2, use1-az3, and use1-az6
  3. Cross-region AWS PrivateLink connections are not supported.

  4. AWS PrivateLink is only available for use with Dedicated clusters.

  5. Existing Confluent Cloud clusters cannot be converted to use AWS PrivateLink.

  6. Fully-managed ksqlDB is not available for use with AWS PrivateLink clusters.

  7. Fully-managed Confluent Cloud connectors can connect to sources or sinks using a public IP. Sources or sinks in the customer network with private IPs are not supported.

  8. AWS PrivateLink connections cannot be shared across multiple Confluent Cloud clusters. Separate AWS PrivateLink connections must be made to each Confluent Cloud cluster.

  9. Availability zone selection for placement of the Confluent Cloud cluster and AWS PrivateLink service is not supported.

    1. Each Confluent Cloud multi availability zone cluster using PrivateLink will be provisioned with service endpoints in three availability zones. For those AWS regions that have more than three availability zones, the availability zones will be selected based on Confluent Cloud placement policies. One of the following options can be used to ensure connectivity to a Confluent Cloud multi availability zone cluster over AWS PrivateLink:

      • Provision subnets in your VPC in all availability zones in the region
      • Provision subnets in your VPC in at least the three availability zones in which the PrivateLink service endpoints are provisioned. This information will be available in the Confluent Cloud web UI under the Networking tab on the Cluster Settings page after the Confluent Cloud cluster is provisioned.

      The following regions currently have more than three availability zones: us-east-1 (N. Virginia), us-west-2 (Oregon), and ap-northeast-1 (Tokyo).

    2. Each Confluent Cloud single availability zone cluster using PrivateLink will be provisioned with service endpoints in one availability zone. The availability zone will be selected based on Confluent Cloud placement policies. One of the following options can be used to ensure connectivity to a Confluent Cloud single availability zone cluster over AWS PrivateLink:

      • Provision subnets in your VPC in all availability zones in the region
      • Provision subnets in your VPC in at least the single availability zone in which the PrivateLink service endpoint is provisioned. This information will be available in the Confluent Cloud web UI under the Networking tab on the Cluster Settings page after the Confluent Cloud cluster is provisioned.
  10. For requirements of the AWS PrivateLink feature, see Requirements.

Suggested Reading

For additional information about AWS PrivateLink Support with Confluent Cloud, see this article on the Confluent blog.
