AWS PrivateLink

AWS PrivateLink provides one-way, secure connection access from your VPC to Confluent Cloud, with added protection against data exfiltration. This networking option is popular for its combination of security and simplicity.

The following diagram summarizes the AWS PrivateLink architecture with the customer VPC/account and the Confluent Cloud VPC/account.

[Diagram: AWS PrivateLink architecture between the customer VPC/account and a Confluent Cloud cluster]

Prerequisites

  • A Confluent Cloud network (CCN) of type PRIVATELINK in AWS. If a network does not exist, follow the procedure below in Create a network of type PRIVATELINK in AWS.
  • To use AWS PrivateLink with Confluent Cloud, your VPC must allow outbound internet connections for DNS resolution, Confluent Cloud Schema Registry, and Confluent CLI.
    • DNS requests to the public authority that traverse to the private hosted zone are required.
    • Confluent Cloud Schema Registry is accessible over the internet.
    • Provisioning new ksqlDB clusters requires Internet access. After ksqlDB clusters are up and running, they are fully accessible over AWS PrivateLink connections.
    • Confluent CLI requires internet access to authenticate with the Confluent Cloud control plane.
  • Confluent Cloud Console components, like topic management, require additional configuration to function because they use cluster endpoints. To use all features of the Confluent Cloud Console with AWS PrivateLink, see Configure DNS Resolution.

Warning

For limitations of the AWS PrivateLink feature, see Limitations below.

Create a network of type PRIVATELINK in AWS

To create a Dedicated cluster with AWS PrivateLink, first create a Confluent Cloud network in the required cloud and region.

  1. In the Confluent Cloud Console, go to the Network management page for your environment.
  2. Click Create your first network if this is the first network in your environment, or click + Add Network if your environment has existing networks.
  3. Select AWS as the Cloud Provider and the desired geographic region.
  4. Select the PrivateLink connectivity type and click Continue.
  5. Specify a Network Name, review your configuration, and click Create Network.

Here is an example REST API request:

HTTP POST request

POST https://api.confluent.cloud/networking/v1/networks

Authentication

See Authentication.

Request specification

In the request specification, include values for cloud, region, environment, connection type, and, optionally, add the display name, CIDR, and zones for the Confluent Cloud network. Update the attributes below with the correct values.

{
  "spec": {
    "display_name": "AWS-PL-CCN-1",
    "cloud": "AWS",
    "region": "us-west-2",
    "connection_types": [
      "PRIVATELINK"
    ],
    "zones": [
      "usw2-az1",
      "usw2-az2",
      "usw2-az3"
    ],
    "environment": {
      "id": "env-000000"
    }
  }
}
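
A minimal sketch of submitting this spec with curl, assuming the JSON above is saved as network.json (an illustrative file name), that CLOUD_API_KEY and CLOUD_API_SECRET hold a Confluent Cloud API key authorized for networking operations, and that the response carries the network ID in a top-level id field; the same pattern applies to the other POST requests on this page:

curl -s -X POST "https://api.confluent.cloud/networking/v1/networks" \
  -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  -H "Content-Type: application/json" \
  -d @network.json | jq -r '.id'   # prints the new Confluent Cloud network ID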

Creating a Confluent Cloud network typically takes 15 to 20 minutes. Note the Confluent Cloud network ID from the response to specify it in the following commands.

After successfully provisioning the Confluent Cloud network, you can add Dedicated clusters within your Confluent Cloud network by using either of the following procedures:

  • Confluent Cloud Console: Create a Cluster in Confluent Cloud
  • Cluster Management API: Create a cluster

Register your AWS account with Confluent Cloud

To make an AWS PrivateLink connection to a cluster in Confluent Cloud, you must register the AWS account ID you wish to use. This security measure ensures that only your organization can initiate AWS PrivateLink connections to the cluster. Confluent Cloud will not accept AWS PrivateLink connections from a VPC that is not contained in a registered AWS account.

You can register multiple AWS accounts to the same Confluent Cloud cluster, and AWS PrivateLink connections can be made from multiple VPCs in each registered AWS account.

  1. In the Confluent Cloud Console, go to your network resource in the Network Management tab and click + PrivateLink.
  2. Enter the 12-digit AWS Account Number for the account containing the VPCs you want to make the AWS PrivateLink connection from.
  3. Note the VPC Endpoint service name, which you use to create an AWS PrivateLink connection from your VPC to the Confluent Cloud cluster. This value is also provided later.
  4. Click Save.

HTTP POST request

POST https://api.confluent.cloud/networking/v1/private-link-accesses

Authentication

See Authentication.

Request specification

In the request specification, include values for the Confluent Cloud network ID, account, environment, and, optionally, add the display name. Update the attributes below with the correct values.

{
  "spec": {
    "display_name": "AWS-PL-CCN-1",
    "cloud": {
      "kind": "AwsPrivateLinkAccess",
      "account": "000000000000"
    },
    "environment": {
      "id": "env-000000"
    },
    "network": {
      "id": "n-00000"
    }
  }
}

Your AWS PrivateLink connection status will transition from “Pending” to “Active” in the Confluent Cloud Console. You still need to configure the Private Endpoints in your VPC before you can connect to the cluster.
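
To watch this transition from the API rather than the console, you can poll the resource. A sketch with placeholder IDs, assuming the provisioning state is reported under a status field in the response:

curl -s "https://api.confluent.cloud/networking/v1/private-link-accesses/pla-000000?environment=env-000000" \
  -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" | jq '.status'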

Create an AWS PrivateLink connection to Confluent Cloud

Follow this procedure to create an AWS PrivateLink connection to a Confluent Cloud cluster on AWS using the Confluent Cloud Console or REST APIs.

Set up the VPC Endpoint for AWS PrivateLink in your AWS account

After the connection status is “Active” in the Confluent Cloud Console, you must configure Private Endpoints in your VPC from the AWS Management Console to make the AWS PrivateLink connection to your Confluent Cloud cluster.

Note

Confluent recommends using a Terraform configuration to set up PrivateLink endpoints. This configuration automates the manual steps described below.

Prerequisites

In the Confluent Cloud Console, find the following information for your Confluent Cloud cluster under the Cluster Settings section, and for your Confluent Cloud network under the Confluent Cloud Network overview.

  • Kafka Bootstrap (in the General tab)
  • Availability Zone IDs (in the Networking tab)
  • VPC Service Endpoint Name (in the Networking tab)
  • DNS Domain Name (in the Networking tab)
  • Zonal DNS Subdomain Names (in the Networking tab)
  1. Verify subnet availability.

    The Confluent Cloud VPC and cluster are created in specific zones that, for optimal usage, should match the zones of the VPC from which you make the AWS PrivateLink connections. You must have subnets in your VPC for these zones so that IP addresses can be allocated from them; subnets in additional zones are also allowed. Use AWS zone IDs to identify these zones. You can find the specific Availability Zones for your Confluent Cloud cluster in the Confluent Cloud Console.

    Note

    Because Availability Zone names (for example, us-west-2a) are inconsistent across AWS accounts, Availability Zone IDs (like usw2-az1) are used.
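
    To map the Availability Zone names in your AWS account to zone IDs, you can use the AWS CLI; for example:

      aws ec2 describe-availability-zones --region us-west-2 \
        --query 'AvailabilityZones[].[ZoneName,ZoneId]' --output table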

  2. Verify that DNS hostnames and DNS resolution are enabled.

    1. Open the AWS Management Console and go to the VPC Dashboard at https://console.aws.amazon.com/vpc/home.
    2. In the navigation menu under VIRTUAL PRIVATE CLOUD, click Your VPCs. The Your VPCs page appears.
    3. Select your VPC, click Actions, and then select Edit DNS hostnames. The Edit DNS hostnames page appears.
    4. Verify that DNS hostnames is enabled.
    5. Click Actions, and then select Edit DNS resolution. The Edit DNS resolution page appears.
    6. Verify that DNS resolution is enabled.
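
    Both settings can also be verified from the AWS CLI (the VPC ID below is a placeholder):

      aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames
      aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport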
  3. Create the VPC endpoint.

    2. Open the AWS Management Console and go to the VPC Dashboard at https://console.aws.amazon.com/vpc/home.

    2. In the navigation menu under VIRTUAL PRIVATE CLOUD, click Endpoints. The Endpoints page appears.

    3. Click Create endpoint. The Create endpoint page appears.

    4. Under Service category, select Other endpoint services.

    5. Under Service settings, enter your Confluent Cloud VPC Service Endpoint Name as the Service name. You can find this in the Confluent Cloud Console.

    6. Click Verify service. If you get an error, ensure that your account is allowed to create PrivateLink connections.

    7. Under VPC, select the VPC in which to create your endpoint.

    8. Click Create endpoint.

      Your VPC endpoint is created and displayed. Copy the VPC Endpoint ID for later use.

    9. Note the availability zones for your Confluent Cloud cluster from the Networking tab in the Confluent Cloud Console. Select the service in these zones. Ensure the desired subnet is selected for each zone. Failure to add all zones as displayed in the Confluent Cloud Console can cause connectivity issues to brokers in the omitted zones, which can result in an unusable cluster.

      Note

      Confluent Cloud single availability zone clusters need service and subnet selection in one zone, whereas Confluent Cloud multi-availability zone clusters need service and subnet selection in three zones.

    10. Select or create a security group for the VPC Endpoints.

      • Add three inbound rules, one for each of ports 80, 443, and 9092, from your desired source (your VPC CIDR). Set the protocol to TCP for all three rules.
      • Port 80 is not required, but is available as a redirect to HTTPS/443, if desired.
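
      A sketch of adding these rules with the AWS CLI, using a placeholder security group ID and VPC CIDR:

        for port in 80 443 9092; do
          aws ec2 authorize-security-group-ingress \
            --group-id sg-0123456789abcdef0 \
            --protocol tcp --port "$port" --cidr 10.0.0.0/16
        done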
    11. Wait for acceptance by Confluent Cloud. This should happen almost immediately (less than a minute). After it is accepted, the endpoint will transition from “Pending” to “Active”.
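
If you script your infrastructure rather than clicking through the console, the endpoint creation above corresponds roughly to the following AWS CLI sketch; all IDs are placeholders, and the service name must be the VPC Service Endpoint Name shown in the Confluent Cloud Console:

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-west-2.vpce-svc-04689782e9d70ee9e \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333 \
  --security-group-ids sg-0123456789abcdef0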

Set up DNS records to use AWS VPC endpoints

DNS changes must be made to ensure connectivity passes through AWS PrivateLink in the supported pattern. Any DNS provider that can route DNS as described below is acceptable; AWS Route53 (used in this example) is not required.

Note

Run the DNS helper script to identify the DNS Zone records for VPC Endpoints.

Update DNS using AWS Route53 in the AWS Management Console:

  1. Create the Private Hosted Zone.

    1. Click Create Hosted Zone.
    2. Paste the Confluent Cloud DNS domain into Domain Name. You can find it in the Confluent Cloud Console.
    3. Change Type to Private Hosted Zone for Amazon VPC.
    4. Select the VPC ID where you added the VPC Endpoint.
    5. Click Create.
  2. Set up DNS records for Confluent Cloud single availability zone clusters as follows:

    1. Using the Create Record button, create the following record, substituting your domain and the VPC endpoint DNS name from the previous step:

      *.$domain CNAME “The lone zonal VPC Endpoint” TTL 60
      

      For example:

      *.l92v4.us-west-2.aws.confluent.cloud CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2c.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      
  3. Set up DNS records for Confluent Cloud multi-availability zone clusters as follows:

    1. Create the following records with the Create Record button, using the VPC Endpoint DNS names from the previous step, in the following form:

      *.$domain CNAME “All Zones VPC Endpoint” TTL 60
      

      For example:

      *.l92v4.us-west-2.aws.confluent.cloud CNAME vpce-09f9f4e9a86682eed-9gxp2f7v.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      

      The CNAME record ensures that AWS Route53 health checks are used in the case of AWS outages.

    2. Create one record per zone (repeat for all zones) in the following form:

      *.$zoneid.$domain CNAME “Zonal VPC Endpoint” TTL 60
      

      For example:

      *.usw2-az3.l92v4.us-west-2.aws.confluent.cloud. CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2a.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      *.usw2-az2.l92v4.us-west-2.aws.confluent.cloud. CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2c.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
      *.usw2-az1.l92v4.us-west-2.aws.confluent.cloud. CNAME vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2b.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com TTL 60
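
      These zonal records can also be created with the AWS CLI; a sketch for one zone, using a placeholder hosted zone ID together with the example values above:

      aws route53 change-resource-record-sets \
        --hosted-zone-id Z0123456789ABCDEFGHIJ \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "*.usw2-az1.l92v4.us-west-2.aws.confluent.cloud.",
              "Type": "CNAME",
              "TTL": 60,
              "ResourceRecords": [{"Value": "vpce-09f9f4e9a86682eed-9gxp2f7v-us-west-2b.vpce-svc-04689782e9d70ee9e.us-west-2.vpce.amazonaws.com"}]
            }
          }]
        }'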
      

Validate connectivity to Confluent Cloud

  1. From an instance within the VPC (or anywhere the previous step’s DNS is set up), run the following to validate that Kafka connectivity through AWS PrivateLink works correctly.

    1. Set an environment variable with the cluster bootstrap URL.

      export BOOTSTRAP=<bootstrap-server-url>
      

      The Bootstrap URL displayed in the Confluent Cloud Console includes the port (9092). Set BOOTSTRAP to the full hostname without the port, so that you can pass the host and port separately to the openssl s_client -connect <host>:<port> command.

      For example:

      export BOOTSTRAP=lkc-nkodz-0l6je.us-west-2.aws.confluent.cloud
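
      Optionally, confirm that this hostname resolves to the private IP addresses of your VPC endpoint before testing the TLS connection:

      nslookup $BOOTSTRAP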
      
    2. Test connectivity to your cluster by running the openssl s_client -connect <host>:<port> command, specifying the $BOOTSTRAP environment variable for the <host> value and 9092 for the <port> value.

      openssl s_client -connect $BOOTSTRAP:9092 -servername $BOOTSTRAP -verify_hostname $BOOTSTRAP </dev/null 2>/dev/null | grep -E 'Verify return code|BEGIN CERTIFICATE' | xargs
      

      To run the openssl s_client -connect command, the -connect option requires that you specify the host and the port number. For details, see the -connect option in the openssl s_client documentation.

    3. If the return output is -----BEGIN CERTIFICATE----- Verify return code: 0 (ok), connectivity to the bootstrap is confirmed.

      Note

      You might need to update the network security tools and firewalls to allow connectivity. If you have issues connecting after following these steps, confirm which network security systems your organization uses and whether their configurations need to be changed. If you still have issues, run the debug connectivity script and provide the output to Confluent Support for assistance with your PrivateLink setup.

  2. Next, verify connectivity using the Confluent Cloud CLI.

    1. Sign in to Confluent CLI with your Confluent Cloud credentials.

      confluent login
      
    2. List the clusters in your organization.

      confluent kafka cluster list
      
    3. Select the cluster with AWS PrivateLink you wish to test.

      confluent kafka cluster use ...
      

      For example:

      confluent kafka cluster use lkc-a1b2c
      
    4. Create a cluster API key to authenticate with the cluster.

      confluent api-key create --resource ... --description ...
      

      For example:

      confluent api-key create --resource lkc-a1b2c --description "connectivity test"
      
    5. Select the API key you just created.

      confluent api-key use ... --resource ...
      

      For example:

      confluent api-key use WQDMCIQWLJDGYR5Q --resource lkc-a1b2c
      
    6. Create a test topic.

      confluent kafka topic create test
      
    7. Start consuming events from the test topic.

      confluent kafka topic consume test
      
    8. Open another terminal tab or window.

    9. Start a producer.

      confluent kafka topic produce test
      
    10. Type anything into the produce tab and hit Enter; press Ctrl+D or Ctrl+C to stop the producer.

    11. The tab running consume will print what was typed in the tab running produce.

You’re done! The cluster is ready for use.

Limitations

  • The Africa (Cape Town) region (af-south-1) is not supported for AWS PrivateLink clusters in Confluent Cloud. Instead, you can use VPC peering for clusters in this region, or use AWS PrivateLink with clusters in other regions.
  • Supported and unsupported availability zones in AWS us-east-1 region for AWS PrivateLink clusters in Confluent Cloud:
    • Supported availability zones: use1-az1, use1-az2, use1-az4, use1-az5, and use1-az6
    • Unsupported availability zones: use1-az3
  • Cross-region AWS PrivateLink connections are not supported.
  • AWS PrivateLink is only available for use with Dedicated clusters.
  • Existing Confluent Cloud clusters cannot be converted to use AWS PrivateLink.
  • Fully-managed Confluent Cloud connectors can connect to sources or sinks using a public IP. Sources or sinks in the customer network with private IP are not supported.
  • AWS PrivateLink connections cannot be shared across multiple Confluent Cloud clusters. Separate AWS PrivateLink connections must be made to each Confluent Cloud cluster.
  • Availability zone selection for placement of Confluent Cloud cluster and AWS PrivateLink service is not supported.
    • Each Confluent Cloud multi-availability zone cluster using AWS PrivateLink will be provisioned with service endpoints in three availability zones. For those AWS regions that have more than three availability zones, the availability zones will be selected based on Confluent Cloud placement policies. One of the following options can be used to ensure connectivity to a Confluent Cloud multi availability zone cluster over AWS PrivateLink:
      • Provision subnets in your VPC in all availability zones in the region
      • Provision subnets in your VPC in at least the three availability zones in which the PrivateLink service endpoints are provisioned. This information will be available in the Confluent Cloud Console under the Networking tab on the Cluster Settings page after the Confluent Cloud cluster is provisioned.
      • The following regions currently have more than three availability zones: us-east-1 (N. Virginia) and us-west-2 (Oregon)
    • Each Confluent Cloud single availability zone cluster using AWS PrivateLink will be provisioned with service endpoints in one availability zone. The availability zone will be selected based on Confluent Cloud placement policies. One of the following options can be used to ensure connectivity to a Confluent Cloud single availability zone cluster over AWS PrivateLink:
      • Provision subnets in your VPC in all availability zones in the region
      • Provision subnets in your VPC in at least the single availability zone in which the PrivateLink service endpoint is provisioned. This information will be available in the Confluent Cloud Console under the Networking tab on the Cluster Settings page after the Confluent Cloud cluster is provisioned.
  • See also: Prerequisites.
