Use AWS PrivateLink for Serverless Products on Confluent Cloud

From your AWS virtual private cloud (VPC), you can use AWS PrivateLink to privately access serverless Confluent Cloud products. These products include Enterprise Kafka clusters, Schema Registry clusters, and Confluent Cloud for Apache Flink®. When you use AWS PrivateLink, your Confluent resources are only accessible from private endpoints in AWS that connect to your Confluent Cloud environment. To enable PrivateLink connectivity, you create the following private networking resources in your Confluent Cloud environment:

Ingress PrivateLink Gateway

A reservation to establish a PrivateLink connection from your VPC to regional services in a Confluent Cloud environment.

Ingress PrivateLink Access Point

A registration of a VPC interface endpoint that’s allowed to connect to a Confluent Cloud environment. A PrivateLink Access Point belongs to a specific PrivateLink Gateway.

These resources are regional and can be accessed from any availability zone in the region.
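On the AWS side, the VPC interface endpoint that an Access Point registers is created in your own account. The following sketch uses the standard AWS CLI `create-vpc-endpoint` command; every ID is a placeholder, and the `--service-name` value is the one exposed by your ingress PrivateLink Gateway:

```shell
# Create the VPC interface endpoint that the ingress PrivateLink
# Access Point will register. All IDs below are placeholders; the
# --service-name value comes from your ingress PrivateLink Gateway.
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-00000000000000000 \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --security-group-ids sg-0123456789abcdef0
```

Note the `VpcEndpointId` (for example, `vpce-…`) in the response; you supply it when you register the Access Point in your Confluent Cloud environment.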

Note

As of February 12th, 2026, the PrivateLink Attachment (PLATT) resource is replaced by the ingress PrivateLink Gateway resource. A gateway provides the same functionality as a PLATT, but it provides unique fully qualified domain names (FQDNs) for each PrivateLink connection. With these FQDNs, your applications can more granularly route traffic from your AWS VPC to the services in your Confluent Cloud environment.

Existing PLATT resources will continue to function, but you won’t be able to provision new ones after a future release. We recommend that you update your applications to use gateways.

You can use the Confluent Cloud Console, the Confluent REST API, the Confluent CLI, or Terraform to establish PrivateLink connectivity with the serverless products in your Confluent Cloud environment.

Requirements and considerations

  • For the supported regions, see Cloud Providers and Regions for Confluent Cloud.

  • Ingress PrivateLink Gateway resources don’t support PrivateLink connections to:

    • Different cloud regions.

    • Confluent Cloud resources in different environments.

  • Confluent Cloud Console components, such as topic management and Flink workspaces, may require additional configuration because they use private endpoints that aren’t accessible from the public internet. For information about using Flink with AWS PrivateLink, see Private Networking with Confluent Cloud for Apache Flink. To use all features of the Confluent Cloud Console with AWS PrivateLink, see Use the Confluent Cloud Console with Private Networking.

Step 1: Create an ingress PrivateLink Gateway

Create an ingress PrivateLink Gateway to enable PrivateLink connections to the Enterprise Kafka clusters, Schema Registry clusters, and the Flink service in an environment for a specific cloud region.

  1. In the Confluent Cloud Console, click Environments in the navigation menu.

  2. On the Environments page, do one of the following:

    • If you already have the environment where you want to create the gateway, select it.

    • If you need to create a new environment for the gateway, click Add cloud environment, and create one. For more information about creating Environments, see Environments on Confluent Cloud.

  3. On the page for your environment, click Network management in the navigation menu.

  4. On the Network management page, under the For serverless products tab, click +Add gateway configuration.

  5. On the Create gateway configuration page, under Choose type of networking gateway, select PrivateLink.

  6. Under Set up connections to/from Confluent Cloud, for From your VPC or VNet to Confluent Cloud, click +Create configuration.

    The console opens the Configure gateway pane.

  7. Under the 1. Gateway tab, configure the following settings:

    1. For Gateway name, enter a custom name for the gateway.

    2. For Cloud provider, select AWS.

    3. For Set provider region, select the AWS Region where your VPC is located.

  8. Click Submit. The Configure gateway pane shows the next set of steps under the 2. Access point tab.

  9. Take note of the PrivateLink Service ID that the pane provides. You use this value next, when you create an AWS VPC endpoint.

At this point, your gateway is provisioned and has the CREATED state.

A gateway can have one of the following states:

  • CREATED: The gateway is created and is waiting for an access point to be created.

  • READY: The access point is ready to be used.

  • EXPIRED: A valid access point was not provisioned in the allotted time. A new gateway must be created.
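These states can be checked programmatically. A minimal sketch, assuming the gateway’s REST representation exposes the state under a status.phase field, in the style of other Confluent Cloud networking resources (verify the exact field names against the Networking API reference):

```python
# Minimal sketch: read the lifecycle state of a gateway from its REST
# representation. The "status"/"phase" field names are assumptions based on
# the style of Confluent Cloud networking resources.

def gateway_phase(gateway: dict) -> str:
    """Return the gateway's lifecycle phase, e.g. CREATED, READY, or EXPIRED."""
    return gateway.get("status", {}).get("phase", "UNKNOWN")

def gateway_is_ready(gateway: dict) -> bool:
    """True once an access point has been provisioned and accepted."""
    return gateway_phase(gateway) == "READY"

# Example: a freshly created gateway is not yet usable.
sample = {"id": "gw-123abc", "status": {"phase": "CREATED"}}
assert not gateway_is_ready(sample)
```

An EXPIRED gateway cannot be recovered; as noted above, you must create a new one.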

  1. Send a request to create a PrivateLink Gateway resource:

    REST request

    POST https://api.confluent.cloud/networking/v1/gateways
    

    REST request body

    {
      "spec": {
        "display_name": "<A custom name for the gateway>",
        "config": {
          "kind": "AwsIngressPrivateLinkGatewaySpec",
          "region": "<AWS region of the gateway>"
        },
        "environment": {
          "id": "<The ID of the environment to add the gateway to>"
        }
      }
    }
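The request above can be sketched with the Python standard library. Confluent Cloud APIs authenticate over HTTP basic auth with a Cloud API key and secret; the key, secret, and IDs below are placeholders:

```python
# Hedged sketch of the gateway-creation request; credentials and IDs are
# placeholders, and the request is built but not sent.
import base64
import json
import urllib.request

def build_gateway_payload(name: str, region: str, environment_id: str) -> dict:
    """Assemble the request body shown above."""
    return {
        "spec": {
            "display_name": name,
            "config": {
                "kind": "AwsIngressPrivateLinkGatewaySpec",
                "region": region,
            },
            "environment": {"id": environment_id},
        }
    }

def gateway_request(api_key: str, api_secret: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) the authenticated POST request."""
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return urllib.request.Request(
        "https://api.confluent.cloud/networking/v1/gateways",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Basic {token}", "Content-Type": "application/json"},
        method="POST",
    )

# To send the request: urllib.request.urlopen(gateway_request(...))
```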
    

Use the following command to create a PrivateLink Gateway:

confluent network gateway create <gateway-name> <flags>

The following command-specific flags are supported:

  • --cloud: Required. The cloud provider. Set to aws.

  • --region: Required. The AWS region of the resources to be accessed through the gateway.

  • --type: Required. The type of gateway configuration.

You can specify additional optional CLI flags described in the Confluent CLI command reference.

The following is an example Confluent CLI command to create a PrivateLink gateway:

confluent network gateway create my-ingress-gateway \
  --cloud aws \
  --region us-west-2 \
  --type ingress-privatelink

Use the confluent_gateway Confluent Terraform Provider resource to create a PrivateLink Gateway.

The following is an example of a Terraform configuration:

resource "confluent_gateway" "aws_ingress" {
  display_name = "my-gateway"
  environment {
    id = "env-123abc"
  }
  aws_ingress_private_link_gateway {
    region = "us-west-2"
  }
}

Step 2: Create an AWS VPC endpoint

In AWS, create an endpoint that is associated with the PrivateLink Service ID of the ingress PrivateLink Gateway that you created.

  1. In the AWS Management Console, go to the VPC service.

  2. On the VPC dashboard page, do the following to verify that DNS hostnames and DNS resolution are enabled:

    1. In the navigation menu, under Virtual private cloud, click Your VPCs.

    2. Select the checkbox for your VPC, and click Actions > Edit VPC settings.

    3. Under DNS settings, verify that Enable DNS resolution and Enable DNS hostnames are selected.

    4. Click Save.

  3. In the navigation menu under PrivateLink and Lattice, click Endpoints.

  4. On the Endpoints page, click Create endpoint.

  5. For Name tag, enter a name for the endpoint.

  6. Under Type, select PrivateLink Ready partner services.

  7. Under Service settings, for Service name, specify the PrivateLink Service ID of the gateway you created. Then, click Verify service.

    If you get an error, ensure that your account is authorized to create PrivateLink connections, and try again.

  8. Under Network settings, for VPC, specify the ID of this VPC.

  9. Under Additional settings, uncheck the Enable DNS name setting.

  10. For Subnets, select the subnets in which to create an endpoint network interface.

  11. Select or create a security group for the VPC Endpoint.

    • Add three inbound rules for each of ports 80, 443, and 9092 from your desired source (your VPC CIDR). The Protocol should be TCP for all three rules.

    • Port 80 is not required, but is available as a redirect only to https/443, if desired.

  12. Click Create endpoint.

  13. Note the VPC endpoint ID. You use this value next, when you create an ingress PrivateLink Access Point.

aws ec2 create-vpc-endpoint \
  --vpc-id <ID of this VPC> \
  --service-name <PrivateLink service ID of the gateway> \
  --subnet-ids <subnet IDs for the endpoint> \
  --region <region to use> \
  --private-dns-enabled false \
  --vpc-endpoint-type Interface

Note the VPC endpoint ID that is created. You use this value next, when you create an ingress PrivateLink Access Point.

For example, using the information in status.cloud.vpc_endpoint_service_id in the PrivateLink Attachment status:

aws ec2 create-vpc-endpoint \
  --vpc-id vpc-097799943f9fc059d \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-123abcc1298abc123 \
  --subnet-ids subnet-7b16de0c \
  --region us-east-1 \
  --private-dns-enabled false \
  --vpc-endpoint-type Interface

Use the aws_vpc_endpoint AWS Terraform Provider resource to create a VPC endpoint in AWS.
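The security group rules from step 11 (TCP ports 80, 443, and 9092 from your VPC CIDR) can also be applied programmatically. The following sketch builds the IpPermissions structure that the AWS EC2 API accepts; the boto3 call is shown only as a comment, and the group ID and CIDR are placeholders:

```python
# Sketch of the inbound security group rules from step 11, expressed as the
# IpPermissions structure that AWS EC2 APIs accept. The CIDR is a
# placeholder for your VPC CIDR.

def ingress_rules(vpc_cidr: str) -> list:
    """One TCP ingress rule per required port (80 is optional; see above)."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": vpc_cidr}],
        }
        for port in (80, 443, 9092)
    ]

# Example use with boto3 (not executed here; IDs are placeholders):
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-0123456789abcdef0",
#       IpPermissions=ingress_rules("10.0.0.0/16"),
#   )
```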

Step 3: Create an ingress PrivateLink Access Point

An ingress PrivateLink Access Point registers a specific VPC endpoint with your ingress PrivateLink Gateway.

  1. To create an access point, navigate to either of the following locations in the Confluent Cloud console:

    • The Configure gateway pane where you created your gateway. You configure the access point under the 2. Access point tab.

    • The Create access point pane for your gateway. To open this pane, do the following:

      1. On the page for your environment, click Network management in the navigation menu.

      2. In the For serverless products tab, click your gateway name. Make sure the gateway is in the same region as your VPC private endpoint.

      3. Click the Access points tab, and click Create access point.

  2. At step four, for VPC Endpoint ID from AWS, specify the ID of the VPC endpoint that you created.

  3. At step five, for Access point name, enter a name.

  4. Click Create access point.

    The PrivateLink Gateway and PrivateLink Access Point enter the READY state after the VPC interface endpoint connection is accepted.

  1. Send a request to create a PrivateLink Access Point resource:

    REST request

    POST https://api.confluent.cloud/networking/v1/access-points
    

    REST request body

    {
      "spec": {
        "display_name": "<A custom name for the access point>",
        "config": {
          "kind": "AwsIngressPrivateLinkEndpoint",
          "vpc_endpoint_id": "<The ID of your VPC interface endpoint in AWS>"
        },
        "environment": {
          "id": "<The ID of the environment that has the gateway for this access point>"
        },
        "gateway": {
          "id": "<The ID of the gateway to add the access point to>"
        }
      }
    }
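As with the gateway request in Step 1, this body can be assembled with a small helper; a sketch with placeholder IDs:

```python
# Hedged sketch: assemble the access-point request body shown above.
# All argument values are placeholders.

def build_access_point_payload(name: str, vpc_endpoint_id: str,
                               environment_id: str, gateway_id: str) -> dict:
    return {
        "spec": {
            "display_name": name,
            "config": {
                "kind": "AwsIngressPrivateLinkEndpoint",
                "vpc_endpoint_id": vpc_endpoint_id,
            },
            "environment": {"id": environment_id},
            "gateway": {"id": gateway_id},
        }
    }
```

POST this body to https://api.confluent.cloud/networking/v1/access-points as shown above.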
    

Use the following command to create a PrivateLink Access Point:

confluent network access-point private-link ingress-endpoint create <access-point-name> <flags>

The following command-specific flags are supported:

  • --cloud: Required. The cloud provider. Set to aws.

  • --gateway: Required. The ID of the gateway to add the access point to.

  • --vpc-endpoint-id: Required. The ID of your VPC interface endpoint in AWS.

You can specify additional optional CLI flags described in the Confluent CLI command reference.

The following is an example Confluent CLI command to create a PrivateLink access point:

confluent network access-point private-link ingress-endpoint create my-ingress-access-point \
  --cloud aws \
  --gateway gw-123abc \
  --vpc-endpoint-id vpce-1234567890abcdef0

Use the confluent_access_point Confluent Terraform Provider resource to create a PrivateLink Access Point.

The following is an example of a Terraform configuration:

resource "confluent_access_point" "aws_ingress_1" {
  display_name = "my_access_point"
  environment {
    id = "env-123abc"
  }
  gateway {
    id = "gw-123abc"
  }
  aws_ingress_private_link_endpoint {
    vpc_endpoint_id = "vpce-1234567890abcdef0"
  }
  depends_on = [
    confluent_gateway.aws_ingress
  ]
}

Step 4: Configure DNS

Confluent Cloud requires that you set up a private DNS record for each access point, pointing its DNS domain to the VPC endpoint you created.

When you connect to Confluent Cloud using access-point-specific hostnames, you must allow public DNS resolution from your network or VPC. Confluent Cloud advertises these hostnames in its public DNS resolver, which returns CNAMEs that redirect to the domains you configure in your private DNS resolver.

The resolution performs the following two-step process:

  1. The Confluent Cloud Global DNS Resolver returns a CNAME for each of your hostnames by removing the glb subdomain and converting your access point ID into its own subdomain.

    For example, with the given hostname:

    $lkc-id-$accesspointId.$region.$cloud.accesspoint.glb.confluent.cloud
    

    The CNAME returned will be:

    $lkc-id.$accesspointId.$region.$cloud.accesspoint.confluent.cloud
    
  2. The CNAME then resolves to your VPC private endpoints based on the private DNS configuration.
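The first resolution step amounts to a hostname rewrite, sketched below. This assumes access point IDs have the form ap-…, and the example IDs are placeholders:

```python
# Sketch of step 1 of the resolution: rewrite the public (GLB) hostname into
# the private CNAME by splitting the access point ID out of the first label
# and dropping the "glb" subdomain. Assumes access point IDs look like "ap-...".

def glb_to_private_cname(hostname: str) -> str:
    first, rest = hostname.split(".", 1)
    # Split "<lkc-id>-<access point ID>" at the last "-ap-" boundary.
    lkc_id, sep, ap_suffix = first.rpartition("-ap-")
    if not sep:
        raise ValueError("no access point ID found in hostname")
    # Drop the glb subdomain.
    rest = rest.replace(".accesspoint.glb.", ".accesspoint.", 1)
    return f"{lkc_id}.ap-{ap_suffix}.{rest}"

# Placeholder cluster and access point IDs:
print(glb_to_private_cname(
    "lkc-abc123-ap-xyz789.us-west-2.aws.accesspoint.glb.confluent.cloud"
))
# lkc-abc123.ap-xyz789.us-west-2.aws.accesspoint.confluent.cloud
```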

If you are using AWS Route53 as your private DNS resolver, you can follow the steps below to configure DNS.

Set up a Route53 Private Hosted Zone in your AWS VPC for DNS resolution

  1. In Confluent Cloud, verify that the status of the Gateway is READY.

  2. Open the newly created Gateway to get the DNS domain value for your access point.

  3. In the AWS Route 53 console, create a Route53 Private Hosted Zone:

    1. Specify the following values:

      • Domain name: The DNS domain value in Confluent Cloud.

      • Type: Private hosted zone

      • VPC ID: The ID of the VPC where you added the VPC endpoint.

    2. Click Create hosted zone to associate the Private Hosted Zone with your VPC.

  4. Create a DNS record for the Hosted Zone you created above.

    This record is regional DNS and is used for all the target Confluent Cloud resources in the region.

    1. Click Create Record from within the previously created Hosted Zone.

    2. Specify the following values:

      • Record name: *

        Enter * as the subdomain name.

        The Record name consists of the subdomain and the DNS domain name. The DNS domain name is filled in with the Confluent Cloud DNS domain value you specified when you created the Route53 Private Hosted Zone in the previous step.

      • Record type: CNAME

      • Value: The DNS name of the VPC endpoint you created in Step 2: Create an AWS VPC endpoint.

        The value must be a fully qualified DNS name of the VPC endpoint. An example value would look like vpce-012c2200321aff207-gz49hgc1.vpce-svc-00da8c4990b89436d.us-west-2.vpce.amazonaws.com. Do not specify the VPC endpoint name.

        You can look up the DNS name on the Endpoint details page in the DNS names section.

      Note

      In Confluent Cloud, Kafka broker names you retrieve from the metadata are not static. Do not hardcode the broker names in DNS records.

    3. Click Create Record.

      You will see the summary of the new record.
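The wildcard record above can also be created programmatically. The following sketch builds the change batch that the Route 53 API (for example, boto3's change_resource_record_sets) accepts; the domain and VPC endpoint DNS name are placeholders, and the TTL is an arbitrary choice:

```python
# Sketch of the wildcard CNAME record from the Route 53 steps above, as a
# Route 53 change batch. The value must be the VPC endpoint's DNS name,
# not the endpoint name. Domain, DNS name, and TTL below are placeholders.

def wildcard_cname_change(dns_domain: str, endpoint_dns_name: str, ttl: int = 300) -> dict:
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": f"*.{dns_domain}",
                    "Type": "CNAME",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": endpoint_dns_name}],
                },
            }
        ]
    }

# Example use with boto3 (not executed here; the hosted zone ID is a placeholder):
#   import boto3
#   route53 = boto3.client("route53")
#   route53.change_resource_record_sets(
#       HostedZoneId="Z0123456789ABCDEFGHIJ",
#       ChangeBatch=wildcard_cname_change(
#           "a1b2c3.us-west-2.aws.accesspoint.confluent.cloud",
#           "vpce-012c2200321aff207-gz49hgc1.vpce-svc-00da8c4990b89436d.us-west-2.vpce.amazonaws.com",
#       ),
#   )
```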

Next steps

Try Confluent Cloud on AWS Marketplace with $1000 of free usage for 30 days, and pay as you go. No credit card is required.

