Use Azure Private Link

Azure Private Link allows one-way, secure connection access from your VNet to Confluent Cloud, with added protection against data exfiltration. This networking option is popular for its combination of security and simplicity of setup.

The following diagram summarizes the Azure Private Link architecture between the VNet or subscription and the Confluent Cloud cluster.

[Diagram: Azure Private Link architecture between the customer VNet or subscription and the Confluent Cloud cluster]

Note

For an overview of Azure Private Link and illustrated steps on getting started with Azure Private Link in Confluent Cloud, see Setting Up Secure Networking in Confluent with Azure Private Link.

Prerequisites

  • A Confluent Cloud network (CCN) of type PRIVATELINK in Azure. If the network does not exist, follow the procedure in Create a network of type PRIVATELINK on Azure below.
  • To use an Azure Private Link service with Confluent Cloud, your VNet must allow outbound internet connections so that DNS resolution, Confluent Cloud Schema Registry, ksqlDB, and the Confluent CLI work:
    • DNS requests to the public authority that traverse to the private DNS zone must be allowed.
    • Confluent Cloud Schema Registry is accessible over the internet.
    • Provisioning new ksqlDB instances requires internet access. After ksqlDB instances are up and running, they are fully accessible over Azure Private Link connections.
    • The Confluent CLI requires internet access to authenticate with the Confluent Cloud control plane.
  • Confluent Cloud Console components, such as topic management, need additional configuration to function because they use cluster endpoints. To use all features of the Confluent Cloud Console with Azure Private Link, see Configure DNS Resolution.

Warning

For limitations of Azure Private Link, see Limitations below.

Create a network of type PRIVATELINK on Azure

To create a Dedicated cluster with Azure Private Link, you must first create a Confluent Cloud network in the required cloud provider and region.

  1. In the Confluent Cloud Console, go to the Network management page for your environment.
  2. Click Create your first network if this is the first network in your environment, or click + Add Network if your environment has existing networks.
  3. Select Azure as the Cloud Provider and the desired geographic region.
  4. Select the Private Link connectivity type and click Continue.
  5. Specify a Network Name, review your configuration, and click Create Network.

Here is an example REST API request:

HTTP POST request

POST https://api.confluent.cloud/networking/v1/networks

Authentication

See Authentication.

Request specification

In the request specification, include values for cloud, region, environment, connection type, and, optionally, add the display name, CIDR, and zones for the Confluent Cloud network. Update the attributes below with the correct values.

{
   "spec":{
      "display_name":"Azure-PL-CCN-1",
      "cloud":"AZURE",
      "region":"centralus",
      "connection_types":[
         "PRIVATELINK"
      ],
      "zones":[
         "1",
         "2",
         "3"
      ],
      "environment":{
         "id":"env-00000"
      }
   }
}

In most cases, creating a Confluent Cloud network takes 15 to 20 minutes. Note the Confluent Cloud network ID from the response; you specify it in the following commands.
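
For example, you can submit the request and capture the network ID in one step with curl and jq. This is a hedged sketch: the CLOUD_API_KEY and CLOUD_API_SECRET variables, the network.json file holding the specification above, and the jq dependency are all assumptions for illustration.

# Create the Confluent Cloud network and extract its ID from the response.
# Assumes CLOUD_API_KEY/CLOUD_API_SECRET hold a Confluent Cloud API key pair
# and network.json contains the request specification shown above.
CCN_ID=$(curl -s -X POST https://api.confluent.cloud/networking/v1/networks \
  -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  -H "Content-Type: application/json" \
  -d @network.json | jq -r '.id')
echo "$CCN_ID"   # for example: n-000000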

After successfully provisioning the Confluent Cloud network, you can add Dedicated clusters within your Confluent Cloud network by using either of the following procedures:

  • Confluent Cloud Console: Create a Cluster in Confluent Cloud
  • Cluster Management API: Create a cluster (a request sketch follows)
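
If you use the Cluster Management API, the request is a POST to the cmk/v2 endpoint. The sketch below is hedged: the display name, CKU count, and IDs are placeholders, and the field names should be verified against the Cluster Management API reference.

# Hedged sketch: create a Dedicated cluster inside the PRIVATELINK network.
# Placeholders: display_name, cku, env-00000, n-000000.
curl -s -X POST https://api.confluent.cloud/cmk/v2/clusters \
  -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  -H "Content-Type: application/json" \
  -d '{
        "spec": {
          "display_name": "azure-pl-cluster-1",
          "availability": "MULTI_ZONE",
          "cloud": "AZURE",
          "region": "centralus",
          "config": { "kind": "Dedicated", "cku": 2 },
          "environment": { "id": "env-00000" },
          "network": { "id": "n-000000" }
        }
      }'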

Register your Azure subscription with Confluent Cloud

Register your Azure subscription with the Confluent Cloud network for automatic approval of private endpoint connections to the Confluent Cloud network. If required, you can register multiple subscriptions.

  1. In the Confluent Cloud Console, go to your network resource in the Network Management tab and click + Private Link.
  2. Enter the Azure subscription ID for the account containing the VNets you want to make the Azure Private Link connection from. You can find the subscription ID on the Subscriptions page of the Azure Portal.
  3. Click Save.

HTTP POST request

POST https://api.confluent.cloud/networking/v1/private-link-accesses

Authentication

See Authentication.

Request specification

In the request specification, include the Confluent Cloud network ID, subscription, and environment, and optionally add the display name. Update the attributes below with the correct values.

{
   "spec":{
      "display_name":"Azure-PL-CCN-1",
      "cloud":{
         "kind":"AzurePrivateLinkAccess",
         "subscription":"00000000-0000-0000-0000-000000000000"
      },
      "environment":{
         "id":"env-00000"
      },
      "network":{
         "id":"n-000000"
      }
   }
}
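
The same registration can be scripted; this hedged sketch assumes the request body above is saved as private-link-access.json and reuses the API key variables from the earlier network example.

# Register the Azure subscription for automatic private endpoint approval.
curl -s -X POST https://api.confluent.cloud/networking/v1/private-link-accesses \
  -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  -H "Content-Type: application/json" \
  -d @private-link-access.json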

Your Azure Private Link connection status will transition from “Pending” to “Active” in the Confluent Cloud Console. You still need to configure the Private Endpoints in your VNet before you can connect to the cluster.

Note the Private Link Service Endpoint; you use it to create an Azure Private Link connection from your VNet to the Confluent Cloud cluster. This value is also provided later in the setup workflow.

Create an Azure Private Link connection to Confluent Cloud

Follow this procedure to create an Azure Private Link connection to a Confluent Cloud cluster on Azure using the Confluent Cloud Console or REST APIs.

Set up the VNet Endpoint for Azure Private Link in your Azure account

After the connection status is “Active” in the Confluent Cloud Console, you must configure Private Endpoints in your VNet from the Azure Portal to make the Azure Private Link connection to your Confluent Cloud cluster.

Note

Confluent recommends using a Terraform configuration for setting up Private Link endpoints. This configuration automates the manual steps described below.

Prerequisites

In the Confluent Cloud Console, you will find the following information for your Confluent Cloud cluster under the Cluster Settings section. This information is needed to configure Azure Private Link for a Dedicated cluster in Azure.

  • Kafka Bootstrap (in the General tab)
  • DNS domain Name (in the Networking card)
  • Zonal DNS Subdomain Names (in the Networking card)
  • Service Aliases (in the Networking card)

Create the following Private Endpoints through the Azure Private Link Center (an Azure CLI sketch follows these steps):

  1. Determine how many Private Endpoints you need: for Confluent Cloud single availability zone clusters, create a single Private Endpoint to the Confluent Cloud Service Alias; for multi-availability zone clusters, create a Private Endpoint to each of the Confluent Cloud zonal Service Aliases.
  2. Create a Private Endpoint for Confluent Cloud by clicking Create Private Endpoint.
  3. Fill in the subscription, resource group, name, and region for the Private Endpoint and click Next. The selected subscription must be the same one registered with Confluent Cloud.
  4. Select the Connect to an Azure resource by resource ID or alias option, paste in the Confluent Cloud Service Alias, and click Next. You can find the Confluent Cloud Service Aliases in the Networking tab under Cluster settings in the Confluent Cloud Console.
  5. Fill in the virtual network and subnet where the Private Endpoint is to be created.
  6. Click Review + create. Review the details and click Create to create the Private Endpoint.
  7. Wait for the Azure deployment to complete, go to the Private Endpoint resource, and verify that the Private Endpoint connection status is Approved.
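
If you prefer to script these portal steps, an Azure CLI sketch like the following can create one endpoint per Service Alias. Treat it as an assumption-laden example: it presumes your CLI version accepts a Service Alias string in --private-connection-resource-id (check az network private-endpoint create --help), and my-rg, my-vnet, and my-subnet are placeholder names.

# Hedged sketch: one Private Endpoint per Confluent Cloud Service Alias.
# For multi-availability zone clusters, repeat once per zonal Service Alias.
az network private-endpoint create \
  --resource-group my-rg \
  --name confluent-pe-az1 \
  --vnet-name my-vnet \
  --subnet my-subnet \
  --connection-name confluent-az1 \
  --private-connection-resource-id "<service-alias-from-networking-card>"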

Set up DNS records to use Azure Private Endpoints

DNS changes must be made to ensure connectivity passes through Azure Private Link in the supported pattern. Any DNS provider that can ensure DNS is routed as follows is acceptable. Azure Private DNS Zone (used in this example) is one option.

Note

Run the DNS helper script to identify the DNS Zone records for Private Endpoints.

Update DNS using Azure Private DNS Zone in the Azure console (an Azure CLI sketch follows these steps):

  1. Create the Private DNS Zone.

    1. Search for the Private DNS Zone resource in the Azure Portal.

    2. Click Add.

    3. Copy the DNS Domain name from the Networking tab under Cluster Settings in the Confluent Cloud Console and use it as the name for the Private DNS Zone.

      For example:

      4kgzg.centralus.azure.confluent.cloud

      Note

      Notice there is no glb in the DNS Domain name.

    4. Fill in the subscription, resource group, and name, and click Review + create.
    5. Wait for the Azure deployment to complete.
  2. Create DNS records.

    1. Go to the Private DNS Zone resource created above.
    2. Click + Record Set.
    3. For Confluent Cloud single availability zone clusters, create the following record set. The IP address of the Private Endpoint can be found under its associated network interface.
      1. Select name as “*”, type as “A”, TTL as “1 Minute”, and add the IP address of the single Private Endpoint created above.
    4. For Confluent Cloud multi-availability zone clusters, create the following record sets. The IP addresses of the Private Endpoints can be found under their associated network interfaces.
      1. Select name as “*”, type as “A”, TTL as “1 Minute”, and add the IP addresses of all three Private Endpoints created above.
      2. Select name as “*.az1”, type as “A”, TTL as “1 Minute”, and add the IP address of the az1 Private Endpoint created above.
      3. Select name as “*.az2”, type as “A”, TTL as “1 Minute”, and add the IP address of the az2 Private Endpoint created above.
      4. Select name as “*.az3”, type as “A”, TTL as “1 Minute”, and add the IP address of the az3 Private Endpoint created above.
  3. Attach the Private DNS Zone to the VNets where clients or applications are present.

    1. Go to the Private DNS Zone resource and click Virtual network links under Settings.
    2. Click Add.
    3. Fill in the link name, subscription, and virtual network.
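
The portal steps above can also be approximated with the Azure CLI. The following is a minimal sketch for a single availability zone cluster, assuming the example zone name from above, a placeholder endpoint IP of 10.0.1.4, and placeholder resource group and VNet names; multi-availability zone clusters would add the *.az1, *.az2, and *.az3 record sets the same way.

# Hedged sketch: Private DNS Zone, wildcard A record, and VNet link.
az network private-dns zone create -g my-rg \
  -n 4kgzg.centralus.azure.confluent.cloud

# Create the wildcard record set with a 1-minute TTL, then add the
# Private Endpoint IP (found on the endpoint's network interface).
az network private-dns record-set a create -g my-rg \
  -z 4kgzg.centralus.azure.confluent.cloud -n '*' --ttl 60
az network private-dns record-set a add-record -g my-rg \
  -z 4kgzg.centralus.azure.confluent.cloud -n '*' -a 10.0.1.4

# Link the zone to the VNet where clients run (auto-registration off).
az network private-dns link vnet create -g my-rg \
  -z 4kgzg.centralus.azure.confluent.cloud \
  -n my-vnet-link -v my-vnet -e false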

Validate connectivity to Confluent Cloud

  1. From an instance within the VNet, or anywhere DNS is set up, run the following steps to validate that Kafka connectivity through Azure Private Link works correctly.

    1. Set an environment variable with the cluster bootstrap URL.

      export BOOTSTRAP=<bootstrap-server-url>
      

      The bootstrap URL displayed in the Confluent Cloud Console includes the port (9092). Set the BOOTSTRAP value to the full hostname only, without the port, so that you can run the openssl s_client -connect <host>:<port> command with the host and port supplied separately.

      For example:

      export BOOTSTRAP=lkc-222v1o-4kgzg.centralus.azure.glb.confluent.cloud
      
    2. Test connectivity to your cluster by running the openssl s_client -connect <host>:<port> command, specifying the $BOOTSTRAP environment variable for the <host> value and 9092 for the <port> value.

      openssl s_client -connect $BOOTSTRAP:9092 -servername $BOOTSTRAP -verify_hostname $BOOTSTRAP </dev/null 2>/dev/null | grep -E 'Verify return code|BEGIN CERTIFICATE' | xargs
      

      To run the openssl s_client -connect command, the -connect option requires the host and port number. For details, see the -connect option in the openssl s_client documentation.

    3. If the output returned is -----BEGIN CERTIFICATE----- Verify return code: 0 (ok), then connectivity to the bootstrap is confirmed.

    4. Log in to the Confluent Cloud CLI with your Confluent Cloud credentials.

      confluent login
      
    5. List the clusters in your organization.

      confluent kafka cluster list
      
    6. Select the cluster with Azure Private Link you wish to test.

      confluent kafka cluster use ...
      

      For example:

      confluent kafka cluster use lkc-222v1o
      
    7. Create a cluster API key to authenticate with the cluster.

      confluent api-key create --resource ... --description ...
      

      For example:

      confluent api-key create --resource lkc-222v1o --description "connectivity test"
      
    8. Select the API key you just created.

      confluent api-key use ... --resource ...
      

      For example:

      confluent api-key use R4XPKKUPLYZSHOAT --resource lkc-222v1o
      
    9. Create a test topic.

      confluent kafka topic create test
      
    10. Start consuming events from the test topic.

      confluent kafka topic consume test
      
    11. Open another terminal tab or window.

    12. Start a producer.

      confluent kafka topic produce test
      

      Type anything into the produce tab and hit Enter; press Ctrl+D or Ctrl+C to stop the producer.

    13. The tab running consume will print what was typed in the tab running produce.

You’re done! The cluster is ready for use.

Note

The bootstrap and broker hostname DNS resolution for Confluent Cloud Cluster with Private Link is a two-step process:

  1. The bootstrap and broker hostnames have glb as a subdomain in their domain name (for example, <cluster-subdomain-name>.eastus2.azure.glb.confluent.cloud). In the first step, Confluent Cloud Global DNS Resolver returns a CNAME for bootstrap and broker hostnames which doesn’t include the glb subdomain (for example, <cluster-subdomain-name>.eastus2.azure.confluent.cloud).
  2. In the second step, the CNAME without the “glb” subdomain is resolved to private endpoints IP addresses using the Private DNS Zone that you configure by using the previous steps.

Some DNS systems, such as the Windows DNS service, cannot recursively resolve this two-step resolution within a single DNS node. In such situations, use two DNS systems. The first DNS system sets up separate forwarding rules for the domain with the “glb” subdomain and the domain without it, and forwards both to the second DNS system. The second DNS system resolves recursively: it forwards the “glb” name resolution request to the Confluent Cloud Global DNS Resolver, and forwards the resulting “non-glb” name resolution to the cloud DNS that hosts the Private DNS Zone, as shown previously. Alternatively, host the “non-glb” DNS records locally in the second DNS system.
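
You can observe this two-step resolution with dig, using the example bootstrap hostname from the validation steps above; the CNAME and IP address in the comments are illustrative.

# Step 1: the glb hostname returns a CNAME without the glb subdomain.
dig +short lkc-222v1o-4kgzg.centralus.azure.glb.confluent.cloud
# Expected output (illustrative):
#   lkc-222v1o-4kgzg.centralus.azure.confluent.cloud.
#   10.0.1.4   <- from the wildcard A record in the Private DNS Zone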

[Diagram: DNS resolution for Confluent Cloud clusters using Azure Private Link]

Limitations

  • Cross-region Azure Private Link connections are not supported.
  • Only one Dedicated cluster is supported in a network of type PRIVATELINK.
  • Azure Private Link is only available for use with Dedicated clusters.
  • Existing Confluent Cloud clusters cannot be converted to use Azure Private Link.
  • Fully-managed Confluent Cloud connectors can connect to data sources or sinks using a public IP address. Sources or sinks in the customer network with private IP addresses are not supported.
  • Azure Private Link connections cannot be shared across multiple Confluent Cloud clusters. Separate Azure Private Link connections must be made to each Confluent Cloud cluster.
  • Availability zone selection for placement of Confluent Cloud cluster and Azure Private Link service is not supported.
  • See also: Prerequisites.
