  • Getting Started
    • What is the Confluent Platform?
      • What is included in the Confluent Platform?
        • Overview
        • Confluent Open Source
          • Confluent Connectors
            • Kafka Connect JDBC Connector
            • Kafka Connect HDFS Connector
            • Kafka Connect Elasticsearch Connector
            • Kafka Connect S3 Connector
          • Confluent Clients
            • C/C++ Client Library
            • Python Client Library
            • Go Client Library
            • .NET Client Library
          • Confluent Schema Registry
          • Confluent REST Proxy
        • Confluent Enterprise
          • Automatic Data Balancing
          • Multi-Datacenter Replication
          • Confluent Control Center
          • JMS Client
        • Confluent Proactive Support
      • Migrating an existing Kafka deployment
        • Migrating developer tools
        • Migrating to Control Center
    • Confluent Platform Quick Start
    • Installing and Upgrading
      • Install Confluent Platform
        • System Requirements
        • Installing Confluent Platform
          • Docker
          • ZIP and TAR archives
          • Debian and Ubuntu
          • RHEL and CentOS
        • Next Steps
      • Packages
        • Platform Packages
        • Component Packages
      • Installing using Docker
        • Docker Quick Start
          • Installing and Running Docker
          • Getting Started with Docker Compose
          • Getting Started with Docker Client
            • ZooKeeper
            • Kafka
            • Schema Registry
            • REST Proxy
            • Confluent Control Center
              • Stream Monitoring
              • Alerts
            • Kafka Connect
              • Getting Started
              • Monitoring in Control Center
            • Cleanup
        • Docker Configuration
          • Confluent Docker Images
          • Configuration Notes
          • Configuration Parameters
            • ZooKeeper
              • Required Settings
            • Confluent Kafka (cp-kafka)
              • Required Settings
            • Confluent Enterprise Kafka (cp-enterprise-kafka)
              • Required Settings
            • Schema Registry
              • Required Settings
            • Kafka REST Proxy
              • Required Settings
            • Kafka Connect
              • Required Settings
              • Optional Settings
            • Confluent Control Center
              • Docker Options
              • Required Settings
              • Optional Settings
            • Confluent Enterprise Replicator
              • Required Settings
              • Optional Settings
        • Docker Advanced Tutorials
          • Clustered Deployment
            • Installing and Running Docker
            • Docker Client: Setting Up a Three Node Kafka Cluster
            • Docker Compose: Setting Up a Three Node Kafka Cluster
          • Clustered Deployment Using SSL
            • Installing and Running Docker
            • Docker Client: Setting Up a Three Node Kafka Cluster
            • Docker Compose: Setting Up a Three Node Confluent Platform Cluster with SSL
          • Clustered Deployment Using SASL and SSL
            • Installing and Running Docker
            • Docker Client: Setting Up a Three Node Kafka Cluster
            • Docker Compose: Setting Up a Three Node Confluent Platform Cluster with SASL
          • Kafka Connect Tutorial
            • Installing and Running Docker
            • Starting Up Confluent Platform and Kafka Connect
          • Automatic Data Balancing
            • Installing and Running Docker
          • Replicator Tutorial
            • Installing and Running Docker
      • Clients
        • Maven repository for JARs
        • C/C++
        • Python
        • Go
        • .NET
      • Migrating from Open Source to Enterprise
      • Upgrade
        • Preparation
        • Step-by-step Guide
          • Upgrade All Kafka Brokers
          • Upgrade Schema Registry
          • Upgrade Kafka REST Proxy
          • Upgrade Kafka Streams applications
          • Upgrade Kafka Connect
            • Upgrade Kafka Connect Standalone Mode
            • Upgrade Kafka Connect Distributed Mode
          • Upgrade Camus
          • Confluent Control Center
          • Upgrade Other Client Applications
      • Supported Versions and Interoperability
        • Confluent Platform Versions
        • Operating Systems
        • Confluent Control Center
        • Java
        • Scala
        • Component Security
          • Apache Kafka
          • Apache Kafka Connect clients
          • Apache Kafka Streams clients
          • Apache ZooKeeper
          • Confluent Control Center
          • REST Proxy
          • Schema Registry
        • Clients
        • Connectors
        • Cross-Component Compatibility
          • Apache Kafka Connect Workers
          • Apache Kafka Java Clients
          • Apache Kafka Streams Clients
          • Auto Data Balancing
          • Confluent C, C++, Python, Go and .NET
          • Schema Registry
          • REST Proxy
        • Docker and Orchestration Tools
          • DC/OS
          • Docker
    • Command Line Interface
      • Installing and Configuring the CLI
        • Installing the CLI
        • Configuring the CLI
      • Command Reference
        • confluent acl
          • Description
          • Usage
          • Options
          • Positional arguments
        • confluent config
          • Description
          • Usage
          • Options
          • Positional arguments
          • Examples
        • confluent current
          • Description
          • Usage
          • Options
          • Positional arguments
          • Example
        • confluent destroy
          • Description
          • Usage
          • Options
          • Positional arguments
          • Example
        • confluent help
          • Description
          • Usage
          • Options
          • Positional arguments
          • Example
        • confluent list
          • Description
          • Usage
          • Options
          • Positional arguments
          • Examples
        • confluent load
          • Description
          • Usage
          • Options
          • Positional arguments
          • Example
        • confluent log
          • Description
          • Usage
          • Options
          • Positional arguments
          • Examples
        • confluent start
          • Description
          • Usage
          • Options
          • Positional arguments
          • Examples
        • confluent status
          • Description
          • Usage
          • Options
          • Positional arguments
          • Examples
        • confluent stop
          • Description
          • Usage
          • Options
          • Positional arguments
          • Examples
        • confluent top
          • Description
          • Usage
          • Options
          • Positional arguments
          • Example
        • confluent unload
          • Description
          • Usage
          • Options
          • Positional arguments
          • Example
        • Available Commands
        • Environment Variables
    • Confluent Platform 3.3.2 Release Notes
      • Highlights
        • Open Source Features
          • Apache Kafka 0.11.0.3-cp1
      • Previous releases
        • Confluent Platform 3.3.1 Release Notes
          • Highlights
            • Enterprise Features
              • Control Center
            • Open Source Features
              • Kafka Streams: Improved Rebalance Latency
              • Apache Kafka 0.11.0.1-cp1
              • Elasticsearch Sink Connector
              • HDFS Sink Connector
              • S3 Sink Connector
              • JDBC Source Connector
              • librdkafka 0.11.1
        • Confluent Platform 3.3.0 Release Notes
          • Highlights
            • Enterprise Features
              • REST Proxy ACL Plug-In
              • Control Center Interceptors for Non-Java Clients
              • Control Center
              • Metrics Publisher
            • Open Source Features
              • Exactly-Once Semantics for Apache Kafka
              • Confluent CLI
              • Non-Java Clients
              • Kafka Streams: Confluent serializers/deserializers (serdes) for Avro
              • Kafka Streams: Capacity Planning Guide available
            • Apache Kafka 0.11.0
              • Admin client
              • Record headers
              • Leader epoch in replication protocol
              • Single-threaded controller
            • Classpath Isolation for Apache Kafka Connectors
            • Request rate quotas
            • Kafka Streams: rebalancing improvements
            • More bug fixes and improvements
        • Confluent Platform 3.2.2 Release Notes
          • Highlights
            • Apache Kafka 0.10.2.1-cp2
            • Confluent Control Center
            • Replicator
            • S3 Connector
        • Confluent Platform 3.2.1 Release Notes
          • Highlights
            • Apache Kafka 0.10.2.1
              • Fixes
            • S3 Connector
              • Fixes
        • Confluent Platform 3.2.0 Release Notes
          • Highlights
            • Enterprise Features
              • JMS Client
              • Confluent Control Center
              • Automatic Data Balancing
            • Open Source Features
              • Kafka Streams: Backwards Compatible to Older Kafka Clusters
              • Kafka Streams: Compatibility Matrix
              • Kafka Streams: Session Windows
              • Kafka Streams: Global KTables
              • Kafka Streams: ZooKeeper Dependency Removed
              • Kafka Connect: Single Message Transform
              • S3 Connector
              • .NET Client
              • REST Proxy
              • Apache Kafka 0.10.2
        • Confluent Platform 3.1.2 Release Notes
          • Highlights
            • Apache Kafka 0.10.1.1
              • Fixes
        • Confluent Platform 3.1.1 Release Notes
          • Highlights
            • Enterprise Features
              • Automatic Data Balancing
              • Multi-Datacenter Replication
              • Confluent Control Center
            • Open Source Features
              • Kafka Streams: Interactive Queries
              • Kafka Streams: Application Reset Tool
              • Kafka Streams: Improved memory management
              • Kafka Connect
              • Go Client
            • Apache Kafka 0.10.1
              • New Feature
              • Improvement
              • Bug
              • Task
              • Wish
              • Test
              • Sub-task
        • Confluent Platform 3.0.1 Release Notes
          • Highlights
            • Confluent Control Center
            • Kafka Streams
              • New Feature: Application Reset Tool
            • Apache Kafka 0.10.0.1
              • New Feature
              • Improvement
              • Bug
              • Test
              • Sub-task
        • Confluent Platform 3.0.0 Release Notes
          • Highlights
            • Kafka Streams
            • Confluent Control Center
            • Apache Kafka 0.10.0.0
            • Deprecating Camus
            • Other Notable Changes
        • Confluent Platform 2.0.1 Release Notes
          • New Java consumer
          • Compatibility
          • Security
          • Performance/memory usage
          • Topic deletion
        • Confluent Platform 2.0.0 Release Notes
          • Security
          • Kafka Connect
          • User Defined Quotas
          • New Consumer
          • New Client Library - librdkafka
          • Proactive Support
          • Compatibility Notes
      • How to Download
      • Questions?
  • Operations
    • Kafka Operations
      • Production Deployment
        • Hardware
          • Memory
          • CPUs
          • Disks
          • Network
          • Filesystem
          • General considerations
        • JVM
        • Important Configuration Options
          • Replication configs
        • File descriptors and mmap
        • Multi-node Configuration
      • Post Deployment
        • Changing configs dynamically
        • Logging
          • Controller
          • State Change Log
          • Request logging
        • Admin operations
          • Adding topics
          • Modifying topics
          • Deleting topics
          • Graceful shutdown
        • Rolling Restart
          • Scaling the Cluster
          • Increasing replication factor
          • Limiting Bandwidth Usage during Data Migration
          • Balancing Replicas Across Racks
          • Enforcing Client Quotas
        • Performance Tips
          • Picking the number of partitions for a topic
          • Lagging replicas
          • Increasing consumer throughput
        • Software Updates
        • Backup and Restoration
      • Auto Data Balancing
        • Requirements
        • Quickstart
          • Start a Kafka cluster
          • Create topics and produce data
          • Execute the rebalancer
          • Check status and finish
          • Leader balance
          • Decommissioning brokers
          • Limiting Bandwidth Usage during Data Migration
        • Licensing
        • Configuration Options
          • Configuration Options for the rebalancer tool
      • Monitoring Kafka
        • Server Metrics
          • Broker Metrics
          • ZooKeeper Metrics
        • Producer Metrics
          • Global Request Metrics
          • Global Connection Metrics
          • Per-Broker Metrics
          • Per-Topic Metrics
        • New Consumer Metrics
          • Fetch Metrics
          • Topic-level Fetch Metrics
          • Consumer Group Metrics
          • Global Request Metrics
          • Global Connection Metrics
          • Per-Broker Metrics
        • Old Consumer Metrics
      • Confluent Metrics Reporter
        • Installation
        • Configuration
          • Message size
        • Configuration Options
          • Security
          • Authentication
          • Authorization
          • Verification
        • Logging
    • Confluent Control Center
      • Control Center Quick Start
        • Configure Kafka and Kafka Connect
        • Configure and Start Control Center
        • Setup stream monitoring
      • Control Center Concepts
        • Motivation
        • Time windows and metrics
        • Latency
      • Installing and Upgrading Control Center
        • Installing Control Center
          • Configure Kafka and Kafka Connect
          • Configure and start Control Center
          • Next steps
        • Control Center System Requirements
          • Hardware
            • Memory
            • CPUs
            • Disks
            • Network
          • OS
          • JVM
          • User/Cluster Metadata
          • Kafka
          • Multi-Cluster Configuration
          • Dedicated Metric Data Cluster
          • Example Deployments
            • Broker Monitoring
            • Streams Monitoring
        • Configuring Control Center
          • Base Settings
          • Logging
          • Optional Settings
            • General
            • Monitoring Settings
            • UI Authentication Settings
            • Email Settings
            • Kafka Encryption, Authentication, Authorization Settings
            • HTTPS Settings
            • Internal Kafka Streams Settings
            • Internal Command Settings
        • Installing Control Center Interceptors
          • Interceptor Installation
            • Compatibility
          • Interceptor installation for librdkafka-based clients
          • Client Configuration
            • Adding the interceptor to your Kafka Producer
            • Adding the interceptor to your Kafka Consumer
            • Adding interceptors to your Kafka Streams application
            • Adding the interceptor to librdkafka-based client applications
          • Interceptor Configuration
            • Configuration options
            • Security
            • Logging
            • Example Configuration
          • Verification
        • Installing Control Center on Apache Kafka
          • Software requirements
          • Hardware requirements
            • Install Control Center
          • Installation on Red Hat, CentOS, or Fedora
          • Installation on Debian Linux
            • Install and configure components
          • Installing Confluent Metrics Reporter with Kafka
          • Installing Kafka Connect
          • Installing Confluent Metrics Clients with Kafka Connect
            • Start Control Center
            • License Keys
            • Removing Control Center
          • TAR and ZIP Archives
          • Debian packages
          • Red Hat packages
        • Troubleshooting Control Center
          • Common issues
            • Installing and Setup
              • Bad security configuration
              • InvalidStateStoreException
              • Not enough brokers
              • Local store permissions
              • Multiple Control Centers with the same ID
              • License expired
            • System health
              • Web interface that is blank or stuck loading
              • I see a rocket ship
              • Nothing is produced on the Metrics (_confluent-metrics) topic
              • Control Center is lagging behind Kafka
              • RecordTooLargeException
              • Parts of the broker or topic table have blank values
            • Streams Monitoring
              • Blank charts
              • Unexpected herringbone pattern
              • Missing consumers or consumer groups
            • Connect
              • I see a rocket ship
          • Debugging
            • Check logs
            • Enable debug and trace logging
            • Check configurations
            • Review input topics
            • Size of clusters
            • System check
            • Frontend and REST API
            • Consumer offset lag
            • Enable GC logging
            • Thread dump
            • Data directory
        • Upgrading Control Center
          • Upgrading from version 3.1.x and later
          • Upgrading from version 3.0.x
      • Control Center User Interface
        • System Health
          • Navigation
          • UI Commonalities
          • Broker Aggregate Metrics
          • Produce and Fetch Charts
          • Broker Metrics Table
          • Topic Aggregate Metrics
          • Topic Metrics Table
        • Stream Monitoring
          • Chart Types
          • Page Layout
          • Navigation
          • Time Range and Cluster Selection
          • Summary statistics
          • Missing Metrics Data
          • Example Scenarios
            • Adding a new Consumer Group
              • Latency
              • Expected consumption
        • Kafka Connect
          • Creating new Connectors
          • Editing Sources and Sinks
        • Alerts
          • Concepts
          • User Interface
            • Overview page
            • Integration page
          • Trigger Management
            • New/Edit Trigger Form
            • Topic
            • Consumer Groups
            • Brokers
          • Actions Management
            • New/Edit Action Form
          • Alert History
        • Clusters
          • Setting a Cluster Name
        • Topic Management
          • Detail view for a topic
      • Security
        • Configuring SSL
          • Kafka Brokers
          • Control Center
          • Connect
        • Configuring SASL
          • ZooKeeper
          • Kafka Broker
          • Control Center Configuration
          • Schema Registry Configuration
          • Connect Configuration
        • UI Authentication
        • UI HTTPS
        • Authorization with Kafka ACLS
      • Changelog
        • Version 3.3.1
          • Confluent Control Center
        • Version 3.3.0
          • Confluent Control Center
        • Version 3.2.2
          • Confluent Control Center
        • Version 3.2.1
          • Confluent Control Center
        • Version 3.2.0
          • Confluent Control Center
        • Version 3.1.0
          • Confluent Control Center
        • Version 3.0.1
          • Confluent Control Center
    • Multi-Datacenter Deployment
      • Replicator Quick Start
        • Start the destination cluster
        • Start the origin cluster
        • Create a topic
        • Configure and run Replicator
      • Installing and Configuring Replicator
        • Install and Configure Kafka Connect Cluster for Replicator
          • Configuring origin and destination brokers
          • Where to Install Connect Workers
          • Running Replicator on Existing Connect Cluster
          • Configuring Logging for Connect Cluster
          • License Key
        • Configure and run a Confluent Replicator on the Connect Cluster
      • Tuning and Monitoring Replicator
        • Sizing Replicator Cluster
        • Getting More Throughput From Replicator Tasks
          • Improving CPU Utilization of a Connect Task
          • Improving Network Utilization of a Connect Task
        • Monitoring Replicator
          • Monitoring Replicator Lag
          • Monitoring Producer and Consumer Metrics
            • Important Producer Metrics
            • Important Consumer Metrics
      • Configuration Options
        • Confluent Platform
        • Source Topics
        • Source Data Conversion
        • Source ZooKeeper
        • Source Kafka
        • Source Kafka: Security
        • Source Kafka: Consumer
        • Destination Topics
        • Destination ZooKeeper
        • Destination Data Conversion
      • Apache Kafka’s Mirror Maker
        • Comparing Mirror Maker to Confluent Replicator
          • Greater control for reliable replication
          • Deploying, managing, and monitoring
          • Replication Patterns
          • License
        • Example of Mirror Maker Use
    • Schema Registry Operations
      • Production Deployment
        • Hardware
          • Memory
          • CPUs
          • Disks
          • Network
        • JVM
        • Important Configuration Options
        • Don’t Touch These Settings!
          • Storage settings
        • Kafka & ZooKeeper
        • Backup and Restore
        • Multi-DC Setup
          • Overview
          • Recommended Deployment
          • Important Settings
          • Setup
          • Run Book
      • Monitoring
        • Global Metrics
        • Per-Endpoint Metrics
        • Endpoints
    • Kafka REST Proxy Operations
      • Production Deployment
        • Hardware
          • Memory
          • CPUs
          • Disks
          • Network
        • JVM
        • Deployment
        • Important Configuration Options
        • Don’t Touch These Settings!
        • Post Deployment
      • Monitoring
        • Global Metrics
        • Per-Endpoint Metrics
        • Endpoints
    • Docker Operations
      • Monitoring
        • Using JMX
          • Security on JMX
          • Kafka and ZooKeeper
            • Settings
            • Launching Kafka and ZooKeeper with JMX Enabled
      • Configuring logging
        • log4j Log Levels
        • Component Names
      • Mounting External Volumes
        • Data Volumes for Kafka and ZooKeeper
        • Security: Data Volumes for Configuring Secrets
        • Configuring Connect with External JARs
    • Security
      • Kafka Security
        • Encryption and Authentication using SSL
          • Overview
          • Generate SSL key and certificate for each Kafka broker
          • Creating your own CA
          • Signing the certificate
          • Configuring Kafka Brokers
          • Configuring Kafka Clients
          • Enabling SSL Logging
        • Authentication using SASL
          • SASL configuration for Kafka brokers
          • SASL configuration for Kafka Clients
          • Authentication using SASL/Kerberos
            • Prerequisites
              • Kerberos
              • Kerberos Principals
              • All hosts must be reachable using hostnames
            • Configuring Kafka Brokers
            • Configuring Kafka Clients
          • Authentication using SASL/PLAIN
            • Configuring Kafka Brokers
            • Configuring Kafka Clients
            • Use of SASL/PLAIN in production
          • Enabling multiple SASL mechanisms in a broker
          • Modifying SASL mechanisms in a Running Cluster
          • Enabling Logging for SASL
        • Authorization and ACLs
          • Overview
            • Common cases:
            • Further configuration:
          • Command Line Interface
            • Adding ACLs
            • Removing ACLs
            • List ACLs
            • Adding or Removing a Principal as Producer or Consumer
            • Enabling Authorizer Logging for Debugging
        • Adding Security to a Running Cluster
        • ZooKeeper Authentication
          • New clusters
          • Migrating clusters
          • Migrating the ZooKeeper ensemble
        • Kafka Security & the Confluent Platform
          • Other Observations
      • Confluent Security Plugins
        • Installation
          • Kafka REST Security Plugin
            • Installation
            • Authentication Mechanisms
            • Configuration
          • Principal Propagation
            • SSL
            • SASL
      • Docker Security
    • Kafka Connect
      • Introduction
      • Requirements
      • Kafka Connect Quick Start
        • Goal
        • What we will do
        • Start the services
        • Read File Data with Connect
        • Write File Data with Connect
      • Concepts
        • Connectors
        • Tasks
          • Task Rebalancing
        • Workers
          • Standalone Workers
          • Distributed Workers
        • Converters
        • Transforms
          • List of Transformations
      • Installing and Configuring Kafka Connect
        • Getting Started
        • Planning for Installation
          • Prerequisites
          • Standalone vs. Distributed
          • Deployment Considerations
        • Installing Plugins
        • Running Workers
          • Standalone Mode
          • Distributed Mode
        • Configuring Workers
          • Common Worker Configs
          • Standalone Worker Configuration
          • Distributed Worker Configuration
          • Configuring Converters
          • Overriding Producer & Consumer Settings
        • Upgrading Kafka Connect Workers
      • Managing Connectors
        • Using Bundled Connectors
        • Configuring Connectors
          • Standalone Example
          • Distributed Example
        • Managing Running Connectors
          • Using the REST Interface
          • Connector and Task Status
          • Common REST Examples
        • Using Community Connectors
        • Upgrading a Connector Plugin
      • Bundled Connectors
        • Confluent JDBC Connector
          • JDBC Source Connector
            • Quick Start
              • Create SQLite Database and Load Data
              • Load the JDBC Source Connector
              • Add a Record to the Consumer
            • Features
            • Configuration
            • Schema Evolution
          • JDBC Source Configuration Options
            • Database Connection Security
            • Database
            • Connector
            • Mode
          • JDBC Sink Connector
            • Quick Start
              • Create SQLite Database and Load Data
              • Load the JDBC Sink Connector
              • Produce a Record in SQLite
            • Features
          • JDBC Sink Configuration Options
            • Database Connection Security
            • Connection
            • Writes
            • Data Mapping
            • DDL Support
            • Retries
          • Changelog
            • Version 3.3.1
              • JDBC Source Connector
              • JDBC Sink Connector
            • Version 3.3.0
              • JDBC Source Connector
              • JDBC Sink Connector
            • Version 3.2.2
            • Version 3.2.1
            • Version 3.2.0
              • JDBC Source Connector
              • JDBC Sink Connector
            • Version 3.1.1
            • Version 3.1.0
              • JDBC Source Connector
              • JDBC Sink Connector
            • Version 3.0.1
              • JDBC Source Connector
            • Version 3.0.0
              • JDBC Source Connector
        • Confluent HDFS Connector
          • HDFS Connector
            • Quickstart
            • Features
            • Configuration
              • Example
              • Format and Partitioner
              • Hive Integration
              • Secure HDFS and Hive Metastore
            • Schema Evolution
          • Configuration Options
            • HDFS
            • Hive
            • Security
            • Schema
            • Connector
            • Internal
          • Changelog
            • Version 3.3.1
            • Version 3.3.0
            • Version 3.2.2
            • Version 3.2.1
            • Version 3.2.0
            • Version 3.1.1
            • Version 3.1.0
            • Version 3.0.1
              • HDFS Connector
            • Version 3.0.0
              • HDFS Connector
        • Confluent S3 Connector
          • S3 Connector
            • Features
            • Exactly-once delivery on top of eventual consistency
            • Schema Evolution
            • Quickstart
            • Configuration
              • Example
          • Configuration Options
            • Connector
            • S3
            • Storage
            • Partitioner
          • Changelog
            • Version 3.3.1
              • S3 Connector
            • Version 3.3.0
              • S3 Connector
            • Version 3.2.2
              • S3 Connector
            • Version 3.2.1
              • S3 Connector
            • Version 3.2.0
              • S3 Connector
        • Confluent Elasticsearch Connector
          • Elasticsearch Connector
            • Quick Start
              • Add a Record to the Consumer
              • Load the Elasticsearch Connector
            • Features
            • Delivery Semantics
            • Mapping Management
            • Schema Evolution
            • Automatic Retries
            • Reindexing
            • Security
          • Configuration Options
            • Connector
            • Data Conversion
          • Changelog
            • Version 3.3.1
            • Version 3.3.0
            • Version 3.2.2
            • Version 3.2.1
            • Version 3.2.0
        • Confluent Replicator
          • Confluent Replicator
            • Features
            • Requirements
            • Quick start
              • Start the target cluster
              • Start the source cluster
              • Configure and run Replicator
            • Topic Renaming
            • Periodic Metadata Updates
            • Security
          • Changelog
            • Version 3.3.1
            • Version 3.3.0
            • Version 3.2.2
            • Version 3.2.1
            • Version 3.2.0
          • Configuration Options
            • Confluent Platform
            • Source Topics
            • Source Data Conversion
            • Source ZooKeeper
            • Source Kafka
            • Source Kafka: Security
            • Source Kafka: Consumer
            • Destination Topics
            • Destination ZooKeeper
            • Destination Data Conversion
        • Kafka FileStream Connectors
          • Quickstart
          • FileSource Connector
          • FileSink Connector
      • Security
        • Configuring Workers with Security
        • Configuring Connectors with Security
        • ACL Considerations
          • Worker ACL Requirements
          • Connector ACL Requirements
      • Architecture & Internals
        • Motivation
        • Architecture
          • Internal Connect Offsets
      • Connector Developer Guide
        • Core Concepts and APIs
          • Connectors and Tasks
          • Partitions and Records
          • Dynamic Connectors
        • Developing a Simple Connector
          • Connector Example
          • Task Example - Source Task
          • Sink Tasks
          • Resuming from Previous Offsets
        • Dynamic Input/Output Partitions
        • Configuration Validation
        • Working with Schemas
        • Schema Evolution
        • Testing
        • Packaging
          • Creating an Archive
          • Creating an Uber JAR
      • FAQ
        • How do I change the output data format of a SinkConnector?
        • Why does a connector configuration update trigger a task rebalance?
        • Why should I use distributed mode instead of standalone?
        • Do I need to write custom code to use Kafka Connect?
        • Is the Schema Registry a required service to run Kafka Connect?
        • How can I use plain JSON data with Connect?
        • Does source connector X support output format Y?
        • Why is CPU usage high for my Connect worker when no connectors have been deployed?
        • Can connect sink connectors read data written by other clients, e.g. a custom client?
        • After testing a connector in standalone mode, restarting it doesn’t write the data again?
        • Can I use a newer version of Connect with older brokers?
      • Reference
        • Javadocs
        • REST Interface
          • Content Types
          • Statuses & Errors
          • Connectors
          • Tasks
          • Connector Plugins
        • All Worker Configs
          • Common Worker Configs
          • Standalone Worker Configuration
          • Distributed Worker Configuration
    • Camus Operations
      • Important Configuration Options
    • ZooKeeper Operations
      • ZooKeeper Production Deployment
        • Stable version
        • Hardware
          • Memory
          • CPU
          • Disks
        • JVM
        • Important Configuration Options
        • Monitoring
          • Operating System
          • “Four Letter Words”
          • JMX Monitoring
        • Multi-node Setup
      • Post Deployment
  • Development
    • Kafka Streams
      • Introduction
        • The Kafka Streams API in a Nutshell
        • Use Case Examples
        • A Closer Look
      • Requirements
        • Apache Kafka
        • Confluent
      • Kafka Streams Quick Start
        • Purpose
        • Start the Kafka cluster
        • Prepare the topics and the input data
        • Process the input data with Kafka Streams
        • Inspect the output data
        • Stop the Kafka cluster
        • Next steps
      • Kafka Streams Demo Application
        • Requirements
        • Running the Kafka Music demo application
        • Running further Confluent demo applications for the Kafka Streams API
        • Appendix
          • Inspecting the input topics of the Kafka Music application
          • Creating new topics
          • Listing available topics
      • Concepts
        • Kafka 101
        • Stream
        • Stream Processing Application
        • Processor Topology
        • Stream Processor
        • Stateful Stream Processing
        • Duality of Streams and Tables
        • KStream
        • KTable
        • GlobalKTable
        • Time
        • Aggregations
        • Joins
        • Windowing
        • Interactive Queries
        • Processing Guarantees
      • Architecture
        • Processor Topology
        • Parallelism Model
          • Stream Partitions and Tasks
          • Threading Model
          • Example
        • State
        • Memory management
          • Record caches
        • Fault Tolerance
        • Flow Control with Timestamps
        • Backpressure
      • Code Examples
        • Getting started examples
          • Java
          • Scala
        • Security examples
          • Java programming language
        • Interactive queries examples
          • Java
        • End-to-end application examples
          • Java
          • Scala
        • Event-Driven Microservice example
          • Java
      • Developer Guide
        • Writing a Streams Application
          • Libraries and Maven artifacts
          • Using Kafka Streams within your application code
        • Configuring a Streams Application
          • Configuration parameter reference
            • Required configuration parameters
              • application.id
              • bootstrap.servers
            • Optional configuration parameters
              • default.key.serde
              • default.value.serde
              • default.timestamp.extractor
              • num.standby.replicas
              • num.stream.threads
              • partition.grouper
              • processing.guarantee
              • replication.factor
              • state.dir
            • Kafka consumers and producer configuration parameters
              • Naming
              • Default Values
              • enable.auto.commit
              • rocksdb.config.setter
            • Recommended configuration parameters for resiliency
              • acks
              • replication.factor
        • Streams DSL
          • Overview
          • Creating source streams from Kafka
          • Transform a stream
            • Stateless transformations
            • Stateful transformations
              • Aggregating
              • Joining
                • Join co-partitioning requirements
                • KStream-KStream Join
                • KTable-KTable Join
                • KStream-KTable Join
                • KStream-GlobalKTable Join
              • Windowing
                • Tumbling time windows
                • Hopping time windows
                • Sliding time windows
                • Session Windows
            • Applying processors and transformers (Processor API integration)
          • Writing streams back to Kafka
        • Processor API
          • Overview
          • Defining a Stream Processor
          • State Stores
            • Defining and creating a State Store
            • Fault-tolerant State Stores
            • Enable or Disable Fault Tolerance of State Stores (Store Changelogs)
            • Implementing Custom State Stores
          • Connecting Processors and State Stores
        • Data Types and Serialization
          • Configuring SerDes
          • Overriding default SerDes
          • Available SerDes
            • Primitive and basic types
            • Avro
            • JSON
            • Further serdes
          • Implementing custom SerDes
        • Interactive Queries
          • Querying local state stores for an app instance
            • Querying local key-value stores
            • Querying local window stores
            • Querying local custom state stores
          • Querying remote state stores for the entire app
            • Adding an RPC layer to your application
            • Exposing the RPC endpoints of your application
            • Discovering and accessing application instances and their local state stores
          • Demo applications
        • Memory Management
          • Record caches in the DSL
          • Record caches in the Processor API
          • Other memory usage
        • Running Streams Applications
          • Starting a Kafka Streams application
          • Elastic scaling of your application
            • Adding capacity to your application
            • Removing capacity from your application
            • State restoration during workload rebalance
            • Determining how many application instances to run
        • Managing Streams Application Topics
          • User topics
          • Internal topics
        • Streams Security
          • Required ACL setting for secure Kafka clusters
          • Security example
        • Application Reset Tool
          • Step 1: Run the application reset tool
          • Step 2: Reset the local environments of your application instances
            • Example
      • Operations
        • Capacity planning and sizing
          • Background and context
          • Stateless vs. stateful
            • Stateless applications
            • Stateful applications
          • Examples
          • Troubleshooting
        • Monitoring your application
          • Metrics
            • Accessing Metrics
              • Accessing Metrics via JMX and Reporters
              • Accessing Metrics Programmatically
            • Configuring Metrics Granularity
            • Built-in Metrics
              • Thread Metrics
              • Task Metrics
              • Processor Node Metrics
              • State Store Metrics
              • Record Cache Metrics
            • Adding Your Own Metrics
          • Run-time Status Information
            • Status of KafkaStreams instances
          • Integration with Confluent Control Center
      • Upgrade Guide
        • Upgrading from CP 3.2.x (Kafka 0.10.2.x-cp1) to CP 3.3.2 (Kafka 0.11.0.3-cp1)
          • Compatibility
          • Upgrading your Kafka Streams applications to CP 3.3.2
          • API changes
            • Streams Configuration
            • Local timestamp extractors
            • KTable Changes
          • Full upgrade workflow
        • Upgrading older Kafka Streams applications to CP 3.3.1
          • API changes (from CP 3.1 to CP 3.2)
            • Handling Negative Timestamps and Timestamp Extractor Interface
            • Metrics
            • Scala
          • API changes (from CP 3.0 to CP 3.1)
            • Stream grouping and aggregation
            • Auto Repartitioning
            • TopologyBuilder
            • DSL: New parameters to specify state store names
            • Windowing
      • FAQ
        • General
          • Is Kafka Streams a project separate from Apache Kafka?
          • Is Kafka Streams a proprietary library of Confluent?
          • Do Kafka Streams applications run inside the Kafka brokers?
          • What are the system dependencies of Kafka Streams?
          • How do I migrate my CP 3.0.x, CP 3.1.x, or CP 3.2.x Kafka Streams applications to CP 3.3.x?
          • Which versions of Kafka clusters are supported by Kafka Streams?
          • What programming languages are supported?
          • Why is my application re-processing data from the beginning?
        • Scalability
          • Maximum parallelism of my application? Maximum number of app instances I can run?
        • Processing
          • Accessing record metadata such as topic, partition, and offset information?
          • Difference between map, peek, foreach in the DSL?
          • How to avoid data repartitioning if you know it’s not required?
          • Serdes config method
          • How can I replace RocksDB with a different store?
        • Failure and exception handling
          • Handling corrupted records and deserialization errors (“poison pill messages”)?
            • Option 1: Skip corrupted records with flatMap
            • Option 2: Quarantine corrupted records (dead letter queue) with branch
            • Option 3: Skip corrupted records with a custom serde
          • Sending corrupt records to a quarantine topic or dead letter queue?
        • Interactive Queries
          • Handling InvalidStateStoreException: “the state store may have migrated to another instance”?
        • Security
          • Application fails when running against a secured Kafka cluster?
        • Troubleshooting and debugging
          • Easier to interpret Java stacktraces?
          • Visualizing topologies?
          • Inspecting streams and tables?
          • Invalid Timestamp Exception
          • Why do I get an IllegalStateException when accessing record metadata?
          • Why is punctuate() not called?
          • Scala: compile error “no type parameter”, “Java-defined trait is invariant in type T”
          • How can I convert a KStream to a KTable without an aggregation step?
            • Option 1: Write KStream to Kafka, read back as KTable
            • Option 2: Perform a dummy aggregation
          • RocksDB behavior in 1-core environments
      • Javadocs
      • Getting started
        • Quick Start
        • Streams API Screencasts
    • Kafka Clients
      • Kafka Consumers
        • Concepts
        • Configuration
        • Initialization
        • Basic Usage
          • Java Client
          • C/C++ Client (librdkafka)
          • Python, Go and .NET Clients
        • Detailed Examples
          • Basic Poll Loop
          • Shutdown with Wakeup
          • Synchronous Commits
          • Asynchronous Commits
        • Administration
          • List Groups
          • Describe Group
      • Kafka Producers
        • Concepts
        • Configuration
        • Examples
          • Initial Setup
          • Asynchronous Writes
          • Synchronous Writes
      • Confluent’s JMS Client for Apache Kafka
        • Overview
        • Examples
          • Producing messages
          • Consuming messages
        • Requirements
        • Installation
        • JMS 1.1 Compatibility
        • Features
        • Unsupported Features
          • Session
          • ConnectionFactory
          • Connection
        • Developing JMS Clients
          • Dependencies
          • Configure and Create a ConnectionFactory
          • Setting Required Properties
          • Enabling Confluent Specific Features (Optional)
          • Enabling TLS Encryption (Optional)
          • Create a ConnectionFactory
          • Creating a Destination
          • Creating a Connection
          • Creating Sessions
          • Producing / Consuming Messages
        • Threading
      • API Docs
    • Connector Developer Guide
      • Core Concepts and APIs
        • Connectors and Tasks
        • Partitions and Records
        • Dynamic Connectors
      • Developing a Simple Connector
        • Connector Example
        • Task Example - Source Task
        • Sink Tasks
        • Resuming from Previous Offsets
      • Dynamic Input/Output Partitions
      • Configuration Validation
      • Working with Schemas
      • Schema Evolution
      • Testing
      • Packaging
        • Creating an Archive
        • Creating an Uber JAR
    • Schema Registry
      • Introduction
        • Quickstart
        • Installation
        • Deployment
        • Development
        • Requirements
        • Contribute
        • License
      • Changelog
        • Version 3.3.0
        • Version 3.2.1
        • Version 3.2.0
        • Version 3.1.1
        • Version 3.1.0
          • Schema Registry
          • Serializers, Formatters, and Converters
        • Version 3.0.1
          • Schema Registry
          • Serializers, Formatters, and Converters
        • Version 3.0.0
          • Schema Registry
          • Serializers, Formatters, and Converters
        • Version 2.0.1
          • Schema Registry
        • Version 2.0.0
          • Schema Registry
          • Serializers, Formatters, and Converters
      • API Reference
        • Overview
          • Compatibility
          • Content Types
          • Errors
        • Schemas
        • Subjects
        • Compatibility
        • Config
      • Configuration Options
      • Design Overview
        • Batch ID Allocation
        • Kafka Backend
        • Single Master Architecture
      • Schema Registry Operations
        • Production Deployment
          • Hardware
            • Memory
            • CPUs
            • Disks
            • Network
          • JVM
          • Important Configuration Options
          • Don’t Touch These Settings!
            • Storage settings
          • Kafka & ZooKeeper
          • Backup and Restore
          • Multi-DC Setup
            • Overview
            • Recommended Deployment
            • Important Settings
            • Setup
            • Run Book
        • Monitoring
          • Global Metrics
          • Per-Endpoint Metrics
          • Endpoints
      • Security Overview
        • Kafka Store
        • ZooKeeper
      • Serializer and Formatter
        • Serializer
        • Formatter
        • Wire Format
          • Compatibility Guarantees
      • Schema Deletion Guidelines
      • Maven Plugin
        • schema-registry:download
        • schema-registry:test-compatibility
        • schema-registry:register
    • Docker Developer Guide
      • Image Design Overview
        • The Bootup Process
        • Configuration
        • Preflight Checks
        • Launching the Process
        • Development Guidelines
      • Setup
        • Building the Images
        • Running Tests
        • Make Targets
      • Extending the Docker Images
        • Prerequisites
        • Adding Connectors to the Kafka Connect Image
        • Examples
      • Utility Scripts
        • Docker Utility Belt (dub)
        • Confluent Platform Utility Belt (cub)
        • Client Properties
      • References
    • Data Serialization and Evolution
      • Avro
        • Defining an Avro Schema
        • Schema Evolution
        • Backward Compatibility
        • Forward Compatibility
        • Full Compatibility
        • Confluent Schema Registry
    • Kafka REST Proxy
      • Kafka REST Proxy
        • Quickstart
          • Produce and Consume JSON Messages
          • Produce and Consume Avro Messages
          • Produce and Consume Binary Messages
          • Inspect Topic Metadata
        • Features
        • Installation
        • Deployment
        • Development
        • Requirements
        • Contribute
        • License
      • Changelog
        • Version 3.3.0
        • Version 3.2.2
        • Version 3.2.1
        • Version 3.2.0
        • Version 3.1.2
        • Version 3.1.1
        • Version 3.1.0
        • Version 3.0.0
        • Version 2.0.1
        • Version 2.0.0
      • API Reference
        • Overview
          • Content Types
          • Errors
        • API v2
          • Topics
          • Partitions
          • Consumers
          • Brokers
        • API v1
          • Topics
          • Partitions
          • Consumers
          • Brokers
      • Configuration Options
        • Security Configuration Options
          • Configuration Options for HTTPS
          • Configuration Options for SSL Encryption between REST Proxy and Apache Kafka Brokers
          • Configuration Options for SASL Authentication between REST Proxy and Apache Kafka Brokers
        • Interceptor Configuration Options
      • Kafka REST Operations
        • Production Deployment
          • Hardware
            • Memory
            • CPUs
            • Disks
            • Network
          • JVM
          • Deployment
          • Important Configuration Options
          • Don’t Touch These Settings!
          • Post Deployment
        • Monitoring
          • Global Metrics
          • Per-Endpoint Metrics
          • Endpoints
      • Security Overview
    • Application Development
      • kafkacat Utility
        • Consumer Mode
        • Producer Mode
        • Metadata Listing Mode
      • Native Clients with Serializers
        • Java
        • C/C++
      • REST Proxy
    • Camus
      • Camus
        • Key Features
        • Quickstart
        • Installation
        • Deployment
        • Development
        • Requirements
        • Contribute
        • License
      • Changelog
        • Version 3.3.1
        • Version 3.3.0
        • Version 3.2.2
        • Version 3.2.1
        • Version 3.2.0
        • Version 3.1.2
        • Version 3.1.1
        • Version 3.1.0
        • Version 3.0.0
        • Version 2.0.1
        • Version 2.0.0
      • Configuration Options
        • Schema Registry Configuration
        • Camus Job Configuration
        • Kafka Configuration
        • Example Configuration
      • Design
      • Camus Operations
        • Important Configuration Options
    • Confluent Proactive Support
      • What is Proactive Support?
      • How it works
      • Which metadata is collected?
        • Version Collector (default)
        • Confluent Support Metrics (add-on package)
      • Which metadata and data is not being collected?
      • Installing Support Metrics
        • DEB Packages via apt
        • RPM Packages via yum
      • Enabling or disabling the Metrics feature
      • Recommended Proactive Support configuration settings for licensed Confluent customers
      • Proactive Support configuration settings
      • Network ports used by Proactive Support
      • Sharing Proactive Support Metadata with Confluent manually
      • Privacy Policy

Security

  • Kafka Security
  • Confluent Security Plugins
  • Docker Security
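
As a quick orientation to the Kafka Security section listed above, broker-side TLS/SSL encryption is turned on through a small set of server.properties settings. The snippet below is only an illustrative sketch: the listener hostname, keystore and truststore paths, and passwords are placeholders, not values from this documentation.

    # Example broker SSL settings (illustrative placeholder values)
    listeners=SSL://kafka-broker-1:9093
    security.inter.broker.protocol=SSL
    ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
    ssl.keystore.password=changeit
    ssl.key.password=changeit
    ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
    ssl.truststore.password=changeit
    # Require clients to present their own certificates (mutual TLS)
    ssl.client.auth=required

Client applications then connect to the SSL listener by setting security.protocol=SSL and supplying a matching truststore (and, when ssl.client.auth=required, a keystore) through the corresponding ssl.* properties. The Kafka Security, Confluent Security Plugins, and Docker Security pages cover the full configuration in detail.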
