This section describes Kafka Connect, a component of open source Apache Kafka. Kafka Connect is a framework for scalably and reliably streaming data between Kafka and external systems such as databases, key-value stores, search indexes, and file systems.
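As a minimal illustration of the two configuration files a standalone worker takes (the broker address, file paths, and topic name here are assumptions for the sketch, not values from this page):

```properties
# Worker configuration (e.g. connect-standalone.properties) -- assumes a local broker
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename=/tmp/connect.offsets

# Connector configuration (e.g. file-source.properties) -- hypothetical file and topic,
# using the bundled FileStream source connector
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/tmp/test.txt
topic=connect-test
```

Both files are then passed to the `connect-standalone` script shipped with Kafka, e.g. `bin/connect-standalone config/connect-standalone.properties config/file-source.properties`.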
- Installing and Configuring Kafka Connect
- Getting Started
- Planning for Installation
- Installing Connector Plugins
- Running Workers
- Configuring Workers
- Upgrading Kafka Connect Workers
- Managing Connectors
- Bundled Connectors
- Confluent JDBC Connector
- Confluent HDFS Connector
- Confluent Elasticsearch Connector
- Confluent Replicator
- Kafka FileStream Connectors
- Architecture & Internals
- Connector Developer Guide
- Core Concepts and APIs
- Developing a Simple Connector
- Dynamic Input/Output Partitions
- Configuration Validation
- Working with Schemas
- Schema Evolution
- Frequently Asked Questions
- How do I change the output data format of a SinkConnector?
- Why does a connector configuration update trigger a task rebalance?
- Why should I use distributed mode instead of standalone?
- Do I need to write custom code to use Kafka Connect?
- Is the Schema Registry a required service to run Kafka Connect?
- How can I use plain JSON data with Connect?
- Does source connector X support output format Y?
- Why is CPU usage high for my Connect worker when no connectors have been deployed?
- Can Connect sink connectors read data written by other clients, e.g. a custom client?
- Why doesn’t a connector in standalone mode write the data again after a restart?
- Can I use a newer version of Connect with older brokers?
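For the plain-JSON question above, the usual approach is to use the bundled `JsonConverter` with the embedded schema envelope disabled; a sketch using the standard Connect converter settings:

```properties
# Worker (or per-connector) converter settings for plain JSON records
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Disable the schema/payload envelope so records are read and written as plain JSON
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```

With embedded schemas disabled, the Schema Registry is not involved, which also bears on the question above about whether Schema Registry is required to run Kafka Connect.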