Use connectors to copy data between Apache Kafka® and other systems, whether you want to pull data from an external system into Kafka or push data from Kafka out to it. You can download connectors from Confluent Hub.
For managed connectors available on Confluent Cloud, see Connect External Systems to Confluent Cloud.
JDBC Source and Sink
The Kafka Connect JDBC Source connector imports data from any relational database with a JDBC driver into an Apache Kafka® topic. The Kafka Connect JDBC Sink connector exports data from Apache Kafka® topics to any relational database with a JDBC driver.
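As a minimal sketch, a JDBC Source and a JDBC Sink connector can each be configured with a standalone properties file. The connector class names and property keys below follow the Confluent JDBC connector; the connection URLs, credentials, table, and topic names are placeholders:

```properties
# JDBC Source: poll the "orders" table and write new rows to the "pg-orders" topic.
name=jdbc-source-example
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://localhost:5432/mydb
connection.user=user
connection.password=password
table.whitelist=orders
mode=incrementing
incrementing.column.name=id
topic.prefix=pg-

# JDBC Sink: write records from a Kafka topic into a relational table.
# (In practice this goes in its own properties file.)
name=jdbc-sink-example
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=orders-out
connection.url=jdbc:sqlite:test.db
auto.create=true
insert.mode=insert
```

In `incrementing` mode the source connector detects new rows via a strictly increasing column such as an auto-increment primary key; `timestamp` and `bulk` modes are alternatives when no such column exists.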
JMS Source
The Kafka Connect JMS Source connector is used to move messages from any JMS-compliant broker into Apache Kafka®.
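A hedged sketch of a JMS Source configuration is below. The connector class and the `kafka.topic` and `jms.destination.*` keys follow the Confluent JMS Source connector; the destination name is a placeholder, and broker connection settings (which vary by JMS provider) are deliberately omitted:

```properties
# JMS Source: consume from a JMS queue and produce to a Kafka topic.
name=jms-source-example
connector.class=io.confluent.connect.jms.JmsSourceConnector
kafka.topic=from-jms
jms.destination.name=my-queue
jms.destination.type=queue
# Broker-specific connection properties (e.g. JNDI or ActiveMQ URL settings)
# depend on your JMS provider; see the connector documentation for details.
```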
Elasticsearch Service Sink
The Kafka Connect Elasticsearch Service Sink connector moves data from Apache Kafka® to Elasticsearch, writing data from a Kafka topic to an Elasticsearch index.
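A minimal configuration sketch for the Elasticsearch sink is shown below. The class name and property keys follow the Confluent Elasticsearch connector; the topic name and Elasticsearch URL are placeholders:

```properties
# Elasticsearch Sink: index records from the "logs" topic into Elasticsearch.
name=elasticsearch-sink-example
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
topics=logs
connection.url=http://localhost:9200
# Ignore record keys and let Elasticsearch derive document IDs.
key.ignore=true
```

By default the connector writes each Kafka topic to an index of the same name, so records from `logs` land in the `logs` index.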
Amazon S3 Sink
The Kafka Connect Amazon S3 Sink connector exports data from Apache Kafka® topics to S3 objects in Avro, JSON, or Bytes format.
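A minimal S3 sink configuration might look like the following. The class names and property keys follow the Confluent S3 connector; the bucket, region, and topic values are placeholders, and `format.class` here selects JSON output (Avro and Bytes formats have analogous classes):

```properties
# S3 Sink: write records from the "events" topic to objects in an S3 bucket.
name=s3-sink-example
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=events
s3.bucket.name=my-bucket
s3.region=us-west-2
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
# Commit an S3 object after this many records have been buffered.
flush.size=1000
```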
HDFS 2 Sink
The Kafka Connect HDFS 2 Sink connector allows you to export data from Apache Kafka® topics to HDFS 2.x files in a variety of formats. The connector integrates with Hive to make data immediately available for querying with HiveQL.
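A sketch of an HDFS 2 sink configuration with Hive integration enabled follows. The class name and property keys follow the Confluent HDFS connector; the NameNode and Hive metastore addresses are placeholders:

```properties
# HDFS 2 Sink: write records from the "events" topic to files in HDFS
# and register them in the Hive metastore for querying with HiveQL.
name=hdfs-sink-example
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
topics=events
hdfs.url=hdfs://namenode:8020
# Commit a file after this many records.
flush.size=100
hive.integration=true
hive.metastore.uris=thrift://metastore:9083
# Hive integration requires a schema compatibility mode other than NONE.
schema.compatibility=BACKWARD
```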
Replicator
Replicator allows you to easily and reliably replicate topics from one Apache Kafka® cluster to another.
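Replicator runs as a source connector; a minimal configuration sketch is below. The class name and the `src.kafka.*`, `dest.kafka.*`, and `topic.whitelist` keys follow Confluent Replicator; the broker addresses and topic names are placeholders:

```properties
# Replicator: copy the listed topics from a source cluster to a destination cluster.
name=replicator-example
connector.class=io.confluent.connect.replicator.ReplicatorSourceConnector
src.kafka.bootstrap.servers=src-broker:9092
dest.kafka.bootstrap.servers=dest-broker:9092
topic.whitelist=orders,customers
# Replicator typically also requires byte-array converters and license settings;
# see the Confluent Replicator documentation for a complete configuration.
```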