Kafka Connectors
You can use self-managed Apache Kafka® connectors to move data in and out of Kafka. The self-managed connectors are for use with Confluent Platform. For more information about fully managed connectors, see Confluent Cloud.
![../_images/jdbc.png](../_images/jdbc.png)
JDBC Source and Sink
The Kafka Connect JDBC Source connector imports data from any relational database with a JDBC driver into a Kafka topic. The Kafka Connect JDBC Sink connector exports data from Kafka topics to any relational database with a JDBC driver.
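As an illustration, a minimal JDBC Source configuration might look like the following sketch. The connection URL, credentials, column name, and topic prefix are placeholders you would replace with your own values:

```json
{
  "name": "jdbc-source-example",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db.example.com:5432/inventory",
    "connection.user": "connect",
    "connection.password": "secret",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "topic.prefix": "jdbc-"
  }
}
```

A configuration like this is typically submitted as JSON to the Kafka Connect REST API (for example, `POST /connectors`); each table is then streamed to a topic named with the given prefix.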
![../_images/jms.jpg](../_images/jms.jpg)
JMS Source
The Kafka Connect JMS Source connector is used to move messages from any JMS-compliant broker into Kafka.
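A hedged sketch of a JMS Source configuration is shown below. The JNDI factory, broker URL, queue name, and target topic are illustrative placeholders and depend on your JMS provider:

```json
{
  "name": "jms-source-example",
  "config": {
    "connector.class": "io.confluent.connect.jms.JmsSourceConnector",
    "java.naming.factory.initial": "org.apache.activemq.jndi.ActiveMQInitialContextFactory",
    "java.naming.provider.url": "tcp://broker.example.com:61616",
    "jms.destination.name": "orders-queue",
    "jms.destination.type": "queue",
    "kafka.topic": "orders",
    "confluent.topic.bootstrap.servers": "kafka.example.com:9092"
  }
}
```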
![https://api.hub.confluent.io/api/plugins/confluentinc/kafka-connect-elasticsearch/versions/10.0.0/assets/elasticsearch.jpg](https://api.hub.confluent.io/api/plugins/confluentinc/kafka-connect-elasticsearch/versions/10.0.0/assets/elasticsearch.jpg)
Elasticsearch Service Sink
The Kafka Connect Elasticsearch Service Sink connector moves data from Kafka to Elasticsearch. It writes data from a topic in Kafka to an index in Elasticsearch.
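For example, a minimal Elasticsearch Service Sink configuration might resemble the following sketch; the Elasticsearch URL and topic name are placeholders, and `key.ignore`/`schema.ignore` are set here only to keep the example self-contained for schemaless data:

```json
{
  "name": "elasticsearch-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "connection.url": "http://elasticsearch.example.com:9200",
    "topics": "orders",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}
```

With this configuration, records from the `orders` topic are written to an Elasticsearch index of the same name.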
![../_images/s3.png](../_images/s3.png)
Amazon S3 Sink
The Kafka Connect Amazon S3 Sink connector exports data from Kafka topics to S3 objects in Avro, JSON, or Bytes format.
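A minimal S3 Sink configuration, as a sketch, could look like this. The bucket name, region, and topic are placeholders; `format.class` selects the output format (Avro here), and `flush.size` controls how many records are batched into each S3 object:

```json
{
  "name": "s3-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "orders",
    "s3.bucket.name": "my-kafka-archive",
    "s3.region": "us-west-2",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
    "flush.size": "1000"
  }
}
```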
![../_images/hdfs.png](../_images/hdfs.png)
HDFS 2 Sink
The Kafka Connect HDFS 2 Sink connector allows you to export data from Apache Kafka® topics to HDFS 2.x files in a variety of formats. The connector integrates with Hive to make data immediately available for querying with HiveQL.
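To illustrate the Hive integration described above, a sketch of an HDFS 2 Sink configuration might look like the following. The NameNode and Hive metastore URIs are placeholders for your cluster's endpoints:

```json
{
  "name": "hdfs2-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "topics": "orders",
    "hdfs.url": "hdfs://namenode.example.com:8020",
    "flush.size": "1000",
    "hive.integration": "true",
    "hive.metastore.uris": "thrift://metastore.example.com:9083",
    "schema.compatibility": "BACKWARD"
  }
}
```

With `hive.integration` enabled, the connector registers the exported files with the Hive metastore so the data can be queried with HiveQL as soon as it lands.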
![../_images/replicator.png](../_images/replicator.png)
Replicator
Replicator allows you to easily and reliably replicate topics from one Kafka cluster to another.
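As a sketch, a Replicator configuration replicating two topics between clusters might resemble the following; the bootstrap server addresses and topic list are placeholders. The byte-array converters pass records through unchanged so the destination topics are byte-for-byte copies:

```json
{
  "name": "replicator-example",
  "config": {
    "connector.class": "io.confluent.connect.replicator.ReplicatorSourceConnector",
    "src.kafka.bootstrap.servers": "src-cluster.example.com:9092",
    "dest.kafka.bootstrap.servers": "dest-cluster.example.com:9092",
    "topic.whitelist": "orders,customers",
    "key.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
    "value.converter": "io.confluent.connect.replicator.util.ByteArrayConverter"
  }
}
```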