Manually Installing Community Connectors
This topic describes how to install community connectors that are not available from Confluent Hub. If a connector is not available on Confluent Hub, you must first obtain or build the JARs, and then install the connectors into your Kafka installation.
Confluent Hub hosts many popular connectors developed by companies, open source organizations, and individuals. If a connector is available on Confluent Hub, you can skip this topic.
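For connectors that are available on Confluent Hub, the confluent-hub command-line client handles installation for you; the manual steps below are only needed when a connector is not hosted there. A typical invocation looks like the following (the connector coordinates shown are illustrative):

```
confluent-hub install confluentinc/kafka-connect-hdfs:latest
```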
In the following example, the HDFS Sink Connector is installed manually.
Clone the GitHub repo for the connector.
git clone https://github.com/confluentinc/kafka-connect-hdfs.git
Navigate to the cloned repo, check out the version you want (typically a released version), and build the JAR with Maven. This example uses version v3.0.1:
cd kafka-connect-hdfs
git checkout v3.0.1
mvn package
Locate the connector’s uber JAR or plugin directory, and copy that into one of the directories on the Kafka Connect worker’s plugin path. For example, if the plugin path includes the
/usr/local/share/kafka/plugins directory, you can use one of the following techniques to make the connector available as a plugin.
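The worker discovers plugins through its plugin.path setting. A minimal fragment of a worker configuration file (the path shown is an example) might look like:

```
plugin.path=/usr/local/share/kafka/plugins
```

plugin.path accepts a comma-separated list of directories, so you can keep community connectors in a separate directory from bundled ones.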
If the connector's build creates an uber JAR file named kafka-connect-hdfs-3.0.1-package.jar, copy that file into the /usr/local/share/kafka/plugins/ directory:
cp target/kafka-connect-hdfs-3.0.1-package/share/java/kafka-connect-hdfs/kafka-connect-hdfs-3.0.1-package.jar /usr/local/share/kafka/plugins/
Or, if the connector's JARs are collected in one of the build's target directories, copy all of these JARs into a new plugin directory inside /usr/local/share/kafka/plugins/:
mkdir -p /usr/local/share/kafka/plugins/kafka-connect-hdfs
cp target/kafka-connect-hdfs-3.0.1-package/share/java/kafka-connect-hdfs/* /usr/local/share/kafka/plugins/kafka-connect-hdfs/
If you’re running Kafka Connect distributed worker processes, you must repeat these steps on all of your machines. Every connector must be available on all workers, since Kafka Connect will distribute the connector tasks to any of the workers.
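The distribution step above can be sketched as a small loop. This is a hedged sketch only: the worker host names and the plugin path are illustrative assumptions, and the copy is shown as a dry run.

```shell
# Sketch: distribute an installed plugin directory to every Connect worker.
# Host names and the plugin path are illustrative; adapt to your environment.
PLUGIN_DIR=/usr/local/share/kafka/plugins/kafka-connect-hdfs
WORKERS="worker1 worker2 worker3"

for host in $WORKERS; do
  # Dry run: prints the command it would run; drop the echo to copy for real.
  echo rsync -a "$PLUGIN_DIR/" "$host:$PLUGIN_DIR/"
done
```

Note that Kafka Connect workers scan the plugin path at startup, so restart each worker after installing a new connector.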