.. _connect_community_connectors:
Manually Installing Community Connectors
----------------------------------------

This topic describes how to install community connectors that are not available from |c-hub|.
If a connector is not available on |c-hub|, you must first obtain or build the JARs, and then install the
connectors into your |ak-tm| installation.

.. important:: |c-hub| hosts many popular connectors developed by companies,
   open-source organizations, and individuals. If a connector is available on
   |c-hub|, you can skip this topic.

In the following example, the HDFS Sink Connector is installed manually.

#. Clone the GitHub repo for the connector:

   .. codewithvars:: bash

      git clone https://github.com/confluentinc/kafka-connect-hdfs.git

#. Navigate to your cloned repo, check out the version you want, and build the JAR with Maven.
   Typically you will want to check out a released version. This example uses the ``v3.0.1`` release tag:

   .. codewithvars:: bash

      cd kafka-connect-hdfs; git checkout v3.0.1; mvn package

#. Locate the connector's :ref:`uber JAR or plugin directory `,
   and copy it into one of the directories on the |kconnect-long| worker's :ref:`plugin path `.
   For example, if the plugin path includes the ``/usr/local/share/kafka/plugins`` directory, you can use one of
   the following techniques to make the connector available as a plugin.

   If the connector build creates an uber JAR file named ``kafka-connect-hdfs-3.0.1-package.jar``, copy
   that file into the ``/usr/local/share/kafka/plugins`` directory:

   .. codewithvars:: bash

      cp target/kafka-connect-hdfs-3.0.1-package/share/java/kafka-connect-hdfs/kafka-connect-hdfs-3.0.1-package.jar /usr/local/share/kafka/plugins/

   Or, if the connector's JARs are collected in one of the build's ``target`` directories, copy all of those
   JARs into a new plugin directory inside ``/usr/local/share/kafka/plugins``:

   .. codewithvars:: bash

      mkdir -p /usr/local/share/kafka/plugins/kafka-connect-hdfs
      cp target/kafka-connect-hdfs-3.0.1-package/share/java/kafka-connect-hdfs/* /usr/local/share/kafka/plugins/kafka-connect-hdfs/

#. If you're running distributed |kconnect-long| worker processes, you must repeat these steps on every
   machine. Every connector must be available on all workers, since |kconnect-long| can assign connector
   tasks to any of them.
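
The plugin path referenced above is set with the ``plugin.path`` property in each worker's configuration
file. A minimal sketch of the relevant line (the surrounding file contents and file name vary by
installation):

.. codewithvars:: properties

   # Comma-separated list of directories the worker scans for connector plugins.
   # /usr/local/share/kafka/plugins is the example directory used above.
   plugin.path=/usr/local/share/kafka/plugins

Workers scan the plugin path only at startup, so restart each worker after copying new plugin files into
one of these directories.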
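
After the workers restart, you can confirm that the new plugin was loaded by querying the |kconnect-long|
REST API's ``/connector-plugins`` endpoint. The host and port below assume a worker listening on the
default ``8083``:

.. codewithvars:: bash

   # Lists the connector plugin classes installed on this worker as a JSON array.
   curl -s http://localhost:8083/connector-plugins

If the HDFS Sink Connector was installed correctly, the response includes its class,
``io.confluent.connect.hdfs.HdfsSinkConnector``.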