.. _connect_jdbc:

JDBC Connector (Source and Sink) for |cp|
=========================================

You can use the |kconnect-long| JDBC source connector to import data from any
relational database with a JDBC driver into |ak-tm| topics. You can use the
JDBC sink connector to export data from |ak| topics to any relational database
with a JDBC driver. The JDBC connector supports a wide variety of databases
without requiring custom code for each one.

Install the JDBC Connector
--------------------------

.. include:: ../includes/connector-native-install.rst

If you are running a multi-node |kconnect| cluster, the JDBC connector and JDBC
driver JARs must be installed on every |kconnect| worker in the cluster. See
below for details.

.. include:: ../includes/connector-install-hub.rst

.. codewithvars:: bash

   confluent-hub install confluentinc/kafka-connect-jdbc:latest

.. include:: ../includes/connector-install-version.rst

.. codewithvars:: bash

   confluent-hub install confluentinc/kafka-connect-jdbc:|release|

--------------------------
Install Connector Manually
--------------------------

`Download and extract the ZIP file `_ for your connector and then follow the
manual connector installation :ref:`instructions `.

License
-------

.. include:: ../includes/community-license.rst

Installing JDBC Drivers
-----------------------

The JDBC source and sink connectors use the `Java Database Connectivity (JDBC)
API `_, which enables applications to connect to and use a wide range of
database systems. For this to work, the connectors must have a *JDBC driver*
for each database system you will use.

The connector comes with JDBC drivers for a few database systems, but before
you use the connector with other database systems, you must install the most
recent JDBC 4.0 drivers for those database systems. Although the details vary
for each JDBC driver, the basic steps are:

#. Find the JDBC 4.0 driver JAR file for each database system that will be
   used.
#. Place these JAR files into the ``share/java/kafka-connect-jdbc`` directory
   in your |cp| installation on each of the |kconnect| worker nodes.
#. Restart all of the |kconnect| worker nodes.

The rest of this section outlines the specific steps for the more common
database management systems.

------------------
General Guidelines
------------------

The following are additional guidelines to consider:

* Use the most recent version of the JDBC 4.0 driver available. The latest
  version of a JDBC driver supports most versions of the database management
  system and includes more bug fixes.

* Use the correct JAR file for the Java version used to run the |kconnect|
  workers. Some JDBC drivers have a single JAR that works on multiple Java
  versions. Other drivers have one JAR for Java 8 and a different JAR for
  Java 10 or 11. Make sure to use the correct JAR file for the Java version in
  use. If you install and try to use the JDBC driver JAR file for the wrong
  version of Java, starting any JDBC source connector or JDBC sink connector
  will likely fail with an ``UnsupportedClassVersionError``. If this happens,
  remove the JDBC driver JAR file you installed and repeat the driver
  installation process with the correct JAR file.

* The ``share/java/kafka-connect-jdbc`` directory mentioned above is for |cp|.
  If you are using a different installation, find the location where the
  Confluent JDBC source and sink connector JAR files are located, and place the
  JDBC driver JAR file(s) for the target databases into the same directory.
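
  For example, the following is a minimal sketch of locating that directory and
  copying a driver JAR into it. The search path and the driver file name are
  hypothetical placeholders; substitute the values for your installation.

  .. codewithvars:: bash

     # Locate the directory that contains the kafka-connect-jdbc connector JARs
     # (the search root here is only an example).
     find /opt -name 'kafka-connect-jdbc-*.jar' 2>/dev/null

     # Copy the JDBC driver JAR (hypothetical file name) into that directory on
     # every Connect worker node, and then restart the workers.
     cp /tmp/my-database-jdbc-driver.jar /opt/confluent/share/java/kafka-connect-jdbc/
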
* If the JDBC driver specific to the database management system is not
  installed correctly, the JDBC source or sink connector will fail on startup.
  Typically, the system throws the error ``No suitable driver found``. If this
  happens, install the JDBC driver again by following the instructions.

--------------------
Microsoft SQL Server
--------------------

The JDBC source and sink connectors include the open source `jTDS JDBC driver `_
to read from and write to Microsoft SQL Server. Because the JDBC 4.0 driver is
included, no additional steps are necessary before running a connector against
Microsoft SQL Server.

Alternatively, you can remove the jTDS JDBC driver and install the open source
`Microsoft JDBC driver `_. First, download the latest version of the JDBC
driver archive (for example, ``sqljdbc_7.2.2.0_enu.tar.gz`` for English),
extract the contents of the file to a temporary directory, and find the correct
JAR file for your version of Java. For example, if downloading the 7.2.2.0
version of the driver, find *either* ``mssql-jdbc-7.2.2.jre8.jar`` if running
|kconnect| on Java 8 *or* ``mssql-jdbc-7.2.2.jre11.jar`` if running |kconnect|
on Java 11.

Then, perform the following steps on each of the |kconnect| worker nodes before
deploying a JDBC source or sink connector:

#. Remove the existing ``share/java/kafka-connect-jdbc/jtds-1.3.1.jar`` file
   from the |cp| installation.
#. Install the Microsoft JDBC driver JAR file into the
   ``share/java/kafka-connect-jdbc/`` directory in the |cp| installation.
#. Restart the |kconnect| worker.

If you install the JDBC driver JAR file for the wrong version of Java and try
to start a JDBC source connector or JDBC sink connector that uses a SQL Server
database, the connector will likely fail with an
``UnsupportedClassVersionError``. If this happens, remove the JDBC driver JAR
file and repeat the driver installation process with the correct JAR file.

.. include:: includes/kerberos-msSQL.rst

-------------------
PostgreSQL Database
-------------------

The JDBC source and sink connectors include the open source `PostgreSQL JDBC
4.0 driver `_ to read from and write to a PostgreSQL database server. Because
the JDBC 4.0 driver is included, no additional steps are necessary before
running a connector against PostgreSQL databases.

---------------
Oracle Database
---------------

Oracle provides a number of `JDBC drivers for Oracle `_. Find the latest
version and download *either* ``ojdbc8.jar`` if running |kconnect| on Java 8
*or* ``ojdbc10.jar`` if running |kconnect| on Java 11. Then, place this one JAR
file into the ``share/java/kafka-connect-jdbc`` directory in your |cp|
installation and restart all of the |kconnect| worker nodes.

If you download a ``tar.gz`` file with the JDBC driver and companion JARs,
extract the contents of the ``tar.gz`` file to a temporary directory, and use
the readme file to determine which JAR files are required. Copy the JDBC driver
JAR file and any required companion JAR files into the
``share/java/kafka-connect-jdbc`` directory in your |cp| installation on each
of the |kconnect| worker nodes, and then restart all of the |kconnect| worker
nodes.

If you install the JDBC driver JAR file for the wrong version of Java and try
to start a JDBC source connector or JDBC sink connector that uses an Oracle
database, the connector will likely fail with an
``UnsupportedClassVersionError``. If this happens, remove the JDBC driver JAR
file and repeat the driver installation process with the correct JAR file.
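
As a concrete recap of the steps above, the following is a minimal sketch of
installing the Oracle driver on a single |kconnect| worker node. The download
location, the |cp| installation path, and the service name are assumptions;
adjust them to match how your workers are installed and managed.

.. codewithvars:: bash

   # Copy the driver JAR that matches the worker's Java version
   # (ojdbc8.jar for Java 8; use ojdbc10.jar instead for Java 11).
   cp ~/Downloads/ojdbc8.jar /opt/confluent/share/java/kafka-connect-jdbc/

   # Restart the Connect worker so it picks up the new driver. The systemd unit
   # name below is only an example; restart the worker in whatever way your
   # deployment manages it.
   sudo systemctl restart confluent-kafka-connect

Repeat these steps on every |kconnect| worker node in the cluster.
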
-------
IBM DB2
-------

IBM provides a number of `JDBC drivers for DB2 `_ that depend on the version of
DB2. In general, pick the most recent JDBC 4.0 driver, and choose one of the
download options. Extract and find the ``db2jcc4.jar`` file within the
downloaded ``tar.gz`` file, and place *only* the ``db2jcc4.jar`` file into the
``share/java/kafka-connect-jdbc`` directory in your |cp| installation.

For example, if you downloaded a compressed ``tar.gz`` file (e.g.,
``v10.5fp10_jdbc_sqlj.tar.gz``), perform the following steps:

#. Extract the contents of the ``tar.gz`` file into a temporary directory.
#. Find the ZIP file (e.g., ``db2_db2driver_for_jdbc_sqlj``) in the extracted
   files.
#. Extract the contents of the ZIP file to a different temporary directory.
#. Find the ``db2jcc4.jar`` file and copy it into the
   ``share/java/kafka-connect-jdbc`` directory in your |cp| installation on
   each of the |kconnect| worker nodes, and then restart all of the |kconnect|
   worker nodes.
#. Remove the two temporary directories.

.. note::

   Do not place any other files from the IBM download into the
   ``share/java/kafka-connect-jdbc`` directory in your |cp| installation.

UPSERT for DB2 running on AS/400 is not currently supported with the Confluent
JDBC Connector.

------------
MySQL Server
------------

MySQL provides the `Connector/J JDBC driver for MySQL `_ for a number of
platforms. Choose the **Platform Independent** option, and download the
**Compressed TAR Archive**. This file contains both the JAR file and the source
code. Extract the contents of this ``tar.gz`` file to a temporary directory.
One of the extracted files is the driver JAR file (for example,
``mysql-connector-java-8.0.16.jar``). *Copy only this JAR file* into the
``share/java/kafka-connect-jdbc`` directory in your |cp| installation on each
of the |kconnect| worker nodes, and then restart all of the |kconnect| worker
nodes.

--------
SAP HANA
--------

SAP provides the `SAP HANA JDBC Driver `_ and makes it available on `Maven
Central `_. Download the latest version of the JAR file (for example,
``ngdbc-2.4.56.jar``) and place it into the ``share/java/kafka-connect-jdbc``
directory in your |cp| installation on each of the |kconnect| worker nodes, and
then restart all of the |kconnect| worker nodes.

------------------------
SQLite Embedded Database
------------------------

The JDBC source and sink connectors include the open source `SQLite JDBC 4.0
driver `_ to read from and write to a local SQLite database. Because SQLite is
an embedded database, this configuration is intended mainly for demonstration
purposes.

---------------
Other Databases
---------------

Find the JDBC 4.0 driver JAR file(s) for other databases, and place only the
required JAR file(s) into the ``share/java/kafka-connect-jdbc`` directory in
your |cp| installation on each of the |kconnect| worker nodes, and then restart
all of the |kconnect| worker nodes.

Suggested Reading
-----------------

Blog post: `Kafka Connect Deep Dive – JDBC Source Connector `__

Additional Documentation
------------------------

.. toctree::
   :maxdepth: 1

   source-connector/index
   sink-connector/index
   changelog