.. meta::
   :description: Kafka Connect, an open source component of Apache Kafka, is a framework for connecting Kafka with external systems such as databases, key-value stores, search indexes, and file systems.

.. _kafka_connect:

Kafka Connect
=============

|kconnect-long| is a tool for scalably and reliably streaming data between |ak-tm| and other data systems. It makes it simple to quickly define connectors that move large data sets in and out of |ak|. |kconnect-long| can ingest entire databases or collect metrics from all your application servers into |ak| topics, making the data available for stream processing with low latency. An export connector can deliver data from |ak| topics into secondary indexes like Elasticsearch, or into batch systems such as Hadoop for offline analysis.

This page describes how |kconnect-long| works and includes important |kconnect-long| terms and :ref:`key concepts <connect_concepts>`. You'll learn what |kconnect-long| is, including its benefits and framework, and gain the understanding you need to put your data in motion.

.. include:: ../.hidden/docs-common/home/includes/cloud-platform-cta.rst

What is Kafka Connect?
----------------------

|kconnect-long| is a free, open-source component of |ak-tm| that serves as a centralized data hub for simple data integration between databases, key-value stores, search indexes, and file systems. You can use |kconnect-long| to stream data between |ak-tm| and other data systems and to quickly create connectors that move large data sets in and out of |ak|.

Benefits of Kafka Connect
^^^^^^^^^^^^^^^^^^^^^^^^^

|kconnect-long| provides the following benefits:

* **Data-centric pipeline**: |kconnect| uses meaningful data abstractions to pull or push data to |ak|.
* **Flexibility and scalability**: |kconnect| runs with streaming and batch-oriented systems on a single node (standalone) or scaled to an organization-wide service (distributed).
* **Reusability and extensibility**: |kconnect| lets you leverage existing connectors or extend them to fit your needs, reducing time to production.

Because |kconnect-long| focuses on streaming data to and from |ak|, it is simpler to write high-quality, reliable, and high-performance connector plugins, and the framework can make guarantees that are difficult to achieve with other frameworks. Combined with |ak| and a stream processing framework, |kconnect-long| is an integral component of an ETL pipeline.

How Kafka Connect Works
^^^^^^^^^^^^^^^^^^^^^^^

The |kconnect-long| framework allows you to ingest entire databases or collect metrics from all your application servers into |ak| topics, making the data available for stream processing with low latency. An export connector, for example, can deliver data from |ak| topics into secondary indexes like Elasticsearch, or into batch systems such as Hadoop for offline analysis.

You can deploy |kconnect-long| as a standalone process that runs jobs on a single machine (for example, log collection), or as a distributed, scalable, fault-tolerant service supporting an entire organization. |kconnect-long| provides a low barrier to entry and low operational overhead. You can start small with a standalone environment for development and testing, and then scale up to a full production environment to support the data pipeline of a large organization.

To deploy |kconnect-long| in your environment, see :ref:`connect_userguide`.
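For example, the two deployment options map to two startup scripts that ship with |cp|. The following is a minimal sketch, assuming the |cp| ``bin`` directory is on your ``PATH`` and using hypothetical property file names:

.. sourcecode:: bash

   # Standalone mode: a single process runs the worker and its connectors,
   # with each connector configuration passed as a properties file.
   connect-standalone worker.properties my-connector.properties

   # Distributed mode: run the same command on each node; workers that share
   # a group.id form one Connect cluster and balance work among themselves.
   connect-distributed worker.properties

In distributed mode, connector configurations are not passed on the command line; instead, you submit them to any worker in the cluster using the |kconnect| REST API.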
.. _connect_concepts:

Kafka Connect Concepts
----------------------

This section describes the following |kconnect-long| concepts:

- :ref:`Connectors <connect_connectors>`: The high-level abstraction that coordinates data streaming by managing tasks
- :ref:`Tasks <connect_tasks>`: The implementation of how data is copied to or from |ak|
- :ref:`Workers <connect_workers>`: The running processes that execute connectors and tasks
- :ref:`Converters <connect_converters>`: The code used to translate data between |kconnect| and the system sending or receiving data
- :ref:`Transforms <connect_transforms>`: Simple logic to alter each message produced by or sent to a connector
- :ref:`Dead Letter Queue <dead-letter-queues>`: How |kconnect| handles connector errors

.. _connect_connectors:

Connectors
^^^^^^^^^^

|kconnect-long| includes two types of connectors:

- **Source connector**: Source connectors ingest entire databases and stream table updates to |ak| topics. Source connectors can also collect metrics from all your application servers and store the data in |ak| topics, making the data available for stream processing with low latency.
- **Sink connector**: Sink connectors deliver data from |ak| topics to secondary indexes, such as Elasticsearch, or to batch systems, such as Hadoop, for offline analysis.

Confluent offers several `pre-built connectors `__ that can be used to stream data to or from commonly used systems, such as relational databases or HDFS. To discuss the inner workings of |kconnect-long| efficiently, it is helpful to establish a few major concepts first.

.. tip:: :ccloud-cta:`Confluent Cloud|` offers pre-built, fully managed |ak| connectors that make it easy to instantly connect to popular data sources and sinks. With a simple GUI-based configuration and elastic scaling with no infrastructure to manage, |ccloud| connectors make moving data in and out of |ak| an effortless task, giving you more time to focus on application development. For information about |ccloud| connectors, see :cloud:`Connect External Systems to Confluent Cloud|connectors/index.html`.

Connectors in |kconnect-long| define where data should be copied to and from. A **connector instance** is a logical job that is responsible for managing the copying of data between |ak| and another system. All of the classes that implement or are used by a connector are defined in a **connector plugin**. Both connector instances and connector plugins may be referred to as "connectors", but it should always be clear from the context which is meant (for example, :connect-common:`"install a connector"|userguide.html#installing-kconnect-plugins` refers to the plugin, and "check the status of a connector" refers to a connector instance).

Confluent encourages users to leverage `existing connectors `_. However, it is possible to write a new connector plugin from scratch. At a high level, a developer who wants to write a new connector plugin should follow the workflow shown below. Further information is available in the :ref:`developer guide `.

.. figure:: images/connector-model-simple.png
   :align: center

.. _connect_tasks:

Tasks
^^^^^

Tasks are the main actors in the data model for |kconnect|. Each connector instance coordinates a set of tasks that copy data. By allowing the connector to break a single job into many tasks, |kconnect-long| provides built-in support for parallelism and scalable data copying with minimal configuration. Tasks themselves have no state stored within them. Rather, a task's state is stored in special topics in |ak|, ``config.storage.topic`` and ``status.storage.topic``, and managed by the associated connector.
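As a minimal sketch of how a connector instance fans work out across tasks, the following REST call submits a connector to a distributed cluster. The worker address and connector class are hypothetical placeholders; the framework-level ``tasks.max`` property caps how many tasks the connector may create:

.. sourcecode:: bash

   # Submit a connector instance; the connector may create up to three tasks
   # to copy data in parallel (class name and worker address are placeholders).
   curl -X POST -H "Content-Type: application/json" \
        --data '{
          "name": "example-source",
          "config": {
            "connector.class": "com.example.ExampleSourceConnector",
            "tasks.max": "3"
          }
        }' \
        http://localhost:8083/connectors

The connector may run fewer tasks than ``tasks.max`` if the work cannot be divided that finely.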
Tasks may be started, stopped, or restarted at any time to provide a resilient and scalable data pipeline.

.. figure:: images/data-model-simple.png
   :align: center

   High-level representation of data passing through a |kconnect| source task into |ak|. Note that internal offsets are stored either in |ak| or on disk rather than within the task itself.

Task rebalancing
""""""""""""""""

When a connector is first submitted to the cluster, the workers rebalance the full set of connectors in the cluster and their tasks so that each worker has approximately the same amount of work. This rebalancing procedure is also used when connectors increase or decrease the number of tasks they require, or when a connector's configuration is changed. When a worker fails, tasks are rebalanced across the active workers. When a task fails, no rebalance is triggered, because a task failure is considered an exceptional case. Failed tasks are therefore not restarted by the framework and should be restarted using the :connect-common:`REST API|monitoring.html`.

.. figure:: images/task-failover.png
   :align: center

   Task failover example showing how tasks rebalance in the event of a worker failure.

.. _connect_workers:

Workers
^^^^^^^

Connectors and tasks are logical units of work and must be scheduled to execute in a process. |kconnect-long| calls these processes **workers** and has two types of workers: :ref:`standalone <standalone-workers>` and :ref:`distributed <distributed-workers>`.

.. _standalone-workers:

Standalone workers
""""""""""""""""""

Standalone mode is the simplest mode, where a single process is responsible for executing all connectors and tasks. Because it is a single process, it requires minimal configuration. Standalone mode is convenient for getting started, during development, and in certain situations where only one process makes sense, such as collecting logs from a host. However, because there is only a single process, it also has more limited functionality: scalability is limited to that one process, and there is no fault tolerance beyond any monitoring you add to it.

.. _distributed-workers:

Distributed workers
"""""""""""""""""""

Distributed mode provides scalability and automatic fault tolerance for |kconnect-long|. In distributed mode, you start many worker processes using the same ``group.id``, and they coordinate to schedule execution of connectors and tasks across all available workers. If you add a worker, shut down a worker, or a worker fails unexpectedly, the rest of the workers detect this and coordinate to redistribute connectors and tasks across the updated set of available workers. Note the similarity to consumer group rebalancing: behind the scenes, |kconnect| workers use consumer groups to coordinate and rebalance. All workers with the same ``group.id`` belong to the same |kconnect| cluster. For example, if worker A has ``group.id=connect-cluster-a`` and worker B has the same ``group.id``, worker A and worker B form a cluster called ``connect-cluster-a``.

.. figure:: images/worker-model-basics.png
   :align: center

   A three-node |kconnect-long| distributed mode cluster. Connectors (monitoring the source or sink system for changes that require reconfiguring tasks) and tasks (copying a subset of a connector's data) are balanced across the active workers. The division of work between tasks is shown by the partitions that each task is assigned.
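As a minimal sketch of how workers are tied into one cluster, the following distributed worker properties are illustrative; the broker address and topic names are placeholders, but every worker that should join the same cluster must share the same ``group.id`` and internal topics:

.. sourcecode:: bash

   bootstrap.servers=localhost:9092

   # Workers that share this group.id form one Connect cluster.
   group.id=connect-cluster-a

   # Internal topics where the cluster stores connector configurations,
   # source offsets, and connector/task status.
   config.storage.topic=connect-configs
   offset.storage.topic=connect-offsets
   status.storage.topic=connect-status

   # Worker-wide default converters for record keys and values.
   key.converter=org.apache.kafka.connect.json.JsonConverter
   value.converter=org.apache.kafka.connect.json.JsonConverter

The converters named here are worker-wide defaults; individual connectors can override them, as described in the next section.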
.. _connect_converters:

Converters
^^^^^^^^^^

Converters are required for a |kconnect-long| deployment to support a particular data format when writing to, or reading from, |ak|. Tasks use converters to change the format of data from bytes to a |kconnect| internal data format and vice versa.

By default, |cp| provides the following converters:

.. include:: includes/converter-list.rst

Converters are decoupled from connectors themselves to allow for the reuse of converters between connectors. For example, using the same Avro converter, the JDBC Source Connector can write Avro data to |ak|, and the HDFS Sink Connector can read Avro data from |ak|. This means the same converter can be used even though, for example, the JDBC source returns a ``ResultSet`` that is eventually written to HDFS as a Parquet file.

The following graphic shows how converters are used to read from a database with a JDBC Source Connector, write to |ak|, and finally write to HDFS with an HDFS Sink Connector.

.. figure:: images/converter-basics.png
   :width: 800px
   :alt: How converters are used for a source and sink data transfer

You can use the following built-in primitive converters with |kconnect|:

- ``org.apache.kafka.connect.converters.DoubleConverter``: Serializes to and deserializes from DOUBLE values. When converting from bytes to |kconnect| format, the converter returns an optional FLOAT64 schema.
- ``org.apache.kafka.connect.converters.FloatConverter``: Serializes to and deserializes from FLOAT values. When converting from bytes to |kconnect| format, the converter returns an optional FLOAT32 schema.
- ``org.apache.kafka.connect.converters.IntegerConverter``: Serializes to and deserializes from INTEGER values. When converting from bytes to |kconnect| format, the converter returns an optional INT32 schema.
- ``org.apache.kafka.connect.converters.LongConverter``: Serializes to and deserializes from LONG values. When converting from bytes to |kconnect| format, the converter returns an optional INT64 schema.
- ``org.apache.kafka.connect.converters.ShortConverter``: Serializes to and deserializes from SHORT values. When converting from bytes to |kconnect| format, the converter returns an optional INT16 schema.

For detailed information about converters, see :connect-common:`Configuring Key and Value Converters|userguide.html#configuring-key-and-value-converters`. For information about how converters and |sr| work together, see :ref:`schemaregistry_kafka_connect`. You can also view `Converters and Serialization Explained `__ if you'd like to dive deeper into converters.

.. _connect_transforms:

Transforms
^^^^^^^^^^

Connectors can be configured with transformations to make simple and lightweight modifications to individual messages. This can be convenient for minor data adjustments and event routing, and multiple transformations can be chained together in the connector configuration. However, more complex transformations and operations that apply to many messages are best implemented with :ref:`ksql_home` and :ref:`kafka_streams`.

A transform is a simple function that accepts one record as input and outputs a modified record. All transforms provided by |kconnect-long| perform simple but commonly useful modifications. Note that you can implement the :platform:`Transformation|connect/javadocs/javadoc/org/apache/kafka/connect/transforms/Transformation.html` interface with your own custom logic, package it as a :connect-common:`Kafka Connect plugin|userguide.html#installing-kconnect-plugins`, and use it with any connector.
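For example, here is a minimal sketch of chaining two transforms that ship with |ak| in a connector configuration. The aliases (``addSource``, ``route``) and the field and topic naming are arbitrary illustrations; the ``transforms`` list and per-alias property prefixes follow the standard pattern:

.. sourcecode:: bash

   # Apply two transforms in order: insert a static field into each record
   # value, then rename the destination topic with a regex.
   transforms=addSource,route

   transforms.addSource.type=org.apache.kafka.connect.transforms.InsertField$Value
   transforms.addSource.static.field=data_source
   transforms.addSource.static.value=example-system

   transforms.route.type=org.apache.kafka.connect.transforms.RegexRouter
   transforms.route.regex=(.*)
   transforms.route.replacement=$1-transformed

Each record passes through ``addSource`` first and then ``route``, in the order they appear in the ``transforms`` list.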
When transforms are used with a source connector, |kconnect-long| passes each source record produced by the connector through the first transformation, which makes its modifications and outputs a new source record. This updated source record is then passed to the next transform in the chain, which generates a new modified source record. This continues for the remaining transforms. The final updated source record is :ref:`converted to the binary form <connect_converters>` and written to |ak|.

Transforms can also be used with sink connectors. |kconnect-long| reads messages from |ak| and :ref:`converts the binary representation to a sink record <connect_converters>`. If there is a transform, |kconnect-long| passes the record through the first transformation, which makes its modifications and outputs a new, updated sink record. The updated sink record is then passed through the next transform in the chain, which generates a new sink record. This continues for the remaining transforms, and the final updated sink record is then passed to the sink connector for processing.

For more information, see :ref:`connect_transforms_supported`.

.. include:: transforms/transforms-list.rst

.. _dead-letter-queues:

Dead Letter Queue
^^^^^^^^^^^^^^^^^

Dead Letter Queues (DLQs) are only applicable for sink connectors. Note that for |ccloud| sink connectors, a DLQ topic is autogenerated. For more information, see :cloud:`Confluent Cloud Dead Letter Queue|connectors/dead-letter-queue.html`.

An invalid record may occur for a number of reasons. One example is when a record arrives at a sink connector serialized in JSON format, but the sink connector configuration is expecting Avro format. When an invalid record can't be processed by the sink connector, the error is handled based on the connector ``errors.tolerance`` configuration property.

There are two valid values for ``errors.tolerance``:

- ``none`` (default)
- ``all``

When ``errors.tolerance`` is set to ``none``, an error or invalid record causes the connector task to fail immediately and the connector goes into a failed state. To resolve this issue, review the |kconnect-long| Worker log and do the following:

#. Examine what caused the failure.
#. Fix the issue.
#. Restart the connector.

When ``errors.tolerance`` is set to ``all``, all errors or invalid records are ignored and processing continues. No errors are written to the |kconnect| Worker log. To determine whether records are failing, you must use :ref:`internal metrics `, or count the number of records at the source and compare that with the number of records processed.

An error-handling feature is available that routes all invalid records to a special topic and reports the error. This topic contains a DLQ of records that could not be processed by the sink connector.

Create a Dead Letter Queue topic
""""""""""""""""""""""""""""""""

To create a DLQ, add the following configuration properties to your sink connector configuration:

.. sourcecode:: bash

   errors.tolerance = all
   errors.deadletterqueue.topic.name = <dead-letter-topic-name>

The following example shows a GCS Sink connector configuration with DLQ enabled:
sourcecode:: bash { "name": "gcs-sink-01", "config": { "connector.class": "io.confluent.connect.gcs.GcsSinkConnector", "tasks.max": "1", "topics": "gcs_topic", "gcs.bucket.name": "", "gcs.part.size": "5242880", "flush.size": "3", "storage.class": "io.confluent.connect.gcs.storage.GcsStorage", "format.class": "io.confluent.connect.gcs.format.avro.AvroFormat", "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner", "value.converter": "io.confluent.connect.avro.AvroConverter", "value.converter.schema.registry.url": "http://localhost:8081", "schema.compatibility": "NONE", "confluent.topic.bootstrap.servers": "localhost:9092", "confluent.topic.replication.factor": "1", "errors.tolerance": "all", "errors.deadletterqueue.topic.name": "dlq-gcs-sink-01" } } Even if the DQL topic contains the records that failed, it does not show why. You can add the following configuration property to include failed record header information. .. sourcecode:: bash errors.deadletterqueue.context.headers.enable=true Record headers are added to the DLQ when ``errors.deadletterqueue.context.headers.enable`` parameter is set to ``true``–the default is ``false``. You can then use the :ref:`kafkacat-usage` to view the record header and determine why the record failed. Errors are also sent to :connect-common:`Connect Reporter|userguide.html#userguide-connect-reporter`. To avoid conflicts with the original record header, the DLQ context header keys start with ``_connect.errors``. Here is the same example configuration with headers enabled: .. sourcecode:: bash { "name": "gcs-sink-01", "config": { "connector.class": "io.confluent.connect.gcs.GcsSinkConnector", "tasks.max": "1", "topics": "gcs_topic", "gcs.bucket.name": "", "gcs.part.size": "5242880", "flush.size": "3", "storage.class": "io.confluent.connect.gcs.storage.GcsStorage", "format.class": "io.confluent.connect.gcs.format.avro.AvroFormat", "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner", "value.converter": "io.confluent.connect.avro.AvroConverter", "value.converter.schema.registry.url": "http://localhost:8081", "schema.compatibility": "NONE", "confluent.topic.bootstrap.servers": "localhost:9092", "confluent.topic.replication.factor": "1", "errors.tolerance": "all", "errors.deadletterqueue.topic.name": "dlq-gcs-sink-01", "errors.deadletterqueue.context.headers.enable":true } } For more information about DLQs, see `Kafka Connect Deep Dive – Error Handling and Dead Letter Queues `__. Use a Dead Letter Queue with security """"""""""""""""""""""""""""""""""""" When you use |cp| with security enabled, the |cp| :ref:`Admin Client ` creates the Dead Letter Queue (DLQ) topic. Invalid records are first passed to an internal producer constructed to send these records, and then, the Admin Client creates the DLQ topic. For the DLQ to work in a secure |cp| environment, you must add additional Admin Client configuration properties (prefixed with ``admin.*``) to the |kconnect| Worker configuration. The following :ref:`SASL/PLAIN ` example shows additional |kconnect| Worker configuration properties: .. 
.. sourcecode:: bash

   admin.ssl.endpoint.identification.algorithm=https
   admin.sasl.mechanism=PLAIN
   admin.security.protocol=SASL_SSL
   admin.request.timeout.ms=20000
   admin.retry.backoff.ms=500
   admin.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
     username="" \
     password="";

For details about configuring your |kconnect| worker, sink connector, and dead letter queue topic in a Role-Based Access Control (RBAC) environment, see :ref:`connect-rbac-index`.

Related Content
---------------

- Blog post: `Kafka Connect Deep Dive – Error Handling and Dead Letter Queues `__
- Course: `Kafka Connect 101 `__
- Course: `Building Data Pipelines with Apache Kafka and Confluent `__