.. meta::
   :description: Kafka Connect, an open source component of Apache Kafka, is a framework for connecting Kafka with external systems such as databases, key-value stores, search indexes, and file systems.

.. _kafka_connect:

|kconnect-long|
===============

|kconnect-long|, an open source component of |ak-tm|, is a framework for connecting |ak| with external systems such as databases, key-value stores, search indexes, and file systems.

The information provided here is specific to |kconnect-long| for |cp|. For information about |kconnect| on |ccloud|, see `Connect to External Systems `__.

.. tip:: :ccloud-cta:`Confluent Cloud|` offers pre-built, fully managed |ak| connectors that make it easy to instantly connect to popular data sources and sinks. With simple GUI-based configuration and elastic scaling with no infrastructure to manage, |ccloud| connectors make moving data in and out of |ak| an effortless task, giving you more time to focus on app development. For information about |ccloud| connectors, see `Connect to External Systems `__.

Benefits of |kconnect-long| for |cp| include:

* **Data-Centric Pipeline** – |kconnect| uses meaningful data abstractions to pull or push data to |ak|.
* **Flexibility and Scalability** – |kconnect| runs with streaming and batch-oriented systems on a single node (standalone) or scaled to an organization-wide service (distributed).
* **Reusability and Extensibility** – |kconnect| lets you leverage existing connectors or extend them to fit your needs, reducing time to production.

|kconnect-long| is focused on streaming data to and from |ak|, making it simpler for you to write high-quality, reliable, and high-performance connector plugins. It also enables the framework to make guarantees that are difficult to achieve with other frameworks. When combined with |ak| and a stream processing framework, |kconnect-long| is an integral component of an ETL pipeline.
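The standalone and distributed modes mentioned above are started from worker scripts that ship with |ak| and |cp|. As a minimal, illustrative sketch (the property file names are placeholders for your own configuration):

.. code-block:: bash

   # Standalone mode: a single worker process; connector configuration
   # files are passed on the command line with the worker properties.
   connect-standalone worker.properties my-connector.properties

   # Distributed mode: start one or more workers that coordinate through
   # Kafka; connectors are then managed via the Connect REST API.
   connect-distributed worker.properties

In distributed mode, starting additional workers with the same ``group.id`` scales the service horizontally, and the framework rebalances connector tasks across the workers.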
|kconnect-long| can be deployed either as a standalone process that runs jobs on a single machine (for example, log collection), or as a distributed, scalable, fault-tolerant service supporting an entire organization. |kconnect-long| provides a low barrier to entry and low operational overhead. You can start small with a standalone environment for development and testing, and then scale up to a full production environment to support a large organization's data pipeline.

Source Connector
    A source connector ingests entire databases and streams table updates to |ak| topics. It can also collect metrics from all of your application servers and store these in |ak| topics, making the data available for stream processing with low latency.

Sink Connector
    A sink connector delivers data from |ak| topics into secondary indexes such as Elasticsearch, or batch systems such as Hadoop for offline analysis.

.. tip:: For a deeper dive into the benefits of using |kconnect-long|, listen to `Why Kafka Connect? featuring Robin Moffatt `__.

The following documentation links provide tutorials, concepts, and instructions for deploying |kconnect-long| in your environment.

.. toctree::
   :maxdepth: 1

   userguide
   Connecting Clients to Confluent Cloud
   devguide
   quickstart
   concepts
   references/index
   Kafka Connect Monitoring
   transforms/index
   security
   design
   faq
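As an illustration of the source connector concept described above, the following hypothetical request registers the ``FileStreamSource`` connector that ships with |ak| against a distributed worker's REST API (the connector name, file, and topic are placeholders):

.. code-block:: bash

   # Register a source connector with a Connect worker's REST API
   # (default port 8083); the worker streams lines from the file
   # into the named Kafka topic.
   curl -X POST -H "Content-Type: application/json" \
        --data '{
          "name": "local-file-source",
          "config": {
            "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
            "tasks.max": "1",
            "file": "/tmp/test.txt",
            "topic": "connect-test"
          }
        }' \
        http://localhost:8083/connectors

A sink connector is configured the same way; only the ``connector.class`` and its connector-specific settings (for example, a target index or output path) change.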