Confluent Platform is a full-scale event streaming platform that enables you to easily access,
store, and manage data as continuous, real-time streams. Built by the original creators
of Apache Kafka®, Confluent expands the benefits of Kafka with enterprise-grade
features while removing the burden of Kafka management and monitoring. Today, over
80% of the Fortune 100 leverage event streaming, and the majority of those leverage Confluent.
By integrating historical and real-time data into a single, central source of truth,
Confluent makes it easy to build an entirely new category of modern, event-driven
applications, gain a universal data pipeline, and unlock powerful new use cases with full
scalability, performance, and reliability.
What is Confluent Used For?
Confluent Platform lets you focus on how to derive business value from your data rather than
worrying about the underlying mechanics, such as how data is being transported or
integrated between disparate systems. Specifically, Confluent Platform simplifies connecting
data sources to Kafka, building streaming applications, as well as securing, monitoring,
and managing your Kafka infrastructure. Today, Confluent Platform is used for a wide array of use
cases across numerous industries, from financial services, omnichannel retail, and
autonomous cars, to fraud detection, microservices, and IoT.
Confluent Platform Components
Overview of Confluent’s Event Streaming Technology
At the core of Confluent Platform is Apache Kafka, the most popular
open source distributed streaming platform. The key capabilities of Kafka are:
- Publish and subscribe to streams of records
- Store streams of records in a fault tolerant way
- Process streams of records
Out of the box, Confluent Platform also includes Schema Registry, REST Proxy, more than
100 pre-built Kafka connectors, and ksqlDB.
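For example, ksqlDB lets you express continuous stream processing over Kafka topics in SQL. A minimal sketch (the topic name and columns here are illustrative, not from any particular deployment):

```sql
-- Declare a stream over an existing Kafka topic (names are illustrative)
CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

-- Continuously maintain a count of views per page
CREATE TABLE views_per_page AS
  SELECT page, COUNT(*) AS views
  FROM pageviews
  GROUP BY page
  EMIT CHANGES;
```

The second statement runs as a persistent query: every new record arriving on the topic updates the table incrementally.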
Kafka is used by 60% of Fortune 500 companies for a variety of
use cases, including collecting user activity data, system logs, application metrics, stock ticker data, and device instrumentation.
The key components of the Kafka open source project are Kafka Brokers and Kafka
Java Client APIs.
- Kafka Brokers
- Kafka brokers form the messaging, data persistence, and storage tier of Kafka.
- Kafka Java Client APIs
- Producer API is a Java client that allows an application to publish a stream of records to one or
more Kafka topics.
- Consumer API is a Java client that allows an application to subscribe to one or more topics and
process the stream of records produced to them.
- Streams API allows applications to act as a stream processor, consuming an input stream from
one or more topics and producing an output stream to one or more output topics, effectively transforming the input
streams to output streams. It has a very low barrier to entry, easy operationalization, and a high-level DSL for
writing stream processing applications. As such it is the most convenient yet scalable option to process and analyze
data that is backed by Kafka.
- Admin API provides the capability to create, inspect, delete,
and manage topics, brokers, ACLs, and other Kafka objects.
- Kafka Connect API
- Connect API is a component that you can use to stream data between Kafka and other data
systems in a scalable and reliable way. It makes it simple to configure connectors to move data into and out of Kafka.
Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics,
making the data available for stream processing. Connectors can also deliver data from Kafka topics into secondary
indexes like Elasticsearch or into batch systems such as Hadoop for offline analysis.
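As a concrete sketch of the Connect API, the file source connector that ships with Kafka can stream a local file into a topic using nothing but a properties file (the file path and topic name below are illustrative):

```properties
# connect-file-source.properties -- a standalone file source connector
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
# Local file to tail (illustrative path)
file=/var/log/app.log
# Destination Kafka topic
topic=app-logs
```

A standalone Connect worker can then be started with this file alongside a worker configuration, and every line appended to the file becomes a record on the topic.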
Overview of Confluent Platform’s Enterprise Features
Confluent Control Center
Confluent Control Center is a GUI-based system for managing and monitoring Kafka. It allows you to easily manage Kafka Connect: create, edit, and manage connections to other systems. It also allows you to monitor data streams
from producer to consumer, assuring that every message is delivered and measuring how long delivery takes. Using Control Center, you can build a production data pipeline based on Kafka without writing a line of code.
Control Center can also define alerts on the latency and completeness statistics of data streams, which can be delivered by email or queried from a centralized alerting system.
Confluent for Kubernetes
Confluent for Kubernetes is a Kubernetes operator. Kubernetes operators extend the
orchestration capabilities of Kubernetes by providing the unique features and
requirements for a specific platform application. For Confluent Platform, this includes
greatly simplifying the deployment process of Kafka on Kubernetes and automating
typical infrastructure lifecycle tasks.
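As a sketch of how this works, a Kafka cluster is declared as a Kubernetes custom resource and the operator reconciles the actual deployment toward it. The image versions, sizes, and namespace below are illustrative assumptions:

```yaml
# Declarative Kafka cluster for Confluent for Kubernetes (values illustrative)
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3                # three brokers
  image:
    application: confluentinc/cp-server:7.6.0
    init: confluentinc/confluent-init-container:2.8.0
  dataVolumeCapacity: 100Gi  # persistent storage per broker
```

Applying this manifest with kubectl hands lifecycle tasks such as rolling upgrades and scaling to the operator.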
See Confluent for Kubernetes for more information.
Confluent Connectors to Kafka
Connectors leverage the Kafka Connect API to connect Kafka to other systems
such as databases, key-value stores, search indexes, and file systems.
Confluent Hub has downloadable connectors for the most popular data sources and sinks.
Confluent Platform ships with fully tested and supported versions of these connectors.
Confluent provides both commercial and Community licensed connectors. See
Confluent Hub for details and to download connectors.
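Connectors are typically configured with a small JSON document submitted to the Kafka Connect REST API. A sketch for a JDBC source connector (the connector class is Confluent's JDBC source; the name, connection URL, and column are illustrative):

```json
{
  "name": "jdbc-orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db.example.com:5432/orders",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "topic.prefix": "pg-"
  }
}
```

Posting this to the Connect worker's /connectors endpoint would stream new rows from the table into Kafka topics prefixed with pg-.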
Self-Balancing Clusters
Self-Balancing Clusters provide automated load balancing, failure detection, and self-healing.
They support adding or decommissioning brokers as needed, with no manual
tuning. Self-Balancing is the next iteration of Auto Data Balancer: it continuously
monitors clusters for imbalances and automatically triggers rebalances based
on your configuration. (You can choose to auto-balance "Only when brokers are added" or "Anytime".)
Partition reassignment plans and their execution are taken care of for you.
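A sketch of the broker settings involved, assuming the configuration keys used by Confluent's Self-Balancing feature:

```properties
# server.properties -- enable Self-Balancing on each broker
confluent.balancer.enable=true
# ANY_UNEVEN_LOAD rebalances whenever load is uneven ("Anytime");
# EMPTY_BROKER rebalances only when brokers are added or removed
confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
```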
Confluent Cluster Linking
Cluster Linking directly connects clusters together and mirrors topics
from one cluster to another over a cluster link. Cluster Linking simplifies
setup of multi-datacenter, multi-cluster, and hybrid cloud deployments.
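As a sketch, a link is defined on the destination cluster by a small properties file pointing at the source, and mirror topics are then created over that link. The addresses, link name, and topic below are illustrative, and the CLI invocations shown in comments assume Confluent Platform's cluster-link tooling:

```properties
# link.properties on the destination cluster, pointing at the source
bootstrap.servers=source-cluster:9092
# The link and a mirror topic are then created with the CLI, e.g.:
#   kafka-cluster-links --bootstrap-server dest:9092 \
#       --create --link my-link --config-file link.properties
#   kafka-mirrors --bootstrap-server dest:9092 \
#       --create --mirror-topic orders --link my-link
```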
Confluent Auto Data Balancer
As clusters grow, topics and partitions grow at different rates, and brokers are added and removed; over time this leads to an
unbalanced workload across datacenter resources. Some brokers are not doing much at all, while others are heavily taxed with
large or many partitions, slowing down message delivery. When executed, Confluent Auto Data Balancer monitors
your cluster for the number of brokers, the size and number of partitions, and the number of leaders within the cluster. It lets
you shift data to create an even workload across your cluster, while throttling rebalance traffic to minimize the impact on
production workloads.
For more information, see the automatic data balancing documentation.
Confluent Replicator
Replicator makes it easier than ever to maintain multiple Kafka clusters in multiple data centers. Managing replication of data and topic configuration between data centers enables use cases such as:
- Active-active geo-localized deployments: allows users to access a near-by data center to optimize their architecture for low latency and high performance
- Centralized analytics: Aggregate data from multiple Kafka clusters into one location for organization-wide analytics
- Cloud migration: Use Kafka to synchronize data between on-prem applications and cloud deployments
You can use Replicator to configure and manage replication for all these scenarios from either Confluent Control Center or command-line tools.
To get started, see the Replicator documentation, including the quick start tutorial for Replicator.
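A sketch of a Replicator configuration for the centralized-analytics case, assuming Replicator's source/destination property names; the cluster addresses and topic list are illustrative:

```properties
# replicator.properties (addresses and topics are illustrative)
src.kafka.bootstrap.servers=dc1-kafka:9092
dest.kafka.bootstrap.servers=dc2-kafka:9092
# Replicate only these topics
topic.whitelist=orders,customers
# Keep topic configuration in sync between the clusters
topic.config.sync=true
```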
Tiered Storage
Tiered Storage provides options for storing large volumes of Kafka data
using your favorite cloud provider, thereby reducing operational burden and cost.
With Tiered Storage, you can keep data on cost-effective object storage and
scale brokers only when you need more compute resources.
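A sketch of the broker settings for tiering to S3, assuming Confluent's tiered-storage configuration keys; the bucket and region are illustrative:

```properties
# server.properties -- enable Tiered Storage backed by S3 (values illustrative)
confluent.tier.feature=true
confluent.tier.enable=true
confluent.tier.backend=S3
confluent.tier.s3.bucket=my-kafka-tiered-storage
confluent.tier.s3.region=us-west-2
```

With this in place, older log segments are offloaded to the bucket while brokers retain only recent data locally.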
Confluent JMS Client
Confluent Platform includes a JMS-compatible client for Kafka. This client implements the JMS 1.1 standard API, using
Kafka brokers as the backend. It is useful if you have legacy applications that use JMS and you would like to
replace the existing JMS message broker with Kafka: existing applications can then integrate with your
modern streaming platform without a major rewrite.
For more information, see JMS Client.
Confluent MQTT Proxy
MQTT Proxy provides a way to publish data directly to Kafka from MQTT devices and gateways without the need for an MQTT broker in the middle.
For more information, see MQTT Proxy.
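A sketch of an MQTT Proxy configuration, assuming the proxy's listener and topic-mapping properties; the addresses and regex mapping are illustrative:

```properties
# kafka-mqtt.properties (addresses and mapping are illustrative)
listeners=0.0.0.0:1883
bootstrap.servers=PLAINTEXT://kafka:9092
# Route MQTT topics matching a regex into a named Kafka topic
topic.regex.list=temperature:.*temperature.*
```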
Confluent Security Plugins
Confluent Security Plugins are used to add security capabilities to various Confluent Platform tools and products.
Currently, there is a plugin available for Confluent REST Proxy which authenticates incoming requests and propagates
the authenticated principal on requests to Kafka. This enables Confluent REST Proxy clients to utilize the multi-tenant security
features of the Kafka broker. For more information, see REST Proxy Security,
the REST Proxy Security Plugin, and
Schema Registry Security Plugin.