Manual Install using Systemd on Ubuntu and Debian
This topic provides instructions for installing a production-ready Confluent Platform configuration in a multi-node Ubuntu or Debian environment with a replicated ZooKeeper ensemble.
With this installation method, you connect to every node manually to run the Confluent Platform installation commands.
You must complete these steps for each node in your cluster.
Before installing Confluent Platform, your environment must meet the prerequisites as described in software and hardware requirements.
Get the Software
The APT repositories provide packages for Debian-based Linux distributions such as Debian and Ubuntu. You can install individual Confluent Platform packages or the entire platform. For a listing of available packages, see Confluent Platform Packages, or search the repository (apt-cache search <package-name>).
Install the Confluent public key. This key is used to sign the packages in the APT repository.
wget -qO - https://packages.confluent.io/deb/7.1/archive.key | sudo apt-key add -
Add the repository to your /etc/apt/sources.list by running this command:
After Confluent Platform 8.0, the librdkafka, Avro, and libserdes C/C++ client packages will NOT be available from the https://packages.confluent.io/deb location. You will need to obtain those client packages from https://packages.confluent.io/clients after the Confluent Platform 8.0 release.
To use the clients repository, you must obtain your Debian distribution's release "Code Name", such as focal. You can do this by calling lsb_release -cs. The following example makes this call with $(lsb_release -cs), which should work in most cases. If it does not, pick the closest Debian or Ubuntu code name for your Debian Linux distribution from the Debian and Ubuntu operating systems supported by Confluent Platform.
sudo add-apt-repository "deb [arch=amd64] https://packages.confluent.io/deb/7.1 stable main"
sudo add-apt-repository "deb https://packages.confluent.io/clients/deb $(lsb_release -cs) main"
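If lsb_release is not installed on a minimal system, a small fallback sketch such as the following can determine the code name before adding the clients repository. The "focal" fallback is only an illustrative placeholder, not a recommendation; replace it with the supported code name closest to your distribution.

```shell
# Pick the distribution code name for the clients repository.
# Prefer lsb_release; fall back to /etc/os-release, then to a
# manually chosen code name ("focal" is an illustrative placeholder).
if command -v lsb_release >/dev/null 2>&1; then
  CODENAME="$(lsb_release -cs)"
elif [ -r /etc/os-release ]; then
  CODENAME="$(. /etc/os-release && echo "${VERSION_CODENAME:-focal}")"
else
  CODENAME="focal"
fi
echo "Using code name: $CODENAME"
```

The resulting value can then be substituted for $(lsb_release -cs) in the add-apt-repository command above.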
Update apt-get and install the entire Confluent Platform.
sudo apt-get update && sudo apt-get install confluent-platform
Confluent Platform with RBAC:
sudo apt-get update && \
sudo apt-get install confluent-platform && \
sudo apt-get install confluent-security
Confluent Platform using only Confluent Community components:
sudo apt-get update && sudo apt-get install confluent-community-2.13
Configure Confluent Platform
You can store passwords and other configuration data securely by using the confluent secret commands. For more information, see Secrets Management.
Configure Confluent Platform with the individual component properties files. By default, these are located under /etc/ (for example, /etc/kafka/). You must minimally configure the following components.
These instructions assume you are running ZooKeeper in replicated mode. A minimum of three servers are required for replicated mode, and you must have an odd number of servers for failover. For more information, see the ZooKeeper documentation.
Navigate to the ZooKeeper properties file (/etc/kafka/zookeeper.properties) and modify it as shown:
tickTime=2000
dataDir=/var/lib/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
autopurge.snapRetainCount=3
autopurge.purgeInterval=24
This configuration is for a three-node ensemble. The configuration file should be identical across all nodes in the ensemble.

The tickTime, dataDir, and clientPort settings use typical single-server values. The initLimit and syncLimit settings govern how long following ZooKeeper servers can take to initialize with the current leader and how long they can be out of sync with the leader. In this configuration, a follower can take 10000 ms to initialize and can be out of sync for up to 4000 ms, based on tickTime being set to 2000 ms.

The server.* properties set the ensemble membership. The format is server.<myid>=<hostname>:<leaderport>:<electionport>, where myid is the server identification number. There are three servers that each have a different myid. The myid is set by creating a file named myid in the dataDir that contains a single integer in human-readable ASCII text. This value must match one of the myid values from the configuration file. You will see an error if another ensemble member has already started with a conflicting myid.

The leaderport is used by followers to connect to the active leader. This port should be open between all ZooKeeper ensemble members. The electionport is used to perform leader elections between ensemble members. This port should also be open between all ZooKeeper ensemble members.

The autopurge.snapRetainCount and autopurge.purgeInterval settings have been set to purge all but three snapshots every 24 hours.
Navigate to the ZooKeeper data directory (e.g., /var/lib/zookeeper/) and create a file named myid. The myid file consists of a single line that contains the machine ID in the format <machine-id>. When the ZooKeeper server starts up, it knows which server it is by referencing the myid file. For example, server 1 will have a myid value of 1.
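Creating the myid file is a one-line write. The following sketch uses a temporary directory so it can run unprivileged; on a real node, the target is the dataDir from zookeeper.properties (/var/lib/zookeeper/), typically written with sudo.

```shell
# Sketch: write this node's myid file.
# ZK_DATA_DIR stands in for the real dataDir (/var/lib/zookeeper/);
# a temporary directory is used so the example runs without root.
ZK_DATA_DIR="$(mktemp -d)"   # on a real node: /var/lib/zookeeper
MYID=1                       # unique per node: 1 on zoo1, 2 on zoo2, 3 on zoo3

printf '%s\n' "$MYID" > "$ZK_DATA_DIR/myid"
cat "$ZK_DATA_DIR/myid"
```

On server 1 of the example ensemble, the equivalent privileged command would be: echo 1 | sudo tee /var/lib/zookeeper/myid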
In a production environment, multiple brokers are required. During startup, brokers register themselves in ZooKeeper to become members of the cluster.
Navigate to the Apache Kafka® properties file (
/etc/kafka/server.properties) and customize the following:
Connect to the same ZooKeeper ensemble by setting zookeeper.connect to the same value on all nodes. Replace all instances of localhost with the hostname or FQDN (fully qualified domain name) of your ZooKeeper nodes. For example, if your hostname is zookeeper, set zookeeper.connect=zookeeper:2181.
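For a replicated ensemble, the connection string lists every ZooKeeper server. As a sketch using the zoo1, zoo2, and zoo3 hostnames from the example ensemble above (substitute your own hostnames or FQDNs):

```properties
# All brokers must use the same value; if one ZooKeeper server is
# down, the client connects to another server in the list.
zookeeper.connect=zoo1:2181,zoo2:2181,zoo3:2181
```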
Configure the broker IDs for each node in your cluster using one of these methods.
Dynamically generate the broker IDs: add broker.id.generation.enable=true and comment out broker.id. For example:
############################# Server Basics #############################

# The ID of the broker. This must be set to a unique integer for each broker.
#broker.id=0
broker.id.generation.enable=true
Manually set the broker IDs: set a unique value for broker.id on each node.
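For the manual method, a sketch of the per-node values (the node numbering is illustrative; any unique non-negative integers work):

```properties
# server.properties on node 1
broker.id=1
# on node 2 use broker.id=2, on node 3 use broker.id=3, and so on
```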
Configure how other brokers and clients communicate with the broker using listeners, and optionally advertised.listeners:
listeners: Comma-separated list of URIs and listener names to listen on.
advertised.listeners: Comma-separated list of URIs and listener names for other brokers and clients to use. The advertised.listeners parameter ensures that the broker advertises an address that is accessible from both local and external hosts.
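As a sketch, assuming a broker whose externally resolvable name is broker1.example.com (a hypothetical hostname), the two settings might look like:

```properties
# Bind to all interfaces on this host
listeners=PLAINTEXT://0.0.0.0:9092
# Advertise an address that clients and other brokers can route to
advertised.listeners=PLAINTEXT://broker1.example.com:9092
```

Binding to 0.0.0.0 while advertising a resolvable hostname is a common pattern when the broker's bind address and its externally visible address differ.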
For more information, see Production Configuration Options.
Configure security for your environment.
- For general security guidance, see Security Overview.
- For role-based access control (RBAC), see Configure Metadata Service (MDS).
- For TLS/SSL encryption, SASL authentication, and authorization, see Security Tutorial.
Navigate to the Control Center properties file (
/etc/confluent-control-center/control-center-production.properties) and customize the following:
# host/port pairs to use for establishing the initial connection to the Kafka cluster
bootstrap.servers=<hostname1:port1,hostname2:port2,hostname3:port3,...>

# location for Control Center data
confluent.controlcenter.data.dir=/var/lib/confluent/control-center

# the Confluent license
confluent.license=<your-confluent-license>

# ZooKeeper connection string with the host and port of each ZooKeeper server
zookeeper.connect=<hostname1:port1,hostname2:port2,hostname3:port3,...>
This configuration is for a three-node cluster. For more information, see Control Center configuration details. For information about Confluent Platform licenses, see Managing Confluent Platform Licenses.
Navigate to the Kafka server configuration file (
/etc/kafka/server.properties) and enable Confluent Metrics Reporter.
##################### Confluent Metrics Reporter #######################
# Confluent Control Center and Confluent Auto Data Balancer integration
#
# Uncomment the following lines to publish monitoring data for
# Confluent Control Center and Confluent Auto Data Balancer
# If you are using a dedicated metrics cluster, also adjust the settings
# to point to your metrics Kafka cluster.
metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
confluent.metrics.reporter.bootstrap.servers=localhost:9092
#
# Uncomment the following line if the metrics cluster has a single broker
confluent.metrics.reporter.topic.replicas=1
Add these lines to the Kafka Connect properties file (
/etc/kafka/connect-distributed.properties) to add support for the interceptors.
# Interceptor setup
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
Navigate to the Schema Registry properties file (/etc/schema-registry/schema-registry.properties) and specify the following properties:
# Specify the address the socket server listens on, e.g. listeners = PLAINTEXT://your.host.name:9092
listeners=http://0.0.0.0:8081

# The hostname advertised in ZooKeeper. This must be specified if you are running Schema Registry
# with multiple nodes.
host.name=192.168.50.1

# List of Kafka brokers to connect to, e.g. PLAINTEXT://hostname:9092,SSL://hostname2:9092
kafkastore.bootstrap.servers=PLAINTEXT://hostname:9092,SSL://hostname2:9092
This configuration is for a three-node cluster. For more information, see Running Schema Registry in Production.
Start Confluent Platform
Start Confluent Platform and its components using systemd service unit files. You can start services immediately by using the systemctl start command, or enable them for automatic startup at boot by using the systemctl enable command. These instructions use the syntax for immediate startup.
ZooKeeper, Kafka, and Schema Registry must be started in this specific order, and must be started before any other components.
Start ZooKeeper:
sudo systemctl start confluent-zookeeper
Start Kafka. Confluent Platform:
sudo systemctl start confluent-server
Confluent Platform using only Confluent Community components:
sudo systemctl start confluent-kafka
Start Schema Registry.
sudo systemctl start confluent-schema-registry
Start other Confluent Platform components as desired.
Control Center:
sudo systemctl start confluent-control-center
Kafka Connect:
sudo systemctl start confluent-kafka-connect
Confluent REST Proxy:
sudo systemctl start confluent-kafka-rest
ksqlDB:
sudo systemctl start confluent-ksqldb
You can check service status with the command systemctl status confluent*. For more information about the systemd service unit files, see Using Confluent Platform systemd Service Unit Files.
Run this command to remove Confluent Platform, where <component-name> is either confluent-platform (Confluent Platform) or confluent-community-2.13 (Confluent Platform using only Confluent Community components).
sudo apt-get remove <component-name>
For example, run this command to remove Confluent Platform:
sudo apt-get remove confluent-platform
Try out the Quick Start for Confluent Platform.