Manual Install using ZIP and TAR Archives¶
This topic provides instructions for installing a production-ready Confluent Platform configuration in a multi-node environment with a replicated ZooKeeper ensemble.
With this installation method, you connect to every node manually, download the archive, and run the Confluent Platform installation commands.
Important
You must complete these steps for each node in your cluster.
- Prerequisites
- Before installing Confluent Platform, ensure that your environment meets the required software and hardware prerequisites.
Get the Software¶
Go to the downloads page and choose your archive package or download directly by using curl.
- Confluent Platform
ZIP
curl -O http://packages.confluent.io/archive/5.1/confluent-5.1.4-2.11.zip
TAR
curl -O http://packages.confluent.io/archive/5.1/confluent-5.1.4-2.11.tar.gz
- Confluent Platform using only Confluent Community components
ZIP
curl -O http://packages.confluent.io/archive/5.1/confluent-community-5.1.4-2.11.zip
TAR
curl -O http://packages.confluent.io/archive/5.1/confluent-community-5.1.4-2.11.tar.gz
Tip
The package name contains the Confluent Platform version followed by the Scala version. For example, confluent-5.1.4-2.11.zip denotes Confluent Platform version 5.1.4 and Scala version 2.11.
Decompress the file.
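For example, to extract the TAR archive, which unpacks into a confluent-5.1.4 directory (a minimal sketch; use unzip instead if you downloaded the ZIP package):

tar -xzf confluent-5.1.4-2.11.tar.gz
cd confluent-5.1.4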
You should have these directories:

Folder     Description
/bin/      Driver scripts for starting and stopping services
/etc/      Configuration files
/lib/      Systemd services
/logs/     Log files
/share/    Jars and licenses
/src/      Source files that require a platform-dependent build
Configure Confluent Platform¶
Configure Confluent Platform with the individual component properties files. By default, these are located in <path-to-confluent>/etc/.
You must minimally configure the following components.
ZooKeeper¶
These instructions assume you are running ZooKeeper in replicated mode. A minimum of three servers is required for replicated mode, and you must have an odd number of servers for failover. For more information, see the ZooKeeper documentation.
Navigate to the ZooKeeper properties file (/etc/kafka/zookeeper.properties) and modify it as shown:

tickTime=2000
dataDir=/var/lib/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
autopurge.snapRetainCount=3
autopurge.purgeInterval=24
This configuration is for a three-node ensemble. The configuration file should be identical across all nodes in the ensemble. tickTime, dataDir, and clientPort are all set to typical single-server values. initLimit and syncLimit govern how long following ZooKeeper servers can take to initialize with the current leader and how long they can be out of sync with the leader. In this configuration, a follower can take 10000 ms to initialize and can be out of sync for up to 4000 ms, based on tickTime being set to 2000 ms.

The server.* properties set the ensemble membership. The format is:

server.<myid>=<hostname>:<leaderport>:<electionport>

- myid is the server identification number. There are three servers that each have a different myid with values 1, 2, and 3 respectively. The myid is set by creating a file named myid in the dataDir that contains a single integer in human-readable ASCII text. This value must match one of the myid values from the configuration file. You will see an error if another ensemble member is already started with a conflicting myid value.
- leaderport is used by followers to connect to the active leader. This port should be open between all ZooKeeper ensemble members.
- electionport is used to perform leader elections between ensemble members. This port should be open between all ZooKeeper ensemble members.
The autopurge.snapRetainCount and autopurge.purgeInterval properties have been set to purge all but three snapshots every 24 hours.

Navigate to the ZooKeeper data directory (dataDir, e.g., /var/lib/zookeeper/) and create a file named myid. The myid file consists of a single line that contains the machine ID in the format <machine-id>. When the ZooKeeper server starts up, it knows which server it is by referencing the myid file. For example, server 1 will have a myid value of 1.
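For example, on the node configured as server.1 you could create the file like this (a minimal sketch, assuming dataDir is /var/lib/zookeeper/ as configured above):

echo 1 > /var/lib/zookeeper/myid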
Kafka¶
In a production environment, multiple brokers are required. During startup, brokers register themselves in ZooKeeper to become members of the cluster.
Navigate to the Apache Kafka® properties file (/etc/kafka/server.properties) and customize the following:
Connect to the same ZooKeeper ensemble by setting zookeeper.connect in all nodes to the same value. Replace all instances of localhost with the hostname or FQDN (fully qualified domain name) of your node. For example, if your hostname is zookeeper:

zookeeper.connect=zookeeper:2181
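If you want the brokers to connect to the full three-node ensemble configured earlier, list every ensemble member in the connection string (zoo1, zoo2, and zoo3 are the example hostnames used in the ZooKeeper configuration above):

zookeeper.connect=zoo1:2181,zoo2:2181,zoo3:2181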
Configure the broker IDs for each node in your cluster using one of these methods.
Dynamically generate the broker IDs: add broker.id.generation.enable=true and comment out broker.id. For example:

############################# Server Basics #############################
# The ID of the broker. This must be set to a unique integer for each broker.
#broker.id=0
broker.id.generation.enable=true
Manually set the broker IDs: set a unique value for broker.id on each node, as shown in the example below.
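For example, with a three-broker cluster (a minimal sketch; the only requirement is that each ID is a unique non-negative integer):

# server.properties on the first broker
broker.id=0
# server.properties on the second broker
broker.id=1
# server.properties on the third broker
broker.id=2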
Configure how other brokers and clients communicate with the broker using listeners, and optionally advertised.listeners.

- listeners: Comma-separated list of URIs and listener names to listen on.
- advertised.listeners: Comma-separated list of URIs and listener names for other brokers and clients to use. The advertised.listeners parameter ensures that the broker advertises an address that is accessible from both local and external hosts.
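For example, a broker can listen on all interfaces while advertising a hostname that other machines can resolve (a minimal sketch; kafka1.example.com is a placeholder for your node's FQDN):

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka1.example.com:9092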
For more information, see Production Configuration Options.
Control Center¶
Navigate to the Control Center properties file (/etc/confluent-control-center/control-center-production.properties) and customize the following:

# host/port pairs to use for establishing the initial connection to the Kafka cluster
bootstrap.servers=<hostname1:port1,hostname2:port2,hostname3:port3,...>
# location for Control Center data
confluent.controlcenter.data.dir=/var/lib/confluent/control-center
# the Confluent license
confluent.license=<your-confluent-license>
# ZooKeeper connection string with the host and port of the ZooKeeper servers
zookeeper.connect=<hostname1:port1,hostname2:port2,hostname3:port3,...>
This example configuration is for a three-node cluster. For more information, see Control Center configuration details.
Navigate to the Kafka server configuration file (/etc/kafka/server.properties) and enable Confluent Metrics Reporter:

##################### Confluent Metrics Reporter #######################
# Confluent Control Center and Confluent Auto Data Balancer integration
#
# Uncomment the following lines to publish monitoring data for
# Confluent Control Center and Confluent Auto Data Balancer
# If you are using a dedicated metrics cluster, also adjust the settings
# to point to your metrics Kafka cluster.
metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
confluent.metrics.reporter.bootstrap.servers=localhost:9092
#
# Uncomment the following line if the metrics cluster has a single broker
confluent.metrics.reporter.topic.replicas=1
Add these lines to the Kafka Connect properties file (/etc/kafka/connect-distributed.properties) to add support for the interceptors:

# Interceptor setup
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
Start Confluent Platform¶
Start Confluent Platform by using the Kafka CLI commands.
Tip
ZooKeeper, Kafka, and Schema Registry must be started in this specific order, and must be started before any other components.
Start ZooKeeper. Run this command in its own terminal.
<path-to-confluent>/bin/zookeeper-server-start <path-to-confluent>/etc/kafka/zookeeper.properties
Start Kafka. Run this command in its own terminal.
<path-to-confluent>/bin/kafka-server-start <path-to-confluent>/etc/kafka/server.properties
Start Schema Registry. Run this command in its own terminal.
<path-to-confluent>/bin/schema-registry-start <path-to-confluent>/etc/schema-registry/schema-registry.properties
Start other Confluent Platform components as desired.
Control Center
<path-to-confluent>/bin/control-center-start <path-to-confluent>/etc/confluent-control-center/control-center.properties
Kafka Connect
<path-to-confluent>/bin/connect-distributed <path-to-confluent>/etc/schema-registry/connect-avro-distributed.properties
Confluent REST Proxy
<path-to-confluent>/bin/kafka-rest-start <path-to-confluent>/etc/kafka-rest/kafka-rest.properties
KSQL
<path-to-confluent>/bin/ksql-server-start <path-to-confluent>/etc/ksql/ksql-server.properties
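After the services are running, you can do a quick sanity check (a minimal sketch, assuming the default ports: ZooKeeper on localhost:2181 and Schema Registry on localhost:8081):

# List topics to confirm the broker is reachable through ZooKeeper
<path-to-confluent>/bin/kafka-topics --list --zookeeper localhost:2181
# Confirm Schema Registry is serving requests (returns a JSON array of subjects)
curl http://localhost:8081/subjects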
Uninstall¶
Remove the Confluent directory. For example, if you have Confluent Platform 5.1.4 installed:
rm -rf confluent-5.1.4
Remove the Confluent Platform data files.
rm -rf /var/lib/<confluent-platform-data-files>
Next Steps¶
Try out the Confluent Platform Quick Start.