Install Confluent Platform using ZIP and TAR Archives

This topic provides instructions for running Confluent Platform locally or running a production-ready Confluent Platform configuration in a multi-node environment with a replicated ZooKeeper ensemble.

With the production-ready installation method, you will connect to every node manually, download the archive, and run the Confluent Platform installation commands.

Looking for a fully managed cloud-native service for Apache Kafka®?

Sign up for Confluent Cloud and get started for free using the Cloud quick start.

Prerequisites

  • You must complete these steps for each node in your cluster.
  • Before installing Confluent Platform, your environment must meet the prerequisites as described in software and hardware requirements.

Get the software

  1. Go to the installation page. You may be prompted to enter your name, company, email address, and accept license terms before you can access the page.

    After you are on the installation page, choose your archive package or download a package directly by using the curl command.

    Confluent Platform
    • ZIP

      curl -O https://packages.confluent.io/archive/7.4/confluent-7.4.7.zip
      
    • TAR

      curl -O https://packages.confluent.io/archive/7.4/confluent-7.4.7.tar.gz
      
    Confluent Platform using only Confluent Community components
    • ZIP

      curl -O https://packages.confluent.io/archive/7.4/confluent-community-7.4.7.zip
      
    • TAR

      curl -O https://packages.confluent.io/archive/7.4/confluent-community-7.4.7.tar.gz
      
  2. Extract the contents of the archive. For ZIP files, run this command in a terminal.

    unzip confluent-7.4.7.zip
    

    For TAR files, run this command:

    tar xzf confluent-7.4.7.tar.gz
    

    You should have these directories:

    Folder     Description
    /bin/      Driver scripts for starting and stopping services
    /etc/      Configuration files
    /lib/      Systemd services
    /libexec/  Multi-platform CLI binaries
    /share/    Jars and licenses
    /src/      Source files that require a platform-dependent build

Optionally configure CONFLUENT_HOME

To more easily use the Confluent CLI and other Confluent tools, you can optionally configure the CONFLUENT_HOME environment variable and add the Confluent Platform bin directory to your PATH.

  1. Set the environment variable for the Confluent Platform home directory.

    export CONFLUENT_HOME=<The directory where Confluent is installed>
    
  2. Add the Confluent Platform bin directory to your PATH:

    export PATH=$PATH:$CONFLUENT_HOME/bin
    
  3. Test that you set the CONFLUENT_HOME variable correctly by running the confluent command:

    confluent --help
    

    Your output should show the available commands for managing Confluent Platform.
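
To persist these settings across shell sessions, you can append them to your shell profile. This is a minimal sketch that assumes a Bash shell and an installation extracted to a hypothetical /opt/confluent-7.4.7 directory; adjust both for your environment.

echo 'export CONFLUENT_HOME=/opt/confluent-7.4.7' >> ~/.bashrc
echo 'export PATH=$PATH:$CONFLUENT_HOME/bin' >> ~/.bashrc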

Start Confluent Platform locally for testing purposes

To start Confluent Platform for testing and development purposes, configure the CONFLUENT_HOME variable and then run the confluent local services start command, which starts Confluent Platform locally in ZooKeeper mode.

Important

If you want to use the confluent local commands, you must have Java 8 or 11 installed (version strings 1.8 or 11). Java 17 is the recommended Java version for Confluent Platform.
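
To confirm which Java version is active on your PATH, you can run:

java -version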

confluent local services start

Your output should resemble:

Starting Zookeeper
Zookeeper is [UP]
Starting Kafka
Kafka is [UP]
Starting Schema Registry
Schema Registry is [UP]
Starting Kafka REST
Kafka REST is [UP]
Starting Connect
Connect is [UP]
Starting KSQL Server
KSQL Server is [UP]
Starting Control Center
Control Center is [UP]
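
When you are finished testing, you can stop all of the local services with the companion command:

confluent local services stop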

Configure Confluent Platform for production

Tip

You can store passwords and other configuration data securely by using the confluent secret commands. For more information, see Secrets Management.

Configure Confluent Platform with the individual component properties files. By default, these are located in <path-to-confluent>/etc/. At a minimum, you must configure the following components.

ZooKeeper

These instructions assume you are running ZooKeeper in replicated mode. A minimum of three servers is required for replicated mode, and you must have an odd number of servers for failover. For more information, see the ZooKeeper documentation.

  1. Navigate to the ZooKeeper properties file (/etc/kafka/zookeeper.properties) and modify it as shown.

    tickTime=2000
    dataDir=/var/lib/zookeeper/
    clientPort=2181
    initLimit=5
    syncLimit=2
    server.1=zoo1:2888:3888
    server.2=zoo2:2888:3888
    server.3=zoo3:2888:3888
    autopurge.snapRetainCount=3
    autopurge.purgeInterval=24
    

    This configuration is for a three-node ensemble. The configuration file should be identical across all nodes in the ensemble. tickTime, dataDir, and clientPort are set to typical single-server values. The initLimit and syncLimit govern how long following ZooKeeper servers can take to initialize with the current leader and how long they can be out of sync with the leader. In this configuration, a follower can take up to 10000 ms (initLimit * tickTime) to initialize and can be out of sync with the leader for up to 4000 ms (syncLimit * tickTime), given a tickTime of 2000 ms.

    The server.* properties set the ensemble membership. The format is

    server.<myid>=<hostname>:<leaderport>:<electionport>
    
    • myid is the server identification number. There are three servers that each have a different myid with values 1, 2, and 3 respectively. The myid is set by creating a file named myid in the dataDir that contains a single integer in human readable ASCII text. This value must match one of the myid values from the configuration file. You will see an error if another ensemble member is already started with a conflicting myid value.
    • leaderport is used by followers to connect to the active leader. This port should be open between all ZooKeeper ensemble members.
    • electionport is used to perform leader elections between ensemble members. This port should be open between all ZooKeeper ensemble members.

    The autopurge.snapRetainCount and autopurge.purgeInterval have been set to purge all but three snapshots every 24 hours.

  2. Navigate to the ZooKeeper data directory (the dataDir configured above, for example /var/lib/zookeeper/) and create a file named myid. The myid file consists of a single line that contains the machine ID as an integer. When the ZooKeeper server starts up, it reads the myid file to determine which server it is. For example, server 1 has a myid value of 1.
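
    For example, on the node whose ID is 1, a minimal way to create the file (assuming the dataDir shown in the configuration above) is:

    echo 1 > /var/lib/zookeeper/myid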

Kafka

In a production environment, multiple brokers are required.

ZooKeeper mode

During startup in ZooKeeper mode, brokers register themselves in ZooKeeper to become a member of the cluster.

To configure brokers, navigate to the Apache Kafka® properties file (/etc/kafka/server.properties) and customize the following:

  • Connect to the same ZooKeeper ensemble by setting zookeeper.connect to the same value on all nodes. Replace all instances of localhost with the hostname or FQDN (fully qualified domain name) of your node. For example, if your hostname is zookeeper:

    zookeeper.connect=zookeeper:2181
    
  • Configure the broker IDs for each node in your cluster using one of these methods.

    • Dynamically generate the broker IDs: add broker.id.generation.enable=true and comment out broker.id. For example:

      ############################# Server Basics #############################
      
      # The ID of the broker. This must be set to a unique integer for each broker.
      #broker.id=0
      broker.id.generation.enable=true
      
    • Manually set the broker IDs: set a unique value for broker.id on each node.

  • Configure how other brokers and clients communicate with the broker using listeners, and optionally advertised.listeners. A simple example follows this list.

    • listeners: Comma-separated list of URIs and listener names to listen on.
    • advertised.listeners: Comma-separated list of URIs and listener names for other brokers and clients to use. The advertised.listeners parameter ensures that the broker advertises an address that is accessible from both local and external hosts.

    For more information, see Production Configuration Options.

  • Configure security for your environment.
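
As an illustration only (the hostnames and ports here are placeholders, not values from the default server.properties), a broker that binds to all interfaces but is reachable by clients as kafka1.example.com might use:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka1.example.com:9092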

KRaft mode

For KRaft mode, you must configure a node to be a broker or a controller. Navigate to the Kafka properties file for KRaft (find example KRaft configuration files under /etc/kafka/kraft/) and customize the following:

  • Configure the process.roles, node.id and controller.quorum.voters for each node. Typically in a production environment, you should have a minimum of three brokers and three controllers.

    • For process.roles, set whether the node will be a broker or a controller. Combined mode, meaning process.roles is set to broker,controller, is currently not supported for production workloads.

    • For node.id, set an ID that is unique across all nodes in the cluster.

    • controller.quorum.voters should be a comma-separated list of controllers in the format nodeID@hostname:port. For example:

      ############################# Server Basics #############################
      
      # The role of this server. Setting this puts us in KRaft mode
      process.roles=broker
      
      # The node id associated with this instance's roles
      node.id=2
      
      # The connect string for the controller quorum
      controller.quorum.voters=1@controller1:9093,3@controller3:9093,5@controller5:9093
      
  • Configure how brokers and clients communicate with the broker using listeners, and where controllers listen with controller.listener.names. A sketch of a controller node's settings follows this list.

    • listeners: Comma-separated list of URIs and listener names to listen on in the format listener_name://host_name:port
    • controller.listener.names: Comma-separated list of listener_name entries for listeners used by the controller.

    For more information, see Production Configuration Options.

  • Configure security for your environment.
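
For comparison, a dedicated controller node participating in the quorum shown above might be configured as follows. This is a sketch only; the node ID, hostname, and listener name are placeholders and must agree with your controller.quorum.voters entries:

process.roles=controller
node.id=1
controller.quorum.voters=1@controller1:9093,3@controller3:9093,5@controller5:9093
listeners=CONTROLLER://controller1:9093
controller.listener.names=CONTROLLER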

Control Center

  1. Navigate to the Control Center properties file (/etc/confluent-control-center/control-center-production.properties) and customize the following:

    # host/port pairs to use for establishing the initial connection to the Kafka cluster
    bootstrap.servers=<hostname1:port1,hostname2:port2,hostname3:port3,...>
    # location for Control Center data
    confluent.controlcenter.data.dir=/var/lib/confluent/control-center
    # the Confluent license
    confluent.license=<your-confluent-license>
    
  2. If running any clusters in ZooKeeper mode, configure ZooKeeper.

      # ZooKeeper connection string with the host and port of the ZooKeeper servers
      zookeeper.connect=<hostname1:port1,hostname2:port2,hostname3:port3,...>

    This configuration is for a three-node cluster. For more information, see Control Center configuration details. For information about Confluent Platform licenses, see the license documentation.
    
  3. Navigate to the Kafka server configuration file (/etc/kafka/server.properties) and enable Confluent Metrics Reporter.

    ##################### Confluent Metrics Reporter #######################
    # Confluent Control Center and Confluent Auto Data Balancer integration
    #
    # Uncomment the following lines to publish monitoring data for
    # Confluent Control Center and Confluent Auto Data Balancer
    # If you are using a dedicated metrics cluster, also adjust the settings
    # to point to your metrics Kafka cluster.
    metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
    confluent.metrics.reporter.bootstrap.servers=localhost:9092
    #
    # Uncomment the following line if the metrics cluster has a single broker
    confluent.metrics.reporter.topic.replicas=1
    
  4. Add these lines to the Kafka Connect properties file (/etc/kafka/connect-distributed.properties) to add support for the interceptors.

    # Interceptor setup
    consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
    producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
    

Schema Registry

Navigate to the Schema Registry properties file (/etc/schema-registry/schema-registry.properties) and specify the following properties:

# Specify the address the socket server listens on, e.g. listeners = PLAINTEXT://your.host.name:9092
listeners=http://0.0.0.0:8081

# The host name advertised in ZooKeeper. This must be specified if you are running Schema Registry
# with multiple nodes.
host.name=192.168.50.1

# List of Kafka brokers to connect to, e.g. PLAINTEXT://hostname:9092,SSL://hostname2:9092
kafkastore.bootstrap.servers=PLAINTEXT://hostname:9092,SSL://hostname2:9092

This configuration is for a three-node cluster. For more information, see Running Schema Registry in Production.

Start Confluent Platform

Start Confluent Platform by using Kafka CLI commands.

Tip

In ZooKeeper mode, ZooKeeper must be started first. Kafka and Schema Registry must then be started in that order, after ZooKeeper if you are using it, and before any other components.

  1. Start ZooKeeper. Run this command in its own terminal.

    <path-to-confluent>/bin/zookeeper-server-start <path-to-confluent>/etc/kafka/zookeeper.properties
    
  2. Start Kafka. Run this command in its own terminal.

    <path-to-confluent>/bin/kafka-server-start <path-to-confluent>/etc/kafka/server.properties
    
  3. Start Schema Registry. Run this command in its own terminal.

    <path-to-confluent>/bin/schema-registry-start <path-to-confluent>/etc/schema-registry/schema-registry.properties
    
  4. Start other Confluent Platform components as desired.

    • Control Center

      <path-to-confluent>/bin/control-center-start <path-to-confluent>/etc/confluent-control-center/control-center.properties
      
    • Kafka Connect

      <path-to-confluent>/bin/connect-distributed <path-to-confluent>/etc/schema-registry/connect-avro-distributed.properties
      
    • Confluent REST Proxy

      <path-to-confluent>/bin/kafka-rest-start <path-to-confluent>/etc/kafka-rest/kafka-rest.properties
      
    • ksqlDB

      <path-to-confluent>/bin/ksql-server-start <path-to-confluent>/etc/ksqldb/ksql-server.properties
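
Optionally, verify that the core services are reachable. This is a minimal check that assumes the default ports (a broker on localhost:9092 and Schema Registry on localhost:8081); adjust the addresses to match your listener configuration.

<path-to-confluent>/bin/kafka-topics --bootstrap-server localhost:9092 --list
curl http://localhost:8081/subjects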
      

Uninstall

  1. Remove the Confluent directory. For example, if you have Confluent Platform 7.4.7 installed:

    rm -rf confluent-7.4.7
    
  2. Remove the Confluent Platform data files.

    rm -rf /var/lib/<confluent-platform-data-files>
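
  3. Optionally, if you configured CONFLUENT_HOME and updated your PATH earlier, remove those export lines from your shell profile and clear the variable in your current session:

    unset CONFLUENT_HOME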