Upgrade Confluent Platform

Note

If you are a Confluent Platform subscriber, and you have questions about upgrades or need help, contact us through our Support Portal.

Important

The Confluent Platform tarball includes Confluent Server by default and requires a confluent.license key in your server.properties file. If you want to use the Kafka broker instead, you must download the confluent-community tarball. The Kafka broker is the default in all Debian, RHEL, and CentOS packages.

For more information about migrating to Confluent Server, see Migrate to Confluent Server.

Preparation

Consider the guidelines below when preparing to upgrade.

  • Always back up all configuration files before upgrading. This includes, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry.

  • Read the documentation and draft an upgrade plan that matches your specific requirements and environment before starting the upgrade process. In other words, don’t start working through this guide on a live cluster. Read the guide entirely, make a plan, then execute the plan.

  • Give careful consideration to the order in which components are upgraded. Starting with version 0.10.2, Java clients (producer and consumer) can communicate with older brokers: version 0.10.2 clients can talk to version 0.10.0 or newer brokers. However, if your brokers are older than 0.10.0, you must upgrade all the brokers in the Apache Kafka® cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer clients. Before 0.10.2, Kafka is backward compatible in one direction only: clients from Kafka 0.8.x releases (Confluent Platform 1.0.x) work with brokers from Kafka release 0.8.x through 2.4 (Confluent Platform 2.0.x through 5.4.x), but not vice versa. You must therefore always plan upgrades so that all brokers are upgraded before clients. Clients include any application that uses a Kafka producer or consumer, the command line tools, Camus, Schema Registry, REST Proxy, Kafka Connect, and Kafka Streams.

    Important

    Due to a bug introduced in Kafka 0.9.0.0 (Confluent Platform 2.0.0), clients that depend on ZooKeeper (old Scala high-level Consumer and MirrorMaker if used with the old consumer) will not work with 0.10.x.x or newer brokers. Therefore, Kafka 0.9.0.0 (Confluent Platform 2.0.0) clients should be upgraded to 0.9.0.1 (Confluent Platform 2.0.1) before brokers are upgraded to 0.10.x.x or newer. This step is not necessary for 0.8.x or 0.9.0.1 clients.

    Important

    Although not recommended, some deployments co-locate clients with brokers (on the same node). In these cases, the broker and clients share the same packages, which is problematic because all brokers must be upgraded before any clients are. Pay careful attention to this when upgrading.

  • Kafka 2.0.0 contains changes with potential compatibility impact and deprecations with respect to previous major versions (i.e. 0.8.x.x, 0.9.x.x, 0.10.x.x, 0.11.x.x and 1.0.x). Refer to the Apache Kafka documentation to understand how they affect applications using Kafka.

    Important

    Java 7 is no longer supported. Java 8 is now the minimum required version. For complete compatibility information, see Supported Versions and Interoperability.

  • If you are using Confluent LDAP Authorizer, you must migrate to the commercial confluent-server package when upgrading to 5.4.0. See Migrate to Confluent Server for details.

  • Read the Confluent Platform Release Notes. They contain important information about noteworthy features, and changes to configurations that may impact your upgrade.
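
The configuration backup recommended above can be scripted. The following is a minimal sketch, assuming the default deb/rpm configuration paths; adjust the directory list for your deployment:

```shell
#!/bin/sh
# Minimal sketch: archive configuration directories before an upgrade.
# The example paths are the defaults for deb/rpm installs; adjust as needed.

backup_configs() {
  dest="$1"; shift
  # Fail fast if any directory is missing, rather than producing a
  # partial archive.
  for d in "$@"; do
    if [ ! -d "$d" ]; then
      echo "missing directory: $d" >&2
      return 1
    fi
  done
  tar czf "$dest" "$@"
}

# Example invocation (run as a user that can read the config directories):
# backup_configs "/root/confluent-config-$(date +%Y%m%d).tar.gz" \
#     /etc/kafka /etc/kafka-rest /etc/schema-registry
```

Keep the resulting archive somewhere outside the directories being upgraded so it survives package removal.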

Upgrade procedures

  1. Consider using Confluent Control Center to monitor broker status during the rolling restart.
  2. Determine and install the appropriate Java version. See Supported Java Versions to determine which versions are supported for the Confluent Platform version to which you are upgrading.
  3. Determine if clients are co-located with brokers. If they are, ensure that no client processes are upgraded until all Kafka brokers have been upgraded.
  4. Decide on performing a rolling upgrade or a downtime upgrade. Confluent Platform supports both rolling upgrades (upgrade one broker at a time to avoid cluster downtime) and downtime upgrades (take down the entire cluster, upgrade it, and bring everything back up).
  5. Upgrade all Kafka brokers (see Upgrade all Kafka brokers).
  6. Upgrade Schema Registry, REST Proxy, and Camus (see Upgrade Camus).
  7. If it makes sense, build applications that use Kafka producers and consumers against the new 5.4.x libraries and deploy the new versions. See Application Development for more details about using the 5.4.x libraries.

Upgrade all Kafka brokers

In a rolling upgrade scenario, upgrade one Kafka broker at a time, taking into consideration the recommendations for doing rolling restarts to avoid downtime for end users. In a downtime upgrade scenario, take the entire cluster down, upgrade each Kafka broker, then start the cluster.

Steps to upgrade for any fix pack release

You can perform a rolling upgrade for any fix pack release (for example, 3.1.1 to 3.1.2) by simply upgrading each broker one at a time.

To upgrade each broker:

  1. Stop the broker.
  2. Upgrade the software (see below for your packaging type).
  3. Start the broker.
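
The three steps above can be sketched as a small wrapper. This is a hedged sketch assuming Debian packaging; the DRY_RUN flag prints the commands instead of executing them so you can review the sequence first:

```shell
#!/bin/sh
# Sketch of a single-broker fix pack upgrade (Debian packages assumed).
# With DRY_RUN=true (the default here) commands are printed, not executed.
DRY_RUN=${DRY_RUN:-true}

run() {
  if [ "$DRY_RUN" = "true" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run kafka-server-stop                                        # 1. stop the broker
run sudo apt-get update                                      # 2. upgrade the software
run sudo apt-get install --only-upgrade confluent-kafka-2.12
run kafka-server-start -daemon /etc/kafka/server.properties  # 3. start the broker
```

Run it with DRY_RUN=false on each broker in turn, waiting for the broker to rejoin the cluster before moving to the next one.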

Steps for upgrading previous versions to 5.4.x

In a rolling upgrade scenario, upgrading to Confluent Platform 5.4.x (Kafka 2.4.x) requires special steps, because Kafka 2.4.x includes changes to the inter-broker protocol and, unless the upgrade is from 3.3.x or newer, to the on-disk data format.

Follow these steps for a rolling upgrade:

  1. Update server.properties on all Kafka brokers by modifying the properties inter.broker.protocol.version and log.message.format.version to match the currently installed version:
    • For Confluent Platform 2.0.x, use inter.broker.protocol.version=0.9.0 and log.message.format.version=0.9.0
    • For Confluent Platform 3.0.x, use inter.broker.protocol.version=0.10.0 and log.message.format.version=0.10.0
    • For Confluent Platform 3.1.x, use inter.broker.protocol.version=0.10.1 and log.message.format.version=0.10.1
    • For Confluent Platform 3.2.x, use inter.broker.protocol.version=0.10.2 and log.message.format.version=0.10.2
    • For Confluent Platform 3.3.x, use inter.broker.protocol.version=0.11.0 and log.message.format.version=0.11.0
    • For Confluent Platform 4.0.x, use inter.broker.protocol.version=1.0 and log.message.format.version=1.0
    • For Confluent Platform 4.1.x, use inter.broker.protocol.version=1.1 and log.message.format.version=1.1
    • For Confluent Platform 5.0.x, use inter.broker.protocol.version=2.0 and log.message.format.version=2.0
    • For Confluent Platform 5.1.x, use inter.broker.protocol.version=2.1 and log.message.format.version=2.1
    • For Confluent Platform 5.2.x, use inter.broker.protocol.version=2.2 and log.message.format.version=2.2
    • For Confluent Platform 5.3.x, use inter.broker.protocol.version=2.3 and log.message.format.version=2.3
    • For Confluent Platform 5.4.x, use inter.broker.protocol.version=2.4 and log.message.format.version=2.4
    • If you use Confluent Server, you must provide a valid Confluent license key. For example: confluent.license=123.
  2. Upgrade each Kafka broker, one at a time.
  3. After all Kafka brokers have been upgraded, make the following update in server.properties: inter.broker.protocol.version=2.4.
  4. Restart each Kafka broker, one at a time, to apply the configuration change.
  5. After most clients are using 5.4.x, modify server.properties by changing the following property: log.message.format.version=2.4. Because the message format is the same in Confluent Platform 3.3.x through 5.4.x, this step is optional if the upgrade is from Confluent Platform 3.3.x or newer.
  6. If you changed log.message.format.version in the previous step, wait until all (or most) consumers have been upgraded to 0.11.0 or later, then restart each Kafka broker, one at a time, to apply the new message format. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly-once semantics), the newer Java clients must be used.
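
Put together, the phases of the rolling upgrade look like this in server.properties. This is a sketch for an upgrade from 5.3.x; substitute the values from the version table above for your starting release:

```
# Phase 1: before upgrading any broker, pin both versions to the
# currently installed release (5.3.x / Kafka 2.3 in this example).
inter.broker.protocol.version=2.3
log.message.format.version=2.3

# Phase 2: after every broker runs 5.4.x, raise only the protocol
# version and perform a rolling restart.
inter.broker.protocol.version=2.4

# Phase 3: once all (or most) clients are on 5.4.x, raise the message
# format version and perform a final rolling restart.
log.message.format.version=2.4
```

Each phase is a separate edit followed by a rolling restart; the duplicate keys above show the progression, not a single final file.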

Additional ZooKeeper upgrade information

ZooKeeper has been upgraded to 3.5.6. The ZooKeeper upgrade from 3.4.x to 3.5.6 can fail if there are no snapshot files in the 3.4 data directory. This usually happens in test upgrades, where ZooKeeper 3.5.6 tries to load an existing 3.4 data directory in which no snapshot file has been created. For more details about this issue, refer to ZOOKEEPER-3056, which also provides a fix: specify the configuration snapshot.trust.empty=true in zookeeper.properties before the upgrade. However, data loss has been observed in standalone cluster upgrades when using the snapshot.trust.empty=true configuration. For more details about that issue, refer to ZOOKEEPER-3644. The recommended safe workaround is to copy the empty snapshot file to the 3.4 data directory if there are no snapshot files present in it. For more details about the workaround, refer to the ZooKeeper Upgrade FAQ.
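
If, despite the data-loss caveat above, you choose the snapshot.trust.empty route, the setting goes in zookeeper.properties before the upgrade:

```
# zookeeper.properties
# Allow ZooKeeper 3.5.6 to start against a 3.4 data directory that
# contains no snapshot files. See ZOOKEEPER-3056 for the background and
# ZOOKEEPER-3644 for the data-loss caveat before relying on this.
snapshot.trust.empty=true
```

The safer alternative remains copying the empty snapshot file into the 3.4 data directory as described in the ZooKeeper Upgrade FAQ.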

An embedded Jetty-based AdminServer was added in ZooKeeper 3.5. The AdminServer is enabled by default in ZooKeeper and starts on port 8080; however, it is disabled by default in the ZooKeeper configuration (zookeeper.properties) provided by the Apache Kafka® distribution. If you maintain your own zookeeper.properties file, make sure to add admin.enableServer=false if you want to disable the AdminServer. Refer to the AdminServer configuration to configure the AdminServer.
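
A minimal zookeeper.properties fragment covering both choices; admin.serverPort is the upstream ZooKeeper property for moving the AdminServer off its default port:

```
# zookeeper.properties
# Keep the embedded Jetty AdminServer disabled:
admin.enableServer=false

# Or, to run it on a non-default port instead of disabling it:
# admin.enableServer=true
# admin.serverPort=9090
```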

Four letter words whitelist in ZooKeeper

Starting in ZooKeeper 3.5.3, Four Letter Words commands must be explicitly whitelisted in the 4lw.commands.whitelist setting for the ZooKeeper server to enable them. By default, the whitelist contains only the srvr command, which zkServer.sh uses. The remaining Four Letter Words commands are disabled by default.

Example to whitelist the stat, ruok, conf, and isro commands while disabling the rest of the Four Letter Words commands:

4lw.commands.whitelist=stat, ruok, conf, isro

Example to whitelist all Four Letter Words commands:

4lw.commands.whitelist=*

See The Four Letter Words for detail.

Upgrading deb packages using apt

  1. Back up all configuration files from /etc, including, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry.

  2. Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to Steps for upgrading previous versions to 5.4.x).

    # The example below removes the Kafka package (for Scala 2.12)
      sudo kafka-server-stop
      sudo apt-get remove confluent-kafka-2.12
    
    # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
      sudo apt-get autoremove confluent-platform-2.12
    
  3. Remove the repository files of the previous version, substituting the version you are upgrading from (for example, 5.3).

    sudo add-apt-repository -r "deb https://packages.confluent.io/deb/5.3 stable main"
    
  4. Add the 5.4 repository to /etc/apt/sources.list.

    sudo add-apt-repository "deb https://packages.confluent.io/deb/5.4 stable main"
    
  5. Refresh repository metadata.

    sudo apt-get update
    
  6. Install the new version. If you modified the configuration files, apt will prompt you to resolve the conflicts; be sure to keep your original configuration.

      sudo apt-get install confluent-platform-2.12
    
    # Or install the packages you need one by one. For example, to install only Kafka:
      sudo apt-get install confluent-kafka-2.12
    

    Note

    The installation package names end with the Scala version that Kafka is built on. For example, the confluent-platform-2.12 package is for Confluent Platform 5.4.1 and is based on Scala 2.12.

    The Zip and Tar packages contain the Confluent Platform version followed by the Scala version. For example, a Zip package, confluent-5.4.1-2.12.zip denotes Confluent Platform version 5.4.1 and Scala version 2.12.

    Tip

    You can view all available Confluent Platform builds with this command:

    apt-cache show confluent-platform-2.12
    

    You can install specific Confluent Platform builds by appending the version (<version>) to the install command:

    sudo apt-get install confluent-platform-2.12-<version>
    
  7. Start Confluent Platform components.

    kafka-server-start -daemon /etc/kafka/server.properties
    

Upgrading rpm packages using yum

  1. Back up all configuration files from /etc, including, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry.

  2. Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to Steps for upgrading previous versions to 5.4.x).

    # The example below removes the Kafka package (for Scala 2.12)
      sudo kafka-server-stop
      sudo yum remove confluent-kafka-2.12
    
    # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
      sudo yum autoremove confluent-platform-2.12
    
  3. Remove the repository files of the previous version.

    sudo rm /etc/yum.repos.d/confluent.repo
    
  4. Add the repository to your /etc/yum.repos.d/ directory in a file named confluent-5.4.repo.

    [confluent-5.4]
    name=Confluent repository for 5.4.x packages
    baseurl=https://packages.confluent.io/rpm/5.4
    gpgcheck=1
    gpgkey=https://packages.confluent.io/rpm/5.4/archive.key
    enabled=1
    
  5. Refresh repository metadata.

    sudo yum clean all
    
  6. Install the new version. Note that yum may override your existing configuration files, so you will need to restore them from the backup after installing the packages.

      sudo yum install confluent-platform-2.12
    
    # Or install the packages you need one by one. For example, to install just Kafka:
      sudo yum install confluent-kafka-2.12
    

    Note

    The installation package names end with the Scala version that Kafka is built on. For example, the confluent-platform-2.12 package is for Confluent Platform 5.4.1 and is based on Scala 2.12.

    The Zip and Tar packages contain the Confluent Platform version followed by the Scala version. For example, a Zip package, confluent-5.4.1-2.12.zip denotes Confluent Platform version 5.4.1 and Scala version 2.12.

  7. Start services.

    kafka-server-start -daemon /etc/kafka/server.properties
    

TAR or ZIP archives

For ZIP and TAR archives, you can delete the old archive directory after the new archive folder has been created and any previous configuration files have been copied into it, as described in the following steps.

  1. Return to the directory where you installed Confluent Platform.

  2. Back up all configuration files from ./etc, including, for example, ./etc/kafka, ./etc/kafka-rest, ./etc/schema-registry, and ./etc/confluent-control-center.

  3. Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to Steps for upgrading previous versions to 5.4.x).

      ./bin/control-center-stop
      ./bin/kafka-rest-stop
      ./bin/schema-registry-stop
      ./bin/kafka-server-stop
      ./bin/zookeeper-server-stop
    
    # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
      cd ..
      rm -R confluent-3.3.1  # use the installed version number
    
  4. Unpack the new archive, then restore your configuration files from the backup into the new ./etc directory.

      tar xzf confluent-5.4.1-2.12.tar.gz
    
    # Or for ZIP archives:
    
      unzip confluent-5.4.1-2.12.zip
    

    Note

    The installation package names end with the Scala version that Kafka is built on. For example, the confluent-platform-2.12 package is for Confluent Platform 5.4.1 and is based on Scala 2.12.

    The Zip and Tar packages contain the Confluent Platform version followed by the Scala version. For example, a Zip package, confluent-5.4.1-2.12.zip denotes Confluent Platform version 5.4.1 and Scala version 2.12.

  5. Start services.

    sudo confluent-5.4.1/bin/zookeeper-server-start -daemon confluent-5.4.1/etc/kafka/zookeeper.properties
    sudo confluent-5.4.1/bin/kafka-server-start -daemon confluent-5.4.1/etc/kafka/server.properties
    

Upgrade Schema Registry

You can upgrade Schema Registry after all Kafka brokers have been upgraded.

To upgrade Schema Registry, follow the same steps above to upgrade the package (back up configuration files, remove packages, install upgraded packages, etc.). Then restart Schema Registry.

Tip

If you have a multi-node Schema Registry cluster running Confluent Platform 4.1.1 or earlier, do not perform a rolling upgrade directly to 5.2.x. Doing so might generate errors caused by an intermediate state in which the live cluster mixes legacy and newer Schema Registry nodes, which must have feature parity for managing schema IDs. Instead, do one of the following:

  • If you want to use a rolling upgrade, first upgrade to version 4.1.2 or 4.1.3, then perform a rolling upgrade to 5.2.x or newer version.
  • Or, stop the whole 4.1.1 cluster and upgrade all Schema Registry nodes at the same time to 5.2.x or newer version.

Upgrade Confluent REST Proxy

You can upgrade the Confluent REST Proxy service after all Kafka brokers have been upgraded.

To upgrade the REST Proxy service, follow the same steps above to upgrade the package (back up configuration files, remove packages, install upgraded packages, etc.). Then restart the Confluent REST Proxy service.

Upgrade Kafka Streams applications

You can upgrade Kafka Streams applications independently, without requiring Kafka brokers to be upgraded first.

Follow the instructions in the Kafka Streams Upgrade Guide to upgrade your applications to use the latest version of Kafka Streams.

Upgrade Kafka Connect

You can upgrade Kafka Connect in either standalone or distributed mode.

Upgrade Kafka Connect standalone mode

You can upgrade Kafka Connect in standalone mode after all Kafka brokers have been upgraded.

To upgrade Kafka Connect, follow the same steps above to upgrade the package (back up config files, remove packages, install upgraded packages, etc.). Then, restart the client processes.

Upgrade Kafka Connect distributed mode

A new required configuration, status.storage.topic, was added to Kafka Connect in 0.10.0.1. To upgrade a Kafka Connect cluster, add this configuration before updating to the new version. Older versions of Kafka Connect ignore the setting.

  1. Back up worker configuration files.
  2. Modify your configuration file to add the status.storage.topic setting. You can safely modify the configuration file while the worker is running. Note that you should create this topic manually. See Distributed Mode Configuration in the Connect User Guide for a detailed explanation.
  3. Perform a rolling restart of the workers.
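
A sketch of the relevant worker configuration; the topic name connect-status below is an example, and because the topic should be created manually (and compacted), a hedged kafka-topics invocation is shown in a comment:

```
# connect-distributed.properties (fragment)
# Required from Kafka Connect 0.10.0.1 onward; ignored by older workers.
status.storage.topic=connect-status

# Create the topic manually before the rolling restart, for example
# (partition and replication counts are illustrative):
#   kafka-topics --create --topic connect-status \
#     --partitions 5 --replication-factor 3 \
#     --config cleanup.policy=compact \
#     --bootstrap-server localhost:9092
```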

Upgrade Camus

Camus was deprecated in Confluent Platform 3.0.0 and was removed in Confluent Platform 5.0.0.

Upgrade Confluent Control Center

Follow the instructions in the Confluent Control Center Upgrade Guide.

Upgrade other client applications

Review Cross-Component Compatibility before you upgrade your client applications.

Version 0.10.2 or newer Java clients (producer and consumer) work with version 0.10.0 or newer Kafka brokers.

If your brokers are older than 0.10.0, you must upgrade all the brokers in the Apache Kafka® cluster before upgrading your Java clients.

Version 0.10.2 brokers support version 0.8.x and newer Java clients.

Confluent’s C/C++, Python, Go and .NET clients support all released Kafka broker versions, but not all features are available on older broker versions, since some features rely on newer broker functionality. See Kafka Clients for the list of Kafka features supported in the latest versions of clients.

If it makes sense, build applications that use Kafka producers and consumers against the new 2.4.x libraries and deploy the new versions. See Application Development for details about using the 2.4.x libraries.

Additional client application upgrade information

  • The Consumer API has changed between Kafka 0.9.0.x and 0.10.0.0.
  • In librdkafka version 0.11.0, the default value for the api.version.request configuration property changed from false to true, meaning that librdkafka makes use of the latest protocol features of the broker without this property needing to be set explicitly. Due to a bug in Kafka 0.9.0.x, this causes client-broker connections to stall for 10 seconds during connection startup on 0.9.0.x brokers. The workaround is to explicitly configure api.version.request to false on clients communicating with brokers at version 0.9.0.x or older.
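
For example, a librdkafka-based client that must talk to 0.9.0.x or older brokers can pin this in its configuration. The property names are librdkafka's; broker.version.fallback is shown as the companion setting librdkafka provides for legacy brokers:

```
# librdkafka client configuration (fragment) for brokers <= 0.9.0.x
api.version.request=false
# Tell librdkafka which legacy broker version to assume so it selects
# compatible protocol features:
broker.version.fallback=0.9.0.1
```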