Upgrade¶
Note
If you are a Confluent Platform subscriber, and you have questions about upgrades (or need help), please feel free to contact us through our Support Portal.
Preparation¶
Consider the following guidelines when preparing for the upgrade.
- Always back up all configuration files before upgrading. This includes, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry. A minimal backup sketch appears after this list.
- Read through the documentation and draft an upgrade plan that matches your specific requirements and environment before starting the upgrade process. Put differently, don’t start working through this guide on a live cluster. Read the guide entirely, make a plan, then execute the plan.
- Give careful consideration to the order in which components are upgraded. Starting with version 0.10.2, Java clients (producer and consumer) have acquired the ability to communicate with older brokers: version 0.10.2 clients can talk to version 0.10.0 or newer brokers. However, if your brokers are older than 0.10.0, you must upgrade all the brokers in the Apache Kafka® cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer clients. Before 0.10.2, Kafka is backward compatible, which means that clients from Kafka 0.8.x releases (Confluent Platform 1.0.x) will work with brokers from Kafka releases 0.8.x through 2.1 (Confluent Platform 2.0.x through 5.1.x), but not vice versa. This means you always need to plan upgrades such that all brokers are upgraded before clients. Clients include any application that uses Kafka producers or consumers, command line tools, Camus, Schema Registry, REST Proxy, Kafka Connect, and Kafka Streams.
- IMPORTANT: Due to a bug introduced in Kafka 0.9.0.0 (Confluent Platform 2.0.0), clients that depend on ZooKeeper (old Scala high-level Consumer and MirrorMaker if used with the old consumer) will not work with 0.10.x.x or newer brokers. Therefore, Kafka 0.9.0.0 (Confluent Platform 2.0.0) clients should be upgraded to 0.9.0.1 (Confluent Platform 2.0.1) before brokers are upgraded to 0.10.x.x or newer. This step is not necessary for 0.8.x or 0.9.0.1 clients.
- IMPORTANT: Although not recommended, some deployments have clients co-located with brokers (on the same node). In these cases, both the broker and clients share the same packages. This is problematic because all brokers must be upgraded before clients are upgraded. Pay careful attention to this when upgrading.
- Kafka 2.0.0 contains changes with potential compatibility impact and deprecations with respect to previous major versions (i.e. 0.8.x.x, 0.9.x.x, 0.10.x.x, 0.11.x.x and 1.0.x). Refer to the Apache Kafka documentation to understand how they affect applications using Kafka.
- IMPORTANT: Support for Java 7 has been dropped; Java 8 is now the minimum required version. For complete compatibility information, see Supported Versions and Interoperability.
- Read the Release Notes. They contain not only information about noteworthy features, but also changes to configurations that may impact your upgrade.
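A minimal backup sketch, assuming the default configuration directories listed above and an example backup location (adjust both to your environment):

# Copy the configuration directories to a dated backup location before touching any packages
BACKUP_DIR=/tmp/confluent-config-backup-$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"
cp -r /etc/kafka /etc/kafka-rest /etc/schema-registry "$BACKUP_DIR"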
Step-by-step Guide¶
- Consider using Confluent Control Center to monitor broker status during the rolling restart.
- Determine and install the appropriate Java version. See Supported Java Versions to determine which versions are supported for the Confluent Platform version you are upgrading to.
- Determine if clients are co-located with brokers. If so, ensure that no client processes are upgraded until all Kafka brokers have been upgraded.
- Decide on performing a rolling upgrade or a downtime upgrade. Confluent Platform supports both rolling upgrades (upgrade one broker at a time to avoid cluster downtime) and downtime upgrades (take down the entire cluster, upgrade it, and bring everything back up).
- Upgrade all Kafka brokers (more below).
- Upgrade Schema Registry, REST Proxy and Camus (more below).
- If it makes sense, build applications that use Kafka producers and consumers against the new 5.1.x libraries and deploy the new versions. See Application Development documentation for more details on using the 5.1.x libraries.
Upgrade All Kafka Brokers¶
In a rolling upgrade scenario, upgrade one Kafka broker at a time, taking into consideration the recommendations for doing rolling restarts to avoid downtime for end users. In a downtime upgrade scenario, take the entire cluster down, upgrade each Kafka broker, then start the cluster.
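When doing a rolling restart, it helps to confirm the cluster is healthy before moving on to the next broker. One hedged way to check, assuming ZooKeeper is reachable at localhost:2181, is to look for under-replicated partitions and only continue once the list is empty:

# An empty result suggests the restarted broker has caught up and the next broker can be upgraded
kafka-topics --describe --under-replicated-partitions --zookeeper localhost:2181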
Steps to upgrade for any fix pack release (e.g. 3.1.1 to 3.1.2): Any fix pack release can be applied as a rolling upgrade by simply upgrading each broker one at a time; a minimal sketch follows the list below. For each broker:
- Stop the broker.
- Upgrade the software (see below for your packaging type).
- Start the broker.
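For example, on an rpm-based installation the three steps above might look roughly like the following for a single broker (the package name and configuration path are the defaults and may differ in your environment):

sudo kafka-server-stop                                          # stop the broker
sudo yum update confluent-kafka-2.11                            # install the fix pack
sudo kafka-server-start -daemon /etc/kafka/server.properties    # start the broker again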
Steps for upgrading previous versions to 5.1.x: In a rolling upgrade scenario, upgrading to Confluent Platform 5.1.x (Kafka 2.1.x) requires special steps, because Kafka 2.1.x includes a change to the on-disk data format (unless the upgrade is from 3.3.x or newer) and to the inter-broker protocol. Follow the steps below for a rolling upgrade:
- Update server.properties on all Kafka brokers by modifying the properties inter.broker.protocol.version and log.message.format.version to match the currently installed version (a hedged sketch of these edits appears after this list):
  - For Confluent Platform 2.0.x, use inter.broker.protocol.version=0.9.0 and log.message.format.version=0.9.0
  - For Confluent Platform 3.0.x, use inter.broker.protocol.version=0.10.0 and log.message.format.version=0.10.0
  - For Confluent Platform 3.1.x, use inter.broker.protocol.version=0.10.1 and log.message.format.version=0.10.1
  - For Confluent Platform 3.2.x, use inter.broker.protocol.version=0.10.2 and log.message.format.version=0.10.2
  - For Confluent Platform 3.3.x, use inter.broker.protocol.version=0.11.0 and log.message.format.version=0.11.0
  - For Confluent Platform 4.0.x, use inter.broker.protocol.version=1.0 and log.message.format.version=1.0
  - For Confluent Platform 4.1.x, use inter.broker.protocol.version=1.1 and log.message.format.version=1.1
  - For Confluent Platform 5.0.x, use inter.broker.protocol.version=2.0 and log.message.format.version=2.0
- Upgrade each Kafka broker, one at a time.
- Once all Kafka brokers have been upgraded, modify server.properties again by changing the following property: inter.broker.protocol.version=2.1.
- Restart each Kafka broker, one at a time, to apply the configuration change.
- Once most clients are using 5.1.x, modify server.properties by changing the following property: log.message.format.version=2.1. Since the message format is the same in Confluent Platform 3.3.x through 5.1.x, this step is optional if the upgrade is from Confluent Platform 3.3.x or newer.
- If you have overridden the message format as instructed above, you need to do one more rolling restart. Restart each Kafka broker, one at a time, to apply the configuration change.
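As a hedged illustration of the property changes above, suppose the upgrade starts from Confluent Platform 4.1.x and the broker configuration lives in /etc/kafka/server.properties (both are assumptions); the edits during the rolling upgrade could then look roughly like this:

# Before upgrading any broker: pin both versions to the currently installed release
# (4.1.x maps to 1.1; this assumes the properties are not already set in the file)
echo 'inter.broker.protocol.version=1.1' | sudo tee -a /etc/kafka/server.properties
echo 'log.message.format.version=1.1'    | sudo tee -a /etc/kafka/server.properties

# After every broker runs 5.1.x: bump the inter-broker protocol, then perform a rolling restart
sudo sed -i 's/^inter.broker.protocol.version=.*/inter.broker.protocol.version=2.1/' /etc/kafka/server.properties

# After most clients are on 5.1.x: bump the message format, then perform one more rolling restart
sudo sed -i 's/^log.message.format.version=.*/log.message.format.version=2.1/' /etc/kafka/server.properties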
Instructions for both deb packages and rpm packages are below. For ZIP and TAR archives, the old archives directory can be simply deleted after the new archive folder has been created and any old configuration files copied over.
deb packages via apt
Back up all configuration files from /etc, including, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry.
Stop the services and remove the existing packages and their dependencies. As mentioned above, this can be done one server at a time for a rolling upgrade.
# The example below removes the Kafka package (for Scala 2.11)
sudo kafka-server-stop
sudo apt-get remove confluent-kafka-2.11

# To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
sudo apt-get autoremove confluent-platform-2.11
Remove the older GPG key and import the updated key. If you have already imported the updated 8b1da6120c2bf624 key, then you can skip this step. However, if you still have the old 670540c841468433 key installed, now is the time to remove it and import the 8b1da6120c2bf624 key:
sudo apt-key del 41468433
wget -qO - https://packages.confluent.io/deb/5.1/archive.key | sudo apt-key add -
Remove the repository files of the previous version, replacing <previous version> with the Confluent Platform repository version you previously added (for example, 5.0)
sudo add-apt-repository -r "deb https://packages.confluent.io/deb/<previous version> stable main"
Add the 5.1 repository to /etc/apt/sources.list
sudo add-apt-repository "deb https://packages.confluent.io/deb/5.1 stable main"
Refresh repository metadata
sudo apt-get update
Install the new version. Note that if you modified the configuration files, apt will prompt you to resolve the conflicts; choose to keep your original configuration.
sudo apt-get install confluent-platform-2.11

# Or install the packages you need one by one. For example, to install just Kafka:
sudo apt-get install confluent-kafka-2.11
Start Confluent Platform components.
kafka-server-start -daemon /etc/kafka/server.properties
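As a quick sanity check (not part of the official procedure), you can confirm which Confluent package versions are now installed:

dpkg -l | grep confluent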
rpm packages via yum
Back up all configuration files from /etc, including, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry.
Stop the services and remove the existing packages and their dependencies. As mentioned above, this can be done one server at a time for a rolling upgrade.
# The example below removes the Kafka package (for Scala 2.11)
sudo kafka-server-stop
sudo yum remove confluent-kafka-2.11

# To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
sudo yum autoremove confluent-platform-2.11
Remove the repository files of the previous version
sudo rm /etc/yum.repos.d/confluent.repo
Remove the older GPG key. This step is only needed if you still have Confluent’s older (670540c841468433) GPG key installed. Confluent’s newer (8b1da6120c2bf624) key appears in the RPM database as gpg-pubkey-0c2bf624-60904208
sudo rpm -e gpg-pubkey-41468433-54d512a8
sudo rpm --import https://packages.confluent.io/rpm/5.1/archive.key
Add the repository to your /etc/yum.repos.d/ directory in a file named confluent-5.1.repo.
[confluent-5.1]
name=Confluent repository for 5.1.x packages
baseurl=https://packages.confluent.io/rpm/5.1
gpgcheck=1
gpgkey=https://packages.confluent.io/rpm/5.1/archive.key
enabled=1
Refresh repository metadata
sudo yum clean all
Install the new version. Note that yum may override your existing configuration files, so you will need to restore them from backup after installing the packages:
sudo yum install confluent-platform-2.11

# Or install the packages you need one by one. For example, to install just Kafka:
sudo yum install confluent-kafka-2.11
Start services.
kafka-server-start -daemon /etc/kafka/server.properties
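Similarly, as a quick sanity check you can confirm the installed Confluent package versions with rpm:

rpm -qa | grep confluent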
TAR or ZIP archives
Go to the directory where you installed Confluent Platform.
Back up all configuration files from ./etc, including, for example, ./etc/kafka, ./etc/kafka-rest, ./etc/schema-registry, and ./etc/confluent-control-center.
Stop the services and remove the existing installation. As mentioned above, this can be done one server at a time for a rolling upgrade.
./bin/control-center-stop
./bin/kafka-rest-stop
./bin/schema-registry-stop
./bin/kafka-server-stop
./bin/zookeeper-server-stop

# To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
cd ..
rm -R confluent-3.3.1   # use the installed version number
Unpack the new archive, then restore your backed-up configuration files (or copy over your old configuration files) into the new installation’s ./etc directories:
tar xzf confluent-5.1.4-2.11.tar.gz

# Or for ZIP archives:
unzip confluent-5.1.4-2.11.zip
Start services.
sudo confluent-5.1.4/bin/zookeeper-server-start -daemon /etc/kafka/zookeeper.properties
sudo confluent-5.1.4/bin/kafka-server-start -daemon /etc/kafka/server.properties
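One way to verify that the broker came back up, assuming ZooKeeper is reachable at localhost:2181, is to list topics with the bundled tool:

confluent-5.1.4/bin/kafka-topics --list --zookeeper localhost:2181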
Upgrade Schema Registry¶
Schema Registry can be upgraded once all Kafka brokers have been upgraded.
To upgrade Schema Registry, follow the same steps above to upgrade the package (backup config files, remove packages, install upgraded packages, etc.). Then, restart Schema Registry.
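A quick hedged check that the upgraded Schema Registry is serving requests, assuming the default listener on port 8081:

curl http://localhost:8081/subjects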
Upgrade Confluent REST Proxy¶
The Confluent REST Proxy service can be upgraded once all Kafka brokers have been upgraded.
To upgrade the REST Proxy service, follow the same steps above to upgrade the package (backup config files, remove packages, install upgraded packages, etc.). Then, restart the Confluent REST Proxy service.
Upgrade Kafka Streams applications¶
Kafka Streams applications can be upgraded independently and without requiring Kafka brokers to have been upgraded first.
Please follow the instructions in the Kafka Streams Upgrade Guide to upgrade your applications to use the latest version of Kafka Streams.
Upgrade Kafka Connect¶
Upgrade Kafka Connect Standalone Mode¶
Kafka Connect in standalone mode can be upgraded once all Kafka brokers have been upgraded.
To upgrade Kafka Connect, follow the same steps above to upgrade the package (backup config files, remove packages, install upgraded packages, etc.). Then, restart the client processes.
Upgrade Kafka Connect Distributed Mode¶
A new required configuration, status.storage.topic, was added to Kafka Connect in 0.10.0.1. To upgrade a Kafka Connect cluster, this configuration should be added before updating to the new version. The setting will be ignored by older versions of Kafka Connect.
- Backup worker configuration files.
- Modify your configuration file to add the status.storage.topic setting. You can safely modify the configuration file while the worker is running. Note that you should create this topic manually; see the Distributed Mode Configuration section of the Connect User Guide for a detailed explanation. A hedged sketch appears after this list.
- Perform a rolling restart of the workers.
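A hedged sketch of the manual topic creation and the worker configuration change; the topic name connect-status, the partition and replication counts, the ZooKeeper address, and the worker file name are illustrative only, so see the Distributed Mode Configuration section for the recommended settings:

# Create the status topic manually (compacted, as required for Connect internal topics)
kafka-topics --create --zookeeper localhost:2181 --topic connect-status \
  --partitions 5 --replication-factor 3 --config cleanup.policy=compact

# Then add the setting to each worker's configuration file, for example:
echo 'status.storage.topic=connect-status' >> connect-distributed.properties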
Upgrade Camus¶
Camus was deprecated in Confluent Platform 3.0.0 and has been removed in Confluent Platform 5.0.0.
Confluent Control Center¶
Control Center can be upgraded once all Kafka brokers have been upgraded.
Please follow the instructions in the Control Center Upgrade Guide.
Upgrade Other Client Applications¶
Any other client application (e.g. applications that use Kafka’s Java client, or Confluent’s C/C++, Python, Go or .NET clients) can be upgraded once all Kafka brokers have been upgraded.
As mentioned above, if it makes sense, build applications that use Kafka producers and consumers against the new 2.1.x libraries and deploy the new versions. See Application Development documentation for more details on using the 2.1.x libraries.
NOTES:
- The Consumer API has changed between Kafka 0.9.0.x and 0.10.0.0.
- In librdkafka version 0.11.0, the default value for the api.version.request configuration property changed from false to true, meaning that librdkafka will make use of the latest protocol features of the broker without the need to set this property to true explicitly. Due to a bug in Kafka 0.9.0.x, this will cause client-broker connections to stall for 10 seconds during connection startup on 0.9.0.x brokers. The workaround for this is to explicitly configure api.version.request to false on clients communicating with <=0.9.0.x brokers; a hedged example follows.
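For example, with a librdkafka-based command line client such as kafkacat, the property can be passed explicitly when talking to a 0.9.0.x broker (the broker address and topic name below are placeholders):

# Consume from a 0.9.0.x broker without the 10-second connection stall
kafkacat -b broker-0-9-host:9092 -t my-topic -C -X api.version.request=false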