Upgrade Confluent Platform¶
Note
If you are a Confluent Platform subscriber, and you have questions about upgrades or need help, contact us through our Support Portal.
Important
The Confluent Platform package includes Confluent Server by default and requires a confluent.license key in your server.properties file. Starting with Confluent Platform 5.4.x, the Confluent Server broker checks for a license during start-up. You must supply a license string in each broker's properties file using the confluent.license property, as shown below:

confluent.license=LICENCE_STRING_HERE_NO_QUOTES
If you want to use the Kafka broker instead, download the confluent-community package. The Kafka broker is the default in all Debian, RHEL, and CentOS packages.
Preparation¶
Consider the guidelines below when preparing to upgrade.
Always back up all configuration files before upgrading. This includes, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry.

Read the documentation and draft an upgrade plan that matches your specific requirements and environment before starting the upgrade process. In other words, don't start working through this guide on a live cluster. Read the guide entirely, make a plan, then execute the plan.
Give careful consideration to the order in which components are upgraded. Starting with version 0.10.2, Java clients (producer and consumer) can communicate with older brokers: version 0.10.2 clients can talk to version 0.10.0 or newer brokers. However, if your brokers are older than 0.10.0, you must upgrade all the brokers in the Apache Kafka® cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer clients. Before 0.10.2, Kafka is backward compatible only in one direction, which means that clients from Kafka 0.8.x releases (Confluent Platform 1.0.x) will work with brokers from Kafka releases 0.8.x through 2.5 (Confluent Platform 2.0.x through 5.5.x), but not vice versa. You must therefore always plan upgrades so that all brokers are upgraded before clients. Clients include any application that uses a Kafka producer or consumer, the command line tools, Camus, Schema Registry, REST Proxy, Kafka Connect, and Kafka Streams.
Important
Due to a bug introduced in Kafka 0.9.0.0 (Confluent Platform 2.0.0), clients that depend on ZooKeeper (old Scala high-level Consumer and MirrorMaker if used with the old consumer) will not work with 0.10.x.x or newer brokers. Therefore, Kafka 0.9.0.0 (Confluent Platform 2.0.0) clients should be upgraded to 0.9.0.1 (Confluent Platform 2.0.1) before brokers are upgraded to 0.10.x.x or newer. This step is not necessary for 0.8.x or 0.9.0.1 clients.
Important
Although not recommended, some deployments have clients co-located with brokers (on the same node). In these cases, both the broker and clients share the same packages. This is problematic because all brokers must be upgraded before clients are upgraded. Pay careful attention to this when upgrading.
Kafka 2.0.0 contains changes with potential compatibility impact and deprecations with respect to previous major versions (i.e. 0.8.x.x, 0.9.x.x, 0.10.x.x, 0.11.x.x and 1.0.x). Refer to the Apache Kafka documentation to understand how they affect applications using Kafka.
Important
Java 7 is no longer supported. Java 8 is now the minimum required version. For complete compatibility information, see Supported Versions and Interoperability.
If you are using Confluent LDAP Authorizer, you must migrate to the commercial confluent-server package when upgrading to 5.5.0. See Migrate to Confluent Server for details.

Read the Confluent Platform 5.5.15 Release Notes. They contain important information about noteworthy features and changes to configurations that may impact your upgrade.
Upgrade procedures¶
- Consider using Confluent Control Center to monitor broker status during the rolling restart.
- Determine and install the appropriate Java version. See Supported Java Versions to determine which versions are supported for the Confluent Platform version to which you are upgrading.
- Determine if clients are co-located with brokers. If they are, ensure that no client processes are upgraded until all Kafka brokers have been upgraded.
- Decide on performing a rolling upgrade or a downtime upgrade. Confluent Platform supports both rolling upgrades (upgrade one broker at a time to avoid cluster downtime) and downtime upgrades (take down the entire cluster, upgrade it, and bring everything back up).
- Upgrade all Kafka brokers (see Upgrade Kafka brokers).
- Upgrade Schema Registry, REST Proxy, and Camus (see Upgrade Camus).
- If it makes sense, build applications that use Kafka producers and consumers against the new 5.5.x libraries and deploy the new versions. See Application Development for more details about using the 5.5.x libraries.
Upgrade Kafka brokers¶
In a rolling upgrade scenario, upgrade one Kafka broker at a time, taking into consideration the recommendations for doing rolling restarts to avoid downtime for end users.
In a downtime upgrade scenario, take the entire cluster down, upgrade each Kafka broker, then start the cluster.
Steps to upgrade for any fix pack release¶
You can perform a rolling upgrade for any fix pack release (for example, 3.1.1 to 3.1.2) by simply upgrading each broker one at a time.
To upgrade each broker:
- Stop the broker.
- Upgrade the software (see below for your packaging type).
- Start the broker.
Steps for upgrading previous versions to 5.5.x¶
In a rolling upgrade scenario, upgrading to Confluent Platform 5.5.x (Kafka 2.5.x) requires special steps because Kafka 2.5.x includes changes to the wire protocol and the inter-broker protocol.
Follow these steps for a rolling upgrade:
Update server.properties on all Kafka brokers by modifying the properties inter.broker.protocol.version and log.message.format.version to match the currently installed version:

- For Confluent Platform 2.0.x, use inter.broker.protocol.version=0.9.0 and log.message.format.version=0.9.0
- For Confluent Platform 3.0.x, use inter.broker.protocol.version=0.10.0 and log.message.format.version=0.10.0
- For Confluent Platform 3.1.x, use inter.broker.protocol.version=0.10.1 and log.message.format.version=0.10.1
- For Confluent Platform 3.2.x, use inter.broker.protocol.version=0.10.2 and log.message.format.version=0.10.2
- For Confluent Platform 3.3.x, use inter.broker.protocol.version=0.11.0 and log.message.format.version=0.11.0
- For Confluent Platform 4.0.x, use inter.broker.protocol.version=1.0 and log.message.format.version=1.0
- For Confluent Platform 4.1.x, use inter.broker.protocol.version=1.1 and log.message.format.version=1.1
- For Confluent Platform 5.0.x, use inter.broker.protocol.version=2.0 and log.message.format.version=2.0
- For Confluent Platform 5.1.x, use inter.broker.protocol.version=2.1 and log.message.format.version=2.1
- For Confluent Platform 5.2.x, use inter.broker.protocol.version=2.2 and log.message.format.version=2.2
- For Confluent Platform 5.3.x, use inter.broker.protocol.version=2.3 and log.message.format.version=2.3
- For Confluent Platform 5.4.x, use inter.broker.protocol.version=2.4 and log.message.format.version=2.4

If you use Confluent Server, you must provide a valid Confluent license key. For example: confluent.license=123.
If upgrading from Confluent Platform version 5.4.x or earlier, set max.message.bytes=1048588 in the server.properties file. The default value of max.message.bytes changed in Confluent Platform 5.5.0.

Upgrade each Kafka broker, one at a time.
After all Kafka brokers have been upgraded, make the following update in server.properties: inter.broker.protocol.version=2.5. Then restart each Kafka broker, one at a time, to apply the configuration change.
If upgrading from Confluent Platform 3.2.x or older, take the following additional steps. Because the message format is the same in Confluent Platform 3.3.x through 5.5.x, this step is optional if the upgrade is from Confluent Platform 3.3.x or newer.

- Once all (or most) consumers have been upgraded to 5.5.x, set log.message.format.version=2.5 on each broker.
- Restart each Kafka broker, one at a time.
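The configuration edits in the steps above can be sketched in shell as below. This is a sketch under assumptions: the set_prop helper is a hypothetical convenience and not part of Confluent Platform, and the versions shown correspond to an example upgrade from Confluent Platform 5.4.x (Kafka 2.4). On a real broker you would run this against /etc/kafka/server.properties and restart the broker between the pin and the bump.

```shell
#!/usr/bin/env bash
# Sketch: pin inter.broker.protocol.version and log.message.format.version
# before rolling the brokers, then bump the protocol once all brokers run the
# new binaries. set_prop is a hypothetical helper, not a Confluent tool.
set -euo pipefail

set_prop() {  # set_prop <file> <key> <value>: replace or append key=value
  local file=$1 key=$2 value=$3
  if grep -q "^${key}=" "$file"; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$file"
  else
    echo "${key}=${value}" >> "$file"
  fi
}

# Demonstrated on a scratch copy; on a real broker, point PROPS at
# /etc/kafka/server.properties instead.
PROPS=$(mktemp)
printf 'broker.id=0\n' > "$PROPS"

# Step 1: pin both properties to the currently installed version (5.4.x here).
set_prop "$PROPS" inter.broker.protocol.version 2.4
set_prop "$PROPS" log.message.format.version 2.4

# Step 4 (after every broker runs the new binaries): bump the protocol.
set_prop "$PROPS" inter.broker.protocol.version 2.5
cat "$PROPS"
```

The helper replaces an existing key in place rather than appending a duplicate, so it is safe to re-run on the same file.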
Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly-once semantics), the newer Java clients must be used.
Confluent license¶
Add the confluent.license configuration parameter to server.properties. Confluent Platform 5.4.x and later requires confluent.license to start. For more information, see confluent-server.
Security¶
Starting with 5.4.x, the new authorizer class kafka.security.authorizer.AclAuthorizer replaces kafka.security.auth.SimpleAclAuthorizer. You must manually change existing instances of kafka.security.auth.SimpleAclAuthorizer to kafka.security.authorizer.AclAuthorizer in the server.properties file.

For more information, see ACL concepts.
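The class-name change can be applied with a one-line sed, sketched below on a scratch copy; on a real broker you would run the same sed command against /etc/kafka/server.properties after backing it up.

```shell
# Sketch: swap the old SimpleAclAuthorizer class name for AclAuthorizer.
# Demonstrated on a temporary file so it can be run safely anywhere.
set -euo pipefail

FILE=$(mktemp)
echo 'authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer' > "$FILE"

sed -i 's|kafka.security.auth.SimpleAclAuthorizer|kafka.security.authorizer.AclAuthorizer|g' "$FILE"

cat "$FILE"
# prints: authorizer.class.name=kafka.security.authorizer.AclAuthorizer
```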
Upgrade DEB packages using APT¶
Back up all configuration files from /etc, including, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry.

Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to Steps for upgrading previous versions to 5.5.x).

# The example below removes the Kafka package (for Scala 2.12)
sudo kafka-server-stop
sudo apt-get remove confluent-kafka-2.12

# To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
sudo apt-get autoremove confluent-platform-2.12
Remove the older GPG key and import the updated key. If you have already imported the updated 8b1da6120c2bf624 key, you can skip this step. However, if you still have the old 670540c841468433 key installed, now is the time to remove it and import the 8b1da6120c2bf624 key:

sudo apt-key del 41468433
wget -qO - https://packages.confluent.io/deb/5.5/archive.key | sudo apt-key add -
Remove the repository files of the previous version.

sudo add-apt-repository -r "deb https://packages.confluent.io/deb/<currently installed version> stable main"
Add the 5.5 repository to /etc/apt/sources.list.

sudo add-apt-repository "deb https://packages.confluent.io/deb/5.5 stable main"
Refresh repository metadata.
sudo apt-get update
If you modified the configuration files, apt will prompt you to resolve the conflicts. Be sure to keep your original configuration. Install the new version:

sudo apt-get install confluent-platform-2.12

# Or install the packages you need one by one. For example, to install only Kafka:
sudo apt-get install confluent-kafka-2.12
Note
The installation package names end with the Scala version that Kafka is built on. For example, the confluent-platform-2.12 package is for Confluent Platform 5.5.15 and is based on Scala 2.12.

The Zip and Tar packages contain the Confluent Platform version followed by the Scala version. For example, a Zip package, confluent-5.5.15-2.12.zip, denotes Confluent Platform version 5.5.15 and Scala version 2.12.

Tip
You can view all available Confluent Platform builds with this command:
apt-cache show confluent-platform-2.12
You can install specific Confluent Platform builds by appending the version (<version>) to the install command:

sudo apt-get install confluent-platform-2.12-<version>
Start Confluent Platform components.
kafka-server-start -daemon /etc/kafka/server.properties
Upgrade RPM packages using YUM¶
Back up all configuration files from /etc, including, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry.

Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to Steps for upgrading previous versions to 5.5.x).

# The example below removes the Kafka package (for Scala 2.12)
sudo kafka-server-stop
sudo yum remove confluent-kafka-2.12

# To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
sudo yum autoremove confluent-platform-2.12
Remove the repository files of the previous version.
sudo rm /etc/yum.repos.d/confluent.repo
Remove the older GPG key. You can skip this step if you have already removed Confluent's older (670540c841468433) GPG key. Confluent's newer (8b1da6120c2bf624) key appears in the RPM database as gpg-pubkey-0c2bf624-60904208.

sudo rpm -e gpg-pubkey-41468433-54d512a8
sudo rpm --import https://packages.confluent.io/rpm/5.5/archive.key
Add the repository to your /etc/yum.repos.d/ directory in a file named confluent-5.5.repo.

[confluent-5.5]
name=Confluent repository for 5.5.x packages
baseurl=https://packages.confluent.io/rpm/5.5
gpgcheck=1
gpgkey=https://packages.confluent.io/rpm/5.5/archive.key
enabled=1
Refresh repository metadata.
sudo yum clean all
Install the new version. Note that yum may override your existing configuration files, so you will need to restore them from the backup after installing the packages.

sudo yum install confluent-platform-2.12

# Or install the packages you need one by one. For example, to install only Kafka:
sudo yum install confluent-kafka-2.12
Note
The installation package names end with the Scala version that Kafka is built on. For example, the confluent-platform-2.12 package is for Confluent Platform 5.5.15 and is based on Scala 2.12.

The Zip and Tar packages contain the Confluent Platform version followed by the Scala version. For example, a Zip package, confluent-5.5.15-2.12.zip, denotes Confluent Platform version 5.5.15 and Scala version 2.12.

Start services.

kafka-server-start -daemon /etc/kafka/server.properties
Upgrade using TAR or ZIP archives¶
For ZIP and TAR archives, you can delete the old archive directory after the new archive directory has been created and any previous configuration files have been copied into it, as described in the following steps.
Return to the directory where you installed Confluent Platform.
Back up all configuration files from ./etc, including, for example, ./etc/kafka, ./etc/kafka-rest, ./etc/schema-registry, and ./etc/confluent-control-center.

Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to Steps for upgrading previous versions to 5.5.x).

./bin/control-center-stop
./bin/kafka-rest-stop
./bin/schema-registry-stop
./bin/kafka-server-stop
./bin/zookeeper-server-stop

# To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
cd ..
rm -R confluent-3.3.1   # use the installed version number
Unpack the new archive.

tar xzf confluent-5.5.15-2.12.tar.gz

# Or for ZIP archives:
unzip confluent-5.5.15-2.12.zip
Note
The installation package names end with the Scala version that Kafka is built on. For example, the confluent-platform-2.12 package is for Confluent Platform 5.5.15 and is based on Scala 2.12.

The Zip and Tar packages contain the Confluent Platform version followed by the Scala version. For example, a Zip package, confluent-5.5.15-2.12.zip, denotes Confluent Platform version 5.5.15 and Scala version 2.12.

Start services.

sudo confluent-5.5.15/bin/zookeeper-server-start -daemon /etc/kafka/zookeeper.properties
sudo confluent-5.5.15/bin/kafka-server-start -daemon /etc/kafka/server.properties
Upgrade ZooKeeper¶
ZooKeeper has been upgraded to 3.5.x in Confluent Platform 5.5.
Consider the following guidelines when preparing for the upgrade:
- Back up all configuration files before upgrading.
- Back up ZooKeeper data from the leader. A backup lets you restore the latest committed state in case of a failure.
- Read through the documentation and draft an upgrade plan that matches your specific requirements and environment before starting the upgrade process.
Rolling upgrade of ZooKeeper¶
Perform the following steps to gather information needed for a rolling upgrade:
To find which server is the leader, run the following command:
echo mntr | nc localhost 2181 | grep zk_server_state
Verify that there is only one leader in the entire ZooKeeper ensemble.
To find how many nodes are in sync with the leader, run the following command:
echo mntr | nc localhost 2181 | grep zk_synced_followers
Verify that all the followers are in sync with the leader. The following commands should return the same results:

echo mntr | nc localhost 2181 | grep zk_synced_followers
echo mntr | nc localhost 2181 | grep zk_followers
For each ZooKeeper server, repeat the following steps. The leader ZooKeeper server should be upgraded last:
Stop the ZooKeeper process gracefully.
Upgrade the ZooKeeper binary.
Start the ZooKeeper process.
Wait until all the followers are in sync with the leader. The following commands should return the same results when all the followers are in sync with the leader:

echo mntr | nc localhost 2181 | grep zk_synced_followers
echo mntr | nc localhost 2181 | grep zk_followers
If there is an issue during an upgrade, you can roll back using the same steps.
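The wait-for-sync step above can be sketched as a small polling helper. This is a sketch under assumptions: the leader host and port in the commented call are placeholders for your ensemble, and the helper only parses the mntr fields shown in the commands above.

```shell
#!/usr/bin/env bash
# Sketch: after restarting each ZooKeeper server, poll the leader's mntr output
# until zk_synced_followers matches zk_followers before moving to the next one.
set -euo pipefail

mntr_val() {  # mntr_val <mntr-output> <key>: extract one value from mntr output
  awk -v k="$2" '$1 == k { print $2 }' <<< "$1"
}

wait_for_sync() {  # wait_for_sync <leader-host> <port>
  local out synced total
  while :; do
    out=$(echo mntr | nc "$1" "$2")
    synced=$(mntr_val "$out" zk_synced_followers)
    total=$(mntr_val "$out" zk_followers)
    [ -n "$synced" ] && [ "$synced" = "$total" ] && return 0
    sleep 5
  done
}

# Example call (placeholder host for your leader):
# wait_for_sync zk-leader.example.com 2181
```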
The AdminServer¶
An embedded Jetty-based AdminServer was added in ZooKeeper 3.5.
The AdminServer is disabled by default in the ZooKeeper distributed as part of Confluent Platform. To enable the AdminServer, set admin.enableServer=true in your local zookeeper.properties file.
The AdminServer is enabled by default (on port 8080) in ZooKeeper provided by the Apache Kafka® distribution. To configure the AdminServer, see the AdminServer configuration.
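A minimal sketch of enabling the AdminServer in zookeeper.properties; the admin.serverPort line is optional and shown here only as an illustration (8080 is the default):

```properties
admin.enableServer=true
# Optional; 8080 is the default port.
admin.serverPort=8080
```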
Four letter words whitelist in ZooKeeper¶
Starting in ZooKeeper 3.5.3, the Four Letter Words commands must be explicitly whitelisted in the zookeeper.4lw.commands.whitelist setting for the ZooKeeper server to enable them. By default the whitelist only contains the srvr command, which zkServer.sh uses. The rest of the Four Letter Words commands are disabled by default.

An example that whitelists the stat, ruok, conf, and isro commands while disabling the rest of the Four Letter Words commands:
4lw.commands.whitelist=stat, ruok, conf, isro
An example to whitelist all Four Letter Words commands:
4lw.commands.whitelist=*
When running ZooKeeper in a Docker container, pass the whitelist as a Java system property through the KAFKA_OPTS environment variable, using -e KAFKA_OPTS='-Dzookeeper.4lw.commands.whitelist=*' in the docker run command.
For example:
docker run -d \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
-e ZOOKEEPER_TICK_TIME=2000 \
-e ZOOKEEPER_SYNC_LIMIT=2 \
-e KAFKA_OPTS='-Dzookeeper.4lw.commands.whitelist=*' \
confluentinc/cp-zookeeper:5.5.15
See The Four Letter Words for more information.
Upgrade issue with missing snapshot file¶
The ZooKeeper upgrade from 3.4.X to 3.5.X can fail with the following error if there are no snapshot files in the 3.4 data directory.
ERROR Unable to load database on disk (org.apache.zookeeper.server.quorum.QuorumPeer)
java.io.IOException: No snapshot found, but there are log entries. Something is broken!
    at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:222)
    at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:240)
    at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:919)
    at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:905)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:205)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:123)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
This usually happens in test upgrades where ZooKeeper 3.5.X is trying to load an existing 3.4 data directory in which no snapshot file has been created. For more details about this issue, refer to ZOOKEEPER-3056.
The recommended workaround is:
- Take a backup of the current ZooKeeper data directory.
- Look for snapshot files (file names starting with snapshot) in the ZooKeeper data directory.
  - If there are snapshot files, you can safely upgrade.
  - If there is no snapshot file, download the empty snapshot file and place it in the data directory. The empty snapshot file is available as an attachment in https://issues.apache.org/jira/browse/ZOOKEEPER-3056.
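The snapshot check can be sketched as below. This is a sketch under assumptions: it relies on ZooKeeper's default on-disk layout (dataDir/version-2/snapshot.*), and it is demonstrated on a scratch directory; on a real server, pass the dataDir from your zookeeper.properties.

```shell
#!/usr/bin/env bash
# Sketch: verify that at least one snapshot file exists before upgrading to
# ZooKeeper 3.5.x, to avoid the ZOOKEEPER-3056 failure described above.
set -euo pipefail

has_snapshot() {  # has_snapshot <dataDir>
  ls "$1"/version-2/snapshot.* >/dev/null 2>&1
}

# Demonstrated on a scratch directory; use your real dataDir instead.
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/version-2"

if has_snapshot "$DATA_DIR"; then
  echo "snapshot found: safe to upgrade"
else
  echo "no snapshot file: apply the ZOOKEEPER-3056 workaround first"
fi
```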
For more details about the workaround, refer to the ZooKeeper Upgrade FAQ.
Upgrade Schema Registry¶
You can upgrade Schema Registry after all Kafka brokers have been upgraded.
To upgrade Schema Registry, follow the same steps above to upgrade the package (back up configuration files, remove packages, install upgraded packages, etc.). Then restart Schema Registry.
Tip
If you have a multi-node Schema Registry cluster running Confluent Platform 4.1.1 or earlier, do not perform a rolling upgrade to 5.2.x directly. Doing so might generate errors caused by an intermediate state: a mixed live cluster of legacy and newer Schema Registry nodes, which requires feature parity for managing schema IDs. Instead, do one of the following:
- If you want to use a rolling upgrade, first upgrade to version 4.1.2 or 4.1.3, then perform a rolling upgrade to 5.2.x or newer version.
- Or, stop the whole 4.1.1 cluster and upgrade all Schema Registry nodes at the same time to 5.2.x or newer version.
Upgrade Confluent REST Proxy¶
You can upgrade the Confluent REST Proxy service after all Kafka brokers have been upgraded.
To upgrade the REST Proxy service, follow the same steps above to upgrade the package (back up configuration files, remove packages, install upgraded packages, etc.). Then restart the Confluent REST Proxy service.
Upgrade Kafka Streams applications¶
You can upgrade Kafka Streams applications independently, without requiring Kafka brokers to be upgraded first.
Follow the instructions in the Kafka Streams Upgrade Guide to upgrade your applications to use the latest version of Kafka Streams.
Upgrade Kafka Connect¶
You can upgrade Kafka Connect in either standalone or distributed mode.
Upgrade Kafka Connect standalone mode¶
You can upgrade Kafka Connect in standalone mode after all Kafka brokers have been upgraded.
To upgrade Kafka Connect, follow the same steps above to upgrade the package (back up config files, remove packages, install upgraded packages, etc.). Then, restart the client processes.
Upgrade Kafka Connect distributed mode¶
A new required configuration, status.storage.topic, was added to Kafka Connect in 0.10.0.1. To upgrade a Kafka Connect cluster, add this configuration before updating to the new version. The setting will be ignored by older versions of Kafka Connect.
- Back up worker configuration files.
- Modify your configuration file to add the status.storage.topic setting. You can safely modify the configuration file while the worker is running. Note that you should create this topic manually. See Distributed Mode Configuration in the Connect User Guide for a detailed explanation.
- Perform a rolling restart of the workers.
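Creating the status topic manually can be sketched as below. The topic name, partition count, replication factor, and bootstrap server are all examples rather than required values; the important design choice is that the status topic is compacted. The command is printed rather than executed so it can be reviewed first; remove the echo to run it against a live cluster.

```shell
#!/usr/bin/env bash
# Sketch: build the kafka-topics command for a compacted Connect status topic.
# All names and sizing below are example values for illustration only.
set -euo pipefail

cmd=(kafka-topics --create
     --bootstrap-server localhost:9092
     --topic connect-status
     --partitions 5
     --replication-factor 3
     --config cleanup.policy=compact)

# Printed for review; on a real cluster, run "${cmd[@]}" instead.
echo "${cmd[@]}"
```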
Upgrade ksqlDB¶
To upgrade from KSQL 5.4 and earlier to ksqlDB 5.5, follow the steps in Upgrading ksqlDB.
Upgrade Camus¶
Camus was deprecated in Confluent Platform 3.0.0 and was removed in Confluent Platform 5.0.0.
Upgrade Confluent Control Center¶
Follow the instructions in the Confluent Control Center Upgrade Guide.
Upgrade other client applications¶
Review Cross-Component Compatibility before you upgrade your client applications.
Version 0.10.2 or newer Java clients (producer and consumer) work with version 0.10.0 or newer Kafka brokers.
If your brokers are older than 0.10.0, you must upgrade all the brokers in the Apache Kafka® cluster before upgrading your Java clients.
Version 0.10.2 brokers support version 0.8.x and newer Java clients.
Confluent’s C/C++, Python, Go and .NET clients support all released Kafka broker versions, but not all features may be available on all broker versions since some features rely on newer broker functionality. See Kafka Clients for the list of Kafka features supported in the latest versions of clients.
If it makes sense, build applications that use Kafka producers and consumers against the new 2.5.x libraries and deploy the new versions. See Application Development for details about using the 2.5.x libraries.
Additional client application upgrade information¶
- The Consumer API has changed between Kafka 0.9.0.x and 0.10.0.0.
- In librdkafka version 0.11.0, the default value for the api.version.request configuration property changed from false to true, meaning that librdkafka makes use of the latest protocol features of the broker without the need to set this property to true explicitly. Due to a bug in Kafka 0.9.0.x, this causes client-broker connections to stall for 10 seconds during connection startup on 0.9.0.x brokers. The workaround is to explicitly configure api.version.request to false on clients communicating with 0.9.0.x or older brokers.
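A sketch of the workaround as librdkafka client configuration, only for clients talking to 0.9.0.x or older brokers. The broker.version.fallback line is an assumption shown for illustration: it tells librdkafka which protocol feature set to assume once API version requests are disabled, and its value should match your broker release.

```
api.version.request=false
# Assumed companion setting; adjust the value to your broker version.
broker.version.fallback=0.9.0.1
```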