Upgrade Confluent Platform¶
Use these instructions to upgrade earlier versions of Confluent Platform to the latest.
Note
If you are a Confluent Platform subscriber, and you have questions about upgrades or need help, you can contact Confluent using the Support Portal.
Preparation¶
Follow these guidelines when you prepare to upgrade.
- Form a plan.
Read the documentation and draft an upgrade plan that matches your specific requirements and environment before starting the upgrade process. In other words, don’t start working through this guide on a live cluster. Read the guide entirely, make a plan, then execute the plan.
- Perform backups.
Before upgrading, always back up all configuration and unit files with their file permissions, ownership, and customizations. Confluent Platform may not run if the proper ownership isn’t preserved on configuration files. By default, configuration files are located in the $CONFLUENT_HOME/etc directory and are organized by component. (A backup sketch follows this list.)
- Upgrade all components.
Upgrade your entire platform deployment so that all components are running the same version. Do not bridge component versions. If you are running a Confluent Platform version later than 7.4.0 and your clusters are running in KRaft mode, you may not need to upgrade ZooKeeper. See Upgrade procedures.
- Consider upgrade order.
Give careful consideration to the order in which components are upgraded. Java clients (producer and consumer) can communicate with older brokers so you should plan to upgrade all brokers before clients. Clients include any application that uses Kafka producer or consumer, command line tools, Schema Registry, REST Proxy, Kafka Connect and Kafka Streams.
- Determine if clients are colocated with brokers.
Although not recommended, some deployments have clients colocated with brokers (on the same node). In these cases, brokers and clients share the same packages. If clients are colocated, do not upgrade any client processes until all Kafka brokers have been upgraded.
- Decide between a rolling upgrade or a downtime upgrade.
Confluent Platform supports both rolling upgrades, meaning you upgrade one broker at a time to avoid cluster downtime, and downtime upgrades, meaning you take down the entire cluster, upgrade it, and bring everything back up.
- Use Confluent Control Center to monitor a rolling restart.
Consider using Confluent Control Center to monitor broker status during the rolling restart.
- Read the Release Notes for Confluent Platform 7.7.
The release notes contain important information about noteworthy features, and changes to configurations that may impact your upgrade.
- Set a license string.
The Confluent Platform package includes Confluent Server by default and requires a confluent.license key in your server.properties file. Starting with Confluent Platform 5.4.x, the Confluent Server broker checks for a license during start-up. You must supply a license string in each broker’s properties file using the confluent.license property as below:

confluent.license=LICENCE_STRING_HERE_NO_QUOTES

If you want to use the Kafka broker, download the confluent-community package. The Kafka broker is the default in all Debian, RHEL, and CentOS packages.
- Run the correct version of Java.
Determine and install the appropriate Java version. Java 8 is the minimum supported version, but it is deprecated and will be removed in a future release. See Supported Java Versions for a list of Confluent Platform versions and the corresponding Java version support before you upgrade. For complete compatibility information, see Supported Versions and Interoperability for Confluent Platform.
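As mentioned in the backup guideline above, a minimal backup sketch might look like the following (the archive names and backup location are illustrative; package-based installs keep their configuration under /etc instead of $CONFLUENT_HOME/etc):

# Archive the configuration directory of an archive-based install, preserving file permissions
tar -cvpzf /tmp/confluent-etc-backup-$(date +%F).tar.gz -C "$CONFLUENT_HOME" etc

# For DEB/RPM installs, back up the component directories under /etc instead
sudo tar -cvpzf /tmp/confluent-pkg-etc-backup-$(date +%F).tar.gz /etc/kafka /etc/kafka-rest /etc/schema-registry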
Upgrade procedures¶
Following is the recommended order of upgrades:
Upgrade each controller and broker.
- For clusters running in ZooKeeper mode, Upgrade ZooKeeper and then Upgrade all Kafka brokers.
- For clusters running in KRaft mode, upgrade each controller and then each broker. For details, see KRaft clusters.
Upgrade the rest of the Confluent Platform components as described later in this topic.
We recommend you upgrade Confluent Control Center last among the Confluent Platform components.
If it makes sense, build applications that use Kafka producers and consumers against the new 7.7.x libraries and deploy the new versions. See Schemas, Serializers, and Deserializers for Confluent Platform for more details about using the 7.7.x libraries.
Upgrade ZooKeeper¶
Important
As of Confluent Platform 7.5, ZooKeeper is deprecated for new deployments. Confluent recommends KRaft mode for new deployments. For more information, see KRaft Overview for Confluent Platform.
You may not need to upgrade ZooKeeper if you started using Confluent Platform with version 7.4 or later.
Use the following guidelines to prepare for the upgrade:
- When you upgrade from Confluent Platform 6.1.x or later to Confluent Platform 7.6, ZooKeeper is upgraded to version 3.8.3 to mitigate CVE-2023-44981 and because ZooKeeper version 3.6.3 has reached its end of life. Confluent Platform version 5.4.x and later (Kafka clusters version >= 2.4) can be updated with no special steps.
- Back up all configuration files and customizations to your configuration and unit files before upgrading.
- Back up ZooKeeper data from the leader. The backup lets you restore the latest committed state if a failure occurs during the upgrade.
- Read through the documentation and draft an upgrade plan that matches your specific requirements and environment before starting the upgrade process.
Rolling upgrade of ZooKeeper¶
Perform the following steps to gather information needed for a rolling upgrade:
To find which server is the leader, run the following command:
echo mntr | nc localhost 2181 | grep zk_server_state
Verify that there is only one leader in the entire ZooKeeper ensemble.
To find how many followers are in sync with the leader, run the following command and verify that all the followers are in sync:

echo mntr | nc localhost 2181 | grep zk_synced_followers
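The output of the two commands above looks similar to the following (the values are illustrative for a three-node ensemble; zk_synced_followers is only reported by the leader):

zk_server_state	leader
zk_synced_followers	2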
For each ZooKeeper server, repeat the following steps. The leader ZooKeeper server should be upgraded last:
Stop the ZooKeeper process gracefully.
Upgrade the ZooKeeper binary.
Start the ZooKeeper process.
Wait until all the followers are in sync with the leader (a polling sketch follows these steps):
echo mntr | nc localhost 2181 | grep zk_synced_followers
If there is an issue during an upgrade, you can roll back using the same steps.
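A minimal sketch of waiting for the followers to resync after restarting a server (assumes a three-node ensemble with the leader reachable on localhost:2181, the nc utility installed, and mntr whitelisted as described in the Four letter words section below):

# Poll the leader until both followers report as synced again
while [ "$(echo mntr | nc localhost 2181 | awk '/zk_synced_followers/ {print $2}')" != "2" ]; do
  echo "Waiting for followers to sync..."
  sleep 5
done
echo "All followers are in sync."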
The AdminServer¶
An embedded Jetty-based AdminServer was added in ZooKeeper 3.5.

The AdminServer is disabled by default in the ZooKeeper distributed as part of Confluent Platform. To enable the AdminServer, set admin.enableServer=true in your local zookeeper.properties file.

The AdminServer is enabled by default (on port 8080) in the ZooKeeper provided by the Apache Kafka® distribution. To configure the AdminServer, see the AdminServer configuration.
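As a quick sanity check once the AdminServer is enabled (a sketch; the port assumes the default admin.serverPort of 8080):

# List the commands exposed by the AdminServer
curl http://localhost:8080/commands

# Run the ruok health check over HTTP
curl http://localhost:8080/commands/ruok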
Four letter words whitelist in ZooKeeper¶
Starting in ZooKeeper 3.5.3, the Four Letter Words commands must be explicitly whitelisted in the zookeeper.4lw.commands.whitelist setting for the ZooKeeper server to enable the commands. By default, the whitelist only contains the srvr command, which zkServer.sh uses. The rest of the Four Letter Words commands are disabled by default.
The following example whitelists the stat, ruok, conf, and isro commands while disabling the rest of the Four Letter Words commands:
4lw.commands.whitelist=stat, ruok, conf, isro
The following example whitelists all Four Letter Words commands:
4lw.commands.whitelist=*
When running ZooKeeper in a Docker container, pass the Java system property through the KAFKA_OPTS environment variable, for example -e KAFKA_OPTS='-Dzookeeper.4lw.commands.whitelist=*', in the docker run command.
For example:
docker run -d \
--net=host \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=32181 \
-e ZOOKEEPER_TICK_TIME=2000 \
-e ZOOKEEPER_SYNC_LIMIT=2 \
-e KAFKA_OPTS='-Dzookeeper.4lw.commands.whitelist=*' \
confluentinc/cp-zookeeper:7.7.1
See The Four Letter Words for more information.
Upgrade Kafka¶
In a rolling upgrade scenario, upgrade one Kafka controller (for KRaft mode) or broker at a time, taking into consideration the recommendations for doing rolling restarts to avoid downtime for end users.
In a downtime upgrade scenario, take the entire cluster down, upgrade each Kafka controller (for KRaft mode) or broker, and then start the cluster.
Steps to upgrade for any fix pack release¶
You can perform a rolling upgrade to any fix pack release (for example, from 7.1.0 to 7.1.1) by simply upgrading each controller or broker, one at a time.
To upgrade each controller or broker:
- Stop the controller/broker.
- Upgrade the software (see below for your packaging type).
- Start the controller/broker.
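For example, on a Debian or Ubuntu installation this per-broker loop is just stop, upgrade, start (a sketch; the package name and configuration path assume a default confluent-kafka install):

sudo kafka-server-stop
sudo apt-get update && sudo apt-get install confluent-kafka
kafka-server-start -daemon /etc/kafka/server.properties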
Steps for upgrading to 7.7.x (ZooKeeper mode)¶
Follow these steps for a rolling upgrade:
Add or verify that the inter.broker.protocol.version in the server.properties file on all Kafka brokers matches the currently installed version. In earlier Confluent Platform versions, log.message.format.version was an important setting, but for versions 7.0.x and later, the log.message.format.version parameter is ignored and the value is inferred from inter.broker.protocol.version.

If you use Confluent Server, you must provide a valid Confluent license key in this file, for example, confluent.license=123.

You don’t need to restart the broker to activate the modified property.
The following table shows the broker protocol version that corresponds with each Confluent Platform version.
Confluent Platform version    Broker protocol version
7.7.x                         inter.broker.protocol.version=3.7
7.6.x                         inter.broker.protocol.version=3.6
7.5.x                         inter.broker.protocol.version=3.5
7.4.x                         inter.broker.protocol.version=3.4
7.3.x                         inter.broker.protocol.version=3.3
7.2.x                         inter.broker.protocol.version=3.2
7.1.x                         inter.broker.protocol.version=3.1
7.0.x                         inter.broker.protocol.version=3.0
Upgrade each Kafka broker, one at a time.
Stop the broker.
Upgrade the software as necessary for your packaging type.
Start the broker.
The active controller should be the last broker you restart. This ensures that the active controller isn’t moved on each broker restart, which would slow down the restart. To identify which broker in the cluster is the active controller, check the kafka.controller:type=KafkaController,name=ActiveControllerCount metric. The active controller reports 1 and the remaining brokers report 0. For more information, see Broker metrics. (A command-line alternative is sketched after these steps.)
After all Kafka brokers have been upgraded, make the following update in server.properties: inter.broker.protocol.version=3.7.

Restart each Kafka broker, one at a time, to apply the configuration change.
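One way to identify the active controller mentioned in the restart step above, without querying JMX, is to read the /controller znode (a sketch; it assumes ZooKeeper is reachable on localhost:2181):

# The brokerid field in the returned JSON identifies the active controller
zookeeper-shell localhost:2181 get /controller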
Steps for upgrading to 7.7.x (KRaft mode)¶
You can upgrade KRaft clusters from any Confluent Platform version 7.4.0 or later to the latest Confluent Platform version.
You should upgrade all of the controllers and then all of the brokers, one at a time. Follow these steps for a rolling upgrade:
Shut down the controller/broker. Starting with Kafka 3.7, you can specify a node ID to stop a specific node using the kafka-server-stop tool (see the sketch after these steps).
Upgrade the software on the node per the packaging type.
Restart the controller/broker.
Verify that the cluster behavior and performance meets your expectations.
After you have upgraded each controller and broker, and you are satisfied that the upgraded cluster’s performance meets your expectations, increment the metadata.version for the cluster by running the kafka-features tool with the upgrade argument:

./bin/kafka-features upgrade --bootstrap-server <server:port> --metadata 3.7

Each MetadataVersion later than 3.2.x provides a boolean parameter, which when true indicates there are breaking metadata changes. After you have changed the metadata.version to the latest version, you can only downgrade if there are no metadata changes between the current and earlier version. The following table lists the metadata versions supported for each platform version. Note that IV means “internal version” and each metadata version after 0.10.0 also provides an internal version.

Confluent Platform version    Metadata versions
7.7.x                         3.7IV0-3.7IV4
7.6.x                         3.6IV0-3.6IV2
7.5.x                         3.5IV0-3.5IV2
7.4.x                         3.4IV0
7.3.x                         3.3IV0-3.3IV3
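The following sketch shows a node-specific stop and the final metadata.version bump (the --node-id option and the kafka-features describe subcommand are assumed from Kafka 3.7 tooling; verify the exact options with --help on your installation, and substitute your own node ID and bootstrap server):

# Stop only the node being upgraded (node ID 1 is illustrative)
./bin/kafka-server-stop --node-id 1

# ...upgrade the software and restart the node, then repeat for each controller and broker...

# After all nodes are upgraded, review the current and supported metadata versions
./bin/kafka-features describe --bootstrap-server localhost:9092

# Then bump the cluster metadata version
./bin/kafka-features upgrade --bootstrap-server localhost:9092 --metadata 3.7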
Confluent license¶
Add the confluent.license configuration parameter to server.properties. Confluent Platform 5.4.x and later requires confluent.license to start. For more information, see confluent-server.
Advertised listeners¶
When you upgrade from 5.x or earlier version to 6.x or later version, you need
to include the following properties in the server.properties
file:
advertised.listeners
is required for all deployments. See Admin REST APIs Configuration Options for Confluent Server for more info.confluent.metadata.server.advertised.listeners
is required for RBAC deployments. See Metadata Service Configuration Settings for more info.
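A minimal sketch of these properties in server.properties (the host names, ports, and listener protocols are illustrative):

# Required for all deployments
advertised.listeners=PLAINTEXT://broker-1.example.com:9092

# Required only for RBAC deployments
confluent.metadata.server.advertised.listeners=http://broker-1.example.com:8090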
Security¶
Starting with 5.4.x, the new authorizer class kafka.security.authorizer.AclAuthorizer replaces kafka.security.auth.SimpleAclAuthorizer. You must manually change existing instances of kafka.security.auth.SimpleAclAuthorizer to kafka.security.authorizer.AclAuthorizer in the server.properties file.
For more information, see ACL concepts.
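In practice this is a one-line change in server.properties (a sketch, assuming the authorizer is configured through the authorizer.class.name property):

# Before (Confluent Platform 5.3.x and earlier)
# authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

# After (Confluent Platform 5.4.x and later)
authorizer.class.name=kafka.security.authorizer.AclAuthorizer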
Replication factor for Self-Balancing Clusters¶
If the value of confluent.balancer.topic.replication.factor is greater than the total number of brokers, the brokers will not start. The default value of confluent.balancer.topic.replication.factor is 3.
For details on the setting, see confluent.balancer.topic.replication.factor.
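For example, on a development cluster with fewer than three brokers you might lower the value in server.properties so the brokers can start (a sketch; keep the default of 3 or higher for production clusters):

confluent.balancer.topic.replication.factor=1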
Upgrade DEB packages using APT¶
Back up all configuration files from /etc, including, for example, /etc/kafka, /etc/kafka-rest, and /etc/schema-registry.
.Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade.
# The example below removes the Kafka package
sudo kafka-server-stop
sudo apt-get remove confluent-kafka

# To remove Confluent Platform and all its dependencies at once,
# run the following command after all services are stopped.
sudo apt-get autoremove confluent-platform-*

If you're running Confluent Server, the previous autoremove command may not work. Instead, run the following command:

sudo apt-get remove confluent-*
Remove the older GPG key and import the updated key. If you have already imported the updated 8b1da6120c2bf624 key, then you can skip this step. However, if you still have the old 670540c841468433 key installed, now is the time to remove it and import the 8b1da6120c2bf624 key:

sudo apt-key del 41468433
wget -qO - https://packages.confluent.io/deb/7.7/archive.key | sudo apt-key add -
Remove the repository files of the previous version.
sudo add-apt-repository -r "deb https://packages.confluent.io/deb/<currently installed version> stable main"
Add the 7.7 repository to /etc/apt/sources.list.

Attention

After Confluent Platform 8.0, the librdkafka, Avro, and libserdes C/C++ client packages will NOT be available from the https://packages.confluent.io/deb location. You will need to obtain those client packages from https://packages.confluent.io/clients after the Confluent Platform 8.0 release.

For the clients repository, you must obtain your Debian distribution’s release “Code Name”, such as buster, focal, and so on. You can do this by calling lsb_release -cs. The following example makes this call with $(lsb_release -cs), which should work in most cases. If it does not, pick the closest Debian or Ubuntu code name for your Linux distribution from the Debian and Ubuntu operating systems supported by Confluent Platform.

sudo add-apt-repository "deb https://packages.confluent.io/deb/7.7 stable main"
sudo add-apt-repository "deb https://packages.confluent.io/clients/deb $(lsb_release -cs) main"
Refresh repository metadata.
sudo apt-get update
If you modified the configuration files, apt will prompt you to resolve the conflicts. Be sure to keep your original configuration. Install the new version:
sudo apt-get install confluent-platform

# Or install the packages you need one by one. For example, to install only Kafka:
sudo apt-get install confluent-kafka
If you want to install a specific version, you can view all available Confluent Platform builds with this command:
apt-cache show confluent-platform
Then, install a specific Confluent Platform build by appending the version (<version>) to the install command:

sudo apt-get install confluent-platform-<version>
Start Confluent Platform components.
kafka-server-start -daemon /etc/kafka/server.properties
Upgrade RPM packages by using YUM¶
Back up all configuration files from /etc, including, for example, /etc/kafka (ZooKeeper mode) or /etc/kafka/kraft (KRaft mode), /etc/kafka-rest, and /etc/schema-registry. You should also back up all customizations to your configuration and unit files.

Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to Steps for upgrading to 7.7.x (ZooKeeper mode)).
To stop and remove the Kafka package:
sudo kafka-server-stop
sudo yum remove confluent-kafka
To remove Confluent Platform and its dependencies at once, run the following after stopping all services.
sudo yum autoremove confluent-platform-7.0.0
If you’re running Confluent Server, the previous autoremove command won’t work. Instead, run the following command:

sudo yum remove confluent-*
To upgrade from previous minor versions, run the following command instead:
sudo yum autoremove confluent-platform
Remove the repository files of the previous version.
sudo rm /etc/yum.repos.d/confluent.repo
Remove the older GPG key. This step is only needed if you still have Confluent’s older (670540c841468433) GPG key installed. Confluent’s newer (8b1da6120c2bf624) key appears in the RPM database as gpg-pubkey-0c2bf624-60904208.

sudo rpm -e gpg-pubkey-41468433-54d512a8
sudo rpm --import https://packages.confluent.io/rpm/7.7/archive.key
Add the repository to your /etc/yum.repos.d/ directory in a file named confluent-7.7.repo.

Attention

After Confluent Platform 8.0, the librdkafka, Avro, and libserdes C/C++ client packages will NOT be available from the https://packages.confluent.io/rpm location. You will need to obtain those client packages from https://packages.confluent.io/clients after the Confluent Platform 8.0 release.

The $releasever and $basearch are YUM placeholder variables that change depending on the OS release version and CPU architecture the OS is running. They are meant to be literal $releasever and $basearch values in the YUM configuration, not shell variables.

[confluent-7.7]
name=Confluent repository for 7.7.x packages
baseurl=https://packages.confluent.io/rpm/7.7
gpgcheck=1
gpgkey=https://packages.confluent.io/rpm/7.7/archive.key
enabled=1

[Confluent-Clients]
name=Confluent Clients repository
baseurl=https://packages.confluent.io/clients/rpm/centos/$releasever/$basearch
gpgcheck=1
gpgkey=https://packages.confluent.io/clients/rpm/archive.key
enabled=1
Refresh repository metadata.
sudo yum clean all
Install the new version. Note that yum may override your existing configuration files, so you will need to restore them from the backup after installing the packages.
You can install the entire platform or individual packages. For a list of packages, see Confluent Platform Packages.
To install Confluent Platform, run:
sudo yum install confluent-platform
Or install the packages you need one by one. Note that confluent-server and confluent-kafka can’t coexist, so install each desired package individually. For example, to install Confluent Server, run:

sudo yum install confluent-server

To install Kafka, run:
sudo yum install confluent-kafka
Start services.
kafka-server-start -daemon /etc/kafka/server.properties
Upgrade using TAR or ZIP archives¶
For ZIP and TAR archives, you can delete the old archive directory after the new archive directory has been created and any previous configuration files have been copied into it, as described in the following steps.
Return to the directory where you installed Confluent Platform.
Back up all configuration files from ./etc, including, for example, ./etc/kafka, ./etc/kafka-rest, ./etc/schema-registry, and ./etc/confluent-control-center.

Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to Steps for upgrading to 7.7.x (ZooKeeper mode)).
./bin/control-center-stop
./bin/kafka-rest-stop
./bin/schema-registry-stop
./bin/kafka-server-stop
./bin/zookeeper-server-stop

# To remove Confluent Platform and all its dependencies at once,
# run the following after stopping all services
cd ..
rm -R confluent-7.0.0   # use the installed version number
Unpack the new archive, and then restore your configuration files from the backup into the new directory as needed.
tar xzf confluent-7.7.1.tar.gz

# Or for ZIP archives:
unzip confluent-7.7.1.zip
Start services.
sudo confluent-7.7.1/bin/zookeeper-server-start -daemon /etc/kafka/zookeeper.properties
sudo confluent-7.7.1/bin/kafka-server-start -daemon /etc/kafka/server.properties
Upgrade Schema Registry¶
You can upgrade Schema Registry after all Kafka brokers have been upgraded.
To upgrade Schema Registry:
For RBAC-enabled environments only: Add ResourceOwner for the Schema Registry user for the Confluent license topic resource (default name is _confluent-command). For example:

confluent iam rbac role-binding create \
  --role ResourceOwner \
  --principal User:<service-account-id> \
  --resource Topic:_confluent-command \
  --kafka-cluster <kafka-cluster-id>
Stop Schema Registry:
schema-registry-stop
Back up the configuration file. For example:
cp SR.properties /back-up/SR.properties
Remove the old version:
yum remove confluent-schema-registry-<old release number>
For example:
yum remove confluent-schema-registry-7.0.0
Install the new version:
yum install -y confluent-schema-registry-<new release number>
Restart Schema Registry:
schema-registry-start SR.properties
Upgrade Confluent REST Proxy¶
You can upgrade the Confluent REST Proxy service after all Kafka brokers have been upgraded.
To upgrade the REST Proxy:
For RBAC-enabled environments only: Add ResourceOwner for the REST Proxy user for the Confluent license topic resource (default name is _confluent-command). For example:

confluent iam rbac role-binding create \
  --role ResourceOwner \
  --principal User:<service-account-id> \
  --resource Topic:_confluent-command \
  --kafka-cluster <kafka-cluster>
Follow the same steps as described above to upgrade the package (back up configuration files, remove packages, install upgraded packages, etc.).
Restart the Confluent REST Proxy service.
Upgrade Kafka Streams applications¶
You can upgrade Kafka Streams applications independently, without requiring Kafka brokers to be upgraded first.
Follow the instructions in the Kafka Streams Upgrade Guide to upgrade your applications to use the latest version of Kafka Streams.
Upgrade Kafka Connect¶
You can upgrade Kafka Connect in either standalone or distributed mode.
Note
The Confluent Replicator version must match the Connect version it is deployed on. For example, Replicator 7.7 should only be deployed to Connect 7.7, so if you upgrade Connect, you must upgrade Replicator.
Upgrade Kafka Connect standalone mode¶
You can upgrade Kafka Connect in standalone mode after all Kafka brokers have been upgraded.
To upgrade Kafka Connect, follow the same steps above to upgrade the package (back up config files, remove packages, install upgraded packages, etc.). Then, restart the client processes.
Upgrade Kafka Connect distributed mode¶
- Back up worker configuration files.
- Modify your configuration file to add the status.storage.topic setting. You can safely modify the configuration file while the worker is running. Note that you should create this topic manually; a sketch follows this list. See Distributed Mode Configuration in the Connect User Guide for a detailed explanation.
- Perform a rolling restart of the workers.
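A minimal sketch of creating the status topic manually (the topic name connect-status, partition count, replication factor, and bootstrap server are illustrative and must match the status.storage.topic value and sizing you choose for your worker configuration):

kafka-topics --create --bootstrap-server localhost:9092 \
  --topic connect-status \
  --partitions 5 \
  --replication-factor 3 \
  --config cleanup.policy=compact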
Upgrade ksqlDB¶
To upgrade ksqlDB to the latest version, follow the steps in Upgrading ksqlDB.
Upgrade Confluent Control Center¶
Follow the instructions in the Confluent Control Center Upgrade Guide.
Upgrade other client applications¶
Review Cross-component compatibility before you upgrade your client applications.
Confluent clients (C/C++, Python, Go and .NET) support all released Kafka broker versions, but not all features may be available on all broker versions because some features rely on newer broker functionality. See Build Client Applications for Confluent Platform for the list of Kafka features supported in the latest versions of clients.
If it makes sense, build applications that use Kafka producers and consumers against the new 3.7.x libraries and deploy the new versions. See Schemas, Serializers, and Deserializers for Confluent Platform for details about using the 3.7.x libraries.