.. _upgrade:

Upgrade |cp|
============

Use these instructions to upgrade earlier versions of |cp| to the latest.

.. note::

   If you are a |cp| subscriber and you have questions about upgrades or need help,
   you can contact Confluent using the `Support Portal `__.

.. _upgrade-preparation:

Preparation
-----------

Follow these guidelines when you prepare to upgrade.

* Before upgrading, always back up all configuration and unit files with their file
  permissions, ownership, and customizations. |cp| may not run if the proper
  ownership isn't preserved on configuration files. By default, configuration files
  are located in the ``$CONFLUENT_HOME/etc`` directory and are organized by
  component.

* .. include:: includes/cs-license.rst

* If you are running a |cp| version later than 7.4.0 and your clusters are running
  in |kraft| mode, you may not need to upgrade |zk|. See :ref:`upgrade-procedures`.

* Upgrade your entire platform deployment so that all components are running the
  same version. Do not bridge component versions.

* Read the documentation and draft an upgrade plan that matches your specific
  requirements and environment before starting the upgrade process. In other words,
  don't start working through this guide on a live cluster. Read the guide entirely,
  make a plan, then execute the plan.

* Give careful consideration to the order in which components are upgraded. Java
  clients (producer and consumer) can communicate with older brokers, so you should
  plan to upgrade *all* brokers before clients. Clients include any application that
  uses the |ak| producer or consumer, command line tools, |sr|, REST Proxy,
  |kconnect-long|, and |kstreams|.

  .. important::

     Determine if clients are colocated with brokers. Although not recommended, some
     deployments have clients colocated with brokers (on the same node). In these
     cases, the broker and clients share the same packages. If they are colocated,
     ensure that no client processes are upgraded until *all* |ak| brokers have been
     upgraded.
* Determine and install the appropriate Java version. Java 8 is now the minimum
  supported version, but it is deprecated and will be removed in a future release.
  See :ref:`Supported Java Versions ` for a list of |cp| versions and the
  corresponding Java version support before you upgrade. For complete compatibility
  information, see :ref:`interoperability-versions`.

* The LDAP Authorizer is deprecated. To configure group-based authorization using
  LDAP, you must migrate to the commercial ``confluent-server`` package when
  upgrading to |version|.0. See :ref:`migrate-confluent-server` for details.

* Consider using :ref:`Confluent Control Center ` to monitor broker status during
  the :ref:`rolling restart `.

* Decide on performing a rolling upgrade or a downtime upgrade. |cp| supports both
  rolling upgrades (upgrade one broker at a time to avoid cluster downtime) and
  downtime upgrades (take down the entire cluster, upgrade it, and bring everything
  back up).

* Read the :ref:`release_notes`. They contain important information about noteworthy
  features and changes to configurations that may impact your upgrade.

.. _upgrade-procedures:

Upgrade procedures
------------------

Following is the recommended order of upgrades:

#. Upgrade brokers.

   - For clusters running in |zk| mode, :ref:`Upgrade ZooKeeper ` and then
     :ref:`Upgrade all Kafka brokers `.
   - For clusters running in |kraft| mode, upgrade :ref:`KRaft clusters `.

#. Upgrade the rest of the |cp| components as described in the later part of this
   topic. In most cases, we recommend you upgrade |c3| last among the |cp|
   components. However, if you are upgrading from 6.0.1 or 6.0.2 to 6.1.0 or later,
   you should upgrade |c3| first, and then upgrade your |ak| brokers to avoid
   |c3-short| instability. If you have already upgraded your |ak| brokers and |c3|
   crashes, upgrade |c3-short| and the situation should resolve itself.
#. If it makes sense, build applications that use |ak| producers and consumers
   against the new |version|.x libraries and deploy the new versions. See
   :ref:`app_development` for more details about using the |version|.x libraries.

.. _upgrade-zk:

Upgrade |zk|
^^^^^^^^^^^^

.. important::

   .. include:: ../includes/zk-deprecation.rst

You may not need to upgrade |zk| if you started using |cp| with version 7.4 or
later.

Use the following guidelines to prepare for the upgrade:

* |zk| was upgraded to version 3.8.3 between |cp| 6.1.x and |cp| |version| to
  mitigate `CVE-2023-44981 `__ and because |zk| version 3.6.3 has reached its end
  of life. To upgrade both |ak| and |zk| clusters to the latest versions, note the
  following:

  - |cp| version 5.4.x and later (|ak| clusters version >= 2.4) can be updated with
    no special steps. This means that |zk|-based clusters that are running binaries
    bundled with |cp| 5.4.6 or later can be updated directly.
  - |cp| versions lower than 5.4.x (|ak| versions earlier than 2.4) first need to be
    updated to a version equal to or greater than |cp| 5.4.x and earlier than |cp|
    7.6. This means that |zk|-based clusters which are running binaries bundled with
    |cp| versions earlier than 5.4.x need to be updated to binaries bundled with
    |cp| versions 5.4.6 or later and earlier than 7.6. You can then update the
    clusters to |cp| 7.6.

* Back up all configuration files and customizations to your configuration and unit
  files before upgrading.

* Back up the |zk| data from the leader. The backup gets you back to the latest
  committed state in case of a failure.

* Read through the documentation and draft an upgrade plan that matches your
  specific requirements and environment before starting the upgrade process.

Rolling upgrade of |zk|
"""""""""""""""""""""""

Perform the following steps to gather information needed for a rolling upgrade:
#. To find which server is the leader, run the following command:

   ::

      echo mntr | nc localhost 2181 | grep zk_server_state

   Verify that there is only one leader in the entire |zk| ensemble.

#. To find how many nodes are in sync with the leader, run the following command:

   ::

      echo mntr | nc localhost 2181 | grep zk_synced_followers

#. Verify that all the followers are in sync with the leader:

   ::

      echo mntr | nc localhost 2181 | grep zk_synced_followers

For each |zk| server, repeat the following steps. The leader |zk| server should be
upgraded last:

#. Stop the |zk| process gracefully.
#. Upgrade the |zk| binary.
#. Start the |zk| process.
#. Wait until all the followers are in sync with the leader:

   ::

      echo mntr | nc localhost 2181 | grep zk_synced_followers

If there is an issue during an upgrade, you can roll back using the same steps.

The AdminServer
"""""""""""""""

An embedded Jetty-based `AdminServer `_ was added in |zk| 3.5. The AdminServer is
disabled by default in the |zk| distributed as part of |cp|. To enable the
AdminServer, set ``admin.enableServer=true`` in your local ``zookeeper.properties``
file. The AdminServer is enabled by default (on port 8080) in the |zk| provided by
the |ak-tm| distribution. To configure the AdminServer, see the `AdminServer
configuration `_.

Four letter words whitelist in |zk|
"""""""""""""""""""""""""""""""""""

Starting in |zk| 3.5.3, the Four Letter Words commands must be explicitly
whitelisted in the ``zookeeper.4lw.commands.whitelist`` setting for the |zk| server
to enable them. By default, the whitelist only contains the ``srvr`` command, which
``zkServer.sh`` uses. The rest of the Four Letter Words commands are disabled by
default.

An example that whitelists the ``stat``, ``ruok``, ``conf``, and ``isro`` commands
while disabling the rest of the Four Letter Words commands:

.. codewithvars:: bash

   4lw.commands.whitelist=stat, ruok, conf, isro

An example that whitelists all Four Letter Words commands:

.. codewithvars:: bash

   4lw.commands.whitelist=*

When running |zk| in a Docker container, pass the Java system property with
``-e KAFKA_OPTS='-Dzookeeper.4lw.commands.whitelist=*'`` in the ``docker run``
command. For example:

.. codewithvars:: bash

   docker run -d \
       --net=host --name=zookeeper \
       -e ZOOKEEPER_CLIENT_PORT=32181 \
       -e ZOOKEEPER_TICK_TIME=2000 \
       -e ZOOKEEPER_SYNC_LIMIT=2 \
       -e KAFKA_OPTS='-Dzookeeper.4lw.commands.whitelist=*' \
       confluentinc/cp-zookeeper:|release|

See `The Four Letter Words `_ for more information.

Upgrade issue with missing snapshot file
""""""""""""""""""""""""""""""""""""""""

The |zk| upgrade from 3.4.X to 3.5.X can fail with the following error if there are
no snapshot files in the 3.4 data directory:

::

   ERROR Unable to load database on disk (org.apache.zookeeper.server.quorum.QuorumPeer)
   java.io.IOException: No snapshot found, but there are log entries. Something is broken!
           at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:222)
           at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:240)
           at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:919)
           at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:905)
           at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:205)
           at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:123)
           at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)

This usually happens in test upgrades where |zk| 3.5.X is trying to load an existing
3.4 data directory in which no snapshot file has been created. For more details
about this issue, refer to `ZOOKEEPER-3056 `_.

The recommended workaround is:

#. Take a backup of the current |zk| ``data`` directory.

#. Look for snapshot files (file names starting with ``snapshot``) in the |zk|
   ``data`` directory.

   * If there are snapshot files, you can safely upgrade.
   * If there is no snapshot file, download the empty snapshot file and place it in
     the ``data`` directory. The empty snapshot file is available as an attachment
     in https://issues.apache.org/jira/browse/ZOOKEEPER-3056.

For more details about the workaround, refer to the |zk| `Upgrade FAQ `_.

.. _upgrade-brokers:

Upgrade |ak| brokers
^^^^^^^^^^^^^^^^^^^^

In a rolling upgrade scenario, upgrade one |ak| broker at a time, taking into
consideration the recommendations for doing :ref:`rolling restarts ` to avoid
downtime for end users. In a downtime upgrade scenario, take the entire cluster
down, upgrade each |ak| broker, then start the cluster.

Steps to upgrade for any fix pack release
"""""""""""""""""""""""""""""""""""""""""

Any fix pack release can be applied with a rolling upgrade (for example, from one
6.1.x fix pack to a later one) by simply upgrading each broker one at a time. To
upgrade each broker:

#. Stop the broker.
#. Upgrade the software (see below for your packaging type).
#. Start the broker.

.. _rolling-upgrade:

Steps for upgrading to |version|.x (|zk| mode)
""""""""""""""""""""""""""""""""""""""""""""""

In a rolling upgrade scenario, upgrading to |cp| |version|.x (|ak| |kafka_branch|.x)
requires special steps, because |ak| |kafka_branch|.x includes changes to the wire
protocol and the inter-broker protocol.

Follow these steps for a rolling upgrade:

#. Update ``server.properties`` on all |ak| brokers by modifying the
   ``inter.broker.protocol.version`` and ``log.message.format.version`` properties
   to match the currently installed version.

   In |cp| version 7.0.0 and later, the broker configuration
   ``log.message.format.version`` and topic configuration ``message.format.version``
   parameters are deprecated. The values of both configurations are assumed to be
   ``3.0`` if ``inter.broker.protocol.version`` is ``3.0`` or later.
   If ``log.message.format.version`` or ``message.format.version`` is set, you
   should clear them when ``inter.broker.protocol.version`` is upgraded to ``3.0``
   to avoid potential compatibility issues if ``inter.broker.protocol.version`` is
   later downgraded. For more information, see `KIP-724 `__.

   For |cp| versions before 7.0.0:

   - After upgrading from earlier versions to 6.1.x, customers with high partition
     counts (greater than 100,000) may experience cluster instability. For this
     reason, |confluent| recommends against using the version 2.7 inter-broker
     protocol (IBP).

   If you use |cs|, you must provide a valid Confluent license key, for example,
   ``confluent.license=123``.

   .. note::

      You don't need to restart the broker to activate the modified properties.

   The following table shows the broker protocol and log message format versions
   that correspond with each |cp| version.

   ============ ====================================== ============================================================
   |cp| version Broker protocol version                Log message format version
   ============ ====================================== ============================================================
   7.6.x        ``inter.broker.protocol.version=3.6``  Ignored; inferred from ``inter.broker.protocol.version``
   7.5.x        ``inter.broker.protocol.version=3.5``  Ignored; inferred from ``inter.broker.protocol.version``
   7.4.x        ``inter.broker.protocol.version=3.4``  Ignored; inferred from ``inter.broker.protocol.version``
   7.3.x        ``inter.broker.protocol.version=3.3``  Ignored; inferred from ``inter.broker.protocol.version``
   7.2.x        ``inter.broker.protocol.version=3.2``  Ignored; inferred from ``inter.broker.protocol.version``
   7.1.x        ``inter.broker.protocol.version=3.1``  Ignored; inferred from ``inter.broker.protocol.version``
   7.0.x        ``inter.broker.protocol.version=3.0``  Ignored; inferred from ``inter.broker.protocol.version``
   6.2.x        ``inter.broker.protocol.version=2.8``  ``log.message.format.version=2.8``
   6.1.x        ``inter.broker.protocol.version=2.7``  ``log.message.format.version=2.7``
   ============ ====================================== ============================================================

   * |confluent| recommends against setting ``inter.broker.protocol.version`` to
     ``2.7``.

#. Upgrade each |ak| broker, one at a time:

   #. Stop the broker.
   #. Upgrade the software as necessary for your packaging type.

      - :ref:`upgrade-deb-packages`
      - :ref:`upgrade-rpm-packages`
      - :ref:`upgrade-tar-zip-archives`

   #. Start the broker.

   .. important::

      The active controller should be the last broker you restart. This ensures that
      the active controller isn't moved on each broker restart, which would slow
      down the restart. To identify which broker in the cluster is the active
      controller, check the
      ``kafka.controller:type=KafkaController,name=ActiveControllerCount`` metric.
      The active controller reports ``1`` and the remaining brokers report ``0``.
      For more information, see :ref:`kafka-monitoring-metrics-broker`.

#. After all |ak| brokers have been upgraded, make the following update in
   ``server.properties``:
   :litwithvars:`inter.broker.protocol.version=|kafka_branch|`.

#. Restart each |ak| broker, one at a time, to apply the configuration change.

.. _upgrade-kraft-cluster:

Steps for upgrading to |version|.x (|kraft| mode)
"""""""""""""""""""""""""""""""""""""""""""""""""

You can upgrade |kraft| clusters from any |cp| version 7.4.0 and later to the latest
|cp| version. You should upgrade all of the brokers, one at a time.

Follow these steps for a rolling upgrade:

#. Shut down the broker.

#. Upgrade the software for the broker per the packaging type.
   - :ref:`upgrade-deb-packages`
   - :ref:`upgrade-rpm-packages`
   - :ref:`upgrade-tar-zip-archives`

#. Restart the broker.

#. Verify that the cluster behavior and performance meets your expectations.

#. After you are satisfied that the broker performance meets your expectations,
   increment the ``metadata.version`` for the broker by running the
   ``kafka-features`` tool with the ``upgrade`` argument:

   .. codewithvars:: bash

      ./bin/kafka-features upgrade --bootstrap-server --metadata |kafka_branch|

   For more information, see `MetadataVersion `__. Note that IV means "internal
   version" and each metadata version after 0.10.0 also provides an internal
   version. In addition, each metadata version later than 3.2.x provides a boolean
   parameter, which when true indicates there are breaking metadata changes. After
   you have changed the ``metadata.version`` to the latest version, you can only
   downgrade if there are no metadata changes between the current and earlier
   version.

   The following table lists the metadata versions supported for each platform
   version.

   ============ ==================
   |cp| version Metadata versions
   ============ ==================
   7.6.x        3.6IV0
   7.5.x        3.5IV0-3.5IV2
   7.4.x        3.4IV0
   7.3.x        3.3IV0-3.3IV3
   ============ ==================

.. _upgrade-cp-license:

Confluent license
"""""""""""""""""

Add the ``confluent.license`` configuration parameter to ``server.properties``.
|cp| 5.4.x and later requires ``confluent.license`` to start. For more information,
see :ref:`confluent-server-package`.

Advertised listeners
""""""""""""""""""""

When you upgrade from a 5.x or earlier version to a 6.x or later version, you need
to include the following properties in the ``server.properties`` file:

* ``advertised.listeners`` is required for all deployments. See
  :ref:`confluent-server-rest-config` for more info.
* ``confluent.metadata.server.advertised.listeners`` is required for RBAC
  deployments. See :ref:`mds-configuration-options` for more info.
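The required properties above lend themselves to a quick pre-flight check before
restarting an upgraded broker. The following is a minimal sketch, assuming a
property list you pass in yourself; the helper name and the example file path are
illustrative, not part of |cp|:

```shell
# Hedged sketch: fail fast if a properties file is missing settings that
# 6.x and later deployments require. The property names come from the lists
# above; the function name and default path below are assumptions.
check_required_props() {
  local props="$1"; shift
  local key missing=0
  for key in "$@"; do
    if ! grep -q "^${key}=" "$props"; then
      echo "missing required property: ${key}"
      missing=1
    fi
  done
  return "$missing"
}

# Example (non-RBAC deployment, illustrative path):
# check_required_props /etc/kafka/server.properties advertised.listeners
```

For RBAC deployments you would pass
``confluent.metadata.server.advertised.listeners`` as an additional key.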
Security
""""""""

Starting with 5.4.x, the new authorizer class
``kafka.security.authorizer.AclAuthorizer`` replaces
``kafka.security.auth.SimpleAclAuthorizer``. You must manually change existing
instances of ``kafka.security.auth.SimpleAclAuthorizer`` to
``kafka.security.authorizer.AclAuthorizer`` in the ``server.properties`` file. For
more information, see :ref:`acl-concepts`.

Replication factor for |sbc-long|
"""""""""""""""""""""""""""""""""

The ``confluent.balancer.topic.replication.factor`` setting was added in |cp| 6.0.0
for |sbc| configuration. If the value of
``confluent.balancer.topic.replication.factor`` is greater than the total number of
brokers, the brokers will not start. The default value of
``confluent.balancer.topic.replication.factor`` is ``3``. For details on the
setting, see :ref:`sbc-config-replication-factor`.

.. _upgrade-deb-packages:

Upgrade DEB packages using APT
""""""""""""""""""""""""""""""

#. Back up all configuration files from ``/etc``, including, for example,
   ``/etc/kafka``, ``/etc/kafka-rest``, and ``/etc/schema-registry``.

#. Stop the services and remove the existing packages and their dependencies. This
   can be done on one server at a time for a :ref:`rolling upgrade `.

   .. codewithvars:: bash

      # The example below removes the Kafka package
      sudo kafka-server-stop
      sudo apt-get remove confluent-kafka

      # To remove Confluent Platform and all its dependencies at once,
      # run the following command after all services are stopped.
      sudo apt-get autoremove confluent-platform-*

   .. important::

      If you're running |cs|, the previous ``autoremove`` may not work. If not, try
      the following commands:

      .. code:: bash

         # Remove Confluent Platform
         sudo apt-get remove confluent-platform
         # And then remove the dependencies
         sudo apt-get autoremove

#. Remove the older GPG key and import the updated key. If you have already imported
   the updated ``8b1da6120c2bf624`` key, then you can skip this step.
   However, if you still have the old ``670540c841468433`` key installed, now is the
   time to remove it and import the ``8b1da6120c2bf624`` key:

   .. codewithvars:: bash

      sudo apt-key del 41468433
      wget -qO - https://packages.confluent.io/deb/|version|/archive.key | sudo apt-key add -

#. Remove the repository files of the previous version.

   .. codewithvars:: bash

      sudo add-apt-repository -r "deb https://packages.confluent.io/deb/ stable main"

#. Add the |version| repository to ``/etc/apt/sources.list``.

   .. include:: includes/installing-cp.rst
      :start-after: deb-clients-notices-start
      :end-before: deb-clients-notices-stop

   .. codewithvars:: bash

      sudo add-apt-repository "deb https://packages.confluent.io/deb/|version| stable main"
      sudo add-apt-repository "deb https://packages.confluent.io/clients/deb $(lsb_release -cs) main"

#. Refresh repository metadata.

   .. codewithvars:: bash

      sudo apt-get update

#. If you modified the configuration files, apt will prompt you to resolve the
   conflicts. Be sure to keep your original configuration. Install the new version:

   .. codewithvars:: bash

      sudo apt-get install confluent-platform
      # Or install the packages you need one by one. For example, to install only Kafka:
      sudo apt-get install confluent-kafka

   .. include:: includes/installing-cp.rst
      :start-after: tip_for_installation
      :end-before: tip-for-available-packages-start

   .. tip::

      You can view all available |cp| builds with this command:

      .. codewithvars:: bash

         apt-cache show confluent-platform

      You can install specific |cp| builds by appending the version (````) to the
      install command:

      .. codewithvars:: bash

         sudo apt-get install confluent-platform-

#. Start |cp| components.

   .. codewithvars:: bash

      kafka-server-start -daemon /etc/kafka/server.properties

.. _upgrade-rpm-packages:

Upgrade RPM packages by using YUM
"""""""""""""""""""""""""""""""""
#. Back up all configuration files from ``/etc``, including, for example,
   ``/etc/kafka`` (|zk| mode) or ``/etc/kafka/kraft`` (|kraft| mode),
   ``/etc/kafka-rest``, and ``/etc/schema-registry``. You should also back up all
   customizations to your configuration and unit files.

#. Stop the services and remove the existing packages and their dependencies. This
   can be done on one server at a time for a rolling upgrade (refer to
   :ref:`rolling-upgrade`).

   .. codewithvars:: bash

      # The example below removes the Kafka package
      sudo kafka-server-stop
      sudo yum remove confluent-kafka

      # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
      sudo yum autoremove confluent-platform-5.5.0

   .. note::

      If you're running |cs|, the previous ``autoremove`` command won't work.
      Instead, run the following command:

      .. codewithvars:: bash

         sudo yum remove confluent-*

      To upgrade from previous minor versions of 6.0, run the following command
      instead:

      .. codewithvars:: bash

         sudo yum autoremove confluent-platform

#. Remove the repository files of the previous version.

   .. codewithvars:: bash

      sudo rm /etc/yum.repos.d/confluent.repo

#. Remove the older GPG key. This step is optional if you haven't removed
   Confluent's older (``670540c841468433``) GPG key. Confluent's newer
   (``8b1da6120c2bf624``) key appears in the RPM database as
   ``gpg-pubkey-0c2bf624-60904208``.

   .. codewithvars:: bash

      sudo rpm -e gpg-pubkey-41468433-54d512a8
      sudo rpm --import https://packages.confluent.io/rpm/|version|/archive.key

#. Add the repository to your ``/etc/yum.repos.d/`` directory in a file named
   :litwithvars:`confluent-|version|.repo`.

   .. include:: includes/installing-cp.rst
      :start-after: rpm-clients-notices-start
      :end-before: rpm-clients-notices-stop

   .. codewithvars:: ini

      [confluent-|version|]
      name=Confluent repository for |version|.x packages
      baseurl=https://packages.confluent.io/rpm/|version|
      gpgcheck=1
      gpgkey=https://packages.confluent.io/rpm/|version|/archive.key
      enabled=1

      [Confluent-Clients]
      name=Confluent Clients repository
      baseurl=https://packages.confluent.io/clients/rpm/centos/$releasever/$basearch
      gpgcheck=1
      gpgkey=https://packages.confluent.io/clients/rpm/archive.key
      enabled=1

#. Refresh repository metadata.

   .. codewithvars:: bash

      sudo yum clean all

#. Install the new version. Note that yum may override your existing configuration
   files, so you will need to restore them from the backup after installing the
   packages.

   .. codewithvars:: bash

      sudo yum install confluent-platform
      # Or install the packages you need one by one. For example, to install just Kafka:
      sudo yum install confluent-kafka

   .. important::

      If you are running |cs|, install ``confluent-server`` instead of
      ``confluent-kafka``. Both packages can't co-exist on the same system, so
      install each desired package individually, for example, run
      ``yum install confluent-server`` instead of ``yum install confluent-platform``.

   .. include:: includes/installing-cp.rst
      :start-after: tip-for-available-packages-start
      :end-before: tip-for-available-packages-end

#. Start services.

   .. codewithvars:: bash

      kafka-server-start -daemon /etc/kafka/server.properties

.. _upgrade-tar-zip-archives:

Upgrade using TAR or ZIP archives
"""""""""""""""""""""""""""""""""

For ZIP and TAR archives, you can delete the old archive directory after the new
archive folder has been created and any previous configuration files have been
copied into it as described in the following steps.

#. Return to the directory where you installed |cp|.

#. Back up all configuration files from ``./etc``, including, for example,
   ``./etc/kafka``, ``./etc/kafka-rest``, ``./etc/schema-registry``, and
   ``./etc/confluent-control-center``.
#. Stop the services and remove the existing packages and their dependencies. This
   can be done on one server at a time for a rolling upgrade (refer to
   :ref:`rolling-upgrade`).

   .. codewithvars:: bash

      ./bin/control-center-stop
      ./bin/kafka-rest-stop
      ./bin/schema-registry-stop
      ./bin/kafka-server-stop
      ./bin/zookeeper-server-stop

      # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
      cd ..
      rm -R confluent-6.2.1   # use the installed version number

#. Unpack the new archive. Note that the new archive contains default configuration
   files, so you will need to restore your customized files from the backup after
   unpacking.

   .. codewithvars:: bash

      tar xzf confluent-|release|.tar.gz
      # Or for ZIP archives:
      unzip confluent-|release|.zip

   .. include:: includes/installing-cp.rst
      :start-after: tip_for_installation
      :end-before: tip-for-available-packages-start

#. Start services.

   .. codewithvars:: bash

      sudo confluent-|release|/bin/zookeeper-server-start -daemon /etc/kafka/zookeeper.properties
      sudo confluent-|release|/bin/kafka-server-start -daemon /etc/kafka/server.properties

.. _upgrade-sr:

Upgrade |sr|
^^^^^^^^^^^^

You can upgrade |sr| after *all* |ak| brokers have been upgraded. The following are
a few of the version-specific upgrade considerations:

* To live-update a |sr| cluster that uses |ak| election to version 6.0, first make
  sure that the previous instances are already updated to version 5.0.1 or newer.
  Instances running |cp| 5.0.1 or later support expansion of the |ak| election
  protocol as a part of upgrades without breaking live upgrade compatibility.

* Starting with |cp| 6.2.2, |sr| provides a new endpoint for |c3| to filter visible
  |sr| clusters, which supports |c3-short| interaction with an RBAC-enabled |sr|,
  and |c3-short| is upgraded to call this endpoint.
  Therefore, post-6.2.2 versions of |c3-short| are not compatible with pre-6.2.2
  versions of |sr|, because |c3-short| will try to call an endpoint that does not
  exist in pre-6.2.2 schema registries. If you are upgrading your deployments from
  |cp| 6.2.1 or older, you must upgrade both |c3-short| and |sr| together to 6.2.2
  or newer for |c3-short| to work properly with RBAC-enabled |sr| clusters.
  Specifically, if you upgrade |c3-short| from pre-6.2.2, you must also upgrade
  |sr|.

To upgrade |sr|:

#. **For RBAC-enabled environments only:** Add ``ResourceOwner`` for the |sr| user
   for the Confluent license topic resource (default name is
   ``_confluent-command``). For example:

   ::

      confluent iam rbac role-binding create \
          --role ResourceOwner \
          --principal User: \
          --resource Topic:_confluent-command \
          --kafka-cluster-id

#. Stop |sr|:

   .. codewithvars:: bash

      schema-registry-stop

#. Back up the configuration file. For example:

   .. codewithvars:: bash

      cp SR.properties /back-up/SR.properties

#. Remove the old version:

   .. codewithvars:: bash

      yum remove confluent-schema-registry-

   For example:

   .. codewithvars:: bash

      yum remove confluent-schema-registry-5.1.2

#. Install the new version:

   .. codewithvars:: bash

      yum install -y confluent-schema-registry-

#. Restart |sr|:

   .. codewithvars:: bash

      schema-registry-start SR.properties

.. _upgrade-rest-proxy:

Upgrade |crest-long|
^^^^^^^^^^^^^^^^^^^^

You can upgrade the |crest-long| service after *all* |ak| brokers have been
upgraded.

To upgrade the |crest|:

#. **For RBAC-enabled environments only:** Add ``ResourceOwner`` for the |crest|
   user for the Confluent license topic resource (default name is
   ``_confluent-command``). For example:

   ::

      confluent iam rbac role-binding create \
          --role ResourceOwner \
          --principal User: \
          --resource Topic:_confluent-command \
          --kafka-cluster-id

#. Follow the same steps as described above to upgrade the package (back up
   configuration files, remove packages, install upgraded packages, and so on).
#. Restart the |crest-long| service.

.. _upgrade-kafka-streams:

Upgrade |kstreams| applications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can upgrade |kstreams| applications independently, without requiring |ak|
brokers to be upgraded first. Follow the instructions in the
:ref:`Kafka Streams Upgrade Guide ` to upgrade your applications to use the latest
version of |kstreams|.

.. _upgrade_connect:

Upgrade |kconnect-long|
^^^^^^^^^^^^^^^^^^^^^^^

You can upgrade |kconnect-long| in either standalone or distributed mode.

.. note::

   .. include:: ./includes/connector-replicator-note.rst

Upgrade |kconnect-long| standalone mode
"""""""""""""""""""""""""""""""""""""""

You can upgrade |kconnect-long| in standalone mode after *all* |ak| brokers have
been upgraded. To upgrade |kconnect-long|, follow the same steps above to upgrade
the package (back up configuration files, remove packages, install upgraded
packages, and so on). Then, restart the client processes.

.. _upgrade-connect-distributed:

Upgrade |kconnect-long| distributed mode
""""""""""""""""""""""""""""""""""""""""

A new required configuration, ``status.storage.topic``, was added to
|kconnect-long| in 0.10.0.1. To upgrade a |kconnect-long| cluster, add this
configuration before updating to the new version. The setting will be ignored by
older versions of |kconnect-long|.

#. Back up worker configuration files.

#. Modify your configuration file to add the ``status.storage.topic`` setting. You
   can safely modify the configuration file while the worker is running. Note that
   you should create this topic manually. See
   :connect-common:`Distributed Mode Configuration|userguide.html#distributed-mode`
   in the :connect-common:`Connect User Guide|userguide.html` for a detailed
   explanation.

#. Perform a rolling restart of the workers.

.. _upgrade-ksqldb:

Upgrade |ksqldb|
^^^^^^^^^^^^^^^^

To upgrade |cp| |ksqldb| to the latest version, follow the steps in
:ref:`Upgrading ksqlDB `.
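Returning to the distributed |kconnect-long| upgrade above, the step that adds
``status.storage.topic`` to each worker configuration can be scripted. The
following is a minimal sketch; the function name, the default topic name, and the
example file path are assumptions for illustration, and the topic itself must still
be created manually as noted above:

```shell
# Hedged sketch: append status.storage.topic to a distributed worker config
# if it is not already present. The default topic name "connect-status" is
# an assumption; it must match a topic you have created manually.
ensure_status_storage_topic() {
  local props="$1" topic="${2:-connect-status}"
  if grep -q '^status.storage.topic=' "$props"; then
    echo "status.storage.topic already set in ${props}"
  else
    echo "status.storage.topic=${topic}" >> "$props"
    echo "added status.storage.topic=${topic} to ${props}"
  fi
}

# Example (illustrative path):
# ensure_status_storage_topic /etc/kafka/connect-distributed.properties
```

Running it a second time is a no-op, which makes it safe to include in a
configuration-management step before the rolling restart of the workers.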
Upgrade |c3|
^^^^^^^^^^^^

Follow the instructions in the :ref:`Confluent Control Center Upgrade Guide `.

.. _upgrade-other-client-apps:

Upgrade other client applications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Review :ref:`cross-component-compatibility` before you upgrade your client
applications.

Confluent clients (C/C++, Python, Go, and .NET) support all released |ak| broker
versions, but not all features may be available on all broker versions, because
some features rely on newer broker functionality. See :ref:`kafka_clients` for the
list of |ak| features supported in the latest versions of clients.

If it makes sense, build applications that use |ak| producers and consumers against
the new |kafka_branch|.x libraries and deploy the new versions. See
:ref:`app_development` for details about using the |kafka_branch|.x libraries.

Related content
---------------

- :ref:`cp-upgrade-checklist`
- :ref:`interoperability-versions`
- :ref:`migrate-zk-kraft`