.. _upgrade: Upgrade |cp| ============ .. note:: If you are a |cp| subscriber, and you have questions about upgrades or need help, contact us through our `Support Portal `__. .. include:: includes/cs-license.rst .. _upgrade-preparation: Preparation ----------- Consider the guidelines below when preparing to upgrade. * Always back up all configuration files before upgrading. This includes, for example, ``/etc/kafka``, ``/etc/kafka-rest``, and ``/etc/schema-registry``. * Read the documentation and draft an upgrade plan that matches your specific requirements and environment before starting the upgrade process. In other words, don't start working through this guide on a live cluster. Read the guide entirely, make a plan, then execute the plan. * Give careful consideration to the order in which components are upgraded. **Starting with version 0.10.2, Java clients (producer and consumer) have the ability to communicate with older brokers.** Version 0.10.2 clients can talk to version 0.10.0 or newer brokers. However, if your brokers are older than 0.10.0, you must upgrade all the brokers in the |ak-tm| cluster before upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer clients. Before 0.10.2, |ak| is backward compatible, which means that clients from |ak| 0.8.x releases (|cp| 1.0.x) will work with brokers from |ak| release 0.8.x through |kafka_branch| (|cp| 2.0.x through |version|.x), but not vice-versa. This means you always need to plan upgrades such that *all* brokers are upgraded before clients. Clients include any application that uses |ak| producer or consumer, command line tools, Camus, |sr|, REST Proxy, |kconnect-long| and |kstreams|. .. important:: Due to a bug introduced in |ak| 0.9.0.0 (|cp| 2.0.0), clients that depend on |zk| (old Scala high-level Consumer and MirrorMaker if used with the old consumer) will not work with 0.10.x.x or newer brokers. 
Therefore, |ak| 0.9.0.0 (|cp| 2.0.0) clients should be upgraded to 0.9.0.1 (|cp| 2.0.1) *before* brokers are upgraded to 0.10.x.x or newer. This step is not necessary for 0.8.x or 0.9.0.1 clients.

  .. important:: Some deployments have clients co-located with brokers (on the same node), although this is not recommended. In these cases, both the broker and the clients share the same packages. This is problematic because *all* brokers must be upgraded before any clients are upgraded. Pay careful attention to this when upgrading.

* |ak| 2.0.0 contains changes with potential compatibility impact and deprecations with respect to previous major versions (that is, 0.8.x.x, 0.9.x.x, 0.10.x.x, 0.11.x.x, and 1.0.x). Refer to the `Apache Kafka documentation `__ to understand how they affect applications using |ak|.

  .. important:: Java 7 is no longer supported. Java 8 is now the minimum required version.

  For complete compatibility information, see :ref:`interoperability-versions`.

* If you are using :ref:`kafka_ldap_authorizer`, you must migrate to the commercial ``confluent-server`` package when upgrading to |version|.0. See :ref:`migrate-confluent-server` for details.

* Read the :ref:`release_notes`. They contain important information about noteworthy features and changes to configurations that may impact your upgrade.

Upgrade procedures
------------------

#. Consider using :ref:`Confluent Control Center ` to monitor broker status during the :ref:`rolling restart `.

#. Determine and install the appropriate Java version. See :ref:`Supported Java Versions ` to determine which versions are supported for the |cp| version to which you are upgrading.

#. Determine whether clients are co-located with brokers. If they are, ensure that no client processes are upgraded until *all* |ak| brokers have been upgraded.

#. Decide on performing a rolling upgrade or a downtime upgrade.
|cp| supports both rolling upgrades (upgrade one broker at a time to avoid cluster downtime) and downtime upgrades (take down the entire cluster, upgrade it, and bring everything back up).

#. Upgrade *all* |ak| brokers (see :ref:`upgrade-brokers`).

#. Upgrade |sr|, |crest|, and Camus (see :ref:`upgrade-camus`).

#. If it makes sense, build applications that use |ak| producers and consumers against the new |version|.x libraries and deploy the new versions. See :ref:`app_development` for more details about using the |version|.x libraries.

.. _upgrade-brokers:

Upgrade all |ak| brokers
^^^^^^^^^^^^^^^^^^^^^^^^

In a rolling upgrade scenario, upgrade one |ak| broker at a time, taking into consideration the recommendations for doing :ref:`rolling restarts ` to avoid downtime for end users. In a downtime upgrade scenario, take the entire cluster down, upgrade each |ak| broker, then start the cluster.

Steps to upgrade for any fix pack release
"""""""""""""""""""""""""""""""""""""""""

For any fix pack release (for example, 3.1.1 to 3.1.2), you can perform a rolling upgrade by simply upgrading each broker one at a time. To upgrade each broker:

#. Stop the broker.
#. Upgrade the software (see below for your packaging type).
#. Start the broker.

.. _rolling-upgrade:

Steps for upgrading previous versions to |version|.x
""""""""""""""""""""""""""""""""""""""""""""""""""""

In a rolling upgrade scenario, upgrading to |cp| |version|.x (|ak| |kafka_branch|.x) requires special steps, because |ak| |kafka_branch|.x includes a change to the on-disk data format (unless the upgrade is from 3.3.x or newer) and to the inter-broker protocol. Follow these steps for a rolling upgrade:

#.
Update ``server.properties`` on all |ak| brokers by modifying the properties ``inter.broker.protocol.version`` and ``log.message.format.version`` to match the currently installed version:

   * For |cp| 2.0.x, use ``inter.broker.protocol.version=0.9.0`` and ``log.message.format.version=0.9.0``
   * For |cp| 3.0.x, use ``inter.broker.protocol.version=0.10.0`` and ``log.message.format.version=0.10.0``
   * For |cp| 3.1.x, use ``inter.broker.protocol.version=0.10.1`` and ``log.message.format.version=0.10.1``
   * For |cp| 3.2.x, use ``inter.broker.protocol.version=0.10.2`` and ``log.message.format.version=0.10.2``
   * For |cp| 3.3.x, use ``inter.broker.protocol.version=0.11.0`` and ``log.message.format.version=0.11.0``
   * For |cp| 4.0.x, use ``inter.broker.protocol.version=1.0`` and ``log.message.format.version=1.0``
   * For |cp| 4.1.x, use ``inter.broker.protocol.version=1.1`` and ``log.message.format.version=1.1``
   * For |cp| 5.0.x, use ``inter.broker.protocol.version=2.0`` and ``log.message.format.version=2.0``
   * For |cp| 5.1.x, use ``inter.broker.protocol.version=2.1`` and ``log.message.format.version=2.1``
   * For |cp| 5.2.x, use ``inter.broker.protocol.version=2.2`` and ``log.message.format.version=2.2``
   * For |cp| 5.3.x, use ``inter.broker.protocol.version=2.3`` and ``log.message.format.version=2.3``
   * For |cp| 5.4.x, use ``inter.broker.protocol.version=2.4`` and ``log.message.format.version=2.4``
   * If you use |cs|, you must provide a valid Confluent license key. For example: ``confluent.license=123``.

#. Upgrade each |ak| broker, one at a time.

#. After all |ak| brokers have been upgraded, make the following update in ``server.properties``: ``inter.broker.protocol.version=2.4``.

#. Restart each |ak| broker, one at a time, to apply the configuration change.

#. After most clients are using |version|.x, modify ``server.properties`` by changing the following property: ``log.message.format.version=2.4``.
Because the message format is the same in |cp| 3.3.x through |version|.x, this step is optional if the upgrade is from |cp| 3.3.x or newer.

#. If you have overridden the message format version as instructed above, you need to do one more rolling restart to upgrade it to its latest version. After all (or most) consumers have been upgraded to 0.11.0 or later, change ``log.message.format.version`` to ``2.4`` on each broker and restart the brokers one by one. Note that the older Scala clients, which are no longer maintained, do not support the message format introduced in 0.11, so to avoid conversion costs (or to take advantage of exactly-once semantics), the newer Java clients must be used.

.. _upgrade-cp-license:

Confluent license
"""""""""""""""""

Add the ``confluent.license`` configuration parameter to ``server.properties``. |cp| 5.4.x and later requires ``confluent.license`` to start. For more information, see :ref:`confluent-server-package`.

Additional |zk| upgrade information
"""""""""""""""""""""""""""""""""""

|zk| has been upgraded to 3.5.6. The |zk| upgrade from 3.4.X to 3.5.6 can fail if there are no snapshot files in the 3.4 data directory. This usually happens in test upgrades where |zk| 3.5.6 is trying to load an existing 3.4 data directory in which no snapshot file has been created. For more details about this issue, refer to `ZOOKEEPER-3056 `_. The fix provided in `ZOOKEEPER-3056 `_ is to specify the configuration ``snapshot.trust.empty=true`` in ``zookeeper.properties`` before the upgrade.

Data loss has been observed in standalone cluster upgrades when using the ``snapshot.trust.empty=true`` configuration. For more details about the issue, refer to `ZOOKEEPER-3644 `_. The recommended safe workaround is to copy the empty `snapshot `_ file to the 3.4 data directory if there are no snapshot files present in the 3.4 data directory. For more details about the workaround, refer to the |zk| `Upgrade FAQ `_.
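As a minimal sketch of the ``snapshot.trust.empty`` approach described above (the ``dataDir`` path is an assumption; use your actual data directory), the setting can be added to ``zookeeper.properties`` before the upgrade and removed after the first 3.5.6 snapshot has been written:

.. codewithvars:: ini

   # zookeeper.properties -- illustrative sketch, not a complete configuration
   dataDir=/var/lib/zookeeper

   # Allow ZooKeeper 3.5.6 to start against a 3.4 data directory that
   # contains no snapshot files (see ZOOKEEPER-3056). Remove this line
   # after the upgrade completes, given the data-loss caveat above.
   snapshot.trust.empty=true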
An embedded Jetty-based `AdminServer `_ was added in |zk| 3.5. In upstream |zk|, the AdminServer is enabled by default and starts on port 8080; however, it is disabled by default in the |zk| configuration (``zookeeper.properties``) provided by the |ak-tm| distribution. Make sure to update your local ``zookeeper.properties`` file with ``admin.enableServer=false`` if you want to disable the AdminServer. Refer to the `AdminServer configuration `_ to configure the AdminServer.

Four letter words whitelist in |zk|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Starting in |zk| 3.5.3, Four Letter Words commands must be explicitly whitelisted in the ``zookeeper.4lw.commands.whitelist`` setting for the |zk| server to enable them. By default the whitelist contains only the ``srvr`` command, which ``zkServer.sh`` uses. The rest of the Four Letter Words commands are disabled by default.

Example whitelisting the ``stat``, ``ruok``, ``conf``, and ``isro`` commands while disabling the rest of the Four Letter Words commands:

.. codewithvars:: bash

   4lw.commands.whitelist=stat, ruok, conf, isro

Example whitelisting all Four Letter Words commands:

.. codewithvars:: bash

   4lw.commands.whitelist=*

See `The Four Letter Words `_ for details.

Upgrading DEB packages using APT
""""""""""""""""""""""""""""""""

#. Back up all configuration files from ``/etc``, including, for example, ``/etc/kafka``, ``/etc/kafka-rest``, and ``/etc/schema-registry``.

#. Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to :ref:`rolling-upgrade`).

   .. codewithvars:: bash

      # The example below removes the Kafka package (for Scala |scala_version|)
      sudo kafka-server-stop
      sudo apt-get remove confluent-kafka-|scala_version|

      # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
      sudo apt-get autoremove confluent-platform-|scala_version|

#. Remove the older GPG key and import the updated key.
If you have already imported the updated ``8b1da6120c2bf624`` key, then you can skip this step. However, if you still have the old ``670540c841468433`` key installed, now is the time to remove it and import the ``8b1da6120c2bf624`` key:

   .. codewithvars:: bash

      sudo apt-key del 41468433
      wget -qO - https://packages.confluent.io/deb/|version|/archive.key | sudo apt-key add -

#. Remove the repository files of the previous version.

   .. codewithvars:: bash

      sudo add-apt-repository -r "deb https://packages.confluent.io/deb/ stable main"

#. Add the |version| repository to ``/etc/apt/sources.list``.

   .. codewithvars:: bash

      sudo add-apt-repository "deb https://packages.confluent.io/deb/|version| stable main"

#. Refresh repository metadata.

   .. codewithvars:: bash

      sudo apt-get update

#. Install the new version. If you modified the configuration files, ``apt`` will prompt you to resolve the conflicts; be sure to keep your original configuration.

   .. codewithvars:: bash

      sudo apt-get install confluent-platform-|scala_version|

      # Or install the packages you need one by one. For example, to install only Kafka:
      sudo apt-get install confluent-kafka-|scala_version|

   .. include:: includes/installing-cp.rst
      :start-after: tip_for_installation
      :end-before: tip-for-available-packages-start

   .. tip:: You can view all available |cp| builds with this command:

      .. codewithvars:: bash

         apt-cache show confluent-platform-|scala_version|

      You can install specific |cp| builds by appending the version (````) to the install command:

      .. codewithvars:: bash

         sudo apt-get install confluent-platform-|scala_version|-

#. Start |cp| components.

   .. codewithvars:: bash

      kafka-server-start -daemon /etc/kafka/server.properties

Upgrading RPM packages using YUM
""""""""""""""""""""""""""""""""

#. Back up all configuration files from ``/etc``, including, for example, ``/etc/kafka``, ``/etc/kafka-rest``, and ``/etc/schema-registry``.

#. Stop the services and remove the existing packages and their dependencies.
This can be done on one server at a time for a rolling upgrade (refer to :ref:`rolling-upgrade`).

   .. codewithvars:: bash

      # The example below removes the Kafka package (for Scala |scala_version|)
      sudo kafka-server-stop
      sudo yum remove confluent-kafka-|scala_version|

      # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
      sudo yum autoremove confluent-platform-|scala_version|

#. Remove the repository files of the previous version.

   .. codewithvars:: bash

      sudo rm /etc/yum.repos.d/confluent.repo

#. Remove the older GPG key. This step is optional if you have already removed Confluent's older (``670540c841468433``) GPG key. Confluent's newer (``8b1da6120c2bf624``) key appears in the RPM database as ``gpg-pubkey-0c2bf624-60904208``.

   .. codewithvars:: bash

      sudo rpm -e gpg-pubkey-41468433-54d512a8
      sudo rpm --import https://packages.confluent.io/rpm/|version|/archive.key

#. Add the repository to your ``/etc/yum.repos.d/`` directory in a file named :litwithvars:`confluent-|version|.repo`.

   .. codewithvars:: ini

      [confluent-|version|]
      name=Confluent repository for |version|.x packages
      baseurl=https://packages.confluent.io/rpm/|version|
      gpgcheck=1
      gpgkey=https://packages.confluent.io/rpm/|version|/archive.key
      enabled=1

#. Refresh repository metadata.

   .. codewithvars:: bash

      sudo yum clean all

#. Install the new version. Note that ``yum`` may override your existing configuration files, so you will need to restore them from the backup after installing the packages.

   .. codewithvars:: bash

      sudo yum install confluent-platform-|scala_version|

      # Or install the packages you need one by one. For example, to install just Kafka:
      sudo yum install confluent-kafka-|scala_version|

   .. include:: includes/installing-cp.rst
      :start-after: tip_for_installation
      :end-before: tip-for-available-packages-start

#. Start services.
.. codewithvars:: bash

      kafka-server-start -daemon /etc/kafka/server.properties

TAR or ZIP archives
"""""""""""""""""""

For ZIP and TAR archives, you can delete the old archive directory after the new archive directory has been created and any previous configuration files have been copied into it, as described in the following steps.

#. Return to the directory where you installed |cp|.

#. Back up all configuration files from ``./etc``, including, for example, ``./etc/kafka``, ``./etc/kafka-rest``, ``./etc/schema-registry``, and ``./etc/confluent-control-center``.

#. Stop the services and remove the existing packages and their dependencies. This can be done on one server at a time for a rolling upgrade (refer to :ref:`rolling-upgrade`).

   .. codewithvars:: bash

      ./bin/control-center-stop
      ./bin/kafka-rest-stop
      ./bin/schema-registry-stop
      ./bin/kafka-server-stop
      ./bin/zookeeper-server-stop

      # To remove Confluent Platform and all its dependencies at once, run the following after stopping all services
      cd ..
      rm -R confluent-3.3.1   # Replace 3.3.1 with the installed version number

#. Unpack the new archive, then copy your backed-up configuration files into the new archive directory as needed.

   .. codewithvars:: bash

      tar xzf confluent-|release|-|scala_version|.tar.gz

      # Or for ZIP archives:
      unzip confluent-|release|-|scala_version|.zip

   .. include:: includes/installing-cp.rst
      :start-after: tip_for_installation
      :end-before: tip-for-available-packages-start

#. Start services.

   .. codewithvars:: bash

      sudo confluent-|release|/bin/zookeeper-server-start -daemon /etc/kafka/zookeeper.properties
      sudo confluent-|release|/bin/kafka-server-start -daemon /etc/kafka/server.properties

.. _upgrade-sr:

Upgrade |sr|
^^^^^^^^^^^^

You can upgrade |sr| after *all* |ak| brokers have been upgraded. To upgrade |sr|, follow the same steps above to upgrade the package (back up configuration files, remove packages, install upgraded packages, and so on). Then restart |sr|.
.. tip:: If you have a multi-node |sr| cluster running |cp| 4.1.1 or earlier, do not perform a rolling upgrade to 5.2.x directly. Doing so might generate errors caused by an intermediate state: a mixed live cluster of legacy and newer |sr| nodes, which require feature parity for managing schema IDs. Instead, do one of the following:

   - If you want to use a rolling upgrade, first upgrade to version 4.1.2 or 4.1.3, then perform a rolling upgrade to 5.2.x or a newer version.
   - Otherwise, stop the whole 4.1.1 cluster and upgrade all |sr| nodes at the same time to 5.2.x or a newer version.

.. _upgrade-rest-proxy:

Upgrade |crest-long|
^^^^^^^^^^^^^^^^^^^^

You can upgrade the |crest-long| service after *all* |ak| brokers have been upgraded. To upgrade the |crest| service, follow the same steps above to upgrade the package (back up configuration files, remove packages, install upgraded packages, and so on). Then restart the |crest-long| service.

.. _upgrade-kafka-streams:

Upgrade |kstreams| applications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can upgrade |kstreams| applications independently, without requiring |ak| brokers to be upgraded first. Follow the instructions in the :ref:`Kafka Streams Upgrade Guide ` to upgrade your applications to use the latest version of |kstreams|.

.. _upgrade_connect:

Upgrade |kconnect-long|
^^^^^^^^^^^^^^^^^^^^^^^

You can upgrade |kconnect-long| in either standalone or distributed mode.

Upgrade |kconnect-long| standalone mode
"""""""""""""""""""""""""""""""""""""""

You can upgrade |kconnect-long| in standalone mode after *all* |ak| brokers have been upgraded. To upgrade |kconnect-long|, follow the same steps above to upgrade the package (back up configuration files, remove packages, install upgraded packages, and so on). Then restart the client processes.

.. _upgrade-connect-distributed:

Upgrade |kconnect-long| distributed mode
""""""""""""""""""""""""""""""""""""""""

A new required configuration, ``status.storage.topic``, was added to |kconnect-long| in 0.10.0.1.
To upgrade a |kconnect-long| cluster, add this configuration before updating to the new version. The setting will be ignored by older versions of |kconnect-long|.

#. Back up worker configuration files.

#. Modify your configuration file to add the ``status.storage.topic`` setting. You can safely modify the configuration file while the worker is running. Note that you should create this topic manually. See :ref:`Distributed Mode Configuration ` in the :ref:`Connect User Guide ` for a detailed explanation.

#. Perform a rolling restart of the workers.

.. _upgrade-camus:

Upgrade Camus
^^^^^^^^^^^^^

Camus was deprecated in |cp| 3.0.0 and removed in |cp| 5.0.0.

Upgrade |c3|
^^^^^^^^^^^^

Follow the instructions in the :ref:`Confluent Control Center Upgrade Guide `.

.. _upgrade-other-client-apps:

Upgrade other client applications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Review :ref:`cross-component-compatibility` before you upgrade your client applications.

Version 0.10.2 or newer Java clients (producer and consumer) work with version 0.10.0 or newer |ak| brokers. If your brokers are older than 0.10.0, you must upgrade all the brokers in the |ak-tm| cluster before upgrading your Java clients. Version 0.10.2 brokers support version 0.8.x and newer Java clients.

Confluent's C/C++, Python, Go, and .NET clients support all released |ak| broker versions, but not all features may be available on all broker versions, since some features rely on newer broker functionality. See :ref:`kafka_clients` for the list of |ak| features supported in the latest versions of clients.

If it makes sense, build applications that use |ak| producers and consumers against the new |kafka_branch|.x libraries and deploy the new versions. See :ref:`app_development` for details about using the |kafka_branch|.x libraries.

Additional client application upgrade information
"""""""""""""""""""""""""""""""""""""""""""""""""

* The `Consumer API has changed `_ between |ak| 0.9.0.x and 0.10.0.0.
* In librdkafka version 0.11.0, the default value for the ``api.version.request`` configuration property changed from ``false`` to ``true``, meaning that librdkafka makes use of the latest protocol features of the broker without the need to set this property to ``true`` explicitly. Due to a bug in |ak| 0.9.0.x, this causes client-broker connections to stall for 10 seconds during connection startup when connecting to 0.9.0.x brokers. The workaround is to explicitly configure ``api.version.request`` to ``false`` on clients communicating with brokers at version 0.9.0.x or older.
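As an illustrative sketch of the librdkafka workaround above (the ``broker.version.fallback`` value shown is an assumption; set it to your actual broker version), the properties can be applied in any librdkafka-based client configuration:

.. codewithvars:: ini

   # librdkafka client configuration sketch -- only needed for <=0.9.0.x brokers
   # Disable the ApiVersionRequest to avoid the 10-second connection stall.
   api.version.request=false
   # Optionally pin the assumed broker version so librdkafka selects
   # protocol features compatible with the older broker.
   broker.version.fallback=0.9.0.1

Remove both overrides after the brokers have been upgraded to 0.10.0 or newer, so clients can again negotiate the latest protocol features automatically.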