Overview of Confluent Platform Upgrade¶
Upgrading to Confluent Platform 7.6 enables you to leverage the latest innovative features that bring a powerful cloud-native experience to your event streaming platform.
For details on the new features in Confluent Platform 7.6, see Release Notes for Confluent Platform 7.6 and the 7.6 blog post.
The following checklist provides a quick guide for how to upgrade to the latest version. For detailed guidance, see Upgrade Confluent Platform.
Step 0: Prepare for the upgrade¶
Here’s what you need to get started:
- An existing Confluent Platform deployment. If you’re starting with a new deployment, follow the steps in Install Confluent Platform On-Premises.
- An upgrade plan that matches your specific requirements and environment. You should not start working through this checklist on a live cluster. Review the Upgrade Guide fully and draft an upgrade plan.
Step 1: Upgrade ZooKeeper¶
Important
Starting with Confluent Platform 7.4, KRaft is the default for metadata management for a Kafka cluster. If you started using Confluent Platform with version 7.4 or later, you can likely skip this step. For more information, see KRaft Configuration Reference for Confluent Platform and Steps for upgrading to 7.6.x (KRaft-mode).
- Back up all configuration files before upgrading.
- Back up ZooKeeper data from the leader. In case of an upgrade failure, this backup gets you back to the latest committed state.
For details on how to upgrade ZooKeeper, see Upgrade ZooKeeper.
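The ZooKeeper backup can be as simple as archiving the data directory on the leader. A minimal sketch, assuming default-style paths (the directories shown are placeholders; use the `dataDir` from your `zookeeper.properties`):

```shell
# Back up the ZooKeeper data directory before upgrading.
# Paths are assumptions -- substitute your actual dataDir, and run on the leader.
ZK_DATA_DIR="${ZK_DATA_DIR:-./zookeeper-demo-data}"   # e.g. /var/lib/zookeeper
BACKUP_DIR="${BACKUP_DIR:-./zk-backup}"

mkdir -p "$ZK_DATA_DIR/version-2" "$BACKUP_DIR"       # demo dirs so the sketch runs anywhere
STAMP="$(date +%Y%m%d-%H%M%S)"
# Archive snapshots and transaction logs so a failed upgrade can be rolled back.
tar -czf "$BACKUP_DIR/zookeeper-data-$STAMP.tar.gz" -C "$ZK_DATA_DIR" .
echo "backup written: $BACKUP_DIR/zookeeper-data-$STAMP.tar.gz"
```

Store the archive off-host so it survives a failed upgrade of the machine itself.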
Step 2: Upgrade Kafka brokers¶
You have these options for upgrading your Kafka brokers:
- Downtime upgrade: If downtime is acceptable for your business case, you can take down the entire cluster, upgrade each Kafka broker individually, and restart the cluster.
- Rolling upgrade: In a rolling upgrade scenario, you upgrade one Kafka broker at a time while the cluster continues to run. To avoid downtime for end users, follow the recommendations in rolling restarts.
For details on how to upgrade Kafka brokers, see Upgrade Kafka brokers.
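For a rolling upgrade, the standard Kafka procedure is to pin the inter-broker protocol to the version you are upgrading from until every broker runs the new binaries, then bump it in a second rolling restart. A sketch (the version number is an example; substitute your current version):

```properties
# In server.properties on each broker, before upgrading the binaries:
# pin the protocol to the version you are upgrading FROM (3.5 shown as an example).
inter.broker.protocol.version=3.5
# After all brokers run the new binaries, set this to the new version and
# perform a second rolling restart of the brokers.
```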
Step 3: Upgrade Confluent Platform components¶
In this step, you will upgrade the Confluent Platform components. For a rolling upgrade, you can do this on one server at a time while the cluster continues to run. The details depend on your environment, but the steps to upgrade components are the same.
You should always upgrade Confluent Control Center as the final Confluent Platform component.
Upgrade steps:
- Stop the Confluent Platform components.
- Back up configuration files, for example in ./etc/kafka.
- Remove existing packages and their dependencies.
- Install new packages.
- Restart the Confluent Platform components.
For details on how to upgrade different package types, see the following sections:
For details on how to upgrade individual Confluent Platform components, see the following sections:
- The Confluent Replicator version must match the Connect version it is deployed on. For example, Replicator 7.6 should only be deployed to Connect 7.6, so if you upgrade Connect, you must also upgrade Replicator.
Step 4: Update configuration files¶
Some configuration settings change from one version to the next. The following sections describe changes that are required for specific versions.
Connect Log Redactor configuration¶
Starting in Confluent Platform 7.1.0, the Log Redactor enables you to redact logs based on regex rules. Log Redactor is a log4j appender that you configure for a component like Connect by updating its log4j properties file.
To configure the Log Redactor for Connect, after you complete your upgrade to Confluent Platform 7.1 or later, add the following settings to your connect-log4j.properties file.
# Configure the Log Redactor rewrite appender to redact log messages using the
# specified redaction regex rules. The `policy.rules` property specifies the
# location of the redaction rules file to be used. The appender redacts logs
# before forwarding them to other appenders specified in the `appenderRefs` property.
log4j.appender.redactor=io.confluent.log4j.redactor.RedactorAppender
log4j.appender.redactor.appenderRefs=stdout, connectAppender
log4j.appender.redactor.policy=io.confluent.log4j.redactor.RedactorPolicy
log4j.appender.redactor.policy.rules=<path_to_log4j_config>/connect-log-redactor-rules.json
# Attach the redactor appender to the root logger.
log4j.rootLogger=INFO, stdout, connectAppender, redactor
Confluent license¶
When you upgrade to Confluent Platform 5.4.x or later, add the confluent.license configuration parameter to the server.properties file. Confluent Platform 5.4.x and later requires the confluent.license setting to start. For more information, see Manage Confluent Platform Licenses.
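For example, in server.properties (the value shown is a placeholder, not a real key):

```properties
# Required for Confluent Platform 5.4.x and later; obtain the key from Confluent.
confluent.license=<your-license-key>
```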
Replication factor for Self-Balancing Clusters¶
In Confluent Platform 6.0.0, the confluent.balancer.topic.replication.factor setting was added for Self-Balancing Clusters configuration. Ensure that its value is less than or equal to the total number of brokers.
For more information, see confluent.balancer.topic.replication.factor.
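For example, in server.properties (the value shown assumes a cluster with at least three brokers):

```properties
# Replication factor for the Self-Balancing metadata topic.
# Must be less than or equal to the number of brokers in the cluster.
confluent.balancer.topic.replication.factor=3
```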
Confluent Log Redactor¶
Confluent Log Redactor is enabled for Connect by default. During an upgrade, however, you back up your configuration files before installing the new packages and restore them afterward to retain your configuration changes. Because the restored files predate the upgrade, Log Redactor is no longer enabled, and you must configure it manually in the connect-log4j.properties file as follows:
# Configures the Log Redactor rewrite appender, which redacts log messages using the specified redaction regex
# rules. The `policy.rules` property specifies the location of the redaction rules file to be used.
# The appender redacts logs before forwarding them to other appenders specified in the `appenderRefs` property.
log4j.appender.redactor=io.confluent.log4j.redactor.RedactorAppender
log4j.appender.redactor.appenderRefs=stdout, connectAppender
log4j.appender.redactor.policy=io.confluent.log4j.redactor.RedactorPolicy
log4j.appender.redactor.policy.rules=${log4j.config.dir}/connect-log-redactor-rules.json
Step 5: Enable Health+¶
Health+ enables you to identify issues before downtime occurs, ensuring high availability for your event streaming applications.
- Enable Telemetry – The Confluent Telemetry Reporter is a plugin that runs inside each Confluent Platform service to push metadata about the service to Confluent. Telemetry Reporter enables product features based on the metadata, like Health+. Telemetry is limited to metadata required to provide Health+ (for example, no topic data) and is used solely to assist Confluent in the provisioning of support services.
- Enable Health+ – After you enable Telemetry Reporter, you can activate Health+, which provides ongoing, real-time analysis of performance and configuration data for your Confluent Platform deployment.
Note
While enabling Telemetry and Health+ is highly encouraged and beneficial to minimize downtime, it is not mandatory in order to upgrade to Confluent Platform 6.0 and later. Speak with your Confluent account team if you have any questions about the features.
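As a sketch, telemetry is typically enabled per service by adding properties like the following to that service's properties file. The property names are taken from the Confluent Telemetry Reporter documentation and should be verified against your version; the key and secret are placeholders:

```properties
# Enable the Confluent Telemetry Reporter for this service (e.g. in server.properties).
confluent.telemetry.enabled=true
confluent.telemetry.api.key=<CLOUD_API_KEY>
confluent.telemetry.api.secret=<CLOUD_API_SECRET>
```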
Step 6: Rebuild applications¶
If you have applications that use Kafka producers and consumers, rebuild and redeploy them against the new 7.6.x libraries. For more information, see Schemas, Serializers, and Deserializers for Confluent Platform.
You can upgrade Kafka Streams applications independently, without requiring Kafka brokers to be upgraded first. Follow the instructions in the Kafka Streams Upgrade Guide to upgrade your applications to use the latest version of Kafka Streams.
For more information, see Upgrade other client applications.
Other Considerations¶
Confluent Platform 7.2 and later enables idempotence by default for Kafka producers. This can cause certain proprietary Confluent connectors (which use Kafka producers to write the license to the license topic) to fail when all of the following are true: you are not using a Centralized License in your Connect worker (that is, you specify license properties in each connector's configuration), you have upgraded your Connect cluster to one of the new Confluent Platform versions, and the Connect cluster's backing Kafka cluster runs Apache Kafka 2.7.x, Confluent Platform 6.1.x, or earlier. Because Kafka brokers on those older versions don't support idempotent producers out of the box, you can use any of the following workarounds:
Workaround 1: Switch to the Centralized License feature (introduced in Confluent Platform 6.0), which explicitly disables producer idempotence.
Workaround 2: Add the following property to each proprietary connector’s configuration:
confluent.topic.producer.enable.idempotence = false
Workaround 3: Upgrade your Kafka broker to a version newer than Apache Kafka 2.7.x or Confluent Platform 6.1.x.