Confluent Platform Upgrade Checklist

Upgrading to Confluent Platform 7.2+ enables you to leverage the latest innovative features that bring a powerful cloud-native experience to your event streaming platform. Here are some of the exciting new features that enhance your event streaming use cases with greater elasticity, improved cost-effectiveness, increased reliability, and global availability:

  • Cluster Linking: Mirror topic names can now be prefixed.
  • ksqlDB: Support for stream-stream and table-table right joins, new JSON functions, improvements to aggregate functions, and complex Protobuf Schema Registry subjects.
  • Kafka Streams: Rack awareness, Interactive Query v2 preview, and record metadata in state store context.

The following checklist provides a quick guide for how to upgrade to the latest version. For detailed guidance, see Upgrade Confluent Platform.

Step 0: Prepare for the upgrade

Here’s what you need to get started:

  • An existing Confluent Platform deployment. If you’re starting with a new deployment, follow the steps in On-Premises Deployments for Confluent Platform.
  • An upgrade plan that matches your specific requirements and environment. You should not start working through this checklist on a live cluster. Review the Upgrade Guide fully and draft an upgrade plan.

Important

If you’re running a Confluent Platform version lower than 5.3.1, upgrade to 5.3.1 before upgrading to 6.1.x or later.

Step 1: Upgrade ZooKeeper

  • Back up all configuration files before upgrading.
  • Back up ZooKeeper data from the leader. If the upgrade fails, this backup lets you restore the latest committed state.
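The backup itself can be as simple as archiving the ZooKeeper data directory on the leader. A minimal sketch, assuming a conventional dataDir location; read the actual path from your zookeeper.properties before running it:

```shell
# Minimal pre-upgrade backup sketch. The dataDir path below is an assumption;
# check the dataDir value in your zookeeper.properties for the real location.
ZK_DATA_DIR="${ZK_DATA_DIR:-/var/lib/zookeeper}"
BACKUP_DIR="$(mktemp -d)"

# Archive the snapshot and transaction-log files so a failed upgrade can be
# rolled back to the latest committed state.
if [ -d "$ZK_DATA_DIR" ]; then
  tar -czf "$BACKUP_DIR/zookeeper-data.tar.gz" \
    -C "$(dirname "$ZK_DATA_DIR")" "$(basename "$ZK_DATA_DIR")"
  echo "Backup written to $BACKUP_DIR/zookeeper-data.tar.gz"
else
  echo "No ZooKeeper data directory found at $ZK_DATA_DIR" >&2
fi
```

Run this on the leader only, with the ensemble quiesced as described in Upgrade ZooKeeper.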

For more information, see Upgrade ZooKeeper.

Note

KIP-500: Kafka Raft (KRaft) replaces ZooKeeper, but KRaft is in early access and should be used in development only. It is not suitable for production. For more information, see Kafka Raft (KRaft) in Confluent Platform Preview.

Step 2: Upgrade Kafka brokers

You have these options for upgrading your Kafka brokers:

  • Downtime upgrade: If downtime is acceptable for your business case, you can take down the entire cluster, upgrade each Kafka broker individually, and restart the cluster.
  • Rolling upgrade: In a rolling upgrade scenario, you upgrade one Kafka broker at a time while the cluster continues to run. To avoid downtime for end users, follow the recommendations in rolling restarts.

For more information, see Upgrade Kafka brokers.
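A key detail of the rolling path is pinning the inter-broker protocol until every broker runs the new binaries. A minimal server.properties sketch (the version shown assumes you are upgrading from Confluent Platform 7.1, which ships Kafka 3.1; substitute the version you are actually upgrading from):

```properties
# Phase 1: before rolling each broker onto the new binaries, pin the
# inter-broker protocol to the Kafka version you are upgrading from.
inter.broker.protocol.version=3.1

# Phase 2: after every broker runs the new binaries, bump this setting to
# the new version (or remove it) and perform one more rolling restart.
```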

Step 3: Upgrade Confluent Platform components

In this step, you will upgrade the Confluent Platform components. For a rolling upgrade, you can do this on one server at a time while the cluster continues to run. The details depend on your environment, but the steps are the same.

  1. Stop the Confluent Platform components.
  2. Back up configuration files, for example in ./etc/kafka.
  3. Remove existing packages and their dependencies.
  4. Install new packages.
  5. Restart the Confluent Platform components.

You should upgrade Confluent Control Center as the final Confluent Platform component.
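On a package-based installation, the five steps above might look like the following sketch. The service and package names are illustrative, not definitive; adjust them for the components and package manager in your environment. The privileged stop/remove/install/start commands are shown as comments because they are environment-specific; the backup step is live.

```shell
# Hypothetical per-server upgrade sketch for a package-based installation.
CONFIG_DIR="${CONFIG_DIR:-/etc/kafka}"
BACKUP_DIR="$(mktemp -d)"

# 1. Stop the component, for example:
#    sudo systemctl stop confluent-kafka-connect

# 2. Back up configuration files before the packages are replaced.
if [ -d "$CONFIG_DIR" ]; then
  cp -a "$CONFIG_DIR" "$BACKUP_DIR/"
  echo "Configuration backed up to $BACKUP_DIR"
fi

# 3-4. Remove the old packages and install the new ones, for example:
#    sudo yum remove confluent-\*
#    sudo yum install confluent-platform

# 5. Restore your configuration files from $BACKUP_DIR and restart:
#    sudo systemctl start confluent-kafka-connect
```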

For more information, see the upgrade steps for the individual Confluent Platform components.

Step 4: Update configuration files

Some configuration settings change from one version to the next. The following sections describe changes that are required for specific versions.

Connect Log Redactor configuration

Starting in Confluent Platform 7.1.0, the Log Redactor enables you to redact logs based on regex rules. Log Redactor is a log4j appender that you configure for a component like Connect by updating its log4j properties file.

To configure the Log Redactor for Connect, after you complete your upgrade to Confluent Platform 7.1 or later, add the following settings to your connect-log4j.properties file.

# Configure the Log Redactor rewrite appender to redact log messages using the
# specified redaction regex rules. The `policy.rules` property specifies the
# location of the redaction rules file to be used. The appender redacts logs
# before forwarding them to other appenders specified in the `appenderRefs` property.
log4j.appender.redactor=io.confluent.log4j.redactor.RedactorAppender
log4j.appender.redactor.appenderRefs=stdout, connectAppender
log4j.appender.redactor.policy=io.confluent.log4j.redactor.RedactorPolicy
log4j.appender.redactor.policy.rules=<path_to_log4j_config>/connect-log-redactor-rules.json

# Attach the redactor appender to the root logger.
log4j.rootLogger=INFO, stdout, connectAppender, redactor

Confluent license

When you upgrade to Confluent Platform 5.4.x and later, add the confluent.license configuration parameter to the server.properties file. Confluent Platform 5.4.x and later requires the confluent.license setting to start. For more information, see Confluent Platform Licenses.
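For example, a sketch of the server.properties entry (the key shown is a placeholder, not a real license):

```properties
# server.properties
# Replace the placeholder with the license key provided by Confluent.
confluent.license=<your-license-key>
```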

Security

When you upgrade to Confluent Platform 5.4.x and later, update the authorizer class in the server.properties file.

Starting with Confluent Platform 5.4.x, the new authorizer class, kafka.security.authorizer.AclAuthorizer, replaces kafka.security.auth.SimpleAclAuthorizer. In the server.properties file, change existing instances of kafka.security.auth.SimpleAclAuthorizer to kafka.security.authorizer.AclAuthorizer. For more information, see ACL concepts.
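The change amounts to one line in server.properties:

```properties
# Before (Confluent Platform 5.3.x and earlier):
# authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

# After (Confluent Platform 5.4.x and later):
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
```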

Replication factor for Self-Balancing Clusters

In Confluent Platform 6.0.0, the confluent.balancer.topic.replication.factor setting was added for Self-Balancing configuration. Ensure that its value is less than or equal to the total number of brokers.

For more information, see confluent.balancer.topic.replication.factor.
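For example, on a five-broker cluster a replication factor of 3 satisfies the constraint (the value shown is illustrative):

```properties
# server.properties -- must be <= the total number of brokers (3 <= 5 here).
confluent.balancer.topic.replication.factor=3
```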

Confluent Log Redactor

Confluent Log Redactor is enabled for Connect by default in new installations. During an upgrade, however, you back up your configuration files before installing the new packages and restore them afterward to retain your configuration changes. Because the restored files predate the Log Redactor, it is no longer enabled after the upgrade, and you must configure it manually in the connect-log4j.properties file as follows:

# Configures the Log Redactor rewrite appender, which redacts log messages using the specified redaction regex
# rules. The `policy.rules` property specifies the location of the redaction rules file to be used.
# The appender redacts logs before forwarding them to other appenders specified in the `appenderRefs` property.
log4j.appender.redactor=io.confluent.log4j.redactor.RedactorAppender
log4j.appender.redactor.appenderRefs=stdout, connectAppender
log4j.appender.redactor.policy=io.confluent.log4j.redactor.RedactorPolicy
log4j.appender.redactor.policy.rules=${log4j.config.dir}/connect-log-redactor-rules.json

Step 5: Enable Health+

Health+ enables you to identify issues before they cause downtime, helping to keep your event streaming applications highly available.

  • Enable Telemetry – The Confluent Telemetry Reporter is a plugin that runs inside each Confluent Platform service to push metadata about the service to Confluent. Telemetry Reporter enables product features based on the metadata, like Health+. Telemetry is limited to metadata required to provide Health+ (for example, no topic data) and is used solely to assist Confluent in the provisioning of support services.
  • Enable Health+ – After you enable Telemetry Reporter, you can activate Health+, which provides ongoing, real-time analysis of performance and configuration data for your Confluent Platform deployment.
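Enabling Telemetry Reporter is a configuration change on each service. A sketch of the broker-side server.properties entries (the API key and secret are placeholders for your Confluent Cloud credentials):

```properties
# server.properties
confluent.telemetry.enabled=true
confluent.telemetry.api.key=<api-key>
confluent.telemetry.api.secret=<api-secret>
```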

Note

While enabling Telemetry and Health+ is strongly encouraged to help minimize downtime, it is not required for upgrading to Confluent Platform 6.0 and later. Speak with your Confluent account team if you have any questions about these features.

Step 6: Rebuild applications

If you have applications that use Kafka producers and consumers, rebuild and redeploy them against the new 7.2.x libraries. For more information, see Application Development.

You can upgrade Kafka Streams applications independently, without requiring Kafka brokers to be upgraded first. Follow the instructions in the Kafka Streams Upgrade Guide to upgrade your applications to use the latest version of Kafka Streams.

For more information, see Upgrade other client applications.

Other Considerations

Confluent Platform 7.2+ has idempotence enabled by default for Kafka producers. This can cause certain proprietary Confluent connectors (which use Kafka producers to write the license to the license topic) to fail when all of the following are true:

  • You are not using a Centralized License in your Connect worker; that is, each connector’s configuration carries its own license properties.
  • You have upgraded your Connect cluster to one of the new Confluent Platform versions.
  • The Kafka cluster backing the Connect cluster runs Apache Kafka 2.7.x, Confluent Platform 6.1.x, or earlier.

Because Kafka brokers on these older versions don’t support idempotent producers out of the box, you can try any of the following workarounds:

  • Workaround 1: You can switch to using the Centralized License feature (introduced in Confluent Platform 6.0) which explicitly disables producer idempotence.

  • Workaround 2: Add the following property to each proprietary connector’s configuration:

    confluent.topic.producer.enable.idempotence=false
    
  • Workaround 3: Upgrade your Kafka broker to a version newer than Apache Kafka 2.7.x or Confluent Platform 6.1.x.
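For Workaround 1, the license moves out of the individual connector configurations and into the Connect worker configuration. A sketch, with a placeholder key:

```properties
# connect-distributed.properties
# Centralized License (Confluent Platform 6.0+): set the license once in the
# worker configuration instead of in each connector's configuration.
confluent.license=<your-license-key>
```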