Upgrade Confluent Operator, Confluent Platform, and Helm

Before you start the upgrade process, make sure your Kubernetes cluster is among the Supported Environments for the target version of Confluent Operator.

The examples in this guide use the following assumptions:

  • $VALUES_FILE refers to the configuration file you set up in Create the global configuration file.

  • To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file ($VALUES_FILE). However, in your production deployments, use the --set or --set-file option when passing sensitive data to Helm. For example:

    helm upgrade --install kafka \
      ./confluent-operator \
      --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds,dc=test,dc=com" \
      --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
      --set kafka.enabled=true
  • operator is the namespace that Confluent Platform is deployed in.

  • All commands are executed in the helm directory under the directory Confluent Operator was downloaded to.
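As a sketch of the --set-file alternative mentioned above, the following assembles and prints a command that reads the LDAP credential from a local file so that it never appears on the command line (the path ./mds-credentials.txt and the release name are assumptions):

```shell
# Hypothetical sketch: build a helm invocation that reads the sensitive
# credential from a local file via --set-file instead of passing it inline.
# ./mds-credentials.txt is an assumed path; adjust for your environment.
cred_file=./mds-credentials.txt
cmd="helm upgrade --install kafka ./confluent-operator \
  --values \$VALUES_FILE \
  --set-file kafka.services.mds.ldap.authentication.simple.credentials=${cred_file} \
  --set kafka.enabled=true --namespace operator"
echo "$cmd"
```

This only prints the command; run it once the credential file exists and has restrictive permissions.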

Currently, Confluent Operator does not support a direct upgrade from Confluent Platform 5.3.x to 5.5. To upgrade from 5.3.x to 5.5, perform a rolling upgrade as follows:

  1. Upgrade Confluent Platform from 5.3.x to 5.4.x, making sure to run the pre_upgrade_cp54_zookeeper.sh script before upgrading.
  2. Upgrade Confluent Platform from 5.4.x to 5.5.x as described in the following section.

Upgrade Confluent Platform version 5.4.x to 5.5.x

Review the following upgrade considerations for limitations and unsupported upgrade scenarios:

  • Starting in 5.5, Confluent Operator supports namespaced deployments for new installations rather than requiring cluster-wide permissions for Confluent Operator. However, for the purposes of this upgrade workflow, you must continue to use the cluster-wide permissions for Operator and cannot switch to the namespaced scoping. The following default settings cannot be changed for upgrades:
    • operator.namespaced: false
    • operator.installClusterResources: true
  • Starting in 5.5, Confluent Operator can use pre-existing Kubernetes Storage Classes rather than needing to create its own as it did with 5.4.x. However, for the purposes of this upgrade workflow, you must continue to use the Operator-created Storage Class. Make sure your configuration file ($VALUES_FILE) still contains the global.provider.storage section you were using for 5.4.
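For reference, a hypothetical global.provider.storage section for an AWS deployment might look like the following in $VALUES_FILE (all values are illustrative; carry over exactly the storage settings you used for 5.4):

```yaml
# Illustrative only -- keep your existing 5.4 storage settings unchanged.
global:
  provider:
    name: aws
    region: us-west-2
    storage:
      provisioner: kubernetes.io/aws-ebs
      reclaimPolicy: Delete
      parameters:
        type: gp2
```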

Complete the following procedures to upgrade your operating environment.

Upgrade Confluent Operator

You start the migration and upgrade process by upgrading Confluent Operator.

  1. Download and extract the Operator bundle for Confluent Platform 5.5.

  2. Delete the old clusterrolebinding with the following command:

    kubectl delete clusterrolebinding <namespace>-<operator-deployment-name>

    For example, using the operator namespace and the cc-operator name:

    kubectl delete clusterrolebinding operator-cc-operator
  3. If you specify not to install CRDs (operator.installCRDs: false) at the time of the upgrade, you must manually update them after the upgrade, as described in Configure Confluent Operator and Confluent Platform.

  4. Run the disable_reconcile.sh script.

    The script disables reconcile operations so that Confluent Platform components do not restart; only Operator restarts during this upgrade.

    If you do not disable reconcile operations before you run the upgrade, components will begin rolling restarts and some pods may go into CrashLoopBackOff status.

    ./scripts/upgrade/disable_reconcile.sh <namespace>

    Your output should resemble the following:

    ./scripts/upgrade/disable_reconcile.sh <namespace>
    physicalstatefulcluster.operator.confluent.cloud/connectors annotated
    physicalstatefulcluster.operator.confluent.cloud/controlcenter annotated
    physicalstatefulcluster.operator.confluent.cloud/kafka annotated
    physicalstatefulcluster.operator.confluent.cloud/ksql annotated
    physicalstatefulcluster.operator.confluent.cloud/replicator annotated
    physicalstatefulcluster.operator.confluent.cloud/schemaregistry annotated
    physicalstatefulcluster.operator.confluent.cloud/zookeeper annotated
  5. From the directory where the Operator bundle was extracted, run:

    helm upgrade --install <operator-helm-release-name> \
      ./confluent-operator \
      --values $VALUES_FILE \
      --set operator.enabled=true \
      --namespace <namespace>

    <operator-helm-release-name> is the NAME displayed when running the helm ls command:

    NAME        REVISION    UPDATED              STATUS   CHART                APP VERSION    NAMESPACE
    operator    2           Mon Jan 6            DEPLOYED confluent-operator   1.0            operator
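The clusterrolebinding name deleted in step 2 is simply the namespace and the Operator deployment name joined with a hyphen; a minimal sketch using the example names from above:

```shell
# Assemble the clusterrolebinding name <namespace>-<operator-deployment-name>
# using the example values from step 2.
namespace=operator
operator_deployment=cc-operator
binding="${namespace}-${operator_deployment}"
echo "kubectl delete clusterrolebinding ${binding}"
# prints: kubectl delete clusterrolebinding operator-cc-operator
```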

Upgrade ZooKeeper, Kafka and other Confluent Platform components

Complete the following steps for ZooKeeper, Kafka, and other components, specifically Schema Registry, Connect, Replicator, and Confluent Control Center, in the Confluent Platform cluster.


Each cluster will roll during this procedure.


Do not follow the steps in this section to upgrade KSQL. See Migrate KSQL to ksqlDB for the special migration steps for ksqlDB 5.5.

  1. For each Confluent Platform component release in the cluster, update the desired configuration and version of the cluster:

    helm upgrade --install <component-helm-release-name> \
      ./confluent-operator \
      --values $VALUES_FILE \
      --set <component>.enabled=true \
      --namespace <namespace>
  2. Enable reconciliation so that Operator applies the desired configuration and updates the cluster:

    ./scripts/upgrade/enable_reconcile.sh <namespace> <cluster-name>

    For example:

    ./scripts/upgrade/enable_reconcile.sh operator kafka
  3. Complete the previous step for all remaining namespaces (if applicable).
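When one namespace contains several clusters, step 2 must be run once per cluster. A small sketch that prints the enable_reconcile.sh invocation for each cluster annotated in the earlier disable_reconcile.sh output (cluster names are examples; KSQL is intentionally excluded because it is migrated separately):

```shell
# Print the enable_reconcile.sh command for each cluster in the namespace.
# Cluster names mirror the disable_reconcile.sh output shown earlier;
# adjust the list to match your deployment. KSQL is handled separately.
namespace=operator
for cluster in zookeeper kafka schemaregistry connectors replicator controlcenter; do
  echo "./scripts/upgrade/enable_reconcile.sh ${namespace} ${cluster}"
done
```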

Migrate KSQL to ksqlDB

Upgrading KSQL 5.4 to ksqlDB 5.5 in-place is not supported. Complete the following steps to migrate KSQL 5.4 to ksqlDB 5.5:

  1. To prevent data loss, stop all client applications and producers that write to the KSQL cluster.

  2. Make sure you have access to kafka-console-consumer. The kafka-console-consumer tool is installed as part of Confluent Platform.

    You must provide credentials for the kafka-console-consumer command by using the --consumer.config option. For more information, see Encryption and Authentication with SSL.

  3. Capture existing SQL statements with the following command:

    kafka-console-consumer \
      --bootstrap-server <Kafka provider host> \
      --topic _confluent-ksql-<ksql_service_id>_command_topic \
      --from-beginning \
    | jq -r ".statement" > statements.sql

    The above example command pipes the output to jq and saves the SQL statements to a statements.sql file.

    Replace <ksql_service_id> with <namespace>.<KSQL cluster name>. You can find <KSQL cluster name> in the Helm chart of KSQL.

    If you have load balancers enabled for Kafka, replace <Kafka provider host> with kafka.<load balancer host address>:9092.

    If you do not have load balancers enabled, replace <Kafka provider host> with the kafka.name value from your Helm configuration as the bootstrap server, using port 9071, for example, kafka:9071.

    Look through the statements to make sure that the command worked as expected, without any errors or exceptions.

  4. Stop the existing KSQL deployment with the following command:

    helm uninstall <ksql-release-name> --namespace <namespace>
  5. Deploy a new ksqlDB cluster with the following command:

    helm upgrade --install ksql \
      --values $VALUES_FILE \
      --set ksql.enabled=true \
      --namespace <namespace>
  6. Replay SQL statements that you captured in the second step with the command in ksqlDB CLI:

    RUN SCRIPT <path-to-statements.sql>

    There have been backward-incompatible syntax changes between KSQL and ksqlDB, so some of the statements may fail. If this happens, run the statements in statements.sql one by one, fixing any that fail.

  7. Reconfigure and restart the clients to talk to the new ksqlDB clusters.

    If a load balancer is enabled, the new ksqlDB cluster may have a new load balancer IP. If clients reach ksqlDB through a DNS address rather than directly through the load balancer IP, you may need to update the DNS records to point at the new IP. See Add DNS entries for configuring the ksqlDB DNS name.
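The command topic name consumed in step 3 can be assembled from the namespace and the KSQL cluster name; a minimal sketch (both values are examples):

```shell
# Build the command topic name _confluent-ksql-<ksql_service_id>_command_topic,
# where <ksql_service_id> is <namespace>.<KSQL cluster name>.
# The namespace and cluster name below are examples.
namespace=operator
ksql_cluster=ksql
service_id="${namespace}.${ksql_cluster}"
topic="_confluent-ksql-${service_id}_command_topic"
echo "$topic"
# prints: _confluent-ksql-operator.ksql_command_topic
```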

See Upgrading ksqlDB for more information about migrating from KSQL to ksqlDB.

Update a single component image

You can update a single Confluent Platform component image version without affecting cluster performance. Special care is taken when upgrading Kafka brokers: before starting the upgrade of the next broker, the process verifies topic leader re-election and confirms that there are zero under-replicated partitions. This ensures that the Kafka cluster continues to perform well while being upgraded.

  1. In the IMAGES file that comes with the Operator bundle, locate the Confluent Platform component that you want to update, and note its image tag, <new-tag>.

  2. In your config file ($VALUES_FILE), add or update the image tag under the component's image section:

    <component>:
      image:
        tag: <new-tag>
  3. Run the following command, replacing <helm-release-name> with the Helm release name of the component:

    helm upgrade --install <helm-release-name> \
     --values $VALUES_FILE \
     --set <component>.enabled=true \
     --set <component>.image.tag=<new-tag> \
     ./confluent-operator \
     --namespace <namespace>

    A component rolling upgrade begins after you run the above helm upgrade command, unless you have disabled reconciliation. (See the workflow above on upgrading to 5.5 for an example of when you might disable reconciliation.)
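As a concrete sketch of the command in step 3, the following prints a hypothetical invocation for bumping the Schema Registry image (the component name, release name, and the tag 5.5.0.0 are assumptions; take the real tag from the IMAGES file):

```shell
# Print a hypothetical single-component upgrade command. All values are
# examples; replace them with your release name, component, and the tag
# from the IMAGES file.
component=schemaregistry
release=schemaregistry
new_tag=5.5.0.0
echo "helm upgrade --install ${release} \
  --values \$VALUES_FILE \
  --set ${component}.enabled=true \
  --set ${component}.image.tag=${new_tag} \
  ./confluent-operator --namespace operator"
```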