Confluent for Kubernetes Release Notes

Confluent for Kubernetes (CFK) provides a declarative API-driven control plane to deploy and manage Confluent Platform on Kubernetes.

The following sections summarize the technical details of the CFK 2.5 releases.

Confluent for Kubernetes 2.5.5 Release Notes

Confluent for Kubernetes (CFK) 2.5.5 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.2.x, and 7.3.x on Kubernetes versions 1.21 - 1.25 (OpenShift 4.8 - 4.12).

Notable fixes

  • The dynamic certificate update no longer fails and rolls the cluster when a configOverrides configuration is not in the key=value format.
  • Updated the confluent-init-container image to have restricted read/write permission on the /opt directory.
  • Resolved Kubernetes pod overlay conflicts with the container.resources field. Now kafka.spec.podTemplate.resources can be used along with the pod overlay.
  • Fixed the issue where multiple CA certificates with the same name and different issuers were not recognized.
  • Fixed an issue where Confluent Control Center went into CrashLoopBackOff when confluent.controlcenter.ui.basepath was added under spec.configOverrides.server in the Confluent Control Center CR.

Notable enhancements and updates

  • You can now configure and deploy a Connect worker to download connector plugins from both Confluent Hub and custom URL locations.

    See Configure Kafka Connect. A configuration sketch follows this list.

  • Added support for removing the -Xms or -Xmx configurations from the Kafka configuration.

  • Upgraded CFK to use Go 1.20.6 to fix a number of Go vulnerability issues.
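
As an illustration of the combined plugin sources described above, a Connect CR's build section might look roughly like the following sketch. The plugin coordinates, URL, and checksum are placeholders, and the exact field requirements for mixing Confluent Hub and URL locations are described in Configure Kafka Connect.

    apiVersion: platform.confluent.io/v1beta1
    kind: Connect
    metadata:
      name: connect
      namespace: confluent
    spec:
      replicas: 1
      # Other required fields, such as the image section, are omitted for brevity.
      build:
        type: onDemand
        onDemand:
          plugins:
            # Plugin installed from Confluent Hub (example coordinates).
            confluentHub:
              - name: kafka-connect-datagen
                owner: confluentinc
                version: 0.6.0
            # Plugin downloaded from a custom URL (placeholder values).
            url:
              - name: my-custom-connector
                archivePath: https://example.com/connectors/my-custom-connector.zip
                checksum: <sha512 checksum of the archive>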

Confluent for Kubernetes 2.5.4 Release Notes

Confluent for Kubernetes (CFK) 2.5.4 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.2.x, and 7.3.x on Kubernetes versions 1.21 - 1.25 (OpenShift 4.8 - 4.12).

Notable fixes

  • Fixed security and vulnerability issues.

Confluent for Kubernetes 2.5.3 Release Notes

Confluent for Kubernetes (CFK) 2.5.3 is a Red Hat Operator Catalog-specific release, and it does not contain notable fixes or updates.

Confluent for Kubernetes 2.5.2 Release Notes

Confluent for Kubernetes (CFK) 2.5.2 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.2.x, and 7.3.x on Kubernetes versions 1.21 - 1.25 (OpenShift 4.8 - 4.12).

Notable fixes

  • Fixed security and vulnerability issues.

Confluent for Kubernetes 2.5.1 Release Notes

Confluent for Kubernetes (CFK) 2.5.1 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.2.x, and 7.3.x on Kubernetes versions 1.21 - 1.25 (OpenShift 4.8 - 4.12).

Notable fixes

  • Fixed the issue with the Kafka CR where the status.phase is sometimes stuck in PROVISIONING even after all replicas have successfully rolled and have come up cleanly.
  • Fixed security and vulnerability issues.

Confluent for Kubernetes 2.5.0 Release Notes

Confluent for Kubernetes (CFK) 2.5.0 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.2.x, and 7.3.x on Kubernetes versions 1.21 - 1.25 (OpenShift 4.8 - 4.12).

New features

Set the volume mode for Persistent Volume Claim

While creating CFK resources, you can set the volume mode for a Persistent Volume Claim (PVC) to Block or Filesystem. By default, PVCs are created in Filesystem mode.

Once set, the volume mode cannot be updated.

See Set custom VolumeMode for Persistent Volume Claims.
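
For illustration only, a Kafka CR that requests raw block storage might look roughly like the sketch below. The pvc.volumeMode field path is an assumption made for this example; the authoritative field name and placement are documented in Set custom VolumeMode for Persistent Volume Claims.

    # Hypothetical sketch: the field path for the volume mode is an assumption;
    # see the linked documentation for the supported spec.
    apiVersion: platform.confluent.io/v1beta1
    kind: Kafka
    metadata:
      name: kafka
      namespace: confluent
    spec:
      replicas: 3
      dataVolumeCapacity: 100Gi
      pvc:
        volumeMode: Block   # Filesystem is the default; the mode cannot be changed once set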

Kafka with FIPS-compliant ciphers

You can now use FIPS-compliant Java KeyStores with Kafka’s TLS configuration.

See Security Compliance in Confluent for Kubernetes.

Use Overlays for pod resources to support new Kubernetes capabilities

In the Confluent Platform component custom resource (CR), you can set and use additional Kubernetes pod template features that the CFK API does not support.

See Customize Confluent Platform pods with Pod Overlay.
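
As a rough illustration only, the overlay itself is plain Kubernetes pod template YAML that CFK merges into the generated pods, so pod-level fields that the first-class CFK API does not expose can be declared there. The ConfigMap name and data key below are placeholders for this sketch; the overlay file format and how the component CR references it are described in Customize Confluent Platform pods with Pod Overlay.

    # Hypothetical sketch: the ConfigMap name and data key are placeholders.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kafka-pod-overlay
      namespace: confluent
    data:
      pod-template.yaml: |
        spec:
          # Example of a pod-level field not covered by the first-class CFK API.
          hostAliases:
            - ip: "10.0.0.10"
              hostnames:
                - "mds.internal.example.com"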

Notable enhancements and updates

  • CFK provides Docker images for the ARM64 architecture for use in non-production environments. ARM64 support is in preview only.

  • The CFK support bundle now includes information about application resources, such as Cluster Link, Confluent RoleBinding, Connector, Kafka REST class, Topic, Schema, and Schema Exporter.

    See Support bundle.

  • CFK no longer requires a Confluent license key.

  • CFK supports specifying an existing Connect cluster as a first-class dependency in the ksqlDB CR.

  • CFK supports mounting volumes for the CFK operator pod.

Notable fixes

  • The webhook that prevents unsafe Kafka pod deletion now blocks pod deletion when both of the following are true:
    • The number of in-sync replicas is less than or equal to the configured minimum in-sync replicas, and
    • The broker is one of the in-sync replicas.
  • The webhook that prevents unsafe Kafka pod deletion now handles the in-sync replica count more accurately.
  • The maximum length of the referenced secret name for the TLS section in the application resource CRs, such as Cluster Links, Connectors, and Topics, is now set correctly.
  • The ClusterLink mirror topic status section no longer incorrectly shows PENDING_STOPPED after topics are promoted.
  • When Cluster Link topic creation fails, the ClusterLink status section now correctly displays the successfully created topics.
  • Spaces in the Confluent RoleBinding principal name are no longer replaced with the + sign.

Known issues in CFK 2.5

  • When deploying CFK to Red Hat OpenShift with Red Hat’s Operator Lifecycle Manager (that is, using the Operator Hub), you must use OpenShift version 4.9 or 4.10. You cannot use OpenShift 4.6 - 4.8.

    This OpenShift version restriction does not apply when deploying CFK to Red Hat OpenShift in the standard way without using the Red Hat Operator Lifecycle Manager.

  • If the ksqlDB REST endpoint is using the auto-generated certificates, the ksqlDB deployment that points to Confluent Cloud must trust the Let’s Encrypt CA.

    For this to work, you must provide the ksqlDB CR with a CA bundle, through cacerts.pem, that contains both (1) the Confluent Cloud CA and (2) the self-signed CA.
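
    For example, such a bundle can be assembled by concatenating the two CA certificates into a single cacerts.pem and storing it in a secret for the ksqlDB CR to reference. The file and secret names below are placeholders, and how the CR consumes the secret depends on your TLS configuration.

    # Concatenate the Confluent Cloud CA and the self-signed CA into one bundle
    # (file names are placeholders).
    cat confluent-cloud-ca.pem self-signed-ca.pem > cacerts.pem

    # Store the bundle in a secret for the ksqlDB CR to reference (secret name is a placeholder).
    kubectl create secret generic ksqldb-ca-bundle \
      --from-file=cacerts.pem=cacerts.pem \
      --namespace confluent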

  • When TLS is enabled, and when Confluent Control Center uses a different TLS certificate to communicate with MDS or Confluent Cloud Schema Registry, Control Center cannot use an auto-generated TLS certificate to connect to MDS or Confluent Cloud Schema Registry. See Troubleshooting Guide for a workaround.

  • When the Schema Registry and Kafka CRs are deployed simultaneously, Schema Registry can fail because it cannot create topics with a replication factor of 3 while the Kafka brokers have not yet fully started.

    The workaround is to delete the Schema Registry deployment and re-deploy it once Kafka is fully up.
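
    For example, assuming the default resource name, namespace, and CR file name, the re-deploy can be done with kubectl once the Kafka brokers report ready:

    # Resource name, namespace, and file name are assumptions for this example.
    kubectl delete schemaregistry schemaregistry --namespace confluent
    # Re-apply the original Schema Registry CR after Kafka is fully up.
    kubectl apply -f schemaregistry.yaml --namespace confluent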

  • When deploying an RBAC-enabled Kafka cluster in centralized mode, where another “secondary” Kafka cluster is used to store RBAC metadata, the secondary Kafka cluster may return the error “License Topic could not be created”.

  • A periodic Kubernetes TCP probe on ZooKeeper causes frequent warning messages “client has closed socket” when warning logs are enabled.

  • When RBAC is enabled, a REST Proxy configured with monitoring interceptors is missing the callback handler properties. The interceptors do not work, and an error message appears in the KafkaRestProxy log.

    As a workaround, manually add configuration overrides as shown in the following KafkaRestProxy CR:

    configOverrides:
      server:
        - confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - consumer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - producer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
    
  • When configuring source-initiated cluster links with CFK where the source cluster has TLS enabled, do not set spec.tls; additionally, do not set spec.authentication if the source cluster uses mTLS authentication.

    Instead, in the ClusterLink CR, under the spec.configs section, set local.security.protocol: SSL for mTLS.

    Set local.security.protocol: SASL_SSL for SASL authentication with TLS.
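
    For example, a source-initiated ClusterLink CR for an mTLS-secured source cluster might carry the setting as sketched below; only the relevant fields are shown, and the resource name is a placeholder.

    apiVersion: platform.confluent.io/v1beta1
    kind: ClusterLink
    metadata:
      name: clusterlink-source-initiated
      namespace: confluent
    spec:
      # Do not set spec.tls; for an mTLS source cluster, do not set spec.authentication either.
      configs:
        local.security.protocol: SSL   # use SASL_SSL for SASL authentication with TLS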

  • The Pod Disruption Budget (PDB) in CFK is set as shown below and is not configurable through the first-class CFK API:

    • For Kafka, maxUnavailable is based on the minISR: maxUnavailable := replicas - minISR
    • For ZooKeeper, maxUnavailable is based on the number of ZooKeeper nodes: maxUnavailable := (replicas - 1) / 2
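
    For example, a Kafka cluster with 4 replicas and a minISR of 3 gets maxUnavailable = 4 - 3 = 1, and a 3-node ZooKeeper ensemble gets maxUnavailable = (3 - 1) / 2 = 1.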

    The PDB setting typically comes into play when upgrading a Kubernetes node, where the pods are moved to different nodes for the upgrade and then moved back, or when reducing the size of the node pool, where you drain a node by moving its pods out of it.

    If you have a use case where you need to change the PDB, manually set it using the kubectl patch command as shown below:

    1. Block reconcile on Kafka to ensure that you do not overwrite anything. For example:

      kubectl annotate kafka kafka platform.confluent.io/block-reconcile=true
      
    2. Modify the PDB as required:

      kubectl patch pdb kafka -p '{"spec":{"maxUnavailable":<desired value>}}' --type=merge
      

      Use caution when selecting the value. The wrong value could result in data loss or service disruption because you could bring down more Kafka nodes than is safe.

    3. Verify the change:

      kubectl get pdb
      

      An example output based on the above command:

      NAME        MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
      kafka       N/A             <desired value>   <desired value>       11m
      
    4. Perform node drains as required.

    5. Enable reconcile on Kafka. For example:

      kubectl annotate kafka kafka platform.confluent.io/block-reconcile-
      

Known gaps from Confluent Platform 7.3

CFK 2.5 does not support the following Confluent Platform 7.3 functionality:

  • Kafka authentication mechanisms: Kerberos and SASL/SCRAM