Confluent for Kubernetes Release Notes

Confluent for Kubernetes (CFK) provides a declarative API-driven control plane to deploy and manage Confluent Platform on Kubernetes.

The following sections summarize the technical details of the CFK 2.6 releases.

Confluent for Kubernetes 2.6.5 Release Notes

Confluent for Kubernetes (CFK) 2.6.5 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.3.x, and 7.4.x on Kubernetes versions 1.22 - 1.26 (OpenShift 4.9 - 4.12).

Notable fixes

  • Fixed a race condition that occurred when updating certificates using cert-manager.
  • Fixed an issue in the ConfluentRolebinding custom resource (CR) which resulted in an incorrect state when you delete a resource-specific role binding from the CR.
  • When Kafka and KRaft clusters roll, information about under-replicated partitions (URPs) and log end offsets (LEOs) is now logged.
  • The Kafka shrink workflow now considers the brokerID offset annotation as part of the self-balancing cluster (SBC) rebalancing.
  • Critical security and vulnerability issues were fixed.

Confluent for Kubernetes 2.6.4 Release Notes

Confluent for Kubernetes (CFK) 2.6.4 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.3.x, and 7.4.x on Kubernetes versions 1.22 - 1.26 (OpenShift 4.9 - 4.12).

Notable fixes

  • Critical security and vulnerability issues were fixed.

Confluent for Kubernetes 2.6.3 Release Notes

Confluent for Kubernetes (CFK) 2.6.3 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.3.x, and 7.4.x on Kubernetes versions 1.22 - 1.26 (OpenShift 4.9 - 4.12).

Notable fixes

  • Critical security and vulnerability issues were fixed.
  • You can now specify the bidirectional value for the link.mode property in the ClusterLink custom resource (CR), as shown in the sketch below.
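
    A minimal sketch of a ClusterLink CR using this property follows; the metadata names are placeholders, and other required fields, such as the source and destination cluster references, are omitted:

    apiVersion: platform.confluent.io/v1beta1
    kind: ClusterLink
    metadata:
      name: clusterlink-bidirectional    # placeholder name
      namespace: confluent               # placeholder namespace
    spec:
      configs:
        link.mode: BIDIRECTIONAL         # value casing follows the Cluster Linking link.mode config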

Confluent for Kubernetes 2.6.2 Release Notes

Confluent for Kubernetes (CFK) 2.6.2 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.3.x, and 7.4.x on Kubernetes versions 1.22 - 1.26 (OpenShift 4.9 - 4.12).

Notable fixes

  • Updated the confluent-init-container image to have restricted read/write permission on the /opt directory.
  • Resolved Kubernetes pod overlay conflicts with the container.resources field. kafka.spec.podTemplate.resources can now be used alongside the pod overlay; see the sketch after this list.
  • The dynamic certificate update no longer fails and rolls the cluster when a configOverrides entry is not in the key=value format.
  • Fixed an issue where Confluent Control Center goes into CrashLoopBackOff when confluent.controlcenter.ui.basepath is added in the Confluent Control Center CR under spec.configOverrides.server.
  • Fixed the issue where multiple CA certificates with the same name and different issuers were not recognized.
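
    A minimal sketch of the podTemplate.resources usage referenced above; the replica count, image tags, and resource values are illustrative:

    apiVersion: platform.confluent.io/v1beta1
    kind: Kafka
    metadata:
      name: kafka
      namespace: confluent
    spec:
      replicas: 3
      dataVolumeCapacity: 10Gi
      image:
        application: confluentinc/cp-server:7.4.0
        init: confluentinc/confluent-init-container:2.6.2
      podTemplate:
        resources:             # standard Kubernetes resource requirements
          requests:
            cpu: "1"
            memory: 4Gi
          limits:
            cpu: "2"
            memory: 8Gi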

Notable enhancements and updates

  • You can now configure and deploy a Connect worker to download connector plugins from both Confluent Hub and custom URL locations.

    See Configure Kafka Connect.
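
    A sketch of the Connect CR build section pulling plugins from both locations, assuming the build.onDemand.plugins fields described in Configure Kafka Connect; the plugin names, URL, and checksum are placeholders:

    apiVersion: platform.confluent.io/v1beta1
    kind: Connect
    metadata:
      name: connect
      namespace: confluent
    spec:
      replicas: 1
      image:
        application: confluentinc/cp-server-connect:7.4.0
        init: confluentinc/confluent-init-container:2.6.2
      build:
        type: onDemand
        onDemand:
          plugins:
            confluentHub:                  # plugins downloaded from Confluent Hub
              - name: kafka-connect-datagen
                owner: confluentinc
                version: 0.6.0
            url:                           # plugins downloaded from a custom URL
              - name: custom-connector     # placeholder plugin
                archivePath: https://example.com/custom-connector.zip
                checksum: "<sha512-checksum>"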

  • Upgraded CFK to use Go 1.20.6 to fix a number of Go vulnerability issues.

  • Added support for removing the -Xms or -Xmx configurations from the Kafka configuration.
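
    For illustration, heap flags set through the Kafka CR's configOverrides can now be removed by deleting the entries and re-applying the CR; the values shown are examples:

    spec:
      configOverrides:
        jvm:
          - "-Xms4g"    # deleting these entries and re-applying the CR
          - "-Xmx4g"    # removes the explicit heap settings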

Confluent for Kubernetes 2.6.1 Release Notes

Confluent for Kubernetes (CFK) 2.6.1 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.3.x, and 7.4.x on Kubernetes versions 1.22 - 1.26 (OpenShift 4.9 - 4.12).

Notable fixes

  • Fixed security and vulnerability issues.

Confluent for Kubernetes 2.6.0 Release Notes

Confluent for Kubernetes (CFK) 2.6.0 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.3.x, and 7.4.x on Kubernetes versions 1.22 - 1.26 (OpenShift 4.9 - 4.12).

New features

Confluent for Kubernetes (CFK) Blueprints is GA

CFK Blueprints is a higher-level abstraction on top of CFK that allows platform teams to define a set of standard configurations, such as prod, staging, dev, and qa, and lets application teams easily deploy Confluent Platform on Kubernetes infrastructure.

For CFK Blueprints documentation, see Confluent for Kubernetes User Guides.

Multi-region clusters (MRC) with external access URLs

Now, Confluent Platform components can communicate via external access endpoints in an MRC deployment, eliminating the prior requirement to set up inter-component communication via DNS.

See Multi-Region Deployment of Confluent Platform in Confluent for Kubernetes for details.

KRaft-based Confluent Platform clusters

You can use CFK to provision, manage, and monitor KRaft-backed clusters in new Confluent Platform deployments.

KRaft replaces ZooKeeper in Confluent Platform deployments.

See Configure and Manage KRaft for details.
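
A minimal sketch of a KRaft-based deployment, assuming the KRaftController API described in Configure and Manage KRaft: a KRaftController CR plus a Kafka CR that references it. Names, sizes, and image tags are illustrative:

  apiVersion: platform.confluent.io/v1beta1
  kind: KRaftController
  metadata:
    name: kraftcontroller
    namespace: confluent
  spec:
    replicas: 3
    dataVolumeCapacity: 10Gi
    image:
      application: confluentinc/cp-server:7.4.0
      init: confluentinc/confluent-init-container:2.6.0
  ---
  apiVersion: platform.confluent.io/v1beta1
  kind: Kafka
  metadata:
    name: kafka
    namespace: confluent
  spec:
    replicas: 3
    dataVolumeCapacity: 10Gi
    image:
      application: confluentinc/cp-server:7.4.0
      init: confluentinc/confluent-init-container:2.6.0
    dependencies:
      kRaftController:
        clusterRef:
          name: kraftcontroller    # points the brokers at the KRaft controller quorum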

Health+ for CFK

In addition to the Confluent Platform metrics, CFK metrics are now available via Health+ for proactive support using Telemetry.

See Monitor Confluent Platform with Confluent for Kubernetes for details.

Notable enhancements and updates

Separate internal and external certificates

CFK supports the provisioning of separate certificates for internal communication (among Confluent Platform components) and external communication (between clients and the Confluent Platform cluster).

See Use separate TLS certificates for internal and external communications for details.
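
A sketch of the Kafka CR fields involved, assuming the listener-level TLS fields described in the linked page; the secret names are placeholders:

  spec:
    tls:
      secretRef: internal-tls        # certificate used for inter-component communication
    listeners:
      external:
        tls:
          enabled: true
          secretRef: external-tls    # certificate presented to external clients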

Dynamic certificate loading for Kafka

CFK supports dynamic certificate loading for Kafka listeners and Kafka REST server. This allows you to update your Kafka broker certificates without having to perform a Kafka cluster roll.

See Dynamic Kafka certificate update for details.
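
For example, a rotated listener certificate can be applied by re-creating the TLS secret in place; with dynamic loading, the brokers pick up the new certificate without a cluster roll. The secret name, namespace, and file names are placeholders:

  kubectl create secret generic tls-kafka \
    --from-file=fullchain.pem=./new-fullchain.pem \
    --from-file=cacerts.pem=./new-cacerts.pem \
    --from-file=privkey.pem=./new-privkey.pem \
    --namespace confluent \
    --dry-run=client -o yaml | kubectl apply -f -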

Notable fixes

  • Fixed an issue where the internal Kafka secret name interfered with the CFK upgrade workflow.
  • Fixed an issue where the default G1 garbage collector heap region size (-XX:G1HeapRegionSize=16) was set without the M unit, causing the value to be interpreted as 16 bytes and rounded up to 1 MB.
  • Added explanation for spec.status.phase in the API documentation.
  • You can now correctly remove JVM heap size configuration overrides.

Known issues

  • When deploying CFK to Red Hat OpenShift with Red Hat’s Operator Lifecycle Manager (that is, using the Operator Hub), you must use OpenShift version 4.9 or higher.

    This OpenShift version restriction does not apply when deploying CFK to Red Hat OpenShift in the standard way without using the Red Hat Operator Lifecycle Manager.

  • If the ksqlDB REST endpoint is using the auto-generated certificates, the ksqlDB deployment that points to Confluent Cloud requires trusting the Let’s Encrypt CA.

    For this to work, you must provide the ksqlDB CR with a CA bundle through cacerts.pem that contains both (1) the Confluent Cloud CA and (2) the self-signed CA.
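
    A sketch of building the bundle and supplying it through a Kubernetes secret; the file and secret names are placeholders:

    # Concatenate the Confluent Cloud CA and the self-signed CA into one bundle
    cat confluent-cloud-ca.pem self-signed-ca.pem > cacerts.pem

    # Create the secret that the ksqlDB CR references
    kubectl create secret generic ksqldb-tls \
      --from-file=cacerts.pem=./cacerts.pem \
      --namespace confluent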

  • When TLS is enabled, and when Confluent Control Center uses a different TLS certificate to communicate with MDS or Confluent Cloud Schema Registry, Control Center cannot use an auto-generated TLS certificate to connect to MDS or Confluent Cloud Schema Registry. See Troubleshooting Guide for a workaround.

  • When deploying the Schema Registry and Kafka CRs simultaneously, Schema Registry could fail because it cannot create topics with a replication factor of 3 while the Kafka brokers have not yet fully started.

    The workaround is to delete the Schema Registry deployment and re-deploy once Kafka is fully up.
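
    For example, assuming the default CR name and namespace (both placeholders):

    # Delete the Schema Registry deployment
    kubectl delete schemaregistry schemaregistry --namespace confluent

    # Once Kafka is fully up, re-apply the Schema Registry CR
    kubectl apply -f schemaregistry.yaml --namespace confluent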

  • When deploying an RBAC-enabled Kafka cluster in centralized mode, where another “secondary” Kafka cluster is used to store RBAC metadata, the error “License Topic could not be created” may be returned on the secondary Kafka cluster.

  • A periodic Kubernetes TCP probe on ZooKeeper causes frequent warning messages “client has closed socket” when warning logs are enabled.

  • REST Proxy configured with monitoring interceptors is missing the callback handler properties when RBAC is enabled. The interceptors do not work, and you see an error message in the KafkaRestProxy log.

    As a workaround, manually add configuration overrides as shown in the following KafkaRestProxy CR:

    configOverrides:
      server:
        - confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - consumer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - producer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
    
  • When configuring source-initiated cluster links with CFK where the source cluster has TLS enabled, do not set spec.tls in the ClusterLink CR on the destination cluster; if the source cluster uses mTLS authentication, do not set spec.authentication either.

    Instead, in the Destination mode ClusterLink CR, under the spec.configs section, set:

    • local.security.protocol: SSL for mTLS.
    • local.security.protocol: SASL_SSL for SASL authentication with TLS.

    For details about configuring the destination cluster for source-initiated Cluster Linking, see Configure the source-initiated cluster link on the destination cluster.
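
    A sketch of the Destination mode ClusterLink CR for the mTLS case; the metadata names are placeholders, and other required fields are omitted:

    apiVersion: platform.confluent.io/v1beta1
    kind: ClusterLink
    metadata:
      name: clusterlink-source-initiated   # placeholder name
      namespace: confluent                 # placeholder namespace
    spec:
      # spec.tls and spec.authentication are intentionally not set
      configs:
        local.security.protocol: SSL       # use SASL_SSL for SASL authentication with TLS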

  • The Pod Disruption Budget (PDB) in CFK is set as shown below and is non-configurable using the first-class CFK API:

    • For Kafka: maxUnavailable := 1
    • For ZooKeeper, maxUnavailable is based on the number of ZooKeeper nodes: maxUnavailable := (replicas - 1) / 2

    The PDB setting is typically used when upgrading a Kubernetes node: the pods are moved to different nodes for the node upgrade and then moved back. It also applies when you want to reduce the size of the node pool and drain a node by moving its pods out of that node.

    If you need to change a PDB, manually set it using the kubectl patch command as below:

    1. Block reconcile on Kafka to ensure that you do not overwrite anything. For example:

      kubectl annotate kafka kafka platform.confluent.io/block-reconcile=true
      
    2. Modify the PDB as required:

      kubectl patch pdb kafka -p '{"spec":{"maxUnavailable":<desired value>}}' --type=merge
      

      Use caution when you select the value. The wrong value could result in data loss or service disruption because you could bring down more Kafka nodes than intended.

    3. Verify the change:

      kubectl get pdb
      

      An example output based on the above command:

      NAME        MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
      kafka       N/A             <desired value>   <desired value>       11m
      
    4. Perform node drains as required.

    5. Enable reconcile on Kafka. For example:

      kubectl annotate kafka kafka platform.confluent.io/block-reconcile-
      
  • For Kafka replication listeners, you cannot use the JAAS method to set up LDAP credentials.

    Use the JAAS pass-through (jaasConfigPassThrough) method for LDAP credentials when configuring replication listeners.
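
    A sketch of a replication listener using the pass-through method, assuming the replication listener authentication fields in the Kafka CR and the ldap authentication type; the secret name is a placeholder and must hold the complete JAAS configuration:

    spec:
      listeners:
        replication:
          authentication:
            type: ldap
            jaasConfigPassThrough:
              secretRef: replication-listener-jaas   # placeholder secret with the full JAAS config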

  • Authorizer for KRaft

    When simple authorization is enabled in the Kafka custom resource (CR) (spec.authorization.type: simple), CFK sets the authorizer to AclAuthorizer, which works for ZooKeeper-based brokers, but is not compatible with KRaft.

    As a workaround, add the following configOverrides to both the Kafka brokers (in the Kafka CR) and the KRaft controllers (in the KRaftController CR):

    spec:
      configOverrides:
        server:
          - authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
          - super.users=User:kafka
          - confluent.authorizer.access.rule.providers=KRAFT_ACL
    
  • RBAC authorization is not supported in KRaft mode.

Known gaps from Confluent Platform 7.4

CFK 2.6 does not support the following Confluent Platform 7.4 functionality:

  • Kafka authentication mechanisms: Kerberos and SASL/SCRAM

Known gaps in CFK Blueprints

CFK Blueprints 2.6 does not support the following CFK functionality:

  • Internal listener authentication change on the running cluster
  • A central Confluent Platform cluster serving as the RBAC metadata store for multiple Confluent Platform clusters
  • The StaticPortBasedRouting and NodePort external access methods
  • Monitoring multiple Kafka clusters in Confluent Control Center
  • Configuring and managing KRaft-based clusters