Confluent for Kubernetes Release Notes¶
Confluent for Kubernetes (CFK) provides a declarative API-driven control plane to deploy and manage Confluent Platform on Kubernetes.
The following sections summarize the technical details of the CFK 2.11 releases.
Confluent for Kubernetes 2.11.0 Release Notes¶
Confluent for Kubernetes (CFK) 2.11.0 allows you to deploy and manage Confluent Platform versions from 7.1.x to 7.9.x on Kubernetes versions 1.25 - 1.31 (OpenShift 4.12 - 4.17).
The images released in CFK 2.11.0 are:
- confluentinc/confluent-operator:0.1193.1
- confluentinc/confluent-init-container:2.11.0
- confluentinc/<CP component images>:7.9.0
For a full list of Confluent Platform images and tags, see Confluent Platform versions.
For details on installing CFK and Confluent Platform using the above images, see Deploy Confluent for Kubernetes and Deploy Confluent Platform using Confluent for Kubernetes.
New features¶
- Migration of the default dynamic PersistentVolumeClaim (PVC) storage class
CFK supports migration of the default dynamic PersistentVolumeClaim (PVC) storage class from Amazon Elastic Block Store (EBS) to Amazon Elastic File System (EFS). This migration enables configuration of the PVC, namely its access mode.
See Configure Storage for Confluent Platform Using Confluent for Kubernetes, and the sketch after this list.
- Schema CRD enhancements
- An annotation to disable hard delete of schemas in Schema custom resources (CRs).
- Support for referring to the “latest” child schema version.
See Manage Schemas for Confluent Platform Using Confluent for Kubernetes, and the example after this list.
- HTTP proxy support using Helm environment variables
CFK now allows the configuration of HTTP proxy settings, such as http_proxy and https_proxy, using newly implemented Helm environment variables. See the sketch after this list.
- OAuth/OIDC authentication to ksqlDB
CFK introduces OAuth support for ksqlDB, with ksqlDB as a server and Confluent Control Center as a client. The support includes RBAC and non-RBAC configurations for both new and existing clusters. See the example after this list.
- CFK plugin to view Confluent license information
A new kubectl confluent cluster license plugin is available for checking the status or content of your Confluent license. See the usage example after this list.
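The following is a minimal sketch of pointing a Kafka CR at an EFS-backed StorageClass for the PVC storage class migration. The class name efs-sc, the provisioner setup, and the Kafka CR values are illustrative assumptions, not CFK defaults; see the storage documentation for the supported migration steps.

    # Hypothetical EFS-backed StorageClass; names are illustrative
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: efs-sc
    provisioner: efs.csi.aws.com   # AWS EFS CSI driver
    ---
    apiVersion: platform.confluent.io/v1beta1
    kind: Kafka
    metadata:
      name: kafka
      namespace: confluent
    spec:
      replicas: 3
      dataVolumeCapacity: 10Gi
      image:
        application: confluentinc/cp-server:7.9.0
        init: confluentinc/confluent-init-container:2.11.0
      storageClass:
        name: efs-sc   # CFK provisions PVCs from this class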
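For the Schema CRD enhancements, a sketch of a Schema CR follows. The annotation key shown is hypothetical; confirm the exact name in the schema management documentation.

    apiVersion: platform.confluent.io/v1beta1
    kind: Schema
    metadata:
      name: payments-value
      namespace: confluent
      annotations:
        # Hypothetical annotation key to disable hard delete of this schema
        platform.confluent.io/hard-delete-disabled: "true"
    spec:
      data:
        configRef: payments-value-schema   # ConfigMap holding the schema definition
        format: avro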
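For the HTTP proxy support, a sketch of passing proxy settings at install time follows. The proxy.* values keys and proxy addresses are assumptions for illustration; consult the CFK Helm chart for the exact names.

    # Hypothetical values keys -- check the CFK Helm chart for the exact names
    helm upgrade --install confluent-operator \
      confluentinc/confluent-for-kubernetes \
      --namespace confluent \
      --set proxy.http_proxy=http://proxy.internal.example.com:3128 \
      --set proxy.https_proxy=http://proxy.internal.example.com:3128 \
      --set proxy.no_proxy=".svc\,.cluster.local"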
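For OAuth to ksqlDB, a sketch of a KsqlDB CR with OAuth server authentication follows. The field names under oauthSettings and the identity provider endpoints are illustrative assumptions; check the OAuth documentation for the exact schema.

    apiVersion: platform.confluent.io/v1beta1
    kind: KsqlDB
    metadata:
      name: ksqldb
      namespace: confluent
    spec:
      replicas: 1
      dataVolumeCapacity: 10Gi
      image:
        application: confluentinc/cp-ksqldb-server:7.9.0
        init: confluentinc/confluent-init-container:2.11.0
      authentication:
        type: oauth
        # Field names below are assumptions; confirm against the KsqlDB CRD
        oauthSettings:
          tokenEndpointUri: https://idp.example.com/oauth2/token
          jwksEndpointUri: https://idp.example.com/oauth2/keys
          expectedIssuer: https://idp.example.com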
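Assuming the license plugin follows the same kubectl confluent command pattern as the other plugins in these notes, usage looks like:

    # Check the status of the Confluent license in the given namespace
    kubectl confluent cluster license --namespace confluent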
Notable enhancements and updates¶
- Non-RBAC to RBAC migration
The workflow to enable RBAC in non-RBAC deployments is now tested and documented, along with playbooks for various authentication scenarios.
- Two new CRD versions for Confluent Platform custom resources (CRs)
App version and Helm chart version were added to the Confluent Platform CRDs. These versions are available in CFK releases 2.11.0 and later.
Notable fixes¶
- Fixed an issue that caused an incorrect cluster ID to be retrieved during the self-balancing process when CFK scales down a Kafka cluster in a KRaft-based deployment.
- Fixed a problem where CFK failed to communicate with the internal Kafka topics when the certificate authority chain had multiple certificates.
- Fixed an issue where audit logs failed to start when KRaft and Kafka brokers are secured with mTLS or SASL/PLAIN with LDAP.
Known issues¶
When deploying CFK to Red Hat OpenShift with Red Hat’s Operator Lifecycle Manager using the OperatorHub, you must use OpenShift version 4.9 or higher.
This OpenShift version restriction does not apply when deploying CFK to Red Hat OpenShift in the standard way without using the Red Hat Operator Lifecycle Manager.
When CFK is deployed on an OpenShift cluster with Red Hat's Operator Lifecycle Manager/OperatorHub, capturing the support bundle for CFK using the kubectl confluent support-bundle --namespace <namespace> command can fail with the following error message:
panic: runtime error: index out of range [0] with length 0
If the ksqlDB REST endpoint is using the auto-generated certificates, the ksqlDB deployment that points to Confluent Cloud requires trusting the Let's Encrypt CA. For this to work, you must provide a CA bundle to the ksqlDB CR through cacerts.pem that contains both (1) the Confluent Cloud CA and (2) the self-signed CA. A sketch follows this item.
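A sketch of the workaround, with hypothetical file and secret names, assuming the CA bundle is delivered through a Kubernetes secret referenced by the ksqlDB CR:

    # Bundle both CAs into cacerts.pem (file names are illustrative)
    cat confluent-cloud-ca.pem self-signed-ca.pem > cacerts.pem
    kubectl create secret generic ksqldb-tls \
      --from-file=cacerts.pem=cacerts.pem \
      --namespace confluent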
When TLS is enabled, and when Confluent Control Center uses a different TLS certificate to communicate with MDS or Confluent Cloud Schema Registry, Control Center cannot use an auto-generated TLS certificate to connect to MDS or Confluent Cloud Schema Registry. See Troubleshooting Guide for a workaround.
When deploying the Schema Registry and Kafka CRs simultaneously, Schema Registry can fail because it cannot create topics with a replication factor of 3. This is because the Kafka brokers have not fully started.
The workaround is to delete the Schema Registry deployment and re-deploy it once Kafka is fully up, as in the sketch below.
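A sketch of the workaround, with hypothetical resource and file names:

    # Remove the failed Schema Registry CR, then re-apply once the brokers are ready
    kubectl delete schemaregistry schemaregistry --namespace confluent
    kubectl apply -f schemaregistry.yaml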
When deploying an RBAC-enabled Kafka cluster in centralized mode, where another "secondary" Kafka cluster is used to store RBAC metadata, an error, "License Topic could not be created", may be returned on the secondary Kafka cluster.
A periodic Kubernetes TCP probe on ZooKeeper causes frequent "client has closed socket" warning messages when warning logs are enabled.
REST Proxy configured with monitoring interceptors is missing the callback handler properties when RBAC is enabled. The interceptors do not work, and you will see an error message in the KafkaRestProxy log.
As a workaround, manually add configuration overrides as shown in the following KafkaRestProxy CR snippet:
    configOverrides:
      server:
        - confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - consumer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - producer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
When configuring source-initiated cluster links with CFK where the source cluster has TLS enabled, do not set spec.tls in the ClusterLink CR on the destination cluster, and do not set spec.authentication if the source cluster has mTLS authentication. Instead, in the Destination mode ClusterLink CR, under the spec.configs section, set:
- local.security.protocol: SSL for mTLS.
- local.security.protocol: SASL_SSL for SASL authentication with TLS.
For details about configuring the destination cluster for source-initiated Cluster Linking, see Configure the source-initiated cluster link on the destination cluster.
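A minimal sketch of the destination-side ClusterLink CR for an mTLS source; required fields unrelated to this issue are omitted, and all names are illustrative:

    apiVersion: platform.confluent.io/v1beta1
    kind: ClusterLink
    metadata:
      name: clusterlink
      namespace: confluent
    spec:
      # Note: no spec.tls and no spec.authentication, per the guidance above
      configs:
        local.security.protocol: SSL   # use SASL_SSL for SASL with TLS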
The CFK support bundle plugin on Windows systems does not capture all the logs.
As a workaround, specify the --out-dir flag in the kubectl confluent support-bundle command to provide the output location for the support bundle, as shown in the example below.
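For example, on a Windows system (the output path is illustrative):

    # An explicit output directory works around the Windows log-capture gap
    kubectl confluent support-bundle --namespace confluent --out-dir C:\cfk-bundle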
Known gaps from Confluent Platform 7.9¶
CFK 2.11 does not support the following Confluent Platform 7.9 functionality:
- Kafka authentication mechanisms: Kerberos and SASL/SCRAM
Known gaps in CFK Blueprints¶
CFK Blueprints 2.11 does not support the following CFK functionality:
- Internal listener authentication change on the running cluster
- A central Confluent Platform cluster serving as the RBAC metadata store for multiple Confluent Platform clusters
- The StaticPortBasedRouting and NodePort external access methods
- Monitoring multiple Kafka clusters in Confluent Control Center
- Configuring and managing KRaft-based clusters
- Using OpenID Connect (OIDC) for authentication