Confluent for Kubernetes Release Notes¶
Confluent for Kubernetes (CFK) provides a declarative API-driven control plane to deploy and manage Confluent Platform on Kubernetes.
The following sections summarize the technical details of the CFK 2.10 releases.
Confluent for Kubernetes 2.10.0 Release Notes¶
Confluent for Kubernetes (CFK) 2.10.0 allows you to deploy and manage Confluent Platform versions from 7.1.x to 7.8.x on Kubernetes versions 1.25 - 1.31 (OpenShift 4.11 - 4.17).
The images released in CFK 2.10.0 are:
confluentinc/confluent-operator:0.1145.6
confluentinc/confluent-init-container:2.10.0
confluentinc/<CP component images>:7.8.0
For a full list of Confluent Platform images and tags, see Confluent Platform versions.
For details on installing CFK and Confluent Platform using the above images, see Deploy Confluent for Kubernetes and Deploy Confluent Platform using Confluent for Kubernetes.
New features¶
- Role-based access control (RBAC) with mTLS authentication
You can configure RBAC using mTLS authentication on a new Confluent Platform cluster.
- File-based user store in MDS for RBAC
You can configure RBAC using a file-based user store on a new Confluent Platform cluster.
- Use CFK to manage Confluent Platform for Apache Flink® environments and applications
You can manage Confluent Platform for Apache Flink application resources using CFK custom resources.
See Manage Flink Applications Using Confluent for Kubernetes.
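As a rough illustration only, a FlinkApplication custom resource might look like the following minimal sketch. The apiVersion, kind, and spec fields shown here are assumptions modeled on the upstream Apache Flink Kubernetes Operator deployment spec; verify the exact schema in the linked documentation.

    apiVersion: platform.confluent.io/v1beta1   # assumed API group/version
    kind: FlinkApplication                      # assumed kind name
    metadata:
      name: basic-flink-app
      namespace: confluent
    spec:
      # Illustrative fields modeled on the Flink Kubernetes Operator deployment spec;
      # confirm field names against the CFK 2.10 reference before use.
      image: <Confluent Platform for Apache Flink image>
      flinkVersion: v1_19
      jobManager:
        resource:
          memory: 1024m
          cpu: 1
      taskManager:
        resource:
          memory: 1024m
          cpu: 1
      job:
        jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
        parallelism: 2
        upgradeMode: stateless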
Notable enhancements and updates¶
- Now you can mount secrets on the Connect cluster using directory paths, as shown in the sketch after this list.
- CFK shows the ConfigOverride options that are set, instead of the default values, when you query a CR status.
- You can configure external access to JMX metrics using TCP. See Monitor Confluent Platform with Confluent for Kubernetes.
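As a sketch of the mounted-secrets enhancement above, a Connect CR might reference a secret and a directory path as follows. The path field is a hypothetical illustration of the new directory-path option, and the secret name is made up; verify the exact field names in the CFK 2.10 Connect CR reference.

    apiVersion: platform.confluent.io/v1beta1
    kind: Connect
    metadata:
      name: connect
      namespace: confluent
    spec:
      replicas: 2
      image:
        application: confluentinc/cp-server-connect:7.8.0
        init: confluentinc/confluent-init-container:2.10.0
      mountedSecrets:
        - secretRef: my-connector-credentials    # illustrative secret name
          # Hypothetical field showing the directory-path option; check the
          # CFK 2.10 Connect CR reference for the exact field name.
          path: /mnt/secrets/my-connector-credentials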
Notable fixes¶
- Fixed an issue that caused a failure to support centralized RBAC. The issue was present in CFK releases 2.9.0 through 2.9.3.
- CFK Connect deployments now correctly set the updated default values for Log4j. This fix causes your Connect cluster to roll when you upgrade to CFK 2.10.0.
- Removed the privileged tag from the tasks in the KRaft controller to support air-gapped installations in KRaft mode.
- Fixed an intermittent issue during the ZooKeeper to KRaft migration that resulted in a failure to set the owner.
Known issues¶
When deploying CFK to Red Hat OpenShift with Red Hat’s Operator Lifecycle Manager (that is, using the OperatorHub), you must use OpenShift version 4.9 or higher.
This OpenShift version restriction does not apply when deploying CFK to Red Hat OpenShift in the standard way without using the Red Hat Operator Lifecycle Manager.
If the ksqlDB REST endpoint is using the auto-generated certificates, the ksqlDB deployment that points to Confluent Cloud requires trusting the Let's Encrypt CA. For this to work, you must provide a CA bundle to the ksqlDB CR through cacerts.pem that contains both (1) the Confluent Cloud CA and (2) the self-signed CA.
When TLS is enabled, and Confluent Control Center uses a different TLS certificate to communicate with MDS or Confluent Cloud Schema Registry, Control Center cannot use an auto-generated TLS certificate to connect to MDS or Confluent Cloud Schema Registry. See Troubleshooting Guide for a workaround.
When deploying the Schema Registry and Kafka CRs simultaneously, Schema Registry can fail because it cannot create topics with a replication factor of 3. This happens because the Kafka brokers have not fully started.
The workaround is to delete the Schema Registry deployment and redeploy once Kafka is fully up.
When deploying an RBAC-enabled Kafka cluster in centralized mode, where another "secondary" Kafka cluster is used to store RBAC metadata, an error, "License Topic could not be created", may be returned on the secondary Kafka cluster.
A periodic Kubernetes TCP probe on ZooKeeper causes frequent warning messages “client has closed socket” when warning logs are enabled.
When RBAC is enabled, REST Proxy configured with monitoring interceptors is missing the callback handler properties. The interceptors do not work, and an error message appears in the KafkaRestProxy log.
As a workaround, manually add configuration overrides as shown in the following KafkaRestProxy CR:
    configOverrides:
      server:
        - confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - consumer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - producer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
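For context, the overrides sit under spec in the KafkaRestProxy CR, roughly as in the following sketch; the metadata and replica count are illustrative, and the rest of your existing spec stays unchanged.

    apiVersion: platform.confluent.io/v1beta1
    kind: KafkaRestProxy
    metadata:
      name: kafkarestproxy    # illustrative name
      namespace: confluent    # illustrative namespace
    spec:
      replicas: 1             # illustrative; keep your existing spec fields
      configOverrides:
        server:
          - confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
          - consumer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
          - producer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler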
When configuring source-initiated cluster links with CFK where the source cluster has TLS enabled, do not set spec.tls in the ClusterLink CR on the destination cluster, and do not set spec.authentication if the source cluster has mTLS authentication.
Instead, in the Destination mode ClusterLink CR, under the spec.configs section, set:
- local.security.protocol: SSL for mTLS.
- local.security.protocol: SASL_SSL for SASL authentication with TLS.
For details about configuring the destination cluster for source-initiated Cluster Linking, see Configure the source-initiated cluster link on the destination cluster.
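For illustration only, a Destination mode ClusterLink CR for an mTLS-enabled source might look like the sketch below. Apart from spec.configs, the field names and values are assumptions to check against the linked documentation.

    apiVersion: platform.confluent.io/v1beta1
    kind: ClusterLink
    metadata:
      name: clusterlink-destination     # illustrative name
      namespace: confluent              # illustrative namespace
    spec:
      # Assumed destination and source references; verify against the CFK ClusterLink reference.
      destinationKafkaCluster:
        kafkaRestClassRef:
          name: destination-kafkarestclass
      sourceKafkaCluster:
        bootstrapEndpoint: source-kafka.example.com:9092
      configs:
        # Do not set spec.tls or spec.authentication; configure the local security protocol instead.
        local.security.protocol: SSL    # use SASL_SSL for SASL authentication with TLS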
The CFK support bundle plugin on Windows systems does not capture all the logs.
As a workaround, specify the --out-dir flag in the kubectl confluent support-bundle command to provide the output location for the support bundle.
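For example, a run on Windows might look like the following, where the namespace and output path are illustrative:

    kubectl confluent support-bundle --namespace confluent --out-dir C:\cfk-support-bundle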
Known gaps from Confluent Platform 7.8¶
CFK 2.10 does not support the following Confluent Platform 7.8 functionality:
- Kafka authentication mechanisms: Kerberos and SASL/SCRAM
Known gaps in CFK Blueprints¶
CFK Blueprints 2.10 does not support the following CFK functionality:
- Internal listener authentication change on the running cluster
- A central Confluent Platform cluster serving as the RBAC metadata store for multiple Confluent Platform clusters
- The StaticPortBasedRouting and NodePort external access methods
- Monitoring multiple Kafka clusters in Confluent Control Center
- Configuring and managing KRaft-based clusters
- Using OpenID Connect (OIDC) for authentication