Confluent for Kubernetes Release Notes¶
Confluent for Kubernetes (CFK) provides a declarative API-driven control plane to deploy and manage Confluent Platform on Kubernetes.
The following sections summarize the technical details of the CFK 2.8 releases.
Confluent for Kubernetes 2.8.5 Release Notes¶
Confluent for Kubernetes (CFK) 2.8.5 allows you to deploy and manage Confluent Platform versions 7.5.x and 7.6.x on Kubernetes versions 1.25 - 1.29 (OpenShift 4.10 - 4.16).
The images released in CFK 2.8.5 are:
confluentinc/confluent-operator:0.921.77
confluentinc/confluent-init-container:2.8.5
confluentinc/<CP component images>:7.6.4
For details on installing CFK and Confluent Platform using the above images, see Deploy Confluent for Kubernetes and Deploy Confluent Platform using Confluent for Kubernetes.
Notable fixes¶
For the list of security and vulnerability issues fixed in this release, see Security Advisories and Security Release Notes.
Confluent for Kubernetes 2.8.4 Release Notes¶
Confluent for Kubernetes (CFK) 2.8.4 allows you to deploy and manage Confluent Platform versions 7.5.x and 7.6.x on Kubernetes versions 1.25 - 1.29 (OpenShift 4.10 - 4.15).
The images released in CFK 2.8.4 are:
confluentinc/confluent-operator:0.921.63
confluentinc/confluent-init-container:2.8.4
confluentinc/<CP component images>:7.6.3
For details on installing CFK and Confluent Platform using the above images, see Deploy Confluent for Kubernetes and Deploy Confluent Platform using Confluent for Kubernetes.
Notable fixes¶
- Internal issues were resolved, including critical security and vulnerability issues.
Confluent for Kubernetes 2.8.3 Release Notes¶
Confluent for Kubernetes (CFK) 2.8.3 allows you to deploy and manage Confluent Platform versions 7.5.x and 7.6.x on Kubernetes versions 1.25 - 1.29 (OpenShift 4.10 - 4.15).
The images released in CFK 2.8.3 are:
confluentinc/confluent-operator:0.921.40
confluentinc/confluent-init-container:2.8.3
confluentinc/<CP component images>:7.6.2
For details on installing CFK and Confluent Platform using the above images, see Deploy Confluent for Kubernetes and Deploy Confluent Platform using Confluent for Kubernetes.
Notable fixes¶
- To prevent possible data loss, you cannot change the Kafka cluster used by a Topic Custom Resource (CR) after you create a topic using that Topic CR.
- Internal issues were resolved, including critical security and vulnerability issues.
Confluent for Kubernetes 2.8.2 Release Notes¶
Confluent for Kubernetes (CFK) 2.8.2 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.5.x, and 7.6.x on Kubernetes versions 1.25 - 1.29 (OpenShift 4.10 - 4.14).
The images released in CFK 2.8.2 are:
confluentinc/confluent-operator:0.921.20
confluentinc/confluent-init-container:2.8.2
confluentinc/<CP component images>:7.6.1
For details on installing CFK and Confluent Platform using the above images, see Deploy Confluent for Kubernetes and Deploy Confluent Platform using Confluent for Kubernetes.
Notable fixes¶
- Fixed an issue to avoid race conditions while updating certificates using cert-manager.
- Fixed an issue in the ConfluentRolebinding custom resource (CR) which resulted in an incorrect state when you delete a resource-specific role binding from the CR.
- When Kafka and KRaft roll, information about under-replicated partitions (URPs) and log end offsets (LEOs) is now logged.
- The Kafka shrink workflow now considers the brokerID offset annotation as part of the self-balancing cluster (SBC) rebalancing.
- Fixed an issue which caused the Webhook TLS configuration to fail when certificates with the names cacert.pem, fullchain.pem, and privkey.pem are provided.
- The KRaftMigrationJob no longer duplicates configurations after the migration workflow.
- The KRaftMigrationJob no longer hits a null pointer exception during ZooKeeper auto discovery when ZooKeeper is not defined in the dependency section of the Kafka CR.
- Critical security and vulnerability issues were fixed.
Confluent for Kubernetes 2.8.1 Release Notes¶
Confluent for Kubernetes (CFK) 2.8.1 is a Red Hat Operator Catalog-specific release, and it does not contain notable fixes or updates.
Confluent for Kubernetes 2.8.0 Release Notes¶
Confluent for Kubernetes (CFK) 2.8.0 allows you to deploy and manage Confluent Platform versions 6.2.x, 7.5.x, and 7.6.x on Kubernetes versions 1.25 - 1.29 (OpenShift 4.10 - 4.14).
The images released in CFK 2.8.0 are:
confluentinc/confluent-operator:0.921.2
confluentinc/confluent-init-container:2.8.0
confluentinc/<CP component images>:7.6.0
For a full list of Confluent Platform images and tags, see Confluent Platform versions.
For details on installing CFK and Confluent Platform using the above images, see Deploy Confluent for Kubernetes and Deploy Confluent Platform using Confluent for Kubernetes.
New features¶
- Migration from ZooKeeper to KRaft
You can use CFK to perform an in-place migration from ZooKeeper to KRaft in Confluent Platform 7.6.x.
Migrating Confluent Platform 7.6.0 clusters is not recommended for production environments. Use Confluent Platform 7.6.1 or later with CFK 2.8.1 or later for production environment migration.
Migration from ZooKeeper to KRaft is not supported in multi-region cluster deployments.
- ARM 64 for production deployments
- You can use CFK to deploy Confluent Platform on ARM 64 architecture in production.
- Automatic upgrade of Kafka
Starting in the 2.8 release, CFK automates the Kafka upgrade process by setting the following properties in Kafka during upgrades: inter.broker.protocol.version and log.message.format.version.
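For illustration only, the following sketch shows what these two properties would look like if set manually through configOverrides on a Kafka CR. With CFK 2.8 you no longer need to set them yourself, and the version values below are placeholders, not recommendations:

   # Hypothetical manual overrides that CFK 2.8 now applies automatically during upgrades.
   # The version values are illustrative placeholders only.
   spec:
     configOverrides:
       server:
         - inter.broker.protocol.version=3.6
         - log.message.format.version=3.6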
Notable enhancements and updates¶
CFK increased the value of the http-timeout-in-seconds annotation to 180 seconds for cluster linking. For more information about the annotation, see Annotate Confluent custom resources.
CFK supports new Kubernetes versions 1.28 and 1.29 (OpenShift 4.14).
Now you can enable RBAC on the KRaft-based Kafka deployment.
You have an option to disable the default PodDisruptionBudget for Kafka, ZooKeeper, and KRaft using the new annotation platform.confluent.io/disable-pdb.
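For example, a minimal sketch of applying this annotation to a Kafka CR named kafka with kubectl; the value true is an assumption about how the annotation is enabled, so check the CFK annotation reference for the expected value:

   # Assumed value "true"; verify the expected value in the CFK documentation.
   kubectl annotate kafka kafka platform.confluent.io/disable-pdb=true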
For KRaft-based deployments, the data recovery feature is optional.
By default, the data recovery feature is disabled, and you can run KRaft in namespace-scoped deployments.
To enable the data recovery feature, see Deploy CFK with the data recovery feature enabled.
Notable fixes¶
The readiness and liveness port of Kafka has been changed from 9071 to 9072.
Additionally, in the Pod Template of a component CR (applicable to all components), you can now customize or override the port and path of the readiness and liveness probes.
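The following is a purely hypothetical sketch of such an override in a component CR's podTemplate; the field names under probe are assumptions made for illustration, so consult the CFK API reference for the actual schema:

   # Hypothetical field layout - the exact probe override schema is an assumption.
   spec:
     podTemplate:
       probe:
         readiness:
           httpGet:
             path: /healthz   # assumed path
             port: 9072
         liveness:
           httpGet:
             path: /healthz   # assumed path
             port: 9072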
Now you have an option to install CFK in the namespaced scope through OpenShift OperatorHub.
Critical security and vulnerability issues were fixed.
Known issues¶
When deploying CFK to Red Hat OpenShift with Red Hat’s Operator Lifecycle Manager (that is, using the OperatorHub), you must use OpenShift version 4.9 or higher.
This OpenShift version restriction does not apply when deploying CFK to Red Hat OpenShift in the standard way without using the Red Hat Operator Lifecycle Manager.
If the ksqlDB REST endpoint is using the auto-generated certificates, the ksqlDB deployment that points to Confluent Cloud requires trusting the Let’s Encrypt CA. For this to work, you must provide a CA bundle through cacerts.pem that contains both (1) the Confluent Cloud CA and (2) the self-signed CA to the ksqlDB CR.
When TLS is enabled, and when Confluent Control Center uses a different TLS certificate to communicate with MDS or Confluent Cloud Schema Registry, Control Center cannot use an auto-generated TLS certificate to connect to MDS or Confluent Cloud Schema Registry. See the Troubleshooting Guide for a workaround.
When deploying the Schema Registry and Kafka CRs simultaneously, Schema Registry could fail because it cannot create topics with a replication factor of 3. This happens because the Kafka brokers have not fully started.
The workaround is to delete the Schema Registry deployment and re-deploy it once Kafka is fully up, as sketched below.
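A minimal sketch of that workaround with kubectl, assuming a SchemaRegistry CR named schemaregistry in the confluent namespace and a local manifest file schemaregistry.yaml (all names are placeholders for your own deployment):

   # Delete the failed Schema Registry deployment (names are placeholders).
   kubectl delete schemaregistry schemaregistry -n confluent
   # Re-apply the CR once the Kafka brokers are fully up.
   kubectl apply -f schemaregistry.yaml -n confluent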
When deploying an RBAC-enabled Kafka cluster in centralized mode, where another “secondary” Kafka is being used to store RBAC metadata, an error, “License Topic could not be created”, may return on the secondary Kafka cluster.
A periodic Kubernetes TCP probe on ZooKeeper causes frequent warning messages “client has closed socket” when warning logs are enabled.
REST Proxy configured with monitoring interceptors is missing the callback handler properties when RBAC is enabled. The interceptors do not work, and you see an error message in the KafkaRestProxy log.
As a workaround, manually add configuration overrides as shown in the following KafkaRestProxy CR:
configOverrides:
  server:
    - confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
    - consumer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
    - producer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
When configuring source-initiated cluster links with CFK where the source cluster has TLS enabled, do not set spec.tls, and do not set spec.authentication in the ClusterLink CR on the destination cluster if the source cluster has mTLS authentication. Instead, in the Destination mode ClusterLink CR, under the spec.configs section, set:
- local.security.protocol: SSL for mTLS.
- local.security.protocol: SASL_SSL for SASL authentication with TLS.
For details about configuring the destination cluster for source-initiated Cluster Linking, see Configure the source-initiated cluster link on the destination cluster.
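As a rough, partial sketch (not a complete CR), and assuming spec.configs accepts plain key-value entries as written above, the relevant snippet of a Destination mode ClusterLink CR could look like this:

   # Partial sketch only - other required ClusterLink fields are omitted,
   # and the exact shape of spec.configs is assumed from the description above.
   spec:
     configs:
       local.security.protocol: SASL_SSL   # use SSL instead for mTLS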
The Pod Disruption Budget (PDB) in CFK is set as shown below and is non-configurable using the first-class CFK API:
- For Kafka: maxUnavailable = 1
- For ZooKeeper, maxUnavailable is based on the number of ZooKeeper nodes: maxUnavailable = (replicas - 1) / 2. For example, with 5 ZooKeeper replicas, maxUnavailable is 2.
The PDB setting is typically used when upgrading a Kubernetes node: the pods are moved to different nodes for the node upgrade and then moved back. It also applies when you want to reduce the size of the node pool and drain a node by moving its pods out of that node.
If you have use cases where you need to change a PDB, manually set the PDB using the kubectl patch command as follows:

1. Block reconcile on Kafka to ensure that you do not overwrite anything. For example:

   kubectl annotate kafka kafka platform.confluent.io/block-reconcile=true

2. Modify the PDB as required:

   kubectl patch pdb kafka -p '{"spec":{"maxUnavailable":<desired value>}}' --type=merge

   Use caution when you select the value. The wrong value could result in data loss or service disruption because you could bring down more Kafka nodes than intended.

3. Verify the change:

   kubectl get pdb

   An example output of the above command:

   NAME    MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
   kafka   N/A             <desired value>   <desired value>       11m

4. Perform node drains as required.

5. Enable reconcile on Kafka. For example:

   kubectl annotate kafka kafka platform.confluent.io/block-reconcile-
The CFK support bundle plugin on Windows systems does not capture all the logs.
As a workaround, specify the --out-dir flag in the kubectl confluent support-bundle command to provide the output location for the support bundle.
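For example, a minimal sketch of that workaround, where <output-directory> is a placeholder for a writable local path:

   # Collect the support bundle into an explicit output directory.
   kubectl confluent support-bundle --out-dir <output-directory>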
Known gaps from Confluent Platform 7.6¶
CFK 2.8 does not support the following Confluent Platform 7.6 functionality:
- Kafka authentication mechanisms: Kerberos and SASL/SCRAM
Known gaps in CFK Blueprints¶
CFK Blueprints 2.8 does not support the following CFK functionality:
- Internal listener authentication change on the running cluster
- A central Confluent Platform cluster serving as the RBAC metadata store for multiple Confluent Platform clusters
- The StaticPortBasedRouting and NodePort external access methods
- Monitoring multiple Kafka clusters in Confluent Control Center
- Configuring and managing KRaft-based clusters
- Single sign-on (SSO) authentication for Control Center using OpenID Connect (OIDC)