Confluent for Kubernetes Release Notes

Confluent for Kubernetes (CFK) provides a declarative API driven control plane to deploy and manage Confluent Platform on Kubernetes.

The following sections summarize the technical details of the CFK 2.3.x releases.

Confluent for Kubernetes 2.3.1 Release Notes

Confluent for Kubernetes 2.3.1 allows you to deploy and manage Confluent Platform versions 7.0.x and 7.1.x on Kubernetes.

Notable enhancements

  • Capability to add labels to the LoadBalancer, NodePort, and Route services.

Notable fixes

  • Connector plugin installation correctly fails if the archive path has the wrong suffix.
  • Fixed ConfluentRolebinding to honor the spec.clustersScopeByIds.kafkaClusterId setting.
  • Fixed CFK migration so that it does not set global Telemetry when it was not set before migration.
  • Fixed security and vulnerability issues.

Confluent for Kubernetes 2.3.0 Release Notes

Confluent for Kubernetes 2.3.0 allows you to deploy and manage Confluent Platform versions 7.0.x and 7.1.x on Kubernetes.

New features

Multi-Region clusters in GA

You can now deploy Kafka, ZooKeeper, and Schema Registry clusters across multiple Kubernetes regions.

With built-in Multi-Region Replication, you can deploy Confluent Platform across regional data centers with automated client failover in the event of a disaster. It supports synchronous and asynchronous replication on a per-topic basis, and all replication is offset preserving.
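Per-topic replication behavior is typically expressed through replica placement constraints. The following KafkaTopic CR is an illustrative sketch only (the cluster name, rack IDs, and replica counts are examples, not taken from this release), keeping synchronous replicas in one region and asynchronous observers in another:

```yaml
# Illustrative KafkaTopic CR; names, racks, and counts are examples.
apiVersion: platform.confluent.io/v1beta1
kind: KafkaTopic
metadata:
  name: multi-region-topic
  namespace: confluent
spec:
  partitionCount: 6
  configs:
    # Synchronous replicas in rack "east", asynchronous observers in "west".
    confluent.placement.constraints: >-
      {"version": 1,
       "replicas": [{"count": 2, "constraints": {"rack": "east"}}],
       "observers": [{"count": 2, "constraints": {"rack": "west"}}]}
```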

See Multi-Region Deployment of Confluent Platform.

Custom volume mounts for Confluent Platform component pods

CFK supports attaching custom volumes of various types to Confluent Platform pods simultaneously and mounting them at the desired paths within the pods.
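As a sketch, assuming an existing PersistentVolumeClaim (the field and resource names below are illustrative; see the CFK reference for the exact schema), a Kafka CR snippet could look like:

```yaml
# Illustrative snippet from a Kafka CR; field and claim names are examples.
spec:
  mountedVolumes:
    volumes:
      - name: custom-data
        persistentVolumeClaim:
          claimName: my-existing-claim   # assumed pre-existing PVC
    volumeMounts:
      - name: custom-data
        mountPath: /mnt/custom-data      # desired path inside the pod
```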

See Mount custom volumes.

Declarative management of Schema Linking

Schema Linking is a Confluent feature for keeping schemas in sync between two Schema Registry clusters. CFK provides a declarative API using the SchemaExporter custom resource definition (CRD) to support the entire workflow of creating and managing schema exporters.
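A SchemaExporter CR could look like the following sketch (the cluster references and field names are illustrative; consult the CFK API reference for the exact schema):

```yaml
# Illustrative SchemaExporter CR; cluster names and fields are examples.
apiVersion: platform.confluent.io/v1beta1
kind: SchemaExporter
metadata:
  name: schema-exporter
  namespace: confluent
spec:
  sourceCluster:
    schemaRegistryClusterRef:
      name: schemaregistry          # source Schema Registry cluster
  destinationCluster:
    schemaRegistryClusterRef:
      name: schemaregistry-dest     # destination Schema Registry cluster
  subjects:
    - "*"                           # export all subjects
```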

See Link Schemas.

All secrets can now be managed by HashiCorp Vault and injected into Confluent Platform pods

In addition to the Confluent Platform component CRs, you can now use HashiCorp Vault to manage secrets for CFK application resources, such as Topics, Schemas, and Connectors.

See Provide secrets in Vault.

Protection against unintended cluster deletions (in Early Access)

CFK can be configured to reject cluster object deletion requests.

See Deploy Confluent for Kubernetes with cluster object deletion protection.

Notable fixes

  • Fixed an issue where Operator to CFK migration did not handle Telemetry migration properly if global Telemetry was disabled in Operator.
  • Control Center deployed by CFK can now monitor multiple Kafka clusters that have RBAC enabled.
  • The public license key is no longer part of the CFK Helm charts.
  • Fixed a redundant space in spec.mirrorTopics.configs in the ClusterLink CRD that caused a validation error when creating a ClusterLink CR.

Known issues

  • When deploying Confluent for Kubernetes to Red Hat OpenShift with Red Hat’s Operator Lifecycle Manager (that is, using the Operator Hub), you must use OpenShift version 4.9.

    This OpenShift version restriction does not apply when deploying CFK to Red Hat OpenShift in the standard way without using the Red Hat Operator Lifecycle Manager.

  • If the ksqlDB REST endpoint uses auto-generated certificates, a ksqlDB deployment that points to Confluent Cloud must trust the Let’s Encrypt CA.

    For this to work, you must provide the ksqlDB CR with a CA bundle, through cacerts.pem, that contains both (1) the Confluent Cloud CA and (2) the self-signed CA.
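    A minimal sketch of assembling such a bundle follows; the file contents and names are placeholders for your real PEM files, and the resulting cacerts.pem is then supplied to the ksqlDB CR (for example, through its TLS secret):

```shell
# Placeholder CA files for illustration; substitute your real PEM files.
printf -- '-----BEGIN CERTIFICATE-----\n(confluent-cloud CA)\n-----END CERTIFICATE-----\n' > confluent-cloud-ca.pem
printf -- '-----BEGIN CERTIFICATE-----\n(self-signed CA)\n-----END CERTIFICATE-----\n' > self-signed-ca.pem

# Concatenate both CAs into the single cacerts.pem bundle for the ksqlDB CR.
cat confluent-cloud-ca.pem self-signed-ca.pem > cacerts.pem
```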

  • When TLS is enabled, and when Confluent Control Center uses a different TLS certificate to communicate with MDS or Confluent Cloud Schema Registry, Control Center cannot use an auto-generated TLS certificate to connect to MDS or Confluent Cloud Schema Registry. See Troubleshooting Guide for a workaround.

  • When deploying the Schema Registry and Kafka CRs simultaneously, Schema Registry could fail because it cannot create topics with a replication factor of 3 while the Kafka brokers are still starting up.

    The workaround is to delete the Schema Registry deployment and re-deploy once Kafka is fully up.
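    For example (the file and namespace names are illustrative):

```shell
# Delete the Schema Registry deployment, then re-apply once Kafka is fully up.
kubectl delete -f schemaregistry.yaml --namespace <namespace>
kubectl apply -f schemaregistry.yaml --namespace <namespace>
```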

  • When deploying an RBAC-enabled Kafka cluster in centralized mode, where another “secondary” Kafka cluster is used to store RBAC metadata, the error “License Topic could not be created” may be returned on the secondary Kafka cluster.

  • A periodic Kubernetes TCP probe on ZooKeeper causes frequent warning messages “client has closed socket” when warning logs are enabled.

  • If you are deploying Confluent Platform 7.0.2 or 7.1.0 with centralized RBAC, you might see an “Invalid License” issue when connecting to the secondary Kafka cluster.

    The workaround is to restart the secondary Kafka cluster.

  • If you encounter a Confluent license issue with the error log message “invalid license illegal base64 data at input byte 223” in CFK, reinstall CFK with either the --set image.pullPolicy=Always or the --set image.tag=0.435.11-1 option:

    helm upgrade --install confluent-operator \
      confluentinc/confluent-for-kubernetes \
      --set image.pullPolicy=Always \
      --namespace <namespace>
    
    helm upgrade --install confluent-operator \
      confluentinc/confluent-for-kubernetes \
      --set image.tag=0.435.11-1 \
      --namespace <namespace>
    

    See Deploy Confluent for Kubernetes for more information about the install command.

  • REST Proxy configured with monitoring interceptors is missing the callback handler properties when RBAC is enabled. The interceptors do not work, and you will see an error message in the KafkaRestProxy log.

    As a workaround, manually add configuration overrides as below in the KafkaRestProxy CR:

    configOverrides:
      server:
        - confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - consumer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
        - producer.confluent.monitoring.interceptor.sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
    
  • Custom authorizer configuration

    If you have a custom authorizer (other than RBAC or ACL) configured using config overrides, you must configure the kafka.rest.client properties using configuration overrides. For example, to enable mTLS with the Kafka broker, you must set kafka.rest.client.security.protocol to SSL or SASL_SSL.

    See Configuration Options for SSL Encryption between Admin REST APIs and Kafka Brokers for the supported options.
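    As a sketch, such an override could look like the following in the Kafka CR (the full set of required kafka.rest.client.* properties depends on your security setup):

```yaml
# Illustrative config override for a custom authorizer with mTLS to the brokers.
configOverrides:
  server:
    - kafka.rest.client.security.protocol=SSL
```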

Known gaps from Confluent Platform 7.1

CFK 2.3.0 does not support the following Confluent Platform 7.1 functionality:

  • Kafka authentication mechanisms: Kerberos and SASL/SCRAM