Advanced Configuration Options in Confluent for Kubernetes

This topic describes a few of the advanced configuration options Confluent for Kubernetes (CFK) provides for Confluent components.

Configure Kafka Connect & ksqlDB using Confluent Cloud

Confluent for Kubernetes supports deploying and managing Connect, ksqlDB, and Confluent Control Center configured to connect to Confluent Cloud Kafka and Confluent Cloud Schema Registry.

For an illustrative walkthrough on configuring this, see the tutorial for connecting to Confluent Cloud.
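
The general shape of such a configuration, sketched here for Connect: the bootstrap endpoint and the ccloud-credentials secret (a Kubernetes secret holding your Confluent Cloud API key and secret in JAAS format) are placeholders, and the tutorial covers the exact settings:

apiVersion: platform.confluent.io/v1beta1
kind: Connect
metadata:
  name: connect
  namespace: confluent
spec:
  replicas: 2
  dependencies:
    kafka:
      bootstrapEndpoint: <ccloud-bootstrap-endpoint>:9092
      authentication:
        type: plain
        jaasConfig:
          secretRef: ccloud-credentials
      tls:
        enabled: true
        ignoreTrustStoreConfig: true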

Provide custom service account

Each Confluent component pod deployed by Confluent for Kubernetes has an associated Kubernetes Service Account. These service accounts serve several functions in Confluent for Kubernetes. For example, to deploy Confluent Docker images from a private Docker image registry, you can associate Docker registry credentials to the service account associated with your Confluent components.

If you do not specify a service account, the default service account in the same namespace is assigned to the component pod.

To provide a custom service account, set serviceAccountName in the component custom resource (CR) configuration:

spec:
  podTemplate:
    serviceAccountName: <service-account-name>
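
For example, a sketch that attaches private registry credentials to a service account and assigns it to Kafka pods; the names kafka-sa, regcred, and the confluent namespace are placeholders:

# Create the service account and attach existing Docker registry credentials
kubectl create serviceaccount kafka-sa -n confluent
kubectl patch serviceaccount kafka-sa -n confluent \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Then reference it in the Kafka CR:

kind: Kafka
spec:
  podTemplate:
    serviceAccountName: kafka-sa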

Configuration overrides

You can override or remove the default configuration parameters of a Confluent Platform component in the component custom resource (CR) as shown below:

spec:
  configOverrides:
    server: []
    jvm: []
    log4j: []

Refer to Configuration Reference for configuration parameters used in Confluent Platform components.

Apply the changes with the kubectl apply command.
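
For example, if the edited CR is saved in a file named kafka.yaml (a placeholder) in the confluent namespace:

kubectl apply -f kafka.yaml -n confluent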

Override default configuration

Under the configOverrides property, specify the key=value pair of the configuration to override the default setting for server or log4j. For jvm, specify the flag you want to change inside double quotes.

The example below for Kafka has the following effects:

  • Enables automatic topic creation (disabled by default).
  • Enables the Cluster Linking feature (disabled by default).
  • Sets a few JVM flags related to memory management and TLS.
  • Changes the log level from the default INFO to DEBUG.

kind: Kafka
spec:
  configOverrides:
    server:
      - auto.create.topics.enable=true
      - confluent.cluster.link.enable=true
    jvm:
      - "-Xmx6g"
      - "-XX:MetaspaceSize=96m"
      - "-XX:+UseG1GC"
      - "-Djavax.net.ssl.trustStore=/mnt/sslcerts/kafka-tls/truststore.p12"
      - "-Djavax.net.ssl.trustStorePassword=mystorepassword"
    log4j:
      - log4j.rootLogger=DEBUG, stdout

Remove default configuration

To remove a default setting, under the configOverrides property, add --- in front of the key=value of the configuration setting.

For example, to remove the autopurge.purgeInterval=1 property in ZooKeeper:

kind: Zookeeper
spec:
  configOverrides:
    server:
      - ---autopurge.purgeInterval=1

Annotate Confluent custom resources

Confluent for Kubernetes (CFK) provides a set of public annotations that you can use to modify certain workflows or states of Confluent Platform components. The annotations are applied to Confluent Platform custom resources (CRs).

platform.confluent.io/force-reconcile

Triggers a reconcile cycle of the cluster. Once the reconcile cycle is complete, the annotation value is reset to false.

  • Supported values: true, false
  • Default value: false
  • CR types applied to: All CRs

platform.confluent.io/block-reconcile

Blocks reconcile even when internal resources or the CR spec change. This is used primarily to allow users to perform manual workflows. When this is enabled, CFK ignores any out-of-band changes made to the CR.

  • Supported values: true, false
  • Default value: false
  • CR types applied to: All CRs

platform.confluent.io/roll-precheck

When set to disable, CFK does not perform the pre-check for under-replicated partitions.

  • Supported values: disable, enable
  • Default value: enable
  • CR types applied to: Kafka

platform.confluent.io/roll-pause

When set to true, the current pod roll will be paused.

  • Supported values: false, true
  • Default value: false
  • CR types applied to: Kafka

platform.confluent.io/disable-garbage-collection

Prevents CFK from garbage collecting the Kubernetes resources that CFK internally manages.

  • Supported values: false, true
  • Default value: false
  • CR types applied to: Control Center, Connect, Kafka, REST Proxy, ksqlDB, Schema Registry, ZooKeeper

platform.confluent.io/enable-shrink

Enables the shrink workflow for the Kafka CR. This should only be enabled when the Kafka image is version 7.0 or higher.

  • Supported values: true, false
  • Default value: false
  • CR types applied to: Kafka

platform.confluent.io/disable-internal-rolebindings-creation

Defines whether to disable internal rolebinding creation in RBAC security settings.

  • Supported values: true, false
  • Default value: false
  • CR types applied to: Control Center, Connect, REST Proxy, ksqlDB, Schema Registry

platform.confluent.io/soft-deletion-versions

A list of versions to trigger a soft delete workflow for the Schema CR.

  • Supported values: A JSON-formatted array, for example, [1,2,3]
  • Default value: None
  • CR types applied to: Schema

platform.confluent.io/delete-versions

A list of versions to trigger a hard delete workflow for the Schema CR.

  • Supported values: A JSON-formatted array, for example, [1,2,3]
  • Default value: None
  • CR types applied to: Schema

platform.confluent.io/restart-connector

Triggers a restart of the Connector.

  • Supported values: true, false
  • Default value: false
  • CR types applied to: Connector

platform.confluent.io/restart-task

Triggers a restart of the specified Connector task.

  • Supported values: An int32 number
  • Default value: None
  • CR types applied to: Connector

platform.confluent.io/http-timeout-in-seconds

Specifies the HTTP client timeout in seconds for the CR workflows.

  • Supported values: An int32 number
  • Default value: None
  • CR types applied to: Control Center, Connect, Kafka, KafkaTopic, ClusterLink, Schema

platform.confluent.io/confluent-hub-install-extra-args

Additional arguments for the Connect CR. The extra arguments are used when Connect starts up and downloads plugins from Confluent Hub.

  • Supported values: A string of flags, for example, --worker-configs /dev/null --component-dir /mnt/plugins
  • Default value: None
  • CR types applied to: Connect

platform.confluent.io/pod-overlay-configmap-name

Configures additional Kubernetes features that are not supported in the CFK API.

  • Supported values: A ConfigMap name. For details on the Pod Overlay feature and the associated ConfigMap, see Customize Confluent Platform pods with Pod Overlay.
  • Default value: None
  • CR types applied to: Control Center, Connect, Kafka, REST Proxy, ksqlDB, Schema Registry, ZooKeeper, KRaft

platform.confluent.io/enable-dynamic-configs

Enables dynamic TLS certificate rotation for Kafka listeners and the Kafka REST Class service so that the Kafka cluster does not roll when certificates change.

  • Supported values: true, false
  • Default value: false
  • CR types applied to: Kafka

To add an annotation, run the following command:

kubectl annotate <CR type> <CR name> -n <namespace> <annotation>="<annotation value>"

To delete an annotation, run the following command:

kubectl annotate <CR type> <CR name> -n <namespace> <annotation>-
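
For example, to trigger an immediate reconcile of a Kafka CR named kafka in the confluent namespace (both names are placeholders), and later remove a roll-pause annotation:

# Trigger a reconcile cycle; CFK resets the value to false when the cycle completes
kubectl annotate kafka kafka -n confluent platform.confluent.io/force-reconcile="true"

# Remove the roll-pause annotation to resume a paused roll
kubectl annotate kafka kafka -n confluent platform.confluent.io/roll-pause-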

Annotate Confluent pods

An annotation in Kubernetes is an unstructured key-value map that can be set by external tools to store and retrieve metadata. Annotations are stored with pods.

You can define custom annotations for Confluent Platform components, and those annotations are applied to the Kubernetes pods for the components.

Annotation values must pass Kubernetes annotations validation. See the Kubernetes documentation on Annotations for details.

Define annotations in the component custom resource (CR) as below:

spec:
  podTemplate:
    annotations:
      key1: value1
      key2: value2
      ...

annotations must be a map of string keys and string values.

The following are example annotations in the Kafka pod for HashiCorp Vault:

spec:
  podTemplate:
    annotations:
      vault.hashicorp.com/agent-inject: "true"
      vault.hashicorp.com/agent-inject-status: update
      vault.hashicorp.com/preserve-secret-case: "true"
      vault.hashicorp.com/agent-inject-secret-jksPassword.txt: secret/jksPassword.txt

Customize Confluent Platform pods with Pod Overlay

Confluent for Kubernetes (CFK) supports a subset of the Kubernetes PodTemplateSpec in the CFK API (spec.podTemplate in the component custom resource), where you configure the StatefulSet pod template for the Confluent Platform components.

To set and use additional Kubernetes features that are not supported by the CFK API, you can use the Pod Overlay feature.

Example use cases for the Pod Overlay feature include:

  • To deploy a Confluent Platform cluster with a custom init container that runs alongside the CFK init container.

    In this case, the custom init container runs before the CFK init container.

  • To use a newly introduced Kubernetes feature that has not been added to the CFK API.

Make sure that there are no conflicting values between what is set in the CFK podTemplate API and in Pod Overlay. For example, if you specify podSecurityContext in kafka.spec.podTemplate, you cannot use Pod Overlay to specify different values in spec.template.spec.securityContext.

To use Pod Overlay:

  1. Create a template file (<template-file>) with the settings you want to add:

    spec:
      template:
    

    The template file has to start with spec: followed by template:, as shown above, and it has to follow the Kubernetes StatefulSetSpec API.

    You can configure fields only inside spec.template.

    Fields specified outside of spec.template are considered invalid.

    The following example is for a custom init container:

    spec:
      template:
        spec:
          initContainers:
          - name: busybox
            image: busybox:1.28
            command: ["echo","I am a custom init-conatiner"]
            imagePullPolicy: IfNotPresent
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet
    

    Note that when hostNetwork: is set to true, dnsPolicy: must be set to ClusterFirstWithHostNet.

  2. Create a ConfigMap (<configmap>) using the file created in the previous step (<template-file>). You must use pod-template.yaml as the key with --from-file option.

    kubectl create configmap <configmap> --from-file=pod-template.yaml=<template-file> -n <namespace>
    
  3. Add the platform.confluent.io/pod-overlay-configmap-name annotation on the Confluent Platform component resource CR.

    For example:

    kind: Kafka
    metadata:
      name: kafka
      namespace: operator
      annotations:
        platform.confluent.io/pod-overlay-configmap-name: <configmap>
    

    Per Kubernetes convention, a ConfigMap can only be referenced by pods residing in the same namespace, so CFK looks for <configmap> within the same namespace as the component CR object.

For configuration examples, see the tutorial for Pod Overlay.

Configure for Kubernetes Horizontal Pod Autoscaler

In Kubernetes, the Horizontal Pod Autoscaler (HPA) feature automatically scales the number of pod replicas.

Starting in Confluent for Kubernetes (CFK) 2.1.0, you can configure Confluent Platform components to use HPA based on CPU and memory utilization of Confluent Platform pods.

HPA is not supported for ZooKeeper and Confluent Control Center.

To use HPA with a Confluent Platform component, create an HPA resource for the component custom resource (CR) out of band to integrate with CFK.

The following example creates an HPA resource for Connect based on CPU utilization and memory usage:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: connect-cluster-hpa
  namespace: confluent
spec:
  scaleTargetRef:                             --- [1]
    apiVersion: platform.confluent.io/v1beta1 --- [2]
    kind: Connect                             --- [3]
    name: connect                             --- [4]
  minReplicas: 2                              --- [5]
  maxReplicas: 4                              --- [6]
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50              --- [7]
    - type: Resource
      resource:
        name: memory
        target:
          type: AverageValue
          averageValue: 1000Mi              --- [8]

  • [1] Required. Specify the Confluent component-specific information in this section.

  • [2] Required. CFK API version.

  • [3] Required. The CR kind of the object to scale.

  • [4] Required. The CR name of the object to scale.

  • [5] The minimum number of replicas when scaling down.

    If your Kafka default replication factor is N, the minReplicas on your HPA for your Kafka cluster must be >= N.

    If you want Schema Registry, Connect, or ksqlDB to be highly available, set minReplicas >= 2.

  • [6] The maximum number of replicas when scaling up.

  • [7] The target average CPU utilization of 50%.

  • [8] The target average memory usage value of 1000 Mi.

Take the following into further consideration when setting up HPA for Confluent Platform:

  • If you have oneReplicaPerNode set to true for Kafka (which is the default), your upper bound for Kafka brokers is the number of available Kubernetes worker nodes you have.
  • If you have affinity or taint/toleration rules set for Kafka, that further constrains the available nodes.
  • If your underlying Kubernetes cluster does not itself support autoscaling of the Kubernetes worker nodes, make sure there are enough Kubernetes worker nodes for HPA to succeed.

You can check the current status of HPA by running:

kubectl get hpa
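
For example, to watch the Connect HPA from the example above as it scales (namespace as in the example):

kubectl get hpa connect-cluster-hpa -n confluent --watch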

Mount custom volumes

Kubernetes provides storage abstraction through Kubernetes Volumes. These volumes can be ephemeral, where the volumes get destroyed when the pod dies, or persistent, where the lifetime of the volumes goes beyond the pod lifetime.

In CFK, you can have a Confluent Platform pod configured to have multiple volumes of various types attached to the pod simultaneously and mounted at desired paths in the pod.

Note

Mounting custom volumes does not support multiple PersistentVolumes for ZooKeeper and Kafka data. CFK configures and manages one PersistentVolume for ZooKeeper and Kafka data.

The following are a few of the common use cases for custom volume mounts:

  • Third-party secrets providers

    As an alternative to using Kubernetes secrets to secure sensitive information, you can use a secrets-management product such as HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault.

    You integrate a third-party secrets provider by configuring an ephemeral volume mount for the Confluent component pod that takes the credentials from the secrets provider.

  • Kafka connectors

    Some Kafka connectors require JARs that are outside of the Connect plugin but must be available to the Connect pods. You can create persistent volumes with the connector JARs and mount them on the Connect worker pods.

  • Multiple custom partitions

    For example, you could write logs to a separate persistent volume of your choice.


In CFK, you mount custom volumes to Confluent component pods by defining custom volume mounts in the component custom resources (CRs), such as for Kafka, ZooKeeper, Control Center, Schema Registry, ksqlDB, Connect, and Kafka REST Proxy. The same volume will be mounted on all the pods in the component cluster in the specified paths.

To mount custom volumes to a Confluent Platform component:

  1. Configure the volumes according to the driver specification.

  2. Add the following to the Confluent Platform component CR:

    spec:
      mountedVolumes:         --- [1]
        volumes:              --- [2]
        volumeMounts:         --- [3]
    
    • [1] mountedVolumes contains the volumes and volumeMounts that are requested for this component.

    • [2] Required. volumes is an array of named volumes in a pod that may be accessed by any container in the pod.

      For the supported volume types and the specific configuration properties required for each volume type, see Kubernetes Volume Types.

    • [3] Required. Describes mounting paths of the volumes within this container.

      For the configuration properties for volume mount, see Kubernetes Pod volumeMounts.

  3. Apply the CR using the kubectl apply command.

Before the volumes and volume mounts are added to the component pod template, CFK performs a validation to ensure that there is no conflict with internal volume mounts. Reconcile will fail, and the error will be added to the CFK logs in the following cases:

  • A custom volume’s mount path conflicts with an internal mount path.

    These are the internal mounts used by Confluent Platform components:

    • /mnt/config
    • /mnt/config/init
    • /mnt/config/shared
    • /mnt/data/data0
    • /mnt/plugins
    • /opt/confluentinc
  • A custom volume’s mount path conflicts with a custom-mounted secret.

  • There is a conflict between the custom volume names or custom volume mount paths.

The example below mounts an Azure file volume, and a HashiCorp Vault volume through a SecretProviderClass and a CSI driver:

apiVersion: platform.confluent.io/v1beta1
kind: Kafka
spec:
  mountedVolumes:
    volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: aksshare
        readOnly: true
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "vault-database"
    volumeMounts:
    - name: azure
      mountPath: /mnt/azurePath1
    - name: azure
      mountPath: /mnt/azurePath2
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
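
The azure-secret referenced above holds the Azure storage account credentials that the azureFile volume type expects; a sketch of creating it, with placeholder values:

kubectl create secret generic azure-secret -n confluent \
  --from-literal=azurestorageaccountname=<storage-account-name> \
  --from-literal=azurestorageaccountkey=<storage-account-key>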

Configure pod disruption budget

CFK provides a first-class API to customize the Pod Disruption Budget (PDB) for all Confluent Platform components.

The PDB setting is typically used when upgrading a Kubernetes node: the pods are moved to different nodes for the node upgrade and then moved back once the upgrade completes. Similarly, when you want to reduce the size of a node pool, you drain a node by moving its pods out of that node.

A PDB is also applied when upgrading or reconfiguring component pods, where pods need to restart; it provides a disruption budget and helps maintain the availability of the service.

You can disable the PDB for a specific Confluent Platform component.

By default, a PDB is configured based on a pre-determined formula:

  • For Kafka: maxUnavailable := 1
  • For other Confluent Platform components, maxUnavailable is based on the number of replicas: maxUnavailable := (replicas - 1) / 2

To set a PDB, configure the settings in the component CR:

kind: <Component>
spec:
  pdb:
    enabled:              --- [1]
    maxUnavailable:       --- [2]

  • [1] Required. Set to false to disable PDB for this Confluent Platform component.
  • [2] The maximum number of pods that can be unavailable during a disruption.
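
For example, a minimal sketch that keeps PDB enabled for a Connect cluster but caps unavailable pods at one (values are illustrative):

kind: Connect
spec:
  pdb:
    enabled: true
    maxUnavailable: 1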