Scale Confluent Platform Clusters and Balance Data using Confluent for Kubernetes

Scale Kafka cluster

At a high level, adding brokers to a cluster involves a few key steps:

  • Define the configuration for each of the new brokers.
  • Provision storage, networking, and compute resources to the brokers.
  • Start the brokers with the defined configurations and provisioned resources.
  • Reassign partitions across the cluster so that the new brokers share the load and the cluster’s overall performance improves.

To automate the above process, Confluent for Kubernetes (CFK) leverages Self-Balancing, which is enabled by default with CFK.

If you need to manually enable Self-Balancing, see Disable or re-configure Self-Balancing for the steps.
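Before scaling in either direction, you can check the current broker count by listing the Kafka custom resources in the cluster namespace. This is ordinary kubectl usage rather than a CFK-specific step, and the output columns can vary by CFK version:

kubectl get kafka -n <namespace>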

Scale up Kafka cluster

To scale up a Kafka cluster (a worked example follows these steps):

  1. Increase the number of Kafka replicas using one of the following options:

    • Use the kubectl scale command:

      kubectl scale kafka <Kafka-CR-name> --replicas=N
      
    • Increase the number of Kafka replicas in the Kafka custom resource (CR) and apply the new setting with the kubectl apply command:

      spec:
        replicas: <number of brokers>
      
  2. Ensure that proper DNS records are configured for the new brokers, and verify that CFK can resolve the new broker hostnames, for example with nslookup.

    If you are using a hosts file instead of a DNS service, update the hosts file with the new broker information. For example:

    1. Get the new broker IP addresses:

      kubectl get services -n <namespace>
      
    2. Derive the hostnames of the new brokers from the existing broker hostnames, which share the broker prefix.

    3. Add the new broker hosts to the /etc/hosts file, and inject the updated file to the CFK pod as described in Adding entries to Pod /etc/hosts.
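As a worked example of the scale-up steps above, the commands below assume a Kafka CR named kafka in the confluent namespace being scaled from three to five brokers, with a broker prefix of b and a domain of example.com so that the new brokers resolve as b3.example.com and b4.example.com. All of these names, hostnames, and IP addresses are placeholders; substitute the values from your own deployment and external access configuration:

kubectl scale kafka kafka --replicas=5 -n confluent

nslookup b3.example.com
nslookup b4.example.com

If a hosts file is used instead of a DNS service, the corresponding entries for the new brokers could look like the following before the updated file is injected into the CFK pod:

10.0.0.13  b3.example.com
10.0.0.14  b4.example.com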

Scale down Kafka cluster

With Confluent Platform 7.x and later, you can have CFK scale down Kafka clusters.

CFK leverages the Self-Balancing feature to automate the shrinking process. Self-Balancing is enabled by default with CFK.

To have CFK automatically scale down your cluster, the following requirements must be satisfied:

  • Set up the Admin REST Class as described in Manage Confluent Admin REST Class for Confluent Platform Using Confluent for Kubernetes. CFK uses the KafkaRestClass resource in the namespace where the Kafka cluster is running. A minimal KafkaRestClass sketch follows this list.

  • If the Admin REST Class is set up with basic authentication for the REST client, the first user listed in basic.txt is used to shrink the cluster. See Basic authentication for details on basic.txt.

    This first user must have a role that is listed under spec.services.kafkaRest.authentication.basic.roles in the Kafka custom resource (CR).

  • If the Kafka brokers use the DirectoryPathInContainer property to specify the credentials to authenticate to Confluent Admin REST Class, you need to set up Vault and add the required Vault annotations to the CFK Helm values before you deploy CFK.

    If updating an existing CFK pod, you need to roll the CFK pod after updating the CFK Helm values. See Provide secrets for Confluent Platform operations without CRs for details.
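The following is a minimal sketch of a KafkaRestClass configured with basic authentication for the REST client. The names default, confluent, and rest-credential are assumptions, and the referenced Kubernetes secret is expected to contain the basic.txt credentials described above; adapt the sketch to your deployment:

apiVersion: platform.confluent.io/v1beta1
kind: KafkaRestClass
metadata:
  name: default
  namespace: confluent
spec:
  kafkaRest:
    authentication:
      type: basic
      basic:
        secretRef: rest-credential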

To automatically scale down a Kafka cluster (a worked example follows these steps):

  1. Make sure the Kafka cluster is stable.

  2. Enable cluster shrinking by applying the platform.confluent.io/enable-shrink=true annotation to the Kafka cluster:

    kubectl annotate Kafka <Kafka CR name> -n <namespace> platform.confluent.io/enable-shrink=true
    

    Verify that the annotation is set in the status of the Kafka cluster.

  3. Decrease the number of brokers in the Kafka CR and apply the change using the kubectl apply command:

    spec:
      replicas: <number of brokers>
    

    Do not set replicas: to a value less than 3, because CFK sets a default replication factor of 3 for all Kafka topics.

    CFK triggers the workflow to shrink the Kafka cluster according to the value of replicas updated in the Kafka custom resource (CR).
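As a worked example of the scale-down steps above, the commands below assume a Kafka CR named kafka in the confluent namespace being shrunk from five to three brokers, with the CR manifest kept in a local file named kafka.yaml. All of these names are placeholders:

# Allow CFK to shrink this cluster.
kubectl annotate kafka kafka -n confluent platform.confluent.io/enable-shrink=true

# Confirm that the annotation is reflected on the Kafka cluster.
kubectl get kafka kafka -n confluent -o yaml | grep enable-shrink

# After lowering spec.replicas from 5 to 3 in kafka.yaml, apply the change.
kubectl apply -f kafka.yaml -n confluent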

Disable or re-configure Self-Balancing

The Self-Balancing feature is enabled by default in Confluent for Kubernetes.

To balance the load across the cluster whenever an imbalance is detected, set confluent.balancer.heal.uneven.load.trigger to ANY_UNEVEN_LOAD. The default is EMPTY_BROKER.

kafka:
  configOverrides:
    server:
      - confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
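To disable Self-Balancing, or to re-enable it after it has been turned off, override the confluent.balancer.enable property. The sketch below assumes the override is applied in the same configOverrides block as the example above:

kafka:
  configOverrides:
    server:
      - confluent.balancer.enable=false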

For a complete list of available settings you can use to control Self-Balancing, see Configuration Options and Commands for Self-Balancing Clusters.

Scale other Confluent Platform clusters

Use the following command to scale other Confluent Platform components up or down:

kubectl scale <CP-component-CR-kind> <component-CR-name> --replicas=N
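For example, to scale a Connect cluster named connect in the confluent namespace to two workers (the kind, resource name, and namespace are illustrative; other Confluent Platform component CRs follow the same pattern):

kubectl scale connect connect --replicas=2 -n confluent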