
Sizing recommendations

Review the following sizing guidelines and recommendations before creating your Kubernetes cluster.

Node number and size recommendations

The number of nodes required in your cluster depends on whether you are deploying a development testing cluster or a production-ready cluster.

  • Test Cluster: Each node should typically have a minimum of 2 to 4 CPUs and 7 to 16 GB RAM. If you are testing a deployment of Operator and all Confluent Platform components, you can create a 10-node cluster with six nodes for Apache ZooKeeper™ and Apache Kafka® pods (three replicas each) and four nodes for all other component pods.

    Note

    Confluent recommends running ZooKeeper and Kafka on individual pods on individual Kubernetes nodes. You don’t have to do this, but we’ve found that this is the best way to run ZooKeeper and Kafka clusters.

    Tip

    Confluent recommends running ZooKeeper and Kafka on individual pods on individual nodes. You can bin pack the other components. Bin packing places component tasks on the nodes in the cluster that have the least remaining CPU and memory capacity, which maximizes node utilization and can reduce the number of nodes required. Each Confluent Platform component YAML file has the default entry disableHostPort: false. You can enable bin packing by adding this parameter to the component section in the <provider>.yaml file and setting it to true (see the example after this list). Bin packing components is not recommended for production deployments. Also note that when disableHostPort is set to false, the default port used is 28130.

  • Production Cluster: Review the default capacity values in the helm/providers/<provider>.yaml file. Determine how these values affect your production application and build out your nodes accordingly. You can also use the on-premises System Requirements to determine what is required for your public or private cloud production environment. Note that the on-premises storage information provided is not applicable for cloud environments.

    Note

    Your Confluent Sales Engineer can help you determine the number of nodes required for your application. For additional information, you can also contact Confluent Support.
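
As a sketch of where the bin-packing setting from the tip above goes, the snippet below adds disableHostPort to a component section of the <provider>.yaml file. The schemaregistry component name and surrounding layout are illustrative assumptions, not shipped defaults; only disableHostPort is the setting described above.

## Illustrative component section in helm/providers/<provider>.yaml.
## Assumption: the component name and layout are for illustration only.
schemaregistry:
  name: schemaregistry
  ## Default is false; setting true enables bin packing for this component.
  ## Not recommended for production deployments.
  disableHostPort: true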

Pod resource parameters

The default parameters in the provider YAML file specify the pod resources needed. If you are testing Confluent Operator and Confluent Platform, your resource requirements may be lower than the default values shown. However, ZooKeeper and Kafka must be installed on individual pods on individual nodes.

Important

At least three Kafka brokers are required for a fully functioning Confluent Platform deployment. A one- or two-broker configuration is not supported and should not be used for development testing or production.

Confluent Operator can define pod resource limits for all Confluent Platform components it deploys. You define these settings using the requests and limits tags for each component section in the <provider>.yaml file. The following example shows the default pod resource parameters in a provider YAML file snippet for Kafka. See Managing Compute Resources for Containers for more details.

Note

The limits property is shown in the following example so you can see where it goes in the YAML file. By default, limits is empty and is not included in the default provider YAML files. See Managing Compute Resources for Containers for more details.

## Kafka Cluster
##
kafka:
  name: kafka
  replicas: 3
  resources:
    ## It is recommended to set both resource requests and limits.
    ## If not configured, kubernetes will set cpu/memory defaults.
    ## Reference: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
    requests:
      cpu: 200m
      memory: 1Gi
    limits: {}
  loadBalancer:
    enabled: false
    domain: ""
  tls:
    enabled: false
    fullchain: |-
    privkey: |-
    cacerts: |-

During installation, Confluent Operator and Confluent Platform components are created based on parameters stored in multiple Helm Chart values.yaml files (one for Operator and one for each Confluent Platform component) and in the <provider>.yaml file for the environment where you are installing your Confluent Platform cluster.

Important

Do not modify parameters in the individual component values.yaml files. If you need to adjust capacity, add a parameter, or change a parameter for a component, modify the component section in the <provider>.yaml file. You can also adjust configuration parameters after installation using helm upgrade.

The <provider>.yaml file is layered over the values.yaml files at installation. It contains values that are specific to provider environments and that you can modify prior to installation.
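
For example, a minimal sketch of adjusting a parameter after installation with helm upgrade; the release name kafka and the kafka.replicas override are assumptions made for illustration, so substitute the names and values from your own deployment:

# Hypothetical example: change a Kafka parameter after installation.
# Assumes a Helm release named "kafka" installed from ./confluent-operator
# with the GCP provider file; adjust the release, file, and values as needed.
helm upgrade \
  -f ./providers/gcp.yaml \
  --set kafka.enabled=true \
  --set kafka.replicas=5 \
  kafka \
  ./confluent-operator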

After you download the Helm bundle you’ll see that:

  • The values.yaml file for each Confluent Platform component is stored in helm/confluent-operator/charts/<component>/.
  • The values.yaml file for Confluent Operator is stored in helm/confluent-operator/.
  • The <provider>.yaml file for each provider is stored in helm/providers/.

At installation, Helm reads the YAML files in the following layered order:

  1. The values.yaml for the Confluent Platform component is read.
  2. The values.yaml for Operator is read.
  3. The <provider>.yaml file is read.
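
As an illustration of this layering (the CPU values are assumptions, not shipped defaults), a request set in the Kafka chart's values.yaml is overridden by the value in the provider file because the provider file is read last:

## helm/confluent-operator/charts/kafka/values.yaml -- chart default (read first)
## Assumed value, shown commented out for illustration:
## resources:
##   requests:
##     cpu: 100m

## helm/providers/<provider>.yaml -- read last, so this value takes effect
kafka:
  resources:
    requests:
      cpu: 200m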

The following example shows the default parameters used when installing Operator in a Google Kubernetes Engine (GKE) cluster on Google Cloud Platform.

For example, --set operator.enabled=true in the example below is a modification of the default setting false. This parameter change enables and starts the component immediately after it is installed. The default is changed at installation because Operator and Confluent Platform components must be started in a specific order.

helm install \
  -f ./providers/gcp.yaml \
  --name operator \
  --namespace operator \
  --set operator.enabled=true \
  ./confluent-operator
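
Because components must be started in a specific order, a follow-on command typically enables the next component against the same chart. The following is a sketch only: it assumes a zookeeper.enabled parameter analogous to operator.enabled and a release named zookeeper, so confirm the parameter names in your chart before running it.

# Hypothetical next step: install ZooKeeper after Operator is running.
# Assumes the chart exposes zookeeper.enabled, analogous to operator.enabled above.
helm install \
  -f ./providers/gcp.yaml \
  --name zookeeper \
  --namespace operator \
  --set zookeeper.enabled=true \
  ./confluent-operator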