Plan for Confluent Platform Deployment

This topic contains the supported and recommended options to consider when you plan to deploy Confluent Platform using Confluent for Kubernetes (CFK).

Deployment workflow

At a high level, the workflow to configure, deploy, and manage Confluent Platform using CFK is as follows:

  1. Review this topic and review the required and recommended options for the deployment environment.

  2. Prepare your Kubernetes environment.

    For details, see Prepare Kubernetes Cluster for Confluent Platform.

  3. Deploy Confluent for Kubernetes.

    For details, see Deploy Confluent for Kubernetes.

  4. Configure Confluent Platform.

    For details, see Configure Confluent Platform.

  5. Deploy Confluent Platform.

    For details, see Deploy Confluent Platform.

  6. Manage Confluent Platform.

    For details, see Manage Confluent Platform with Confluent for Kubernetes.

Deployment checklist

The following is a deployment checklist to prepare for a Confluent Platform deployment:

Supported environments and prerequisites

Review and address the following prerequisites before you start the installation process.


  • Confluent for Kubernetes 2.0.4 supports Kubernetes versions 1.15 to 1.20 with any Cloud Native Computing Foundation (CNCF) conformant offering. A full list of CNCF conformant Kubernetes offerings is available from the CNCF.

  • The required API must be present in your Kubernetes cluster.

    To check for the API, run the following command and verify that the API appears in the output:

    kubectl api-versions | grep <API name>

  • Install kubectl.

  • Configure the kubeconfig file for your cluster.

If you are using Red Hat OpenShift as your Kubernetes distribution, you need to determine how you will work with Red Hat’s Security Context Constraints (SCCs). See the Red Hat OpenShift documentation to understand how.

Operating systems

The underlying Operating System (OS) of your Kubernetes worker nodes must be a supported version for the Confluent Platform version you plan to deploy.

See Supported Versions and Interoperability for supported OSs for a specific version of Confluent Platform.


Helm

Helm 3 is required for Confluent for Kubernetes.

Confluent Platform

Confluent for Kubernetes 2.0.4 supports Confluent Platform versions 6.0.x, 6.1.x, and 6.2.x.


Do not use cp-helm-chart with Confluent for Kubernetes. It is an open source Helm chart that is not supported by Confluent.

Confluent for Kubernetes image tags and versions

The Confluent Platform and Confluent for Kubernetes images are hosted in the confluentinc repository in Docker Hub.

You can locate a specific version of the Confluent for Kubernetes image by its tag at Confluent Operator tags.

The following table shows the mapping between Confluent for Kubernetes / Confluent Operator versions and the corresponding image tags.

Pod Image Tag   Confluent for Kubernetes / Operator Version
0.419.0         Confluent Operator 1.6.0
0.419.6         Confluent Operator 1.6.1
0.419.10        Confluent Operator 1.6.2
0.419.10        Confluent Operator 1.7.0
0.419.13        Confluent Operator 1.7.1
0.419.15        Confluent Operator 1.7.2
0.174.6         Confluent for Kubernetes 2.0.0
0.174.13        Confluent for Kubernetes 2.0.1
0.174.21        Confluent for Kubernetes 2.0.2
0.174.25        Confluent for Kubernetes 2.0.3
0.174.34        Confluent for Kubernetes 2.0.4

Cluster sizing

Review the sizing guidelines and recommendations in this section before creating your Confluent cluster.

The following table provides guidance on minimum cluster sizing:

                           Production                      Minimum
Confluent Component        CPU    Memory    Disk           CPU    Memory    Disk
ZooKeeper                  4      14 GB     100 GB         2      8 GB      100 GB
Kafka Brokers              12     64 GB     12 TB          4      16 GB     1 TB
Connect Workers            12     24 GB     50 GB [1]      4      16 GB     50 GB [1]
Schema Registry            2      4 GB      N/A            2      4 GB      N/A
Confluent Control Center   12     32 GB     300 GB         4      16 GB     250 GB
ksqlDB                     4      32 GB     100 GB SSD [2] 4      20 GB     100 GB SSD
Confluent REST Proxy       16 [3] 1 GB+ [4] N/A            16 [3] 1 GB+ [4] N/A
  • ZooKeeper and Kafka must be installed on separate, individual pods on separate Kubernetes nodes.
  • At least three Kafka brokers are required for a fully functioning Confluent Platform deployment. A one- or two-broker configuration is not supported and should not be used for development, testing, or production.
  • [1] The disk storage requirement for Connect workers depends on how many Connect plugins you are downloading per Connect cluster.
  • [2] The disk storage requirement for ksqlDB depends on the number of concurrent queries and the aggregation performed.
  • [3] Only required for installation.
  • [4] 1 GB overhead plus 64 MB per producer and 16 MB per consumer.

The number of Kubernetes worker nodes required in your cluster depends on whether you are deploying a development testing cluster or a production-ready cluster.

Production Cluster

Review the default capacity values in the Confluent Platform component custom resources (CRs). Determine how these values affect your production application and build out your nodes accordingly.

You can also use the on-premises System Requirements to determine what is required for your cloud production environment. Note that the on-premises storage information provided is not applicable for cloud environments.

Development Testing Cluster

Each node should typically have a minimum of 2 or 4 CPUs and 7 to 16 GB RAM. If you are testing a deployment of CFK and all Confluent Platform components, you can create a 10-node cluster with six nodes for Apache ZooKeeper™ and Apache Kafka® pods (three replicas each) and four nodes for all other component pods.

For further details on sizing recommendations, see Sizing Calculator for Apache Kafka and Confluent Platform.

Configure component sizing

In CFK, you specify resource requirements using the limits and requests properties in the component custom resources (CRs). See CPU and Memory Resources for details on defining resource requirements for Confluent Platform.
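As a sketch, a Kafka CR can set broker capacity through its podTemplate, using the standard Kubernetes requests/limits schema. The values below mirror the minimum and production figures from the sizing table; the image tags are illustrative and should match your target Confluent Platform and CFK versions:

```yaml
# Sketch: a Kafka CR with resource requirements set via podTemplate.
# Values and image tags are illustrative; adjust for your workload.
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-server:6.2.0
    init: confluentinc/confluent-init-container:2.0.4
  dataVolumeCapacity: 1Ti
  podTemplate:
    resources:
      requests:
        cpu: "4"        # minimum sizing from the table above
        memory: 16Gi
      limits:
        cpu: "12"       # production sizing from the table above
        memory: 64Gi
```

Kubernetes schedules each broker pod based on the requests and enforces the limits, so requests should reflect the sizing table's minimums for the environment you are building.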

Docker registry

Confluent for Kubernetes pulls Confluent Docker images from a Docker registry and deploys them onto your Kubernetes cluster.

By default, Confluent for Kubernetes deploys publicly-available Docker images hosted on Docker Hub from the confluentinc repositories.

If you choose to use your own Docker registry and repositories, you need to pull the images from the Confluent repositories and push them to your Docker registry repositories.

See Use Custom Docker Registry for Confluent Platform for details on using a custom private registry.
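As a sketch, a component CR points at your own registry through its image section. Here `registry.example.com` is a placeholder for your registry host, and the tags are illustrative:

```yaml
# Sketch: pointing a component CR at a private registry.
# registry.example.com is a placeholder for your own registry host.
spec:
  image:
    application: registry.example.com/confluentinc/cp-server:6.2.0
    init: registry.example.com/confluentinc/confluent-init-container:2.0.4
```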


Storage

You need to provide dynamic persistent storage for all Confluent Platform components, using block-level storage solutions such as AWS EBS, Azure Disk, GCE Disk, Ceph RBD, or Portworx.

See Configure Storage for Confluent Platform for details on storage configuration options.
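For example, a component CR can reference a specific storage class for its Persistent Volume Claims. This is a sketch; `my-storage-class` is a placeholder for a StorageClass you have provisioned in the cluster:

```yaml
# Sketch: selecting a pre-provisioned StorageClass in a component CR.
spec:
  storageClass:
    name: my-storage-class   # placeholder; must support dynamic provisioning
  dataVolumeCapacity: 1Ti
```

Note that, as described under Upgrades and updates below, the storage class cannot be changed after the cluster is created, so choose it before the first deployment.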

Kubernetes security

With Kubernetes Role-based access control (RBAC) and namespaces, you can deploy Confluent Platform in one of two ways:

  • (Recommended) Provide Confluent for Kubernetes with access to provision and manage Confluent Platform resources in one specific namespace.
  • Provide Confluent for Kubernetes with access to provision and manage Confluent Platform resources across all namespaces in the Kubernetes cluster.

Both options require you to configure Kubernetes role bindings. See Configure Kubernetes RBAC and Custom Resource Definitions for details.
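As a sketch, the recommended single-namespace mode can be expressed in the Helm values for the CFK chart. The `namespaced` value name is assumed from the CFK Helm chart; verify it against your chart version:

```yaml
# Sketch of Helm values for a namespace-scoped CFK deployment.
# When namespaced is true, CFK only watches the namespace it is
# deployed into, rather than all namespaces in the cluster.
namespaced: true
```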

Confluent security

Confluent supports the following security functions:

  • Authentication
  • Authorization
  • Network encryption
  • Configuration secrets

For production deployments, Confluent recommends the following security mechanisms:

  • Enable one of the following methods for Kafka client authentication:

    • SASL/Plain

      For SASL/Plain, the identity can come from your LDAP server.

    • mTLS

  • Enable Confluent Role Based Access Control (RBAC) for authorization, with user/group identity coming from the LDAP server.

  • Enable TLS for network encryption for both internal traffic between Confluent Platform components and external traffic from clients to Confluent Platform components.

See Production recommended secure setup for a tutorial scenario to configure these security settings.
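A sketch of the corresponding settings in a Kafka CR is shown below. The listener and authentication field names follow the CFK CR schema; the LDAP/MDS details required for RBAC are elided:

```yaml
# Sketch: TLS for encryption plus SASL/PLAIN on the external listener.
spec:
  tls:
    autoGeneratedCerts: true   # or reference your own certificate secrets
  listeners:
    external:
      authentication:
        type: plain            # identity can come from your LDAP server
      tls:
        enabled: true
```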


Networking

Confluent Platform components can be accessed by users and client applications that are either:

  • In the internal Kubernetes network
  • External to the Kubernetes network

The following are the options to externally expose Confluent Platform:

  • Load balancers
    • For Kafka, a Layer 4 load balancer that supports TLS passthrough is required.
    • For other Confluent components with HTTP endpoints, a Layer 4/7 load balancer is required.
  • Kubernetes node ports
  • Static external access with host-based or port-based routing
  • OpenShift routes
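For example, a sketch of the load balancer option in a Kafka CR (the domain is a placeholder for the DNS domain where broker endpoints will be created):

```yaml
# Sketch: exposing Kafka externally through Layer 4 load balancers.
spec:
  listeners:
    external:
      externalAccess:
        type: loadBalancer
        loadBalancer:
          domain: example.com   # placeholder DNS domain for broker endpoints
```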

Default ports in Confluent for Kubernetes

CFK uses the following default ports for Confluent Platform components. You can override the default ports in the component custom resources.

  • JMX port: 7203
  • Jolokia port: 7777
  • Prometheus port: 7778
  • Kafka External port: 9092
  • Kafka Internal port: 9071
  • Replication port: 9072
  • Token port: 9073
  • MDS default port: 8090
  • Control Center default listener port: 9021
  • Control Center HTTP port: 8021
  • Schema Registry default external listener port: 8081
  • Schema Registry internal listener port: 9081
  • ksqlDB external port: 8088
  • ksqlDB internal port: 9088
  • Connect external port: 8083
  • Connect internal port: 9083

Upgrades and updates

CFK provides a declarative API and configuration automation for running Confluent Platform on Kubernetes. You can update configurations and upgrade versions for an existing deployment by applying the updated declarative specs in custom resource files.
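For example, a version upgrade amounts to bumping the image tags in the component CR and re-applying it with kubectl. This is a sketch; the tags shown are illustrative:

```yaml
# Sketch: upgrading Confluent Platform by editing image tags in the CR,
# then re-applying it, e.g. with: kubectl apply -f kafka.yaml
spec:
  image:
    application: confluentinc/cp-server:6.2.0   # new Confluent Platform version
    init: confluentinc/confluent-init-container:2.0.4
```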

However, the following are configuration scenarios that cannot be enabled or changed for an existing deployment:

  • Confluent RBAC. You cannot enable Confluent RBAC on an existing cluster.
  • TLS certificates mechanism. You cannot change the mechanism for how TLS certificates are provided between auto-generated certificates and user provided certificates.
  • TLS encryption. You cannot enable TLS encryption on a TLS disabled cluster.
  • Kafka listener authentication. You cannot change the authentication mechanism used for an existing Kafka listener.
  • External network access mechanism for Kafka brokers. You cannot change the external network access mechanism for Kafka brokers among load balancers, node ports, and static ingress-controller-based routing.
  • Storage class for persistent storage. You cannot change the storage class used to create Persistent Volume Claims for Confluent components.
  • Configuration secrets mechanism. You cannot change the configuration secrets mechanism between Kubernetes Secrets and directory path in container.