Plan for Confluent Platform Deployment

This topic describes the supported and recommended options to consider when you plan to deploy Confluent Platform using Confluent for Kubernetes (CFK).

Deployment workflow

At a high level, the workflow to configure, deploy, and manage Confluent Platform using CFK is as follows:

  1. Review this topic for the required and recommended options for the deployment environment.

  2. Prepare your Kubernetes environment.

    For details, see Prepare Kubernetes Cluster for Confluent Platform.

  3. Deploy Confluent for Kubernetes.

    For details, see Deploy Confluent for Kubernetes.

  4. Configure Confluent Platform.

    For details, see Configure Confluent Platform. (A minimal sketch of steps 4 and 5 follows this list.)

  5. Deploy Confluent Platform.

    For details, see Deploy Confluent Platform.

  6. Manage Confluent Platform.

    For details, see Manage Confluent Platform with Confluent for Kubernetes.
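
To make steps 4 and 5 concrete, the following is a minimal sketch of the declarative approach: you author component custom resources (CRs) and apply them to your cluster. The namespace name, image tags, and capacity values below are illustrative assumptions, not requirements; see Configure Confluent Platform for the authoritative CRD fields.

```yaml
# Minimal sketch (assumed namespace "confluent"; illustrative image tags and sizes).
# Apply with kubectl after CFK and its CRDs are installed (steps 2-3).
apiVersion: platform.confluent.io/v1beta1
kind: Zookeeper
metadata:
  name: zookeeper
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-zookeeper:7.1.1
    init: confluentinc/confluent-init-container:2.3.1
  dataVolumeCapacity: 10Gi
  logVolumeCapacity: 10Gi
---
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3              # at least three brokers; see "Cluster sizing" below
  image:
    application: confluentinc/cp-server:7.1.1
    init: confluentinc/confluent-init-container:2.3.1
  dataVolumeCapacity: 100Gi
  dependencies:
    zookeeper:
      endpoint: zookeeper.confluent.svc.cluster.local:2181
```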

Deployment checklist

Use the checklist in the following sections to prepare for a Confluent Platform deployment.

Supported environments and prerequisites

Review and address the following prerequisites before you start the installation process.

Kubernetes

If you are using Red Hat OpenShift as your Kubernetes distribution, you need to determine how you will work with Red Hat’s Security Context Constraints (SCCs). See this documentation set for details.

Operating systems

The underlying Operating System (OS) of your Kubernetes worker nodes must be a supported version for the Confluent Platform version you plan to deploy.

See Supported Versions and Interoperability for the supported OSs for each Confluent Platform version.

Helm

Helm 3 is required for Confluent for Kubernetes.

Confluent Platform

Confluent for Kubernetes 2.3.1 supports Confluent Platform versions 7.0.x and 7.1.x.

Note

Do not use cp-helm-chart with Confluent for Kubernetes. It is an open source Helm chart that is not supported by Confluent.

Confluent for Kubernetes image tags and versions

The Confluent Platform and Confluent for Kubernetes images are hosted in the confluentinc repositories on Docker Hub.

You can locate the Confluent for Kubernetes image for a given version, along with its matching tag, at Confluent Operator tags.

The following table shows the mapping between Confluent for Kubernetes / Confluent Operator versions, the corresponding image tags, and other compatibility information.

Version        | CFK Image Tag | Confluent Platform Versions | Kubernetes Versions                | Release Date
---------------|---------------|-----------------------------|------------------------------------|-------------
Operator 1.6.0 | 0.419.0       | 5.5.x, 6.0.x                | 1.15 - 1.18                        | Sep 24, 2020
Operator 1.6.1 | 0.419.6       | 5.5.x, 6.0.x                | 1.15 - 1.18                        | Jan 04, 2021
Operator 1.6.2 | 0.419.10      | 5.5.x, 6.0.x                | 1.15 - 1.18                        | Jan 29, 2021
Operator 1.7.0 | 0.419.10      | 6.0.x, 6.1.x                | 1.16 - 1.19                        | Feb 9, 2021
Operator 1.7.1 | 0.419.13      | 6.0.x, 6.1.x                | 1.16 - 1.19                        | Jun 14, 2021
Operator 1.7.2 | 0.419.15      | 6.0.x, 6.1.x                | 1.16 - 1.19                        | Jul 20, 2021
CFK 2.0.0      | 0.174.6       | 6.0.x, 6.1.x, 6.2.x         | 1.15 - 1.20                        | May 12, 2021
CFK 2.0.1      | 0.174.13      | 6.0.x, 6.1.x, 6.2.x         | 1.15 - 1.20                        | Jun 10, 2021
CFK 2.0.2      | 0.174.21      | 6.0.x, 6.1.x, 6.2.x         | 1.15 - 1.20                        | Aug 11, 2021
CFK 2.0.3      | 0.174.25      | 6.0.x, 6.1.x, 6.2.x         | 1.15 - 1.20                        | Sep 10, 2021
CFK 2.0.4      | 0.174.34      | 6.0.x, 6.1.x, 6.2.x         | 1.15 - 1.20                        | Jan 25, 2022
CFK 2.1.0      | 0.280.1       | 6.0.x, 6.1.x, 6.2.x         | 1.17 - 1.22 (OpenShift 4.6 - 4.9)  | Oct 12, 2021
CFK 2.1.1      | 0.280.22      | 6.0.x, 6.1.x, 6.2.x         | 1.17 - 1.22 (OpenShift 4.6 - 4.9)  | Jan 25, 2022
CFK 2.1.2      | 0.280.42      | 6.0.x, 6.1.x, 6.2.x         | 1.17 - 1.22 (OpenShift 4.6 - 4.9)  | May 5, 2022
CFK 2.2.0      | 0.304.2       | 6.2.x, 7.0.x                | 1.17 - 1.22 (OpenShift 4.6 - 4.9)  | Nov 3, 2021
CFK 2.2.1      | 0.304.17      | 6.2.x, 7.0.x                | 1.17 - 1.22 (OpenShift 4.6 - 4.9)  | Jan 25, 2022
CFK 2.2.2      | 0.304.41      | 6.2.x, 7.0.x                | 1.17 - 1.22 (OpenShift 4.6 - 4.9)  | May 5, 2022
CFK 2.3.0      | 0.435.11      | 7.0.x, 7.1.x                | 1.18 - 1.23 (OpenShift 4.6 - 4.10) | Apr 5, 2022
CFK 2.3.1      | 0.435.23      | 7.0.x, 7.1.x                | 1.18 - 1.23 (OpenShift 4.6 - 4.10) | May 5, 2022

Cluster sizing

Review the sizing guidelines and recommendations in this section before creating your Confluent cluster.

The following table provides guidance on minimum cluster sizing:

Confluent Component      | Production CPU | Production Memory | Production Disk | Minimum CPU | Minimum Memory | Minimum Disk
-------------------------|----------------|-------------------|-----------------|-------------|----------------|-------------
ZooKeeper                | 4              | 14 GB             | 100 GB          | 2           | 8 GB           | 100 GB
Kafka Brokers            | 12             | 64 GB             | 12 TB           | 4           | 16 GB          | 1 TB
Connect Workers          | 12             | 24 GB             | 50 GB [1]       | 4           | 16 GB          | 50 GB [1]
Schema Registry          | 2              | 4 GB              | N/A             | 2           | 4 GB           | N/A
Confluent Control Center | 12             | 32 GB             | 300 GB          | 4           | 16 GB          | 250 GB
ksqlDB                   | 4              | 32 GB             | 100 GB SSD [2]  | 4           | 20 GB          | 100 GB SSD
Confluent REST Proxy     | 16 [3]         | 1 GB+ [4]         | N/A             | 16 [3]      | 1 GB+ [4]      | N/A
  • ZooKeeper and Kafka must be installed on separate, individual pods, on separate Kubernetes nodes.
  • At least three Kafka brokers are required for a fully functioning Confluent Platform deployment. A one- or two-broker configuration is not supported and should not be used for development, testing, or production.
  • [1] The disk storage requirement for Connect workers depends on the number of Connect plugins you download per Connect cluster.
  • [2] The disk storage requirement for ksqlDB depends on the number of concurrent queries and the aggregations performed.
  • [3] Only required for installation.
  • [4] 1 GB of overhead plus 64 MB per producer and 16 MB per consumer. For example, a REST Proxy serving 50 producers and 100 consumers needs roughly 1 GB + (50 × 64 MB) + (100 × 16 MB) ≈ 5.8 GB.

The number of Kubernetes worker nodes required in your cluster depends on whether you are deploying a development/testing cluster or a production-ready cluster.

Production Cluster

Review the default capacity values in the Confluent Platform component custom resources (CRs). Determine how these values affect your production application and build out your nodes accordingly.

You can also use the on-premises System Requirements to determine what is required for your cloud production environment. Note that the on-premises storage information provided is not applicable for cloud environments.

Development Testing Cluster

Each node should typically have a minimum of 2 to 4 CPUs and 7 to 16 GB of RAM. If you are testing a deployment of CFK and all Confluent Platform components, you can create a 10-node cluster: six nodes for Apache ZooKeeper™ and Apache Kafka® pods (three replicas each) and four nodes for all other component pods.

For further details on sizing recommendations, see Sizing Calculator for Apache Kafka and Confluent Platform.

Configure component sizing

In CFK, you specify resource requirements using the limits and requests properties in the component custom resources (CRs). See CPU and Memory Resources for defining resource requirements for Confluent Platform.
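
For example, a minimal sketch of setting broker resources through the CR's podTemplate (the values shown mirror the production tier in the sizing table above and are illustrative, not prescriptions for your workload):

```yaml
# Illustrative only: Kafka broker resources per the production sizing guidance.
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  podTemplate:
    resources:
      requests:
        cpu: "12"       # reserve 12 CPUs per broker pod
        memory: 64Gi
      limits:
        cpu: "12"
        memory: 64Gi
```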

Docker registry

Confluent for Kubernetes pulls Confluent Docker images from a Docker registry and deploys them onto your Kubernetes cluster.

By default, Confluent for Kubernetes deploys publicly available Docker images hosted on Docker Hub from the confluentinc repositories.

If you choose to use your own Docker registry and repositories, you need to pull the images from the Confluent repositories and upload them to your own Docker registry repositories.

See Use Custom Docker Registry for Confluent Platform for details on using a custom private registry.
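
As a sketch, each component CR lets you point the application and init container images at your own registry. The registry host below is a placeholder assumption:

```yaml
# Sketch: pulling component images from a private registry.
# "registry.example.com" is a placeholder for your registry host.
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  image:
    application: registry.example.com/confluentinc/cp-server:7.1.1
    init: registry.example.com/confluentinc/confluent-init-container:2.3.1
```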

Storage

You need to provide dynamic persistent storage for all Confluent Platform components using a block-level storage solution, such as AWS EBS, Azure Disk, GCE Disk, Ceph RBD, or Portworx.

See Configure Storage for Confluent Platform for details on storage configuration options.
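
For instance, a minimal sketch of selecting a pre-created StorageClass and disk capacity in a Kafka CR (the StorageClass name is a placeholder and is assumed to exist in your cluster):

```yaml
# Sketch: dynamic provisioning through a user-provided StorageClass.
# CFK uses this to create a PersistentVolumeClaim for each broker;
# "my-ebs-gp3" is a placeholder StorageClass name.
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  storageClass:
    name: my-ebs-gp3
  dataVolumeCapacity: 1Ti
```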

Kubernetes security

With Kubernetes Role-based access control (RBAC) and namespaces, you can deploy Confluent Platform in one of two ways:

  • (Recommended) Provide Confluent for Kubernetes with access to provision and manage Confluent Platform resources in one specific namespace.
  • Provide Confluent for Kubernetes with access to provision and manage Confluent Platform resources across all namespaces in the Kubernetes cluster.

Both options above require configuring Kubernetes role bindings. See Configure Kubernetes RBAC and Custom Resource Definitions for details.
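
For reference, the CFK Helm chart exposes a flag for this choice. The following is a sketch of a Helm values override, assuming the chart's default value names:

```yaml
# values.yaml sketch for the confluent-for-kubernetes Helm chart.
# namespaced: true  -> CFK manages Confluent Platform in one namespace (recommended)
# namespaced: false -> CFK manages Confluent Platform across all namespaces
#                      (requires cluster-wide role bindings)
namespaced: true
```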

Confluent security

Confluent supports the following mechanisms to enforce security:

  • Authentication
  • Authorization
  • Network encryption
  • Configuration secrets

For production deployments, Confluent recommends the following security mechanisms:

  • Enable one of the following methods for Kafka client authentication:

    • mTLS

    • SASL/PLAIN

    • SASL/PLAIN with LDAP

      For SASL/PLAIN, the identity can come from your LDAP server.

  • Enable Confluent Role-Based Access Control (RBAC) for authorization, with user and group identities coming from the LDAP server.

  • Enable TLS for network encryption for both internal traffic between Confluent Platform components and external traffic from clients to Confluent Platform components.

See Production recommended secure setup for a tutorial scenario to configure these security settings.
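
The following abbreviated sketch shows the shape of these settings in a Kafka CR: auto-generated TLS certificates, SASL/PLAIN on a listener, and RBAC authorization. It deliberately omits the MDS and LDAP configuration that a real RBAC deployment requires, and the secret name is a placeholder; see the linked tutorial for a complete, supported configuration.

```yaml
# Abbreviated sketch only; NOT a complete secure configuration.
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  tls:
    autoGeneratedCerts: true          # or a secretRef for user-provided certs
  listeners:
    external:
      authentication:
        type: plain                   # SASL/PLAIN; credentials come from a secret
        jaasConfig:
          secretRef: kafka-credentials   # placeholder secret name
      tls:
        enabled: true
  authorization:
    type: rbac                        # also requires MDS/LDAP configuration (omitted here)
```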

Networking

Confluent Platform components can be accessed by users and client applications that are either:

  • In the internal Kubernetes network
  • External to the Kubernetes network

The following are the options to externally expose Confluent Platform:

  • Load balancers (see the sketch following this list)
    • For Kafka, a Layer 4 load balancer that supports TLS passthrough is required.
    • For other Confluent Platform components with HTTP endpoints, a Layer 4 or Layer 7 load balancer is required.
  • Kubernetes node ports
  • Static external access with host-based or port-based routing
  • OpenShift routes
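
As referenced in the load balancer option above, the following is a sketch of configuring external access in a Kafka CR. The domain is a placeholder; CFK derives per-broker DNS names from it:

```yaml
# Sketch: exposing Kafka externally through Layer 4 load balancers.
# "platform.example.com" is a placeholder domain.
apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  listeners:
    external:
      externalAccess:
        type: loadBalancer
        loadBalancer:
          domain: platform.example.com
```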

Default ports in Confluent for Kubernetes

CFK uses the following default ports for Confluent Platform components. You can override the default ports in the component custom resources.

  • 7203: JMX port
  • 7777: Jolokia port
  • 7778: Prometheus port
  • 8081: Schema Registry port
  • 8082: Confluent REST Proxy port
  • 8083: Connect port
  • 8088: ksqlDB port
  • 8090: MDS port
  • 9021: Control Center port
  • 9071: Kafka Internal port
  • 9072: Replication port
  • 9073: Token port
  • 9092: Kafka External port

Upgrades and updates

CFK provides a declarative API and configuration automation for running Confluent Platform on Kubernetes. You can update configurations and upgrade versions for an existing deployment by applying the updated declarative specs in custom resource files.
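
For example, upgrading Confluent Platform from 7.0.x to 7.1.x on an existing deployment can be as simple as editing the image tags in the component CR and re-applying it (the tags below are illustrative):

```yaml
# Sketch: a declarative version upgrade. Changing only the image tags and
# re-applying the CR triggers CFK to roll the pods to the new version.
spec:
  image:
    application: confluentinc/cp-server:7.1.1        # was a 7.0.x tag
    init: confluentinc/confluent-init-container:2.3.1
```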

However, the following configuration scenarios cannot be enabled or changed for an existing deployment:

  • Confluent RBAC. You cannot enable Confluent RBAC on an existing cluster.
  • TLS certificate mechanism. You cannot change how TLS certificates are provided, between auto-generated certificates and user-provided certificates.
  • TLS encryption. You cannot enable TLS encryption on a cluster that was deployed with TLS disabled.
  • Kafka listener authentication. You cannot change the authentication mechanism used for an existing Kafka listener.
  • External network access mechanism for Kafka brokers. You cannot change the external access mechanism for Kafka brokers among load balancers, node ports, and static ingress-controller-based routing.
  • Storage class for persistent storage. You cannot change the storage class used to create PersistentVolumeClaims for Confluent Platform components.
  • Configuration secrets mechanism. You cannot change the configuration secrets mechanism between Kubernetes Secrets and directory path in container.

Support for Kubernetes ecosystem

The Confluent for Kubernetes (CFK) product encapsulates a set of Kubernetes controllers and business logic that automates configuring, deploying, and managing multiple aspects of Confluent Platform on a Kubernetes distribution of your choice.

Confluent for Kubernetes (CFK) uses the standard Kubernetes API: CFK calls this API and expects it to do what it needs to do. CFK does not manage aspects beyond the Kubernetes API, such as vendor implementations of load balancers (for example, Amazon load balancers) or StorageClasses (for example, Amazon EBS).

The following examples illustrate the support boundaries of CFK.

For storage, Kubernetes implements and provides the StorageClass, PersistentVolume, and PersistentVolumeClaim APIs. When CFK configures and deploys Kafka brokers, it takes a user-provided StorageClass and uses it to create a PersistentVolumeClaim for each broker’s storage. CFK does not validate the contents of the user-provided StorageClass, and it does not check whether the PersistentVolume is actually created; CFK relies on the storage vendor implementation for that.

For networking with load balancers, Kubernetes implements and provides the Service API, including the LoadBalancer Service type. When CFK configures and deploys Kafka with load balancer access, it creates one LoadBalancer-type Service for every Kafka broker. Kafka then relies on the load balancer vendor implementation for the actual load balancer instance to be configured and deployed. Amazon ELB, Google Cloud Load Balancing, Azure Load Balancer, and MetalLB are all examples of such vendor implementations.
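
To illustrate the boundary, the following is roughly the kind of object CFK creates for a broker; everything beyond the Service spec, such as provisioning the actual cloud load balancer, is the vendor's responsibility. The names and labels below are illustrative assumptions, not CFK's exact generated output:

```yaml
# Illustrative shape of a per-broker LoadBalancer Service; names and
# selectors are examples, not CFK's exact generated values.
apiVersion: v1
kind: Service
metadata:
  name: kafka-0-lb
  namespace: confluent
spec:
  type: LoadBalancer        # the vendor (ELB, Google LB, Azure LB, MetalLB) provisions the LB
  selector:
    statefulset.kubernetes.io/pod-name: kafka-0
  ports:
    - name: external
      port: 9092            # Kafka external port (see default ports above)
      targetPort: 9092
```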

Within the described boundary, CFK is tested to ensure that it invokes the Kubernetes API correctly and creates the correct Kubernetes objects. For example, CFK tests validate that a LoadBalancer-type Service is created with the configuration the user specified in the Kafka custom resource. CFK does not test that a Google Cloud load balancer is properly configured and deployed to route traffic to Kafka brokers; Confluent depends on Google’s load balancer implementation to do the right thing.

With the above points in mind, when you deploy CFK, consider the following guidelines to run a production system efficiently and effectively:

  1. Identify the architecture you want to use - the Confluent components and the Kubernetes runtime - and validate core deployment and management functions in your environment.
  2. Develop a runbook and troubleshooting steps for your deployment. Ensure your team is familiar with this runbook before you go to production.
  3. When you need support for issues in your deployment, be prepared to pull in the respective vendors to cover the entire architecture. For example, if there is a networking issue in your CFK deployment, be prepared to engage the Kubernetes vendor and the vendor of the networking service you are using, along with Confluent.