Plan for Confluent Platform Deployment Using Confluent for Kubernetes¶
This topic contains the supported and recommended options to consider when you plan to deploy Confluent Platform using Confluent for Kubernetes (CFK).
Deployment options¶
When you use CFK, you have two options to deploy and manage Confluent Platform:
- Use CFK to deploy individual Confluent Platform components on Kubernetes via component-level CustomResourceDefinitions (CRDs). A minimal example of such a custom resource follows this list.
- Use CFK Blueprints, which extend the CFK CRDs with a new set of abstractions: Infrastructures, Blueprints, and Deployments. Use CFK Blueprints:
  - To provision and manage Confluent deployments in a standardized and automated manner.
  - To provide a self-service Kubernetes interface for teams to deploy and use Confluent Platform in your organization.
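The following is a minimal sketch of a component-level custom resource for a Kafka cluster. The namespace, replica count, and image tags are illustrative; use the values that match the CFK and Confluent Platform versions you deploy.

apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent                                   # illustrative namespace
spec:
  replicas: 3
  image:
    application: confluentinc/cp-server:7.8.0            # illustrative tag
    init: confluentinc/confluent-init-container:2.10.0   # illustrative tag
  dataVolumeCapacity: 10Gi                                # per-broker persistent volume size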
Deployment workflow¶
At a high level, the workflow to configure, deploy, and manage Confluent Platform using CFK is as follows:
1. Review this topic for the required and recommended options for the deployment environment.
2. Prepare your Kubernetes environment. For details, see Prepare Kubernetes Cluster for Confluent Platform and Confluent for Kubernetes.
3. Deploy Confluent for Kubernetes. For details, see Deploy Confluent for Kubernetes.
4. Configure Confluent Platform. For details, see Configure Confluent Platform for Deployment with Confluent for Kubernetes.
5. Deploy Confluent Platform. For details, see Deploy Confluent Platform using Confluent for Kubernetes.
6. Manage Confluent Platform. For details, see Manage Confluent Platform with Confluent for Kubernetes.
Deployment checklist¶
Use the following sections as a checklist to prepare for a Confluent Platform deployment:
Supported environments and prerequisites¶
Review and address the following prerequisites before you start the installation process.
Kubernetes¶
- Confluent for Kubernetes 2.10 supports Kubernetes distributions that are Cloud Native Computing Foundation (CNCF) conformant. See the Supported Versions section for the specific versions supported.
- Install kubectl.
- Configure the kubeconfig file for your cluster.
If you are using Red Hat OpenShift as your Kubernetes distribution, you need to determine how to work with Red Hat’s Security Context Constraints (SCCs). See this documentation set for details.
Hardware¶
The underlying processor architecture of your Kubernetes worker nodes must be a supported version for the Confluent Platform version you plan to deploy.
Currently, Confluent Platform supports the x86 and ARM64 hardware architectures.
For supported hardware for a specific version of Confluent Platform, see Hardware Requirements for Confluent Platform.
Operating systems¶
The underlying Operating System (OS) of your Kubernetes worker nodes must be a supported version for the Confluent Platform version you plan to deploy.
For supported OSs for a specific version of Confluent Platform, see OS Requirements for Confluent Platform.
Confluent Platform¶
The following table shows the compatibility information for CFK, Confluent Platform, and Kubernetes.
CFK Version | Confluent Platform Versions | Kubernetes Versions | Release Date | Standard End of Support |
---|---|---|---|---|
CFK 2.10.x | 7.0.x - 7.8.x | 1.25 - 1.31 (OpenShift 4.11 - 4.17) | Dec 4, 2024 | Dec 4, 2026 |
CFK 2.9.x | 7.0.x - 7.7.x | 1.25 - 1.30 (OpenShift 4.11 - 4.16) | Jul 30, 2024 | Jul 30, 2026 |
CFK 2.8.x | 7.5.x, 7.6.x | 1.25 - 1.29 (OpenShift 4.10 - 4.15) | Feb 14, 2024 | Feb 14, 2025 |
CFK 2.7.x | 7.4.x, 7.5.x | 1.23 - 1.27 (OpenShift 4.10 - 4.14) | Aug 28, 2023 | Aug 28, 2024 |
CFK 2.6.x | 7.3.x, 7.4.x | 1.22 - 1.26 (OpenShift 4.9 - 4.12) | May 3, 2023 | May 3, 2024 |
CFK 2.5.x | 7.2.x, 7.3.x | 1.21 - 1.25 (OpenShift 4.8 - 4.12) | Nov 4, 2022 | Nov 4, 2023 |
CFK 2.4.x | 7.1.x, 7.2.x | 1.21 - 1.24 (OpenShift 4.8 - 4.10) | Jul 7, 2022 | Jul 7, 2023 |
- Starting with CFK 2.9, the standard support policy for CFK is 2 years from the first patch release (.0) date.
- Platinum tier support is not offered for CFK.
- You can apply your Confluent Platform Platinum tier support contract to the Confluent Platform components deployed by CFK when both of the following conditions are true:
  - You are on a currently supported version of CFK.
  - The Confluent Platform version you want to use is compatible with a currently supported version of CFK.
Confluent for Kubernetes image tags¶
The Confluent Platform and Confluent for Kubernetes images are hosted in the confluentinc repositories on Docker Hub.
The following table shows the mapping among Confluent for Kubernetes (CFK) versions, corresponding image tags, and the custom resource definition (CRD) versions.
CFK Helm Version | CFK Image Tag | CFK CRD Version |
---|---|---|
2.10.0 | 0.1145.6 | controller-gen.kubebuilder.io/version: v0.15.0 |
2.9.4 | 0.1033.43 | controller-gen.kubebuilder.io/version: v0.15.0 |
2.9.3 | 0.1033.33 | controller-gen.kubebuilder.io/version: v0.15.0 |
2.9.2 | 0.1033.22 | controller-gen.kubebuilder.io/version: v0.15.0 |
2.9.1 | 0.1033.10 | controller-gen.kubebuilder.io/version: v0.15.0 |
2.9.0 | 0.1033.3 | controller-gen.kubebuilder.io/version: v0.15.0 |
2.8.5 | 0.921.77 | controller-gen.kubebuilder.io/version: v0.15.0 |
2.8.4 | 0.921.63 | controller-gen.kubebuilder.io/version: v0.15.0 |
2.8.3 | 0.921.40 | controller-gen.kubebuilder.io/version: v0.15.0 |
2.8.2 | 0.921.20 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.8.1 | N/A | controller-gen.kubebuilder.io/version: v0.9.2 |
2.8.0 | 0.921.2 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.7.5 | 0.824.84 | controller-gen.kubebuilder.io/version: v0.14.0 |
2.7.4 | 0.824.61 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.7.3 | 0.824.40 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.7.2 | 0.824.33 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.7.1 | 0.824.17 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.7.0 | 0.824.2 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.6.5 | 0.771.89 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.6.4 | 0.771.68 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.6.3 | 0.771.62 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.6.2 | 0.771.45 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.6.1 | 0.771.29 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.6.0 | 0.771.13 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.5.5 | 0.581.89 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.5.4 | 0.581.75 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.5.3 | N/A | controller-gen.kubebuilder.io/version: v0.9.2 |
2.5.2 | 0.581.55 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.5.1 | 0.581.34 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.5.0 | 0.581.16 | controller-gen.kubebuilder.io/version: v0.9.2 |
2.4.4 | 0.517.78 | controller-gen.kubebuilder.io/version: v0.8.0 |
2.4.3 | 0.517.56 | controller-gen.kubebuilder.io/version: v0.8.0 |
2.4.2 | 0.517.43 | controller-gen.kubebuilder.io/version: v0.8.0 |
2.4.1 | 0.517.23 | controller-gen.kubebuilder.io/version: v0.8.0 |
2.4.0 | 0.517.12 | controller-gen.kubebuilder.io/version: v0.8.0 |
2.3.4 | 0.435.67 | controller-gen.kubebuilder.io/version: v0.7.0 |
2.3.3 | 0.435.57 | controller-gen.kubebuilder.io/version: v0.7.0 |
2.3.2 | 0.435.40 | controller-gen.kubebuilder.io/version: v0.7.0 |
2.3.1 | 0.435.23 | controller-gen.kubebuilder.io/version: v0.7.0 |
2.3.0 | 0.435.11 | controller-gen.kubebuilder.io/version: v0.7.0 |
2.2.3 | 0.304.59 | controller-gen.kubebuilder.io/version: v0.4.1 |
2.2.2 | 0.304.41 | controller-gen.kubebuilder.io/version: v0.4.1 |
2.2.1 | 0.304.17 | controller-gen.kubebuilder.io/version: v0.4.1 |
2.2.0 | 0.304.2 | controller-gen.kubebuilder.io/version: v0.4.1 |
Resource requirements for CFK operator¶
In CFK, the CFK operator pod is configured with the following default CPU and memory resources:
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 256Mi
The above default resource values are sufficient for most use cases. If you plan to manage a high number of day-2 application custom resources (CRs), such as 100 or more, refer to manage resources for guidance on increasing the memory and CPU allocated to CFK as the number of application CRs grows.
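If you do need more headroom, one approach is to override these defaults through the CFK Helm chart values at install or upgrade time. The following values sketch assumes the chart exposes the conventional top-level resources value; confirm the exact key against the values.yaml of the chart version you install.

resources:
  limits:
    cpu: "1"           # placeholder values for a deployment managing a large CR count
    memory: 1Gi
  requests:
    cpu: 200m
    memory: 512Mi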
Cluster sizing for Confluent Platform¶
The resource recommendations are the same for CFK and other Confluent Platform environments. For guidance on cluster sizing, refer to Confluent Platform System Requirements.
The following are additional requirements and considerations:
- Avoid placing multiple replicas of the same component, such as KRaft, Kafka, or ZooKeeper, on a single Kubernetes node.
- The disk storage requirement for Connect workers depends on how many Connect plugins you download per Connect cluster.
The number of Kubernetes worker nodes required in your cluster depends on whether you are deploying a development testing cluster or a production-ready cluster.
- Production Cluster
  - Review the default capacity values in the Confluent Platform component custom resources (CRs). Determine how these values affect your production application and build out your nodes accordingly.
  - You can also use the on-premises System Requirements to determine what is required for your cloud production environment. Note that the on-premises storage information provided is not applicable for cloud environments.
- Development Testing Cluster
  - Each node should typically have a minimum of 2 or 4 CPUs and 7 to 16 GB RAM. If you are testing a deployment of CFK and all Confluent Platform components, you can create a 10-node cluster with six nodes for Apache ZooKeeper™ and Apache Kafka® pods (three replicas each) and four nodes for the pods of all other components.
For further details on sizing recommendations, see Sizing Calculator for Apache Kafka and Confluent Platform.
Configure component sizing¶
In CFK, you specify resource requirements using the limits and requests properties in the component custom resources (CRs). See CPU and Memory Resources for defining resource requirements for Confluent Platform.
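For illustration, the following sketch sets explicit requests and limits for a Kafka cluster through the CR podTemplate; the values shown are placeholders and should come from your own sizing exercise.

apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-server:7.8.0            # illustrative tag
    init: confluentinc/confluent-init-container:2.10.0   # illustrative tag
  dataVolumeCapacity: 100Gi
  podTemplate:
    resources:
      requests:
        cpu: "4"        # placeholder; derive from your sizing exercise
        memory: 16Gi
      limits:
        cpu: "8"
        memory: 16Gi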
Docker registry¶
Confluent for Kubernetes pulls Confluent Docker images from a Docker registry and deploys those on to your Kubernetes cluster.
By default, Confluent for Kubernetes deploys publicly available Docker images hosted on Docker Hub from the confluentinc repositories.
If you choose to use your own Docker registry and repositories, you need to pull the images from the Confluent repositories and upload them to your Docker registry.
See Use Custom Docker Registry for Confluent Platform Using Confluent for Kubernetes for details on using a custom private registry.
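For example, a Schema Registry CR that pulls from a private registry might point spec.image at your own repositories, as in the following sketch; the registry host and tags are hypothetical. Depending on your registry, you may also need to configure image pull credentials, as described in the linked topic.

apiVersion: platform.confluent.io/v1beta1
kind: SchemaRegistry
metadata:
  name: schemaregistry
  namespace: confluent
spec:
  replicas: 2
  image:
    application: registry.example.com/confluentinc/cp-schema-registry:7.8.0    # hypothetical private registry
    init: registry.example.com/confluentinc/confluent-init-container:2.10.0    # hypothetical private registry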
Storage¶
You need to provide dynamic persistent storage for all Confluent Platform components with block-level storage solutions, such as AWS EBS, Azure Disk, GCE Disk, Ceph RBD, and Portworx.
See Configure Storage for Confluent Platform Using Confluent for Kubernetes for details on storage configuration options.
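As an illustration, a Kafka CR can reference a pre-created StorageClass and request a per-broker volume size, as in the following sketch; the class name and capacity are placeholders.

apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-server:7.8.0
    init: confluentinc/confluent-init-container:2.10.0
  dataVolumeCapacity: 100Gi        # placeholder per-broker volume size
  storageClass:
    name: fast-ssd                 # hypothetical pre-created StorageClass backed by block storage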
Kubernetes security¶
With Kubernetes Role-based access control (RBAC) and namespaces, you can deploy Confluent Platform in one of two ways:
- (Recommended) Provide Confluent for Kubernetes with access to provision and manage Confluent Platform resources in one specific namespace.
- Provide Confluent for Kubernetes with access to provision and manage Confluent Platform resources across all namespaces in the Kubernetes cluster.
Both options require configuring Kubernetes role bindings. See Configure Kubernetes RBAC and Custom Resource Definitions for details.
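For the recommended single-namespace option, CFK is typically installed with its namespaced mode enabled through the Helm chart values, as in the following sketch. The key name is an assumption based on the CFK Helm chart; confirm it against the values.yaml of the chart version you install.

namespaced: true      # CFK manages Confluent Platform only in its own namespace (recommended)

Setting this to false instead grants CFK access across all namespaces, which requires the cluster-scoped role bindings described in the linked topic.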
Confluent security¶
Confluent supports the following mechanisms to enforce security:
- Authentication
- Authorization
- Network encryption
- Configuration secrets
For production deployments, Confluent recommends the following security mechanisms:
- Enable one of the following methods for Kafka client authentication:
  - mTLS
  - SASL/PLAIN
  - SASL/PLAIN with LDAP
  For SASL/PLAIN, the identity can come from your LDAP server.
- Enable Confluent Role-Based Access Control (RBAC) for authorization, with user/group identity coming from the LDAP server.
- Enable TLS for network encryption, for both internal traffic between Confluent Platform components and external traffic from clients to Confluent Platform components.
See Production recommended secure setup for a tutorial scenario to configure these security settings.
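The following heavily abridged sketch shows how these recommendations map onto a Kafka CR: user-provided TLS certificates from a Secret, mTLS authentication on the external listener, and RBAC authorization. The Secret name and image tags are placeholders, and a complete RBAC deployment also requires MDS and identity provider configuration that is not shown here; follow the linked tutorial for a working setup.

apiVersion: platform.confluent.io/v1beta1
kind: Kafka
metadata:
  name: kafka
  namespace: confluent
spec:
  replicas: 3
  image:
    application: confluentinc/cp-server:7.8.0
    init: confluentinc/confluent-init-container:2.10.0
  dataVolumeCapacity: 10Gi
  tls:
    secretRef: tls-kafka             # placeholder Secret with user-provided certificates
  listeners:
    external:
      tls:
        enabled: true
      authentication:
        type: mtls                   # or plain / ldap, per the recommendations above
  authorization:
    type: rbac                       # also requires MDS and LDAP/identity provider settings (not shown)
    superUsers:
      - User:kafka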
Networking¶
Confluent Platform components can be accessed by users and client applications that are either:
- In the internal Kubernetes network
- External to the Kubernetes network
The following are the options to externally expose Confluent Platform:
- Load balancers (see the sketch after this list)
  - For Kafka, a Layer 4 load balancer that supports TLS passthrough is required.
  - For other Confluent Platform components with HTTP endpoints, a Layer 4/7 load balancer is required.
- Kubernetes node ports
- Static external access with host-based or port-based routing
- OpenShift routes
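For example, the portion of a Kafka CR that exposes the external listener through Layer 4 load balancers might look like the following sketch; the domain is a placeholder that must resolve to the provisioned load balancers.

spec:
  listeners:
    external:
      externalAccess:
        type: loadBalancer
        loadBalancer:
          domain: platform.example.com   # placeholder DNS domain used to derive bootstrap and broker addresses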
Default ports in Confluent for Kubernetes¶
CFK uses the following default ports for Confluent Platform components. You can override the default ports in the component custom resources.
- 7203: JMX port
- 7777: Jolokia port
- 7778: Prometheus port
- 8081: Schema Registry default port
- 9081: Schema Registry internal listener port
- 8082: Confluent REST Proxy port
- 8083: Connect port
- 8088: ksqlDB default port
- 9088: ksqlDB internal listener port
- 8090: MDS default port
- 9090: MDS internal listener port
- 9021: Control Center port
- 9071: Kafka internal port
- 9072: Kafka replication port
- 9073: Kafka token port
- 9092: Kafka external port
Upgrades and updates¶
CFK provides a declarative API and configuration automation for running Confluent Platform on Kubernetes. You can update configurations and upgrade versions for an existing deployment by applying the updated declarative specs in custom resource files.
However, the following configuration scenarios cannot be enabled or changed for an existing deployment:
- Confluent RBAC
  - You cannot enable Confluent RBAC on an existing cluster.
  - You cannot disable Confluent RBAC on an existing cluster set up with RBAC. As a workaround, you can grant the ClusterAdmin role to a root group containing all users.
- TLS certificates mechanism
  - You cannot change the mechanism for how TLS certificates are provided between auto-generated certificates and user-provided certificates.
- TLS encryption
  - You cannot enable TLS encryption on a TLS-disabled cluster.
- Kafka listener authentication
  - You cannot change the authentication mechanism used for an existing Kafka listener.
- Kafka metrics TLS and authentication configurations
  - You cannot change the TLS and authentication configurations used for Kafka metrics.
- External network access mechanism for Kafka brokers
  - You cannot change the external network access mechanism for Kafka brokers among load balancers, node ports, and static ingress controller-based routing.
- Storage class for persistent storage
  - You cannot change the storage class used to create Persistent Volume Claims for Confluent Platform components.
- Configuration secrets mechanism
  - You cannot change the configuration secrets mechanism between using Kubernetes Secrets and using the directory path in container feature.
Support for Kubernetes ecosystem¶
The Confluent for Kubernetes (CFK) product encapsulates a set of Kubernetes controllers and business logic that automate configuring, deploying, and managing multiple aspects of Confluent Platform on a Kubernetes distribution of your choice.
CFK uses the standard Kubernetes API: it calls the Kubernetes API and expects the API to do what it needs to do. CFK does not manage aspects beyond the Kubernetes API, such as vendor implementations of load balancers (for example, Amazon load balancers) or StorageClasses (for example, Amazon EBS).
The following examples illustrate the support boundaries of CFK.
- For storage, the Kubernetes API implements and provides the APIs for StorageClass, PersistentVolume, and PersistentVolumeClaim objects. When CFK configures and deploys Kafka brokers, it takes a user-provided StorageClass and uses it to create a PersistentVolumeClaim for the Kafka broker storage. CFK does not check or validate what is in the user-provided StorageClass, and it does not check whether the PersistentVolume is created; CFK relies on the storage vendor implementation for that.
- For networking with load balancers, the Kubernetes API implements and provides the APIs for the Service and LoadBalancer types. When CFK configures and deploys Kafka with a load balancer, CFK creates one LoadBalancer-type Service for every Kafka broker and then relies on the LoadBalancer vendor implementation for the actual load balancer instance to be configured and deployed. Amazon ELB, Google Cloud Load Balancing, Azure Load Balancer, and MetalLB are all examples of such LoadBalancer vendor implementations.
Within the described boundary, CFK is tested to ensure that it invokes the Kubernetes API in the right way and creates the correct Kubernetes objects. CFK tests do validate that a LoadBalancer type service is created, with the configurations that the user specified in the Kafka broker custom resource. CFK does not test that a Google load balancer is properly configured and deployed to route traffic to Kafka brokers. Confluent depends on the Google implementation of the load balancer to do the right thing.
With the above points in mind, when you deploy CFK, consider the following guidelines to run a production system efficiently and effectively:
- Identify the architecture you want to use - the Confluent components and the Kubernetes runtime - and validate core deployment and management functions in your environment.
- Develop a runbook and troubleshooting steps for your deployment. Ensure your team is familiar with this runbook before you go to production.
- When you need to get support for issues in your deployment, be prepared to pull in the respective vendors to cover the entire architecture. For example, if there is a networking issue in your CFK deployment, be prepared to pull in the Kubernetes vendor and the vendor for the networking service you are using, along with Confluent.