Prepare Kubernetes Cluster for Confluent Operator¶
This guide describes the tasks to prepare your Kubernetes cluster for Confluent Platform deployment with Confluent Operator. The user performing these tasks will need certain Kubernetes cluster-level permissions.
The examples in this guide use the following assumptions:

- `$VALUES_FILE` refers to the configuration file you set up in Create the global configuration file.

- To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file (`$VALUES_FILE`). However, in your production deployments, use the `--set` or `--set-file` option when applying sensitive data with Helm. For example:

  ```
  helm upgrade --install kafka \
    --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds,dc=test,dc=com" \
    --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
    --set kafka.enabled=true
  ```

- `operator` is the namespace that Confluent Platform is deployed in.

- All commands are executed in the `helm` directory under the directory Confluent Operator was downloaded to.
Create a namespace for Confluent Platform¶
Create a Kubernetes namespace to deploy Confluent Platform into:
kubectl create namespace <namespace-name>
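If you prefer a declarative workflow, the same namespace can be created from a manifest with `kubectl apply`. The following is an equivalent sketch; the namespace name `confluent` is illustrative, and you should substitute the name you intend to use:

```yaml
# Equivalent declarative form of `kubectl create namespace <namespace-name>`;
# the name "confluent" is illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: confluent
```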
Install Custom Resource Definitions (CRDs)¶
By default, Confluent Operator needs Kubernetes cluster-level permissions to automatically install Custom Resource Definitions (CRDs).
To override the default behavior and allow users to install and run Confluent Operator without cluster-level access, pre-install Confluent Operator CRDs as described below.
Download the Confluent Operator bundle as described in Configure Confluent Operator and Confluent Platform.
Install the Confluent Operator CRDs by running the following command:
kubectl apply -f resources/crds
See Configure Confluent Operator and Confluent Platform for the follow-up workflow to configure Confluent Operator to deploy and run Confluent Platform services without cluster-level access. These steps do not require cluster-level access.
If you want Confluent Operator to manage Confluent Platform components across multiple namespaces, but you don’t want the user who installs Confluent Operator to need permissions to manage cluster-level resources, you can create the ClusterRole and ClusterRoleBindings needed by Confluent Operator ahead of time with the following command:
kubectl apply -f resources/rbac
The `subjects` in the ClusterRoleBinding definition must match the values you pass when you install the component using `helm upgrade --install`:

- The ServiceAccount `name` in `subjects` must match the Operator component name specified at install time.
- The `namespace` in `subjects` must match the namespace specified at install time.
For example, given the following ClusterRoleBinding definition:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    component: operator
  name: operator-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: operator          # ----[a]
  namespace: operator     # ----[b]
roleRef:
  kind: ClusterRole
  name: operator-clusterrole
  apiGroup: rbac.authorization.k8s.io
```

The `name` above ([a]) must match the first argument to `helm upgrade --install` below, and the `namespace` above ([b]) must match the `--namespace` argument below:

```
helm upgrade --install \
  operator \                  # ----[a]
  ./confluent-operator \
  --values $VALUES_FILE \
  --namespace operator \      # ----[b]
  --set operator.enabled=true
```

(The `# ----[a]` and `# ----[b]` markers are annotations, not part of the command.)
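For reference, the ServiceAccount that the `subjects` entry of the ClusterRoleBinding points to would look like the following sketch. In a typical install the Operator Helm chart creates this ServiceAccount for you, so you normally do not apply it yourself; it is shown only to make the name/namespace matching concrete:

```yaml
# Sketch only: typically created by the Operator Helm chart.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: operator        # matches the ServiceAccount name in subjects
  namespace: operator   # matches the namespace in subjects
```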
See Configure Confluent Operator and Confluent Platform for the follow-up workflow to configure Confluent Operator to deploy Confluent Platform services. The steps in that workflow do not require any cluster-level Kubernetes permissions.
Set up statically-provisioned disks¶
Starting in Confluent Operator 5.5, you can instruct Operator to use a specific StorageClass for all PersistentVolumes it creates.
This is optional; use it when your Kubernetes cluster is not capable of dynamically provisioning storage and you need to statically provision disks up front.
Configure a StorageClass in Kubernetes for local provisioning. For example:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```
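Assuming the `my-storage-class` name used in this guide, the corresponding entry in your configuration file (`$VALUES_FILE`) is a fragment like the following sketch (surrounding keys omitted):

```yaml
# Sketch of the relevant $VALUES_FILE fragment; other keys omitted.
global:
  storage:
    storageClassName: my-storage-class   # must match metadata.name of the StorageClass
```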
For every Kafka broker instance you expect to create, decide:
- Which worker node you expect to place it on.
- Which directory path on that worker node’s host filesystem you intend to use as the PersistentVolume to be exposed to the broker instance.
Create a PersistentVolume with the desired host path and the hostname label of the desired worker node as described here.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1                              # PersistentVolume name
spec:
  capacity:
    storage: 1Gi                          # storage size
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # reclaim policy
  storageClassName: my-storage-class      # StorageClass name
  local:
    path: /mnt/data/broker-1-data         # host directory path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-172-20-114-199.ec2.internal   # worker node hostname label
```
- `metadata.name`: Choose a name for the PersistentVolume.
- `spec.capacity.storage`: Choose a storage size that is greater than or equal to the storage you’re requesting for each Kafka broker instance.
- `persistentVolumeReclaimPolicy`: Set to `Retain` if you want the data to be retained after you delete the PersistentVolumeClaim that the Operator will eventually create and which Kubernetes will eventually bind to this PersistentVolume. Set to `Delete` if you want this data to be garbage-collected when the PersistentVolumeClaim is deleted; with `persistentVolumeReclaimPolicy: Delete`, your data on the volume is deleted when you delete the pod and, consequently, its persistent volume.
- `storageClassName`: Must match the `global.storage.storageClassName` you put in your configuration file (`$VALUES_FILE`).
- `local.path`: The directory path you want to use on the worker node for the broker as its persistent data volume.
- `nodeAffinity` values: The value of the `kubernetes.io/hostname` label of the worker node you want to host this broker instance. To find this hostname:
```
kubectl get nodes -o 'custom-columns=NAME:metadata.name,HOSTNAME:metadata.labels.kubernetes\.io/hostname'

NAME     HOSTNAME
node-1   ip-172-20-114-199.ec2.internal
```
Repeat steps 2 and 3 for every broker you intend to create. The number of brokers is determined by the `kafka.replicas` field in your configuration file (`$VALUES_FILE`).
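For example, assuming `kafka.replicas: 3`, you would create three PersistentVolumes. A sketch of the second one follows; the path and hostname values are hypothetical, and the manifest differs from `pv-1` only in its name, host path, and node affinity:

```yaml
# Sketch of a second statically provisioned PersistentVolume;
# path and hostname values below are hypothetical.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-storage-class
  local:
    path: /mnt/data/broker-2-data          # hypothetical path for the second broker
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-172-20-115-200.ec2.internal   # hypothetical hostname label
```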