Prepare Kubernetes Cluster for Confluent Operator¶
This guide describes the tasks to prepare your Kubernetes cluster for Confluent Platform deployment with Confluent Operator. The user performing these tasks will need certain Kubernetes cluster-level permissions.
The examples in this guide use the following assumptions:
- $VALUES_FILE refers to the configuration file you set up in Create the global configuration file.
- To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file ($VALUES_FILE). However, in your production deployments, use the --set or --set-file option when applying sensitive data with Helm. For example:

  helm upgrade --install kafka \
    --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds,dc=test,dc=com" \
    --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
    --set kafka.enabled=true

- operator is the namespace that Confluent Platform is deployed in.
- All commands are executed in the helm directory under the directory Confluent Operator was downloaded to.
Create a namespace for Confluent Platform¶
Create a Kubernetes namespace to deploy Confluent Platform into:
kubectl create namespace <namespace-name>
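For example, to create the operator namespace assumed throughout this guide:

  kubectl create namespace operator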
Install Custom Resource Definitions (CRDs)¶
By default, Confluent Operator needs Kubernetes cluster-level permissions to automatically install Custom Resource Definitions (CRDs).
To override the default behavior and allow users to install and run Confluent Operator without cluster-level access, pre-install Confluent Operator CRDs as described below.
Download the Confluent Operator bundle as described in Configure Confluent Operator and Confluent Platform.
Install the Confluent Operator CRDs by running the following command:
kubectl apply -f resources/crds
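To confirm that the CRDs were registered, you can list them. The exact resource names depend on your Operator version; filtering on "confluent", as shown here, is an assumption about how the CRDs are named:

  kubectl get crds | grep confluent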
See Configure Confluent Operator and Confluent Platform for the follow-up workflow to configure Confluent Operator to deploy and run Confluent Platform services; those steps do not require cluster-level access.
Configure ClusterRoleBinding¶
If you want Confluent Operator to manage Confluent Platform components across multiple namespaces, but you don’t want the user who installs Confluent Operator to need permissions to manage cluster-level resources, you can create the ClusterRole and ClusterRoleBindings needed by Confluent Operator ahead of time with the following command:
kubectl apply -f resources/rbac
- The name of the subjects in the ClusterRoleBinding definition must match the Operator component name specified when you install the component using helm upgrade --install [1].
- The namespace of the subjects in the ClusterRoleBinding definition must match the namespace specified when you install Operator using helm upgrade --install [2].
For example, given the following ClusterRoleBinding definition:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    component: operator
  name: operator-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: operator          ----- [1]
  namespace: operator     ----- [2]
roleRef:
  kind: ClusterRole
  name: operator-clusterrole
  apiGroup: rbac.authorization.k8s.io
The operator name [1] above must match the first argument below [a], and the operator namespace [2] above must match the --namespace argument [b] below.
helm upgrade --install \
  operator \                  --------------[a]
  ./confluent-operator \
  --values $VALUES_FILE \
  --namespace operator \      ---[b]
  --set operator.enabled=true
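As an optional sanity check, and assuming you pre-applied resources/rbac as shown above, you can verify that the subject of the pre-created ClusterRoleBinding lines up with a ServiceAccount in the namespace where you installed Operator:

  kubectl get clusterrolebinding operator-clusterrolebinding -o yaml
  kubectl get serviceaccounts --namespace operator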
See Configure Confluent Operator and Confluent Platform for the follow-up workflow to configure Confluent Operator to deploy Confluent Platform services. The steps in that workflow do not require any cluster-level Kubernetes permissions.
Set up statically-provisioned disks¶
Starting in Confluent Operator 5.5, you can instruct Operator to use a specific StorageClass for all PersistentVolumes it creates.
This is an optional feature for Kubernetes clusters that cannot dynamically provision storage and therefore need disks statically provisioned up front.
Configure a StorageClass in Kubernetes for local provisioning. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
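After saving the StorageClass manifest (the file name below is an assumption), apply it:

  kubectl apply -f my-storage-class.yaml

Then reference the class name from global.storage.storageClassName in your configuration file ($VALUES_FILE), for example:

  global:
    storage:
      storageClassName: my-storage-class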
For every Kafka broker instance you expect to create, decide:
- Which worker node you expect to place it on.
- Which directory path on that worker node's host filesystem you intend to use as the PersistentVolume exposed to the broker instance.
Create a PersistentVolume with the desired host path and the hostname label of the desired worker node as described here.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1                                  ----- [1]
spec:
  capacity:
    storage: 1Gi                              ----- [2]
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain       ----- [3]
  storageClassName: my-storage-class          ----- [4]
  local:
    path: /mnt/data/broker-1-data             ----- [5]
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-172-20-114-199.ec2.internal    ----- [6]
[1] Choose a name for the PersistentVolume.
[2] Choose a storage size that is greater than or equal to the storage you’re requesting for each Kafka broker instance.
[3] Choose Retain if you want the data to be retained after you delete the PersistentVolumeClaim that the Operator will eventually create and which Kubernetes will eventually bind to this PersistentVolume. Choose Delete if you want this data to be garbage-collected when the PersistentVolumeClaim is deleted.
    Important: With persistentVolumeReclaimPolicy: Delete, your data on the volume will be deleted when you delete the pod and consequently its persistent volume.
[4] The storageClassName must match the global.storage.storageClassName you put in your configuration file ($VALUES_FILE).
[5] This is the directory path you want to use on the worker node for the broker as its persistent data volume.
[6] This is the value of the kubernetes.io/hostname label of the worker node you want to host this broker instance. To find this hostname:
  kubectl get nodes -o 'custom-columns=NAME:metadata.name,HOSTNAME:metadata.labels.kubernetes\.io/hostname'

  NAME     HOSTNAME
  node-1   ip-172-20-114-199.ec2.internal
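Once the fields above are filled in for a broker, save and apply the PersistentVolume manifest (the file name here is an assumption):

  kubectl apply -f pv-1.yaml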
Repeat steps 2 and 3 for every broker you intend to create. This is determined by the kafka.replicas field in your configuration file ($VALUES_FILE).
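For example, if kafka.replicas is set to 3 in $VALUES_FILE (the value shown is illustrative), you would create three PersistentVolumes, one per broker:

  kafka:
    replicas: 3

You can then confirm that the PersistentVolumes exist and show an Available status before deploying Kafka:

  kubectl get pv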