.. _co-prepare:

========================================
Prepare Kubernetes Cluster for |co-long|
========================================

This guide describes the tasks to prepare your Kubernetes cluster for |cp|
deployment with |co-long|. The user performing these tasks needs certain
Kubernetes cluster-level permissions.

.. include:: includes/co-values-file.rst

.. _co-create-namespace:

Create a namespace for |cp|
---------------------------

Create a Kubernetes namespace to deploy |cp| into:

::

  kubectl create namespace <namespace-name>

.. _co-create-crd:

Install Custom Resource Definitions (CRDs)
------------------------------------------

By default, |co-long| needs Kubernetes cluster-level permissions to
automatically install Custom Resource Definitions (CRDs). To override this
default behavior and allow users to install and run |co-long| without
cluster-level access, pre-install the |co-long| CRDs as described below.

#. Download the |co-long| bundle as described in :ref:`co-configure`.

#. Install the |co-long| CRDs by running the following command:

   ::

     kubectl apply -f resources/crds

See :ref:`co-configure` for the follow-up workflow to configure |co-long| to
deploy and run |cp| services without cluster-level access; those steps do not
require cluster-level access.

.. _co-admin-ClusterRoleBinding:

Configure ClusterRoleBinding
----------------------------

If you want |co-long| to manage |cp| components across multiple namespaces,
but you do not want the user who installs |co-long| to need permissions to
manage cluster-level resources, you can create the ClusterRole and
ClusterRoleBindings that |co-long| requires ahead of time with the following
command:

::

  kubectl apply -f resources/rbac

* The ``name`` of the ``subjects`` in the ClusterRoleBinding definition must
  match the |co| component name specified when you install the component
  using ``helm upgrade --install`` ([1] below).

* The ``namespace`` of the ``subjects`` in the ClusterRoleBinding definition
  must match the namespace specified when you install |co| using
  ``helm upgrade --install`` ([2] below).

For example, given the following ClusterRoleBinding definition:

::

  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    labels:
      component: operator
    name: operator-clusterrolebinding
  subjects:
  - kind: ServiceAccount
    name: operator                       ----- [1]
    namespace: operator                  ----- [2]
  roleRef:
    kind: ClusterRole
    name: operator-clusterrole
    apiGroup: rbac.authorization.k8s.io

The ``operator`` name ([1] above) must match the first argument ([a] below),
and the ``operator`` namespace ([2] above) must match the ``--namespace``
argument ([b] below):

::

  helm upgrade --install \
    operator \                           ----- [a]
    ./confluent-operator \
    --values $VALUES_FILE \
    --namespace operator \               ----- [b]
    --set operator.enabled=true
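After running ``helm upgrade --install``, you can optionally verify that the
pre-created ClusterRoleBinding lines up with the Helm release. The following
is a minimal sketch, assuming the example names used above
(``operator-clusterrolebinding``, with component name and namespace both
``operator``):

::

  # List the subjects of the pre-created ClusterRoleBinding. The name and
  # namespace printed should match [1] and [2] above, which in turn match
  # the Helm arguments [a] and [b].
  kubectl get clusterrolebinding operator-clusterrolebinding \
    -o jsonpath='{range .subjects[*]}{.kind} {.name} {.namespace}{"\n"}{end}'

  # Confirm a matching ServiceAccount exists in the target namespace.
  kubectl get serviceaccount operator --namespace operator

This pattern lets a cluster administrator grant the cluster-scoped RBAC once,
while the user who installs and upgrades |co-long| works only within a
namespace.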
See :ref:`co-configure` for the follow-up workflow to configure |co-long| to
deploy |cp| services. The steps in that workflow do not require any
cluster-level Kubernetes permissions.

.. _co-static-storage:

Set up statically-provisioned disks
-----------------------------------

Starting in |co-long| 5.5, you can instruct |co| to use a specific
StorageClass for all PersistentVolumes it creates. This optional feature is
intended for Kubernetes clusters that cannot dynamically provision storage
and therefore need disks statically provisioned up front.

#. Configure a StorageClass in Kubernetes for local provisioning. For
   example:

   ::

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: my-storage-class
     provisioner: kubernetes.io/no-provisioner
     volumeBindingMode: WaitForFirstConsumer

#. For every |ak| broker instance you expect to create, decide:

   * which worker node you expect to place it on, and

   * which directory path on that worker node's host filesystem you intend
     to use as the PersistentVolume exposed to the broker instance.

#. Create a PersistentVolume with the desired host path and the hostname
   label of the desired worker node, as described in the Kubernetes
   documentation on local volumes:

   ::

     apiVersion: v1
     kind: PersistentVolume
     metadata:
       name: pv-1                                    ----- [1]
     spec:
       capacity:
         storage: 1Gi                                ----- [2]
       volumeMode: Filesystem
       accessModes:
       - ReadWriteOnce
       persistentVolumeReclaimPolicy: Retain         ----- [3]
       storageClassName: my-storage-class            ----- [4]
       local:
         path: /mnt/data/broker-1-data               ----- [5]
       nodeAffinity:
         required:
           nodeSelectorTerms:
           - matchExpressions:
             - key: kubernetes.io/hostname
               operator: In
               values:
               - ip-172-20-114-199.ec2.internal      ----- [6]

   * [1] Choose a name for the PersistentVolume.

   * [2] Choose a storage size that is greater than or equal to the storage
     you are requesting for each |ak| broker instance.

   * [3] Choose ``Retain`` if you want the data retained after you delete
     the PersistentVolumeClaim that |co| will eventually create and that
     Kubernetes will eventually bind to this PersistentVolume. Choose
     ``Delete`` if you want the data garbage-collected when the
     PersistentVolumeClaim is deleted.

     .. important:: With ``persistentVolumeReclaimPolicy: Delete``, the data
        on the volume is deleted when you delete the pod and, consequently,
        its persistent volume.

   * [4] The ``storageClassName`` must match the
     ``global.storage.storageClassName`` you set in your configuration file
     (``$VALUES_FILE``).

   * [5] This is the directory path on the worker node that you want the
     broker to use as its persistent data volume.

   * [6] This is the value of the ``kubernetes.io/hostname`` label of the
     worker node you want to host this broker instance. To find this
     hostname:

     ::

       kubectl get nodes -o 'custom-columns=NAME:metadata.name,HOSTNAME:metadata.labels.kubernetes\.io/hostname'

       NAME     HOSTNAME
       node-1   ip-172-20-114-199.ec2.internal

#. Repeat steps 2 and 3 for every broker you intend to create, as determined
   by the ``kafka.replicas`` field in your configuration file
   (``$VALUES_FILE``). The configuration sketch after these steps shows how
   the settings fit together.
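To summarize how the pieces connect: the StorageClass name is referenced from
your configuration file, and the broker count determines how many
PersistentVolumes you pre-create. The following is an illustrative fragment
of ``$VALUES_FILE``, assuming the example StorageClass above and a
three-broker cluster; your actual file will contain many other settings:

::

  global:
    storage:
      storageClassName: my-storage-class   # must match [4] in each PersistentVolume
  kafka:
    replicas: 3                            # pre-create one PersistentVolume per broker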
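Before deploying |cp|, you can confirm that the pre-created PersistentVolumes
exist and carry the expected StorageClass. For example (the output below is
illustrative):

::

  kubectl get pv

  NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS       REASON   AGE
  pv-1   1Gi        RWO            Retain           Available           my-storage-class            1m

Because the StorageClass uses ``volumeBindingMode: WaitForFirstConsumer``,
the volumes remain ``Available`` until the broker pods are scheduled and
their PersistentVolumeClaims are bound.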