.. _co-deployment:

Deploying |co-long| and |cp|
----------------------------

.. tip:: See :ref:`co-videos` for an introduction to Kubernetes and |co-long|.

The following sections guide you through deploying one cluster containing the following components:

* |co-long| (one instance)
* |zk-full| (three replicas)
* |ak-tm| (three broker replicas)
* |sr-long| (two replicas)
* |kconnect-long| (two replicas)
* |crep-full| (two replicas)
* |c3| (one instance)
* |cksql-full| (two replicas)

The procedure uses the Google Kubernetes Engine (GKE) as the example provider
environment. You can also use this procedure as a guide for deploying |co| and
|cp| in other :ref:`supported provider environments `.

.. include:: includes/upgrade-note-helm-cp.rst

.. include:: includes/openshift-note-helm2.rst

Unless otherwise specified, the deployment procedure shows examples based on the
default configuration parameters in the Helm YAML files provided by Confluent.
This includes default naming conventions for components and namespaces. Do not
use the default YAML parameters when deploying into a production environment;
however, the default parameters allow you to get up and running quickly in a
test environment.

.. tip:: For more information about how to use a shell script to automate
         deployment of |co-long| and |cp|, see the :ref:`co-quickstart`.

General Prerequisites
^^^^^^^^^^^^^^^^^^^^^

- A Kubernetes cluster conforming to one of the :ref:`supported environments `.
- Cluster sizing based on the :ref:`Sizing Recommendations `.
- The provider CLI or `Cloud SDK `_ installed and initialized.
- `kubectl `__ installed and initialized, with the context set. You must also
  have the ``kubeconfig`` file configured for your cluster.
- Access to the |co-long| bundle (Step 1 below).
- **Storage:** `StorageClass-based `_ storage provisioner support. This is the
  default storage class used; other external provisioners can be used. SSD or
  SSD-like disks are required for persistent storage.
- **Security:** TLS certificates for each component are required (if using
  TLS). The example steps use the default SASL/PLAIN security. See
  :ref:`co-security` for information about how to configure additional
  security.
- **DNS:** DNS support on your platform environment is required for external
  access to |cp| components after deployment. After deployment, you create DNS
  entries to enable external access to individual |cp| components. See
  :ref:`co-networking-dns` for additional information. If your organization
  does not allow external access for development testing, see :ref:`co-no-dns`.
- **Kubernetes Load Balancing:**

  * Layer 4 load balancing with passthrough support (terminating at the
    application) is required for |ak| brokers with external access enabled.
  * Layer 7 load balancing can be used for |co| and all other |cp| components.

.. _co-k8s-installation:

Prepare to deploy components
----------------------------

Complete the following steps to prepare for deployment. The procedure uses GCP
and the **Google Kubernetes Engine (GKE)** as the example provider environment.
You can use this procedure as a guide for deploying |co| and |cp| in other
:ref:`supported provider environments `.

.. _co-download-the-bundle:

---------------------------------------------------
Step 1. Download the |co| bundle for |cp| |version|
---------------------------------------------------

.. include:: includes/download-and-helm-step-1-and-2.rst
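Before moving on to configure the provider YAML file, it can help to confirm
that ``kubectl`` points at the intended cluster and that the nodes are ready.
The following is a minimal sanity check; the ``gcloud`` command applies only to
the GKE example environment.

.. sourcecode:: bash

   # Confirm the active context and that all nodes report Ready.
   kubectl config current-context
   kubectl get nodes

   # GKE only (example provider): list clusters and their zones.
   gcloud container clusters list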
.. _co-configure-provider-yaml:

------------------------------------------------
Step 3. Configure the default provider YAML file
------------------------------------------------

The following are the default YAML configuration changes necessary for initial
deployment. You can manually update the YAML configuration after deployment if
necessary. For more information about how to manually update the YAML
configuration, see :ref:`co-update-component-config`.

#. Go to the ``helm/providers`` directory on your local machine.

#. Open the ``gcp.yaml`` file (or the YAML file for another public cloud
   provider).

#. Validate or change your region and zone or zones (if your cluster spans
   :ref:`multiple availability zones `). The example below uses
   ``region: us-central1`` and ``zones: - us-central1-a``.

#. Validate or change your storage provisioner. See `Storage Class
   Provisioners `__ for configuration examples. The example below uses GCE
   persistent disk storage (``gce-pd``) and solid-state drives (``pd-ssd``).

   .. sourcecode:: bash

      global:
        provider:
          name: gcp
          region: us-central1
          kubernetes:
            deployment:
              ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
              ## If kubernetes is deployed in single availability zone then specify appropriate values
              zones:
                - us-central1-a
          storage:
            ## https://kubernetes.io/docs/concepts/storage/storage-classes/#gce
            ##
            provisioner: kubernetes.io/gce-pd
            reclaimPolicy: Delete
            parameters:
              type: pd-ssd

#. :ref:`Validate or change the Docker image registry endpoint `.

#. :ref:`Validate or change the Docker image tags `.

#. Enable load balancing for external access to the |ak| cluster. The domain
   name is the domain name you use (or that you create) for your cloud project
   in the provider environment. See :ref:`co-endpoints` for more information.

   ::

      ## Kafka Cluster
      ##
      kafka:
        name: kafka
        replicas: 3
        resources:
          requests:
            cpu: 200m
            memory: 1Gi
        loadBalancer:
          enabled: true
          domain: ""
        tls:
          enabled: false
          fullchain: |-
          privkey: |-
          cacerts: |-

The deployment example steps use SASL/PLAIN security with TLS disabled. This
level of security can typically be used for testing and development purposes.
For production environments, see :ref:`co-security` for information about how
to set up the component YAML files with TLS enabled.

.. _custom-registry:

Custom Docker registry
^^^^^^^^^^^^^^^^^^^^^^

The default |cp| image registry is Docker Hub. If you are using a private image
registry, specify the registry endpoint and the container image name in the
provider YAML file. The following example shows the default public image
registry for container images. If you are installing from images downloaded
from Docker Hub and then moved to a separate image registry, you must enter
your image registry's FQDN. If the registry you use requires basic
authentication, change the credential parameter to ``required: true`` and
enter a username and password.

.. sourcecode:: bash

   ## Docker registry endpoint where Confluent Images are available.
   ##
   registry:
     fqdn: docker.io
     credential:
       required: false
       username:
       password:
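For example, the following sketch shows how the same block might look for a
hypothetical private registry that requires basic authentication. The FQDN and
credential values here are placeholders, not real values.

.. sourcecode:: bash

   ## Hypothetical private registry requiring basic authentication.
   ##
   registry:
     fqdn: registry.example.com
     credential:
       required: true
       username: <registry-user>
       password: <registry-password>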
.. _custom-image-tags:

Custom Docker images
^^^^^^^^^^^^^^^^^^^^

|cp| currently supports two sets of Docker images: one set that uses Debian as
the base image and another that uses the Red Hat Universal Base Image (UBI) as
the base image. By default, Debian is used as the base image for all
components.

To use the Red Hat UBI-based images:

#. In the ``IMAGES`` file that comes with the |co| bundle, locate the |cp|
   components that you want to use the UBI-based images for, and get the image
   tag values of the components. For example, in the entry below, the tag
   value for |kconnect| is ``5.4.0.0``.

   ::

      connect: 5.4.0.0

#. Append ``-ubi8`` to the tag values and specify the values for the image
   ``tag`` in the corresponding component sections of the provider YAML file.

   Using the following example of a provider YAML file, the ``initContainers``
   of all the components will use the UBI-based images, and the main containers
   of |co|, |ak|, and |kconnect| will use the UBI-based images.

   .. sourcecode:: bash

      global:
        initContainer:
          image:
            tag: 5.4.0.0-ubi8

      operator:
        image:
          tag: 0.275.0-ubi8

      kafka:
        image:
          tag: 5.4.0.0-ubi8

      connect:
        image:
          tag: 5.4.0.0-ubi8

.. _co-component-install-order:

Deploy components
-----------------

Complete the following steps to deploy |co-long| and |cp|.

.. important::

   * Components **must be** installed in the order provided in the following
     steps.
   * **Wait** for all component services to start before installing the next
     component.

------------------------------------------------
Step 1. Create a Namespace and Install |co-long|
------------------------------------------------

#. Create a Kubernetes namespace:

   ::

      kubectl create namespace <namespace-name>

   For example:

   ::

      kubectl create namespace operator

#. Go to the ``helm`` directory on your local machine.

#. Enter the following command (using the example ``operator`` namespace):

   .. sourcecode:: bash

      helm install \
        operator \
        ./confluent-operator -f \
        ./providers/gcp.yaml \
        --namespace operator \
        --set operator.enabled=true

   You should see output similar to the following example:

   .. sourcecode:: bash

      NAME: operator
      LAST DEPLOYED: Tue Jan 7 17:47:04 2020
      NAMESPACE: operator
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
      NOTES:
      The Confluent Operator

      The Confluent Operator interacts with kubernetes API to create statefulsets resources. The
      Confluent Operator runs three controllers, two component specific controllers for kubernetes
      by providing components specific Custom Resource Definition (CRD) (for Kafka and Zookeeper)
      and one controller for creating other statefulsets resources.

      1. Validate if Confluent Operator is running.

         kubectl get pods -n operator | grep cc-operator

      2. Validate if custom resource definition (CRD) is created.

         kubectl get crd | grep confluent

--------------------
Step 2. Install |zk|
--------------------

#. Enter the following command to verify that |co| is running:

   .. sourcecode:: bash

      kubectl get pods -n operator

#. After verifying that |co| is running, enter the following command:

   .. sourcecode:: bash

      helm install \
        zookeeper \
        ./confluent-operator -f \
        ./providers/gcp.yaml \
        --namespace operator \
        --set zookeeper.enabled=true

   You should see output similar to the following example:

   .. sourcecode:: bash

      NAME: zookeeper
      LAST DEPLOYED: Wed Jan 8 14:51:26 2020
      NAMESPACE: operator
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
      NOTES:
      Zookeeper Cluster Deployment

      Zookeeper cluster is deployed through CR.

      1. Validate if Zookeeper Custom Resource (CR) is created

         kubectl get zookeeper -n operator | grep zookeeper

      2. Check the status/events of CR: zookeeper

         kubectl describe zookeeper zookeeper -n operator

      3. Check if Zookeeper cluster is Ready

         kubectl get zookeeper zookeeper -ojson -n operator

         kubectl get zookeeper zookeeper -ojsonpath='{.status.phase}' -n operator

      4. Update/Upgrade Zookeeper Cluster

         The upgrade can be done either through the helm upgrade or by editing the CR directly as below;

         kubectl edit zookeeper zookeeper -n operator
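The NOTES above show how to query the |zk| custom resource status. If you are
scripting the deployment, you can poll that status until the cluster is ready
before installing |ak|. The following is a minimal sketch; the ``RUNNING``
phase string is an assumption, so verify the actual value reported by your
|co| version.

.. sourcecode:: bash

   # Poll the ZooKeeper CR until its status phase reports RUNNING
   # (assumed value; check `kubectl describe zookeeper zookeeper -n operator`).
   until [ "$(kubectl get zookeeper zookeeper -ojsonpath='{.status.phase}' -n operator)" = "RUNNING" ]; do
     echo "Waiting for ZooKeeper cluster..."
     sleep 10
   done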
----------------------------
Step 3. Install |ak| brokers
----------------------------

#. Enter the following command to verify that all |zk| services are running:

   .. sourcecode:: bash

      kubectl get pods -n operator

#. After verifying that all |zk| services are running, enter the following
   command:

   .. sourcecode:: bash

      helm install \
        kafka \
        ./confluent-operator -f \
        ./providers/gcp.yaml \
        --namespace operator \
        --set kafka.enabled=true

   You should see output similar to the following example:

   .. sourcecode:: bash

      NAME: kafka
      LAST DEPLOYED: Wed Jan 8 15:07:46 2020
      NAMESPACE: operator
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
      NOTES:
      Kafka Cluster Deployment

      Kafka Cluster is deployed to kubernetes through CR Object

      1. Validate if Kafka Custom Resource (CR) is created

         kubectl get kafka -n operator | grep kafka

      2. Check the status/events of CR: kafka

         kubectl describe kafka kafka -n operator

      3. Check if Kafka cluster is Ready

         kubectl get kafka kafka -ojson -n operator

         kubectl get kafka kafka -ojsonpath='{.status.phase}' -n operator

      ... output omitted

--------------------
Step 4. Install |sr|
--------------------

#. Enter the following command to verify that all |ak| services are running:

   .. sourcecode:: bash

      kubectl get pods -n operator

#. After verifying that all |ak| services are running, enter the following
   command:

   .. sourcecode:: bash

      helm install \
        schemaregistry \
        ./confluent-operator -f \
        ./providers/gcp.yaml \
        --namespace operator \
        --set schemaregistry.enabled=true

   You should see output similar to the following example:

   .. sourcecode:: bash

      NAME: schemaregistry
      LAST DEPLOYED: Thu Jan 9 15:51:21 2020
      NAMESPACE: operator
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
      NOTES:
      Schema Registry is deployed through PSC. Configure Schema Registry through REST Endpoint

      1. Validate if schema registry cluster is running

         kubectl get pods -n operator | grep schemaregistry

      2. Access Internal REST Endpoint : http://schemaregistry:8081 (Inside kubernetes) OR http://localhost:8081 (Inside Pod)

         More information about schema registry REST API can be found here,

         https://docs.confluent.io/current/schema-registry/docs/api.html

-------------------------------
Step 5. Install |kconnect-long|
-------------------------------

#. Enter the following command to verify that all |sr| services are running:

   .. sourcecode:: bash

      kubectl get pods -n operator

#. After verifying that all |sr| services are running, enter the following
   command:

   .. sourcecode:: bash

      helm install \
        connectors \
        ./confluent-operator -f \
        ./providers/gcp.yaml \
        --namespace operator \
        --set connect.enabled=true

---------------------------
Step 6. Install |crep-full|
---------------------------

#. Enter the following command to verify that all |kconnect| services are
   running:

   .. sourcecode:: bash

      kubectl get pods -n operator

#. After verifying that all |kconnect| services are running, enter the
   following command:

   .. sourcecode:: bash

      helm install \
        replicator \
        ./confluent-operator -f \
        ./providers/gcp.yaml \
        --namespace operator \
        --set replicator.enabled=true

--------------------
Step 7. Install |c3|
--------------------

#. Enter the following command to verify that all |crep| services are running:

   .. sourcecode:: bash

      kubectl get pods -n operator

#. After verifying that all |crep| services are running, enter the following
   command:

   .. sourcecode:: bash

      helm install \
        controlcenter \
        ./confluent-operator -f \
        ./providers/gcp.yaml \
        --namespace operator \
        --set controlcenter.enabled=true
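Each of the preceding steps verifies readiness by inspecting ``kubectl get
pods`` output manually. As an alternative when scripting the install sequence,
``kubectl wait`` can block until every pod in the namespace reports Ready. The
timeout below is an example value.

.. sourcecode:: bash

   # Block until all pods in the namespace are Ready (example 10-minute timeout).
   kubectl wait --for=condition=Ready pod --all -n operator --timeout=600s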
----------------------------
Step 8. Install |cksql-full|
----------------------------

#. Enter the following command to verify that all |c3| services are running:

   .. sourcecode:: bash

      kubectl get pods -n operator

#. After verifying that all |c3| services are running, enter the following
   command:

   .. sourcecode:: bash

      helm install \
        ksql \
        ./confluent-operator -f \
        ./providers/gcp.yaml \
        --namespace operator \
        --set ksql.enabled=true

All components should now be successfully installed and running:

.. sourcecode:: bash

   kubectl get pods -n operator

   NAME                           READY   STATUS    RESTARTS   AGE
   cc-operator-54f54c694d-qjb7w   1/1     Running   0          4h31m
   connectors-0                   1/1     Running   0          4h15m
   connectors-1                   1/1     Running   0          4h15m
   controlcenter-0                1/1     Running   0          4h18m
   kafka-0                        1/1     Running   0          4h20m
   kafka-1                        1/1     Running   0          4h20m
   kafka-2                        1/1     Running   0          4h20m
   ksql-0                         1/1     Running   0          21m
   ksql-1                         1/1     Running   0          21m
   replicator-0                   1/1     Running   0          4h18m
   replicator-1                   1/1     Running   0          4h18m
   schemaregistry-0               1/1     Running   0          4h18m
   schemaregistry-1               1/1     Running   0          4h18m
   zookeeper-0                    1/1     Running   0          4h30m
   zookeeper-1                    1/1     Running   0          4h30m
   zookeeper-2                    1/1     Running   0          4h30m

.. note:: **Deleting components:** If you are installing components for
          testing purposes, you may want to delete components soon after
          deploying them. See :ref:`Deleting a Cluster ` for instructions;
          otherwise, continue to the next steps to test the deployment.

.. _co-test-deployment:

---------------------------
Step 9. Test the deployment
---------------------------

Complete the following steps to test and validate your deployment.

.. _co-internal-validation:

Internal validation
^^^^^^^^^^^^^^^^^^^

.. include:: includes/internal-validation.rst

.. _co-external-validation:

External validation
^^^^^^^^^^^^^^^^^^^

.. include:: includes/external-validation.rst

.. note:: **Deleting components:** If you are installing components for
          testing purposes, you may want to delete components soon after
          deploying them. See :ref:`Deleting a Cluster ` for instructions;
          otherwise, continue to the next step to configure external access
          to |c3|.

The following step is an optional activity you can complete to gain additional
knowledge of how upgrades work and how to access your environment using
|c3-short|.

.. _co-upgrade-c3-example:

------------------------------------------
Step 10. Configure external access to |c3|
------------------------------------------

Complete the following steps to perform a rolling upgrade of your
configuration, enable external access, and launch |c3-short|.

Upgrade the configuration
^^^^^^^^^^^^^^^^^^^^^^^^^

#. Enter the following Helm upgrade command to add an external load balancer
   for the |c3-short| instance. Replace ``<domain>`` with your platform
   environment domain. This upgrades your cluster configuration and adds a
   bootstrap load balancer for |c3-short|.

   .. include:: includes/upgrade-example.rst

#. Get the |c3-short| bootstrap load balancer public IP (see the sketch after
   this list). The example uses the namespace *operator*; change this to the
   namespace for your cluster.

   .. sourcecode:: bash

      kubectl get services -n operator

#. Add the bootstrap load balancer DNS entry and public IP to the DNS table
   for your platform environment.
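As referenced in the list above, the following sketch shows one way to pick out
the public IP once the load balancer is provisioned. The service name
``controlcenter-bootstrap-lb`` is an assumption; verify the actual name in the
``kubectl get services`` output.

.. sourcecode:: bash

   # Hypothetical service name; confirm it with `kubectl get services -n operator`.
   kubectl get service controlcenter-bootstrap-lb -n operator \
     -ojsonpath='{.status.loadBalancer.ingress[0].ip}'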
Launch |c3|
^^^^^^^^^^^

Complete the following steps to launch |c3| in your cluster.

#. Start a new terminal session on your local machine. Enter the following
   command to set up port forwarding to the default |c3| endpoint. The example
   uses the namespace *operator*.

   ::

      kubectl port-forward svc/controlcenter 9021:9021 -n operator

#. Connect to |c3-short| using http://localhost:9021/.

#. Log in to |c3-short|. Basic authorization credentials are configured in the
   default provider YAML file. In the example below, the user ID is **admin**
   and the password is **Developer1**.

   .. sourcecode:: bash

      ##
      ## C3 authentication
      ##
      auth:
        basic:
          enabled: true
          ##
          ## map with key as user and value as password and role
          property:
            admin: Developer1,Administrators
            disallowed: no_access

.. important:: Basic authentication to |c3| can be used for development
               testing. Typically, this authentication type is disabled for
               production environments and LDAP is configured for user access.
               LDAP parameters are provided in the |c3-short| YAML file.

.. note:: **Deleting components:** If you are installing components for
          testing purposes, you may want to delete components soon after
          deploying them. See :ref:`Deleting a Cluster ` for instructions.

.. include:: includes/kubernetes-demos-tip.rst
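If you deployed the components only for testing, you can remove the Helm
releases when you are finished. The following sketch removes them in reverse
install order using the example release names and namespace from this
procedure; see :ref:`Deleting a Cluster ` for the authoritative steps.

.. sourcecode:: bash

   # Remove the example releases in reverse install order.
   for release in ksql controlcenter replicator connectors schemaregistry kafka zookeeper operator; do
     helm uninstall "$release" --namespace operator
   done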