.. _co-deployment:

Deploying |co-long| and |cp|
----------------------------

The following sections guide you through deploying one cluster containing the following components:

* |co-long| (one instance with one instance of the Confluent Manager service)
* |zk-full| (three replicas)
* |ak-tm| (three broker replicas)
* |sr-long| (two replicas)
* |kconnect-long| (two replicas)
* |crep-full| (two replicas)
* |c3| (one instance)
* |cksql-full| (two replicas)

Unless otherwise specified, the deployment procedure shows examples based on the default configuration parameters in the Helm YAML files provided by Confluent, including the default naming conventions for components and namespaces. Do not use the default YAML parameters when deploying into a production environment; however, the defaults allow you to get up and running quickly in a test environment.

The procedure uses Google Kubernetes Engine (GKE) as the example provider environment, with additional configuration information for OpenShift. This procedure can also be used as a guide for deploying |co| and |cp| in other :ref:`supported provider environments `.

.. note:: You should have experience using Linux and experience configuring Kubernetes or OpenShift before using these instructions. Detailed information about installing and configuring Kubernetes and OpenShift is not provided.

General Prerequisites

- A Kubernetes cluster conforming to one of the :ref:`supported environments `.
- Cluster size based on the :ref:`Sizing Recommendations `.
- The provider CLI or `Cloud SDK `_ is installed and initialized.
- `kubectl `_ or OpenShift `oc `_ is installed and initialized, with the context set for your running cluster. You also need the ``kubeconfig`` file configured for your cluster.
- Access to the Confluent images on Docker Hub or the `Red Hat Container Catalog `_.
- Access to the Confluent Helm bundle (Step 1 below).
- `Kubernetes role-based access control (RBAC) `_ or `OpenShift RBAC `_ support enabled for your cluster.
- **Storage:** `StorageClass-based `_ storage provisioner support. This is the default storage class used; other external provisioners can also be used. SSD or SSD-like disks are required for persistent storage.
- **Security:** TLS certificates for each component are required (if using TLS). Default SASL/PLAIN security is used in the example steps. See :ref:`co-security` for information about how to configure additional security.
- **DNS:** DNS support on your platform environment is required for external access to |cp| components after deployment. After deployment, you create DNS entries to enable external access to individual |cp| components. See :ref:`co-networking-dns` for additional information. If your organization does not allow external access for development testing, see :ref:`co-no-dns`.

Provider-Specific Prerequisites

- **Kubernetes Load Balancing:**

  - Layer 4 load balancing with passthrough support (terminating at the application) is required for |ak| brokers with external access enabled.
  - Layer 7 load balancing can be used for |co| and all other |cp| components.

- **OpenShift Load Balancing:**

  - Route-based load balancing. See :ref:`co-openshift-routes`.

.. _co-k8s-installation:

----------------------------
Prepare to deploy components
----------------------------

Complete the following steps to prepare for deployment.
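Before you begin, you can optionally confirm that your CLI is pointed at the intended cluster and that you have the cluster-level permissions this procedure requires. The following is a minimal sketch using standard ``kubectl`` commands; adapt it to ``oc`` on OpenShift.

.. sourcecode:: bash

   # Confirm the active context and that the cluster is reachable
   kubectl config current-context
   kubectl get nodes

   # Confirm that RBAC lets you create the cluster-level bindings used later for Tiller
   kubectl auth can-i create clusterrolebindings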
.. _co-installation-helm-bundle:

-----------------------------------------------
Step 1: Download the Helm bundle for |cp| 5.3.1
-----------------------------------------------

Confluent offers a bundle of Helm charts, templates, and scripts used to deploy |co-long| and |cp| components to your Kubernetes cluster. The bundle is extracted to a directory named ``helm``. Run the ``helm install`` commands from within this directory.

Download and extract the following bundle: :litwithvars:`https://platform-ops-bin.s3-us-west-1.amazonaws.com/operator/confluent-operator-|co_release|-for-confluent-platform-|release|.tar.gz`.

---------------------------------
Step 2: Install Helm 2 and Tiller
---------------------------------

Use the following steps to install Helm 2 and Tiller.

Prerequisites:

- `kubectl `_ or OpenShift `oc `__ installed and initialized, with the context set for your running cluster.
- A ``kubeconfig`` file configured for your cluster.

Kubernetes cluster
^^^^^^^^^^^^^^^^^^

Complete the following steps to install and configure Helm 2 and Tiller. Note that a service account is created and an RBAC `ClusterRoleBinding `__ is used to give Tiller **cluster-admin** authorization to perform all tasks required to automatically deploy and provision |co-long| and |cp|. See `User-facing roles `__ for additional information about the ``cluster-admin`` role created for this purpose.

.. note:: The default Tiller installation can be used for development testing. See :ref:`co-tiller-security` for more information about securing Helm and Tiller for production environments.

#. Install Helm using the `Helm version 2 installation instructions `__.

   .. important:: The latest 2.9.x version is recommended. Helm 3 is not supported.

#. Create an RBAC service account.

   .. sourcecode:: bash

      kubectl create serviceaccount tiller -n kube-system

#. Bind the ``cluster-admin`` role to the service account.

   .. sourcecode:: bash

      kubectl create clusterrolebinding tiller \
        --clusterrole=cluster-admin \
        --serviceaccount kube-system:tiller

#. Install Tiller with the service account enabled.

   .. sourcecode:: bash

      helm init --service-account tiller

OpenShift cluster
^^^^^^^^^^^^^^^^^

Background information and complete instructions for installing Helm and Tiller are provided in an OpenShift blog article. Complete the `Getting started with Helm on OpenShift `__ instructions.
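Before continuing, you can confirm that Tiller is running and that the Helm client can reach it. This is an optional sanity check, not part of the official installation steps; the label selector below assumes the default Tiller deployment created by ``helm init``.

.. sourcecode:: bash

   # The tiller-deploy pod should be Running in the kube-system namespace
   kubectl get pods -n kube-system -l app=helm,name=tiller

   # With Helm 2, this prints both the client and the server (Tiller) versions
   helm version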
.. _co-configure-provider-yaml:

-------------------------------------
Step 3: Modify the provider YAML file
-------------------------------------

The following steps walk you through the configuration changes necessary for the default provider YAML file. Refer to :ref:`OpenShift Deployment ` if you are using the private provider YAML file for OpenShift.

.. note:: The following are the default YAML configuration changes necessary for initial deployment. You can manually update the YAML configuration after deployment if necessary. See :ref:`co-update-component-config` for details.

Kubernetes deployment
^^^^^^^^^^^^^^^^^^^^^

Make the following configuration changes to your provider YAML file.

#. Go to the ``helm/providers`` directory on your local machine.

#. Open the ``gcp.yaml`` file (or other public cloud provider YAML file).

#. Validate or change your region and zone or zones (if your cluster spans :ref:`multiple availability zones `). The example below uses ``region: us-central1`` and ``zones: - us-central1-a``.

#. Validate or change your storage provisioner. See `Storage Class Provisioners `__ for configuration examples. The example below uses GCE persistent disk storage (``gce-pd``) and solid-state drives (``pd-ssd``).

   .. sourcecode:: bash

      global:
        provider:
          name: gcp
          region: us-central1
          kubernetes:
            deployment:
              ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
              ## If kubernetes is deployed in single availability zone then specify appropriate values
              zones:
                - us-central1-a
          storage:
            ## https://kubernetes.io/docs/concepts/storage/storage-classes/#gce
            ##
            provisioner: kubernetes.io/gce-pd
            reclaimPolicy: Delete
            parameters:
              type: pd-ssd

#. Enter the image registry endpoint. The following example shows the default public image registry endpoint. If you are installing from images downloaded and stored locally or located elsewhere, you need to enter your unique endpoint. If the endpoint you use requires basic authentication, you need to change the credential parameter to ``required: true`` and enter a username and password.

   .. sourcecode:: bash

      ## Docker registry endpoint where Confluent Images are available.
      ##
      registry:
        fqdn: docker.io
        credential:
          required: false
          username:
          password:

#. Enable load balancing for external access to the |ak| cluster. The domain name is the domain name you use (or that you create) for your cloud project in the provider environment. See :ref:`co-endpoints` for more information.

   ::

      ## Kafka Cluster
      ##
      kafka:
        name: kafka
        replicas: 3
        resources:
          requests:
            cpu: 200m
            memory: 1Gi
        loadBalancer:
          enabled: true
          domain: ""
        tls:
          enabled: false
          fullchain: |-
          privkey: |-
          cacerts: |-

The deployment example steps use SASL/PLAIN security with TLS disabled. This level of security can typically be used for testing and development purposes. For production environments, see :ref:`co-security` for information about how to set up the component YAML files with TLS enabled.
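If you are not sure which values to use for ``zones`` or the storage provisioner, you can read them from the cluster itself before editing the provider YAML file. This is an optional sketch; the zone label key shown below is the one commonly applied by Kubernetes releases of this era, and newer clusters may use ``topology.kubernetes.io/zone`` instead.

.. sourcecode:: bash

   # Show each node with its availability zone label
   kubectl get nodes -L failure-domain.beta.kubernetes.io/zone

   # List the storage classes and provisioners already defined in the cluster
   kubectl get storageclass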
.. _co-prepare-openshift-yaml:

OpenShift deployment
^^^^^^^^^^^^^^^^^^^^

Review and make changes based on the following information prior to beginning an OpenShift deployment.

#. Go to the ``helm/providers`` directory on your local machine.

#. Open the ``private.yaml`` file.

**Security Context Constraints**

Create a Security Context Constraints (SCC) configuration and UID for the pods and containers. The following lists the options you can use to create the UID.

#. OpenShift generates a random UID for containers (**recommended**).

#. You configure a custom UID.

#. OpenShift runs containers using default restricted SCC mode.

#. OpenShift runs containers with a root UID (not recommended).

If you choose to use the recommended random UID option, add the following to the ``private.yaml`` configuration:

::

   pod:
     randomUID: true

If you want to use another option, see the comments and instructions located in the downloaded helm bundle at ``helm/scripts/openshift``.

**Additional private.yaml file changes**

Make the following additional changes to the ``private.yaml`` file.

#. Validate or change your region and zone or zones (if your cluster spans multiple zones). The example below shows a private cloud running in ``ohio-dc-1``. Depending on your environment, this can be any name you want to associate with your OpenShift implementation. Note that a meaningful ``zone`` name should be entered because it is used as the prefix for the storage-class resources.

#. Validate or change your storage provisioner. The example shows an entry for using AWS Elastic Block Store (EBS) storage.

   .. sourcecode:: bash

      global:
        provider:
          name: private
          region: ohio-dc
          kubernetes:
            deployment:
              ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
              ## If kubernetes is deployed in single availability zone then specify appropriate values
              zones:
                - ohio-dc-1
          storage:
            ## more information can be found here
            ## https://kubernetes.io/docs/concepts/storage/storage-classes/
            provisioner: kubernetes.io/aws-ebs
            reclaimPolicy: Delete
            parameters:
              type: io1
              iopsPerGB: "10"
              fsType: ext4

#. Enter the image registry endpoint and save the YAML file. The following example shows the default public image registry endpoint. If you are installing from images pulled from the `Red Hat Container Catalog `_ or downloaded and stored locally, you need to enter a unique endpoint. Docker registry credentials are only required if the registry you are pointing to uses basic authentication. If it does, change the credential parameter to ``required: true`` and enter a username and password. Generally, for OpenShift, all nodes are preconfigured to pull images through TLS mutual authentication and you only need to supply the ``fqdn``.

   .. sourcecode:: bash

      ## Docker registry endpoint where Confluent Images are available.
      ##
      registry:
        fqdn: docker.io
        credential:
          required: false
          username:
          password:

.. _co-openshift-enable-load-balancing:

4. Enable load balancing for external access to the |ak| cluster. For OpenShift, you need to add a parameter to the provider YAML file to enable route-based load balancing. The parameter you add is ``type: route``, as shown in the example below.

   ::

      ## Kafka Cluster
      ##
      kafka:
        name: kafka
        replicas: 3
        resources:
          requests:
            cpu: 200m
            memory: 1Gi
        loadBalancer:
          enabled: true
          type: route
          domain: ""
        tls:
          enabled: false
          fullchain: |-
          privkey: |-
          cacerts: |-

.. important:: For OpenShift route-based load balancing, the |ak| cluster must have TLS enabled (TLS, SASL_SSL, or SSL). SASL/PLAIN security is not supported in OpenShift deployments. See :ref:`co-security` for information about how to set up the component YAML files with TLS enabled.

.. _co-component-install-order:

-----------------
Deploy components
-----------------

Complete the following steps to deploy |co-long| and |cp|. Components **must be** installed in the order provided in the following steps.

.. note::

   - Use ``oc`` instead of ``kubectl`` commands for OpenShift. Otherwise, the commands are identical. For example, instead of ``kubectl get pods -n operator`` you use ``oc get pods -n operator``.
   - The steps show |gcp-long| (|gcp|) as the provider. Replace ``gcp.yaml`` with your provider YAML file if you are not using |gcp|.

-------------------------
Step 1: Install |co-long|
-------------------------

#. Go to the ``helm`` directory on your local machine.

#. Enter the following command:

   .. sourcecode:: bash

      helm install \
        -f ./providers/gcp.yaml \
        --name operator \
        --namespace operator \
        --set operator.enabled=true \
        ./confluent-operator

   You should see output similar to the following example:

..
sourcecode:: bash NAME: operator LAST DEPLOYED: Mon May 20 13:31:14 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/ClusterRoleBinding NAME AGE operator-cc-manager 1s operator-cc-operator 1s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE cc-manager-85749c4b5-fsvvm 0/1 ContainerCreating 0 1s cc-operator-54f54c694d-fpncm 0/1 ContainerCreating 0 1s ==> v1/Secret NAME TYPE DATA AGE confluent-docker-registry kubernetes.io/dockerconfigjson 1 1s ==> v1/ServiceAccount NAME SECRETS AGE cc-manager 1 1s cc-operator 1 1s ==> v1beta1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE cc-manager 0/1 1 0 1s cc-operator 0/1 1 0 1s #. Patch the Service Account so it can pull |cp| images. .. sourcecode:: bash kubectl -n operator patch serviceaccount default -p '{"imagePullSecrets": [{"name": "confluent-docker-registry" }]}' If you are using a private or local registry with basic authentication, use the following command: .. sourcecode:: bash kubectl -n operator patch serviceaccount default -p '{"imagePullSecrets": [{"name": "" }]}' #. Complete all additional steps in the displayed installation output under **NOTES**. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see :ref:`co-useful-helm-commands`. .. note:: If you are using a different image registry (and basic authentication) you need to change ``"confluent-docker-registry"`` to ``""`` at Step 1 under **NOTES**. The following is sample output. Note that *Confluent Manager* is a service installed as part of |co-long|. :: NOTES: The Confluent Operator The Confluent Operator interacts with kubernetes API to create StatefulSets resources 1. Give the `default` Service Account access to pull images from "confluent-docker-registry". kubectl -n operator patch serviceaccount default -p '{"imagePullSecrets": [{"name": "confluent-docker-registry" }]}' 2. Validate if Confluent Operator is running. kubectl get pods -n operator 3. Validate if custom resource definition (CRD) is created. kubectl get crd -n operator The Confluent Manager The Confluent Manager brings the component (Confluent Services) specific controllers for kubernetes by providing components specific Custom Resource Definition (CRD). It runs two controllers Kafka/Zookeeper. 1. Validate if Confluent Manager is running. kubectl get pods -n operator | grep "manager" 2. Validate if custom resource definition (CRD) is created for Kafka/Zookeeper kubectl get crd -n operator 3. Validate if Kafka command is available kubectl get kafka -n operator OR kubectl get broker -n operator 4. Validate if Zookeeper command is available kubectl get zookeeper -n operator OR kubectl get zk -n operator Note: For Openshift Platform replace kubectl commands with 'oc' commands -------------------- Step 2: Install |zk| -------------------- #. Enter the following command to verify that the |co| and Manager services are running: .. sourcecode:: bash kubectl get pods -n operator You should see output similar to the following example: .. sourcecode:: bash NAME READY STATUS RESTARTS AGE cc-manager-85749c4b5-fsvvm 1/1 Running 1 25m cc-operator-54f54c694d-fpncm 1/1 Running 0 25m #. Enter the following command: .. sourcecode:: bash helm install \ -f ./providers/gcp.yaml \ --name zookeeper \ --namespace operator \ --set zookeeper.enabled=true \ ./confluent-operator You should see output similar to the following example: .. 
sourcecode:: bash NAME: zookeeper LAST DEPLOYED: Mon May 20 14:01:31 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/Secret NAME TYPE DATA AGE zookeeper-apikeys Opaque 1 1s zookeeper-sslcerts Opaque 0 1s ==> v1/StorageClass NAME PROVISIONER AGE zookeeper-standard-ssd-us-central1-a kubernetes.io/gce-pd 1s ==> v1alpha1/ZookeeperCluster NAME AGE zookeeper 1s ==> v1beta1/PodDisruptionBudget NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE zookeeper N/A 1 0 1s #. Complete all additional steps in the displayed installation output under **NOTES**. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see :ref:`co-useful-helm-commands`. ---------------------------- Step 3: Install |ak| brokers ---------------------------- #. Enter the following command to verify that the |zk| services are running: .. sourcecode:: bash kubectl get pods -n operator You should see output similar to the following example: .. sourcecode:: bash NAME READY STATUS RESTARTS AGE cc-manager-85749c4b5-fsvvm 1/1 Running 1 40m cc-operator-54f54c694d-fpncm 1/1 Running 0 40m zookeeper-0 1/1 Running 0 10m zookeeper-1 1/1 Running 0 10m zookeeper-2 1/1 Running 0 10m #. Enter the following command: .. sourcecode:: bash helm install \ -f ./providers/gcp.yaml \ --name kafka \ --namespace operator \ --set kafka.enabled=true \ ./confluent-operator You should see output similar to the following example: .. sourcecode:: bash NAME: kafka LAST DEPLOYED: Mon May 20 14:13:14 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/Secret NAME TYPE DATA AGE kafka-apikeys Opaque 1 1s kafka-sslcerts Opaque 0 1s ==> v1/StorageClass NAME PROVISIONER AGE kafka-standard-ssd-us-central1-a kubernetes.io/gce-pd 1s ==> v1alpha1/KafkaCluster NAME AGE kafka 1s #. Complete all additional steps in the displayed installation output under **NOTES**. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see :ref:`co-useful-helm-commands`. -------------------- Step 4: Install |sr| -------------------- #. Enter the following command to verify that the |ak| broker services are running: .. sourcecode:: bash kubectl get pods -n operator You should see output similar to the following example: .. sourcecode:: bash NAME READY STATUS RESTARTS AGE cc-manager-85749c4b5-fsvvm 1/1 Running 1 68m cc-operator-54f54c694d-fpncm 1/1 Running 0 68m kafka-0 1/1 Running 0 26m kafka-1 1/1 Running 0 26m kafka-2 1/1 Running 0 26m zookeeper-0 1/1 Running 0 38m zookeeper-1 1/1 Running 0 38m zookeeper-2 1/1 Running 0 38m #. Enter the following command: .. sourcecode:: bash helm install \ -f ./providers/gcp.yaml \ --name schemaregistry \ --namespace operator \ --set schemaregistry.enabled=true \ ./confluent-operator .. note:: If you want to :ref:`bin pack ` this component or any of the remaining |cp| components, add ``--set disableHostPort=true`` to the installation command. This is **not recommended** for production deployments. Note that you should not bin pack |ak| or |zk|. You should see output similar to the following example: .. sourcecode:: bash NAME: schemaregistry LAST DEPLOYED: Mon May 20 14:39:26 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE schemaregistry 0s ==> v1/Secret NAME TYPE DATA AGE schemaregistry-apikeys Opaque 1 0s schemaregistry-sslcerts Opaque 0 0s #. 
Complete all additional steps in the displayed installation output under **NOTES**. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see :ref:`co-useful-helm-commands`. ------------------------------- Step 5: Install |kconnect-long| ------------------------------- #. Enter the following command to verify that the |sr| services are running: .. sourcecode:: bash kubectl get pods -n operator You should see output similar to the following example: .. sourcecode:: bash NAME READY STATUS RESTARTS AGE cc-manager-85749c4b5-fsvvm 1/1 Running 1 74m cc-operator-54f54c694d-fpncm 1/1 Running 0 74m kafka-0 1/1 Running 0 32m kafka-1 1/1 Running 0 32m kafka-2 1/1 Running 0 32m schemaregistry-0 1/1 Running 0 6m4s schemaregistry-1 1/1 Running 0 6m4s zookeeper-0 1/1 Running 0 43m zookeeper-1 1/1 Running 0 43m zookeeper-2 1/1 Running 0 43m #. Enter the following command: .. sourcecode:: bash helm install \ -f ./providers/gcp.yaml \ --name connectors \ --namespace operator \ --set connect.enabled=true \ ./confluent-operator You should see output similar to the following example: .. sourcecode:: bash NAME: connect LAST DEPLOYED: Mon May 20 14:47:23 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE connectors 1s ==> v1/Secret NAME TYPE DATA AGE connectors-apikeys Opaque 1 1s connectors-sslcerts Opaque 0 1s #. Complete all additional steps in the displayed installation output under **NOTES**. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see :ref:`co-useful-helm-commands`. --------------------------- Step 6. Install |crep-full| --------------------------- #. Enter the following command to verify that the |kconnect| services are running: .. sourcecode:: bash kubectl get pods -n operator You should see output similar to the following example: .. sourcecode:: bash NAME READY STATUS RESTARTS AGE cc-manager-85749c4b5-fsvvm 1/1 Running 1 81m cc-operator-54f54c694d-fpncm 1/1 Running 0 81m connectors-0 1/1 Running 0 5m49s connectors-1 1/1 Running 0 5m49s kafka-0 1/1 Running 0 39m kafka-1 1/1 Running 0 39m kafka-2 1/1 Running 0 39m schemaregistry-0 1/1 Running 0 13m schemaregistry-1 1/1 Running 0 13m zookeeper-0 1/1 Running 0 51m zookeeper-1 1/1 Running 0 51m zookeeper-2 1/1 Running 0 51m #. Enter the following command: .. sourcecode:: bash helm install \ -f ./providers/gcp.yaml \ --name replicator \ --namespace operator \ --set replicator.enabled=true \ ./confluent-operator You should see output similar to the following example: .. sourcecode:: bash LAST DEPLOYED: Mon May 20 14:54:38 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE replicator 1s ==> v1/Secret NAME TYPE DATA AGE replicator-apikeys Opaque 1 1s replicator-sslcerts Opaque 0 1s #. Complete all additional steps in the displayed installation output under **NOTES**. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see :ref:`co-useful-helm-commands`. -------------------- Step 7: Install |c3| -------------------- #. Enter the following command to verify that the |crep| services are running: .. sourcecode:: bash kubectl get pods -n operator You should see output similar to the following example: .. 
sourcecode:: bash NAME READY STATUS RESTARTS AGE cc-manager-85749c4b5-fsvvm 1/1 Running 1 85m cc-operator-54f54c694d-fpncm 1/1 Running 0 85m connectors-0 1/1 Running 0 9m25s connectors-1 1/1 Running 0 9m25s kafka-0 1/1 Running 0 43m kafka-1 1/1 Running 0 43m kafka-2 1/1 Running 0 43m replicator-0 1/1 Running 0 2m10s replicator-1 1/1 Running 0 2m10s schemaregistry-0 1/1 Running 0 17m schemaregistry-1 1/1 Running 0 17m zookeeper-0 1/1 Running 0 55m zookeeper-1 1/1 Running 0 55m zookeeper-2 1/1 Running 0 55m #. Enter the following command: .. sourcecode:: bash helm install \ -f ./providers/gcp.yaml \ --name controlcenter \ --namespace operator \ --set controlcenter.enabled=true \ ./confluent-operator You should see output similar to the following example: .. sourcecode:: bash NAME: controlcenter LAST DEPLOYED: Mon May 20 14:58:23 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE controlcenter 1s ==> v1/Secret NAME TYPE DATA AGE controlcenter-apikeys Opaque 1 1s controlcenter-sslcerts Opaque 0 1s ==> v1/StorageClass NAME PROVISIONER AGE controlcenter-standard-ssd-us-central1-a kubernetes.io/gce-pd 1s #. Complete all additional steps in the displayed installation output under **NOTES**. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see :ref:`co-useful-helm-commands`. ---------------------------- Step 8: Install |cksql-full| ---------------------------- #. Enter the following command to verify that the |c3| services are running: .. sourcecode:: bash kubectl get pods -n operator #. Enter the following command: .. sourcecode:: bash helm install \ -f ./providers/gcp.yaml \ --name ksql \ --namespace operator \ --set ksql.enabled=true \ ./confluent-operator You should see output similar to the following example: .. sourcecode:: bash NAME: ksql LAST DEPLOYED: Tue May 21 14:53:10 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE ksql 0s ==> v1/Secret NAME TYPE DATA AGE ksql-apikeys Opaque 1 0s ksql-sslcerts Opaque 0 0s ==> v1/StorageClass NAME PROVISIONER AGE ksql-standard-ssd-us-central1-a kubernetes.io/gce-pd 0s #. Complete all additional steps in the displayed installation output under **NOTES**. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see :ref:`co-useful-helm-commands`. All components should be successfully installed and running. .. sourcecode:: bash kubectl get pods -n operator NAME READY STATUS RESTARTS AGE cc-manager-85749c4b5-8p58q 1/1 Running 0 4h31m cc-operator-54f54c694d-qjb7w 1/1 Running 0 4h31m connectors-0 1/1 Running 0 4h15m connectors-1 1/1 Running 0 4h15m controlcenter-0 1/1 Running 0 4h18m kafka-0 1/1 Running 0 4h20m kafka-1 1/1 Running 0 4h20m kafka-2 1/1 Running 0 4h20m ksql-0 1/1 Running 0 21m ksql-1 1/1 Running 0 21m replicator-0 1/1 Running 0 4h18m replicator-1 1/1 Running 0 4h18m schemaregistry-0 1/1 Running 0 4h18m schemaregistry-1 1/1 Running 0 4h18m zookeeper-0 1/1 Running 0 4h30m zookeeper-1 1/1 Running 0 4h30m zookeeper-2 1/1 Running 0 4h30m .. note:: **Deleting components:** If you are installing components for testing purposes, you may want to delete components soon after deploying them. See :ref:`Deleting a Cluster ` for instructions; otherwise, continue to next steps to test the deployment. .. 
_co-test-deployment:

---------------------------
Step 9: Test the deployment
---------------------------

Complete the following steps to test and validate your deployment.

.. note:: Use ``oc`` instead of ``kubectl`` commands for OpenShift. Otherwise, the commands are identical. For example, instead of ``kubectl get pods -n operator`` you would use ``oc get pods -n operator``.

.. _co-internal-validation:

Internal validation
^^^^^^^^^^^^^^^^^^^

.. include:: includes/internal-validation.rst

.. _co-external-validation:

External validation
^^^^^^^^^^^^^^^^^^^

.. include:: includes/external-validation.rst

.. note:: **Deleting components:** If you are installing components for testing purposes, you may want to delete components soon after deploying them. See :ref:`Deleting a Cluster ` for instructions; otherwise, continue to the next step to configure external access to |c3|.

The following step is an optional activity you can complete to gain additional knowledge of how upgrades work and how to access your environment using |c3-short|.

.. _co-upgrade-c3-example:

------------------------------------------
Step 10: Configure external access to |c3|
------------------------------------------

Complete the following steps to perform a rolling upgrade of your configuration, enable external access, and launch |c3-short|.

Upgrade the configuration
^^^^^^^^^^^^^^^^^^^^^^^^^

#. Enter the following Helm upgrade command to add an external load balancer for the |c3-short| instance. Replace ```` with your platform environment domain. This upgrades your cluster configuration and adds a bootstrap load balancer for |c3-short|.

   .. include:: includes/upgrade-example.rst

#. Get the |c3-short| bootstrap load balancer public IP. In the example, namespace *operator* is used. Change this to the namespace for your cluster.

   .. sourcecode:: bash

      kubectl get services -n operator

#. Add the bootstrap load balancer DNS entry and public IP to the DNS table for your platform environment.

Launch |c3|
^^^^^^^^^^^

Complete the following steps to launch |c3| in your cluster.

#. Start a new terminal session on your local machine. Enter the following command to set up port forwarding to the default |c3| endpoint. In the example, namespace *operator* is used.

   ::

      kubectl port-forward svc/controlcenter 9021:9021 -n operator

#. Connect to |c3-short| at http://localhost:9021/.

#. Log in to |c3-short|. Basic authentication credentials are configured in the default ```` file. In the example below, the user ID is **admin** and the password is **Developer1**.

   .. sourcecode:: bash

      ##
      ## C3 authentication
      ##
      auth:
        basic:
          enabled: true
          ##
          ## map with key as user and value as password and role
          property:
            admin: Developer1,Administrators
            disallowed: no_access

.. important:: Basic authentication to |c3| can be used for development testing. Typically, this authentication type is disabled for production environments and LDAP is configured for user access. LDAP parameters are provided in the |c3-short| YAML file.

.. note:: **Deleting components:** If you are installing components for testing purposes, you may want to delete components soon after deploying them. See :ref:`Deleting a Cluster ` for instructions.

.. include:: includes/kubernetes-demos-tip.rst
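If you deployed this cluster only for testing, the following sketch shows one way to remove the Helm releases created in this procedure, using the Helm 2 CLI and the default release names used above. Refer to the *Deleting a Cluster* instructions referenced in the notes for the complete, authoritative steps.

.. sourcecode:: bash

   # Remove releases in roughly the reverse of the installation order
   helm delete --purge ksql
   helm delete --purge controlcenter
   helm delete --purge replicator
   helm delete --purge connectors
   helm delete --purge schemaregistry
   helm delete --purge kafka
   helm delete --purge zookeeper
   helm delete --purge operator

   # Persistent volume claims created by the StatefulSets are not removed automatically;
   # list them so you can decide what to clean up
   kubectl get pvc -n operator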