Deploying Confluent Operator and Confluent Platform¶
The following sections guide you through deploying one cluster containing the following components:
- Confluent Operator (one instance with one instance of the Confluent Manager service)
- Apache ZooKeeper™ (three replicas)
- Apache Kafka® (three broker replicas)
- Confluent Schema Registry (two replicas)
- Kafka Connect (two replicas)
- Confluent Replicator (two replicas)
- Confluent Control Center (one instance)
- Confluent KSQL (two replicas)
Unless specified, the deployment procedure shows examples based on the default configuration parameters in the Helm YAML files provided by Confluent. This includes default naming conventions for components and namespaces. Note that you should not use the default YAML parameters for deploying into a production environment. However, using the default parameters allows you to quickly get up and running in a test environment.
The procedure uses Google Kubernetes Engine (GKE) as the example provider environment, with additional configuration information for OpenShift. You can also use this procedure as a guide for deploying Operator and Confluent Platform in other supported provider environments.
Note
You should have experience using Linux and experience configuring Kubernetes or OpenShift before using these instructions. Detailed information about installing and configuring Kubernetes and OpenShift is not provided.
- General Prerequisites
- A Kubernetes cluster conforming to one of the supported environments.
- Cluster size based on the Sizing Recommendations.
- The provider CLI or Cloud SDK is installed and initialized.
- kubectl or OpenShift oc is installed and initialized, with the context set for your running cluster. You also need the kubeconfig file configured for your cluster.
- Access to the Confluent images on Docker Hub or the Red Hat Container Catalog.
- Access to the Confluent Helm bundle (Step 1 below).
- Kubernetes role-based access control (RBAC) or OpenShift RBAC support enabled for your cluster.
- Storage: StorageClass-based storage provisioner support (the default storage class is used in these steps). Other external provisioners can be used. SSD or SSD-like disks are required for persistent storage.
- Security: TLS certificates for each component required (if using TLS). Default SASL/PLAIN security is used in the example steps. See Configuring security for information about how to configure additional security.
- DNS: DNS support on your platform environment is required for external access to Confluent Platform components after deployment. After deployment, you create DNS entries to enable external access to individual Confluent Platform components. See Configuring external load balancers for additional information. If your organization does not allow external access for development testing, see No-DNS development access.
- Provider-Specific Prerequisites
- Kubernetes Load Balancing:
- Layer 4 load balancing with passthrough support (terminating at the application) is required for Kafka brokers with external access enabled.
- Layer 7 load balancing can be used for Operator and all other Confluent Platform components.
- OpenShift Load Balancing:
- Route-based load balancing. See OpenShift networking.
Prepare to deploy components¶
Complete the following steps to prepare for deployment.
Step 1: Download the Helm bundle for Confluent Platform 5.3.1¶
Confluent offers a bundle of Helm charts, templates, and scripts used to deploy Confluent Operator and Confluent Platform components to your Kubernetes cluster. Note that the bundle is extracted to a directory named helm. You should run helm install commands from within this directory.
Download and extract the following bundle:
https://platform-ops-bin.s3-us-west-1.amazonaws.com/operator/confluent-operator-1.3.1-for-confluent-platform-5.3.1.tar.gz
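For example, you can fetch and unpack the bundle from a terminal as follows; any HTTP client works, and this assumes the archive unpacks into the helm directory described above:
# download the Operator bundle
curl -O https://platform-ops-bin.s3-us-west-1.amazonaws.com/operator/confluent-operator-1.3.1-for-confluent-platform-5.3.1.tar.gz
# extract it and change to the helm directory
tar -xzf confluent-operator-1.3.1-for-confluent-platform-5.3.1.tar.gz
cd helm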
Step 2: Install Helm 2 and Tiller¶
Use the following steps to install Helm 2 and Tiller.
Kubernetes cluster¶
Complete the following steps to install and configure Helm 2 and Tiller.
Note that a service account is created and RBAC ClusterRoleBinding is used to give Tiller cluster-admin authorization to perform all tasks required to automatically deploy and provision Confluent Operator and Confluent Platform. See User-facing roles for additional information about the cluster-admin
role created for this purpose.
Note
The default Tiller installation can be used for development testing. See Helm and Tiller security for more information about securing Helm and Tiller for production environments.
Install Helm using the Helm version 2 installation instructions.
Important
The latest 2.9.x version is recommended. Helm 3 is not supported.
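For reference, one way to put a specific Helm 2 client on a Linux machine is to download the release tarball directly. The URL pattern and version below are illustrative, so confirm them against the Helm 2 installation instructions:
# download and unpack an illustrative Helm 2.9.x client release (verify the exact URL and version first)
curl -fsSL https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz | tar xz
# place the client on your PATH
sudo mv linux-amd64/helm /usr/local/bin/helm
# confirm the client version
helm version --client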
Create an RBAC service account.
kubectl create serviceaccount tiller -n kube-system
Bind the cluster-admin role to the service account.
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount kube-system:tiller
Install Tiller with the service account enabled.
helm init --service-account tiller
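Optionally, confirm that Tiller started and that the client and server can communicate before moving on. The Tiller pod name is generated and will differ in your cluster:
# check that the Tiller pod is running
kubectl get pods -n kube-system | grep tiller
# verify that the client and server versions match
helm version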
OpenShift cluster¶
Background information and complete instructions for installing Helm and Tiller are provided in an OpenShift blog article. Complete the Getting started with Helm on OpenShift instructions.
Step 3: Modify the provider YAML file¶
The following sections step you through the configuration changes necessary for the default provider YAML file. Refer to OpenShift Deployment if you are using the private provider YAML file for OpenShift.
Note
The following are the default YAML configuration changes necessary for initial deployment. You can manually update the YAML configuration after deployment if necessary. See Modifying Component Configurations for details.
Kubernetes deployment¶
Make the following configuration changes to your provider YAML file.
Go to the helm/providers directory on your local machine.
Open the gcp.yaml file (or other public cloud provider YAML file).
Validate or change your region and zone or zones (if your cluster spans multiple availability zones). The example below uses region: us-central1 and zones: - us-central1-a.
Validate or change your storage provisioner. See Storage Class Provisioners for configuration examples. The example below uses GCE persistent disk storage (gce-pd) and solid-state drives (pd-ssd).
global:
  provider:
    name: gcp
    region: us-central1
    kubernetes:
      deployment:
        ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
        ## If kubernetes is deployed in single availability zone then specify appropriate values
        zones:
          - us-central1-a
    storage:
      ## https://kubernetes.io/docs/concepts/storage/storage-classes/#gce
      ##
      provisioner: kubernetes.io/gce-pd
      reclaimPolicy: Delete
      parameters:
        type: pd-ssd
Enter the image registry endpoint.
The following example shows the default public image registry endpoint. If you are installing from images downloaded and stored locally or located elsewhere, you need to enter your unique endpoint. If the endpoint you use requires basic authentication, you need to change the credential parameter to required: true and enter a username and password (see the sketch at the end of this section).
## Docker registry endpoint where Confluent Images are available.
##
registry:
  fqdn: docker.io
  credential:
    required: false
    username:
    password:
Enable load balancing for external access to the Kafka cluster. The domain name is the domain name you use (or that you create) for your cloud project in the provider environment. See Configuring the network for more information.
## Kafka Cluster
##
kafka:
  name: kafka
  replicas: 3
  resources:
    requests:
      cpu: 200m
      memory: 1Gi
  loadBalancer:
    enabled: true
    domain: "<provider-domain>"
  tls:
    enabled: false
    fullchain: |-
    privkey: |-
    cacerts: |-
The deployment example steps use SASL/PLAIN security with TLS disabled. This level of security can typically be used for testing and development purposes. For production environments, see Configuring security for information about how to set up the component YAML files with TLS enabled.
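As mentioned above, a registry block that points at a private registry with basic authentication might look like the following sketch; the endpoint and credential placeholders are illustrative and not values required by this guide:
## Example: private registry with basic authentication (illustrative values)
registry:
  fqdn: registry.example.com
  credential:
    required: true
    username: <registry-username>
    password: <registry-password>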
OpenShift deployment¶
Review and make changes based on the following information prior to beginning an OpenShift deployment.
- Go to the helm/providers directory on your local machine.
- Open the private.yaml file.
Security Context Constraints
Create a Security Context Constraints (SCC) configuration and UID for the pods and containers. The following lists the options you can use to create the UID.
- OpenShift generates a random UID for containers (recommended).
- You configure a custom UID.
- OpenShift runs containers using default restricted SCC mode.
- OpenShift runs containers with a root UID (not recommended).
If you choose to use the recommended random UID option, add the following to the
private.yaml
configuration:
pod:
randomUID: true
If you want to use another option, see the comments and instructions located in the downloaded Helm bundle at helm/scripts/openshift.
Additional private.yaml file changes
Make the following additional changes to the private.yaml file.
Validate or change your region and zone or zones (if your cluster spans multiple zones). The example below shows a private cloud running in ohio-dc-1. Depending on your environment, this can be whatever name you want associated with your OpenShift implementation. Note that a meaningful zone name should be entered because it is used as the prefix for the storage-class resources.
Validate or change your storage provisioner. The example shows an entry for using AWS Elastic Block Store (EBS) storage.
global:
  provider:
    name: private
    region: ohio-dc
    kubernetes:
      deployment:
        ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
        ## If kubernetes is deployed in single availability zone then specify appropriate values
        zones:
          - ohio-dc-1
    storage:
      ## more information can be found here
      ## https://kubernetes.io/docs/concepts/storage/storage-classes/
      provisioner: kubernetes.io/aws-ebs
      reclaimPolicy: Delete
      parameters:
        type: io1
        iopsPerGB: "10"
        fsType: ext4
Enter the image registry endpoint and save the YAML file.
The following example shows the default public image registry endpoint. If you are installing from images pulled from the Red Hat Container Catalog or downloaded and stored locally, you need to enter a unique endpoint.
Docker registry credentials are only required if the registry you are pointing to uses basic authentication. If so, change the credential parameter to required: true and enter a username and password. Generally, for OpenShift, all nodes are preconfigured to pull images through TLS mutual authentication, so you only need to supply the fqdn.
## Docker registry endpoint where Confluent Images are available.
##
registry:
  fqdn: docker.io
  credential:
    required: false
    username:
    password:
Enable load balancing for external access to the Kafka cluster. For OpenShift, you need to add a parameter to the provider YAML file to enable route-based load balancing. The parameter you add is type: route, as shown in the example below.
## Kafka Cluster
##
kafka:
  name: kafka
  replicas: 3
  resources:
    requests:
      cpu: 200m
      memory: 1Gi
  loadBalancer:
    enabled: true
    type: route
    domain: "<OpenShiftDomain>"
  tls:
    enabled: false
    fullchain: |-
    privkey: |-
    cacerts: |-
Important
For OpenShift route-based load balancing, the Kafka cluster must have TLS enabled (either TLS, SASL_SSL, or SSL). SASL/PLAIN security is not supported in OpenShift deployments. See Configuring security for information about how to set up the component YAML files with TLS enabled.
Deploy components¶
Complete the following steps to deploy Confluent Operator and Confluent Platform. Components must be installed in the order provided in the following steps.
Note
- Use oc instead of kubectl commands for OpenShift. Otherwise, the commands are identical. For example, instead of kubectl get pods -n operator you use oc get pods -n operator.
- The steps show the Google Cloud Platform (GCP) as the provider. Replace gcp.yaml with your provider YAML file if not using GCP.
Step 1: Install Confluent Operator¶
Go to the helm directory on your local machine.
Enter the following command:
helm install \
  -f ./providers/gcp.yaml \
  --name operator \
  --namespace operator \
  --set operator.enabled=true \
  ./confluent-operator
You should see output similar to the following example:
NAME: operator LAST DEPLOYED: Mon May 20 13:31:14 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/ClusterRoleBinding NAME AGE operator-cc-manager 1s operator-cc-operator 1s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE cc-manager-85749c4b5-fsvvm 0/1 ContainerCreating 0 1s cc-operator-54f54c694d-fpncm 0/1 ContainerCreating 0 1s ==> v1/Secret NAME TYPE DATA AGE confluent-docker-registry kubernetes.io/dockerconfigjson 1 1s ==> v1/ServiceAccount NAME SECRETS AGE cc-manager 1 1s cc-operator 1 1s ==> v1beta1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE cc-manager 0/1 1 0 1s cc-operator 0/1 1 0 1s
Patch the Service Account so it can pull Confluent Platform images.
kubectl -n operator patch serviceaccount default -p '{"imagePullSecrets": [{"name": "confluent-docker-registry" }]}'
If you are using a private or local registry with basic authentication, use the following command:
kubectl -n operator patch serviceaccount default -p '{"imagePullSecrets": [{"name": "<your registry name here>" }]}'
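To confirm the patch was applied, you can optionally inspect the default service account; the output should list your registry secret under imagePullSecrets:
kubectl -n operator get serviceaccount default -o yaml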
Complete all additional steps in the displayed installation output under NOTES. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see Helm commands.
Note
If you are using a different image registry (and basic authentication), you need to change "confluent-docker-registry" to "<your-image-registry>" at Step 1 under NOTES.
The following is sample output. Note that Confluent Manager is a service installed as part of Confluent Operator.
NOTES: The Confluent Operator The Confluent Operator interacts with kubernetes API to create StatefulSets resources 1. Give the `default` Service Account access to pull images from "confluent-docker-registry". kubectl -n operator patch serviceaccount default -p '{"imagePullSecrets": [{"name": "confluent-docker-registry" }]}' 2. Validate if Confluent Operator is running. kubectl get pods -n operator 3. Validate if custom resource definition (CRD) is created. kubectl get crd -n operator The Confluent Manager The Confluent Manager brings the component (Confluent Services) specific controllers for kubernetes by providing components specific Custom Resource Definition (CRD). It runs two controllers Kafka/Zookeeper. 1. Validate if Confluent Manager is running. kubectl get pods -n operator | grep "manager" 2. Validate if custom resource definition (CRD) is created for Kafka/Zookeeper kubectl get crd -n operator 3. Validate if Kafka command is available kubectl get kafka -n operator OR kubectl get broker -n operator 4. Validate if Zookeeper command is available kubectl get zookeeper -n operator OR kubectl get zk -n operator Note: For Openshift Platform replace kubectl commands with 'oc' commands
Step 2: Install ZooKeeper¶
Enter the following command to verify that the Operator and Manager services are running:
kubectl get pods -n operator
You should see output similar to the following example:
NAME                           READY   STATUS    RESTARTS   AGE
cc-manager-85749c4b5-fsvvm     1/1     Running   1          25m
cc-operator-54f54c694d-fpncm   1/1     Running   0          25m
Enter the following command:
helm install \
  -f ./providers/gcp.yaml \
  --name zookeeper \
  --namespace operator \
  --set zookeeper.enabled=true \
  ./confluent-operator
You should see output similar to the following example:
NAME: zookeeper LAST DEPLOYED: Mon May 20 14:01:31 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/Secret NAME TYPE DATA AGE zookeeper-apikeys Opaque 1 1s zookeeper-sslcerts Opaque 0 1s ==> v1/StorageClass NAME PROVISIONER AGE zookeeper-standard-ssd-us-central1-a kubernetes.io/gce-pd 1s ==> v1alpha1/ZookeeperCluster NAME AGE zookeeper 1s ==> v1beta1/PodDisruptionBudget NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE zookeeper N/A 1 0 1s
Complete all additional steps in the displayed installation output under NOTES. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see Helm commands.
Step 3: Install Kafka brokers¶
Enter the following command to verify that the ZooKeeper services are running:
kubectl get pods -n operator
You should see output similar to the following example:
NAME                           READY   STATUS    RESTARTS   AGE
cc-manager-85749c4b5-fsvvm     1/1     Running   1          40m
cc-operator-54f54c694d-fpncm   1/1     Running   0          40m
zookeeper-0                    1/1     Running   0          10m
zookeeper-1                    1/1     Running   0          10m
zookeeper-2                    1/1     Running   0          10m
Enter the following command:
helm install \
  -f ./providers/gcp.yaml \
  --name kafka \
  --namespace operator \
  --set kafka.enabled=true \
  ./confluent-operator
You should see output similar to the following example:
NAME: kafka LAST DEPLOYED: Mon May 20 14:13:14 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/Secret NAME TYPE DATA AGE kafka-apikeys Opaque 1 1s kafka-sslcerts Opaque 0 1s ==> v1/StorageClass NAME PROVISIONER AGE kafka-standard-ssd-us-central1-a kubernetes.io/gce-pd 1s ==> v1alpha1/KafkaCluster NAME AGE kafka 1s
Complete all additional steps in the displayed installation output under NOTES. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see Helm commands.
Step 4: Install Schema Registry¶
Enter the following command to verify that the Kafka broker services are running:
kubectl get pods -n operator
You should see output similar to the following example:
NAME                           READY   STATUS    RESTARTS   AGE
cc-manager-85749c4b5-fsvvm     1/1     Running   1          68m
cc-operator-54f54c694d-fpncm   1/1     Running   0          68m
kafka-0                        1/1     Running   0          26m
kafka-1                        1/1     Running   0          26m
kafka-2                        1/1     Running   0          26m
zookeeper-0                    1/1     Running   0          38m
zookeeper-1                    1/1     Running   0          38m
zookeeper-2                    1/1     Running   0          38m
Enter the following command:
helm install \
  -f ./providers/gcp.yaml \
  --name schemaregistry \
  --namespace operator \
  --set schemaregistry.enabled=true \
  ./confluent-operator
Note
If you want to bin pack this component or any of the remaining Confluent Platform components, add --set disableHostPort=true to the installation command. This is not recommended for production deployments. Note that you should not bin pack Kafka or ZooKeeper.
You should see output similar to the following example:
NAME: schemaregistry LAST DEPLOYED: Mon May 20 14:39:26 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE schemaregistry 0s ==> v1/Secret NAME TYPE DATA AGE schemaregistry-apikeys Opaque 1 0s schemaregistry-sslcerts Opaque 0 0s
Complete all additional steps in the displayed installation output under NOTES. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see Helm commands.
Step 5: Install Kafka Connect¶
Enter the following command to verify that the Schema Registry services are running:
kubectl get pods -n operator
You should see output similar to the following example:
NAME                           READY   STATUS    RESTARTS   AGE
cc-manager-85749c4b5-fsvvm     1/1     Running   1          74m
cc-operator-54f54c694d-fpncm   1/1     Running   0          74m
kafka-0                        1/1     Running   0          32m
kafka-1                        1/1     Running   0          32m
kafka-2                        1/1     Running   0          32m
schemaregistry-0               1/1     Running   0          6m4s
schemaregistry-1               1/1     Running   0          6m4s
zookeeper-0                    1/1     Running   0          43m
zookeeper-1                    1/1     Running   0          43m
zookeeper-2                    1/1     Running   0          43m
Enter the following command:
helm install \
  -f ./providers/gcp.yaml \
  --name connectors \
  --namespace operator \
  --set connect.enabled=true \
  ./confluent-operator
You should see output similar to the following example:
NAME: connect LAST DEPLOYED: Mon May 20 14:47:23 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE connectors 1s ==> v1/Secret NAME TYPE DATA AGE connectors-apikeys Opaque 1 1s connectors-sslcerts Opaque 0 1s
Complete all additional steps in the displayed installation output under NOTES. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see Helm commands.
Step 6: Install Confluent Replicator¶
Enter the following command to verify that the Connect services are running:
kubectl get pods -n operator
You should see output similar to the following example:
NAME                           READY   STATUS    RESTARTS   AGE
cc-manager-85749c4b5-fsvvm     1/1     Running   1          81m
cc-operator-54f54c694d-fpncm   1/1     Running   0          81m
connectors-0                   1/1     Running   0          5m49s
connectors-1                   1/1     Running   0          5m49s
kafka-0                        1/1     Running   0          39m
kafka-1                        1/1     Running   0          39m
kafka-2                        1/1     Running   0          39m
schemaregistry-0               1/1     Running   0          13m
schemaregistry-1               1/1     Running   0          13m
zookeeper-0                    1/1     Running   0          51m
zookeeper-1                    1/1     Running   0          51m
zookeeper-2                    1/1     Running   0          51m
Enter the following command:
helm install \
  -f ./providers/gcp.yaml \
  --name replicator \
  --namespace operator \
  --set replicator.enabled=true \
  ./confluent-operator
You should see output similar to the following example:
LAST DEPLOYED: Mon May 20 14:54:38 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE replicator 1s ==> v1/Secret NAME TYPE DATA AGE replicator-apikeys Opaque 1 1s replicator-sslcerts Opaque 0 1s
Complete all additional steps in the displayed installation output under NOTES. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see Helm commands.
Step 7: Install Confluent Control Center¶
Enter the following command to verify that the Replicator services are running:
kubectl get pods -n operator
You should see output similar to the following example:
NAME                           READY   STATUS    RESTARTS   AGE
cc-manager-85749c4b5-fsvvm     1/1     Running   1          85m
cc-operator-54f54c694d-fpncm   1/1     Running   0          85m
connectors-0                   1/1     Running   0          9m25s
connectors-1                   1/1     Running   0          9m25s
kafka-0                        1/1     Running   0          43m
kafka-1                        1/1     Running   0          43m
kafka-2                        1/1     Running   0          43m
replicator-0                   1/1     Running   0          2m10s
replicator-1                   1/1     Running   0          2m10s
schemaregistry-0               1/1     Running   0          17m
schemaregistry-1               1/1     Running   0          17m
zookeeper-0                    1/1     Running   0          55m
zookeeper-1                    1/1     Running   0          55m
zookeeper-2                    1/1     Running   0          55m
Enter the following command:
helm install \
  -f ./providers/gcp.yaml \
  --name controlcenter \
  --namespace operator \
  --set controlcenter.enabled=true \
  ./confluent-operator
You should see output similar to the following example:
NAME: controlcenter LAST DEPLOYED: Mon May 20 14:58:23 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE controlcenter 1s ==> v1/Secret NAME TYPE DATA AGE controlcenter-apikeys Opaque 1 1s controlcenter-sslcerts Opaque 0 1s ==> v1/StorageClass NAME PROVISIONER AGE controlcenter-standard-ssd-us-central1-a kubernetes.io/gce-pd 1s
Complete all additional steps in the displayed installation output under NOTES. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see Helm commands.
Step 8: Install Confluent KSQL¶
Enter the following command to verify that the Confluent Control Center services are running:
kubectl get pods -n operator
Enter the following command:
helm install \
  -f ./providers/gcp.yaml \
  --name ksql \
  --namespace operator \
  --set ksql.enabled=true \
  ./confluent-operator
You should see output similar to the following example:
NAME: ksql LAST DEPLOYED: Tue May 21 14:53:10 2019 NAMESPACE: operator STATUS: DEPLOYED RESOURCES: ==> v1/PhysicalStatefulCluster NAME AGE ksql 0s ==> v1/Secret NAME TYPE DATA AGE ksql-apikeys Opaque 1 0s ksql-sslcerts Opaque 0 0s ==> v1/StorageClass NAME PROVISIONER AGE ksql-standard-ssd-us-central1-a kubernetes.io/gce-pd 0s
Complete all additional steps in the displayed installation output under NOTES. The notes provide commands to validate the installation along with other valuable information you need to review and save. To redisplay the notes after installation, see Helm commands.
All components should be successfully installed and running.
kubectl get pods -n operator
NAME READY STATUS RESTARTS AGE
cc-manager-85749c4b5-8p58q 1/1 Running 0 4h31m
cc-operator-54f54c694d-qjb7w 1/1 Running 0 4h31m
connectors-0 1/1 Running 0 4h15m
connectors-1 1/1 Running 0 4h15m
controlcenter-0 1/1 Running 0 4h18m
kafka-0 1/1 Running 0 4h20m
kafka-1 1/1 Running 0 4h20m
kafka-2 1/1 Running 0 4h20m
ksql-0 1/1 Running 0 21m
ksql-1 1/1 Running 0 21m
replicator-0 1/1 Running 0 4h18m
replicator-1 1/1 Running 0 4h18m
schemaregistry-0 1/1 Running 0 4h18m
schemaregistry-1 1/1 Running 0 4h18m
zookeeper-0 1/1 Running 0 4h30m
zookeeper-1 1/1 Running 0 4h30m
zookeeper-2 1/1 Running 0 4h30m
Note
Deleting components: If you are installing components for testing purposes, you may want to delete components soon after deploying them. See Deleting a Cluster for instructions; otherwise, continue to the next steps to test the deployment.
Step 9: Test the deployment¶
Complete the following steps to test and validate your deployment.
Note
Use oc
instead of kubectl
commands for OpenShift. Otherwise, the
commands are identical. For example, instead of kubectl get pods -n
operator
you would use oc get pods -n operator
.
Internal validation¶
On your local machine, enter the following command to display cluster namespace information (using the example namespace operator). This information contains the bootstrap endpoint you need to complete internal validation.
kubectl get kafka -n operator -oyaml
The bootstrap endpoint is shown on the bootstrap.servers line.
... omitted
internalClient: |-
  bootstrap.servers=kafka:9071
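If you only need the endpoint itself, you can optionally filter the same output:
kubectl get kafka -n operator -oyaml | grep "bootstrap.servers"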
On your local machine, use kubectl exec to start a bash session on one of the pods in the cluster. The example uses the default pod name kafka-0 on a Kafka cluster using the default name kafka.
kubectl -n operator exec -it kafka-0 bash
On the pod, create and populate a file named kafka.properties. There is no text editor installed in the containers, so you use the cat command as shown below to create this file. Use CTRL+D to save the file.
Note
The example shows default SASL/PLAIN security parameters. A production environment requires additional security. See Configuring security for additional information.
cat << EOF > kafka.properties
bootstrap.servers=kafka:9071
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test123";
sasl.mechanism=PLAIN
security.protocol=SASL_PLAINTEXT
EOF
On the pod, query the bootstrap server using the following command:
kafka-broker-api-versions --command-config kafka.properties --bootstrap-server kafka:9071
You should see output for each of the three Kafka brokers that resembles the following:
kafka-1.kafka.operator.svc.cluster.local:9071 (id: 1 rack: 0) -> ( Produce(0): 0 to 7 [usable: 7], Fetch(1): 0 to 10 [usable: 10], ListOffsets(2): 0 to 4 [usable: 4], Metadata(3): 0 to 7 [usable: 7], LeaderAndIsr(4): 0 to 1 [usable: 1], StopReplica(5): 0 [usable: 0], UpdateMetadata(6): 0 to 4 [usable: 4], ControlledShutdown(7): 0 to 1 [usable: 1], OffsetCommit(8): 0 to 6 [usable: 6], OffsetFetch(9): 0 to 5 [usable: 5], FindCoordinator(10): 0 to 2 [usable: 2], JoinGroup(11): 0 to 3 [usable: 3], Heartbeat(12): 0 to 2 [usable: 2], ... omitted
This output validates internal communication within your cluster.
External validation¶
Complete the following steps to validate external communication.
- Prerequisites:
- Access to download the Confluent Platform.
- Outside access to the Kafka brokers is only available through an external load balancer. You cannot complete these steps if you did not enable an external load balancer in the provider YAML file and add the required DNS entries.
Note
The examples use default component names.
You use the Confluent CLI running on your local machine to complete external validation. The Confluent CLI is included with the Confluent Platform. On your local machine, download and start the Confluent Platform.
On your local machine, use the kubectl get kafka -n operator -oyaml command to get the bootstrap servers endpoint for external clients. In the example below, the bootstrap servers endpoint is kafka.<providerdomain>:9092.
... omitted
externalClient: |-
  bootstrap.servers=kafka.<providerdomain>:9092
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test123";
  sasl.mechanism=PLAIN
  security.protocol=SASL_PLAINTEXT
On your local machine where you have the Confluent Platform running locally, create and populate a file named kafka.properties based on the example used in the previous step.
bootstrap.servers=kafka.<providerdomain>:9092
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test123";
sasl.mechanism=PLAIN
security.protocol=SASL_PLAINTEXT
Note
The example shows default SASL/PLAIN security parameters. A production environment requires additional security. See Configuring security for additional information.
Using the Confluent CLI on your local machine, create a topic using the bootstrap endpoint kafka.<providerdomain>:9092. The example below creates a topic with 1 partition and 3 replicas.
kafka-topics --bootstrap-server kafka.<providerdomain>:9092 \
  --command-config kafka.properties \
  --create --replication-factor 3 \
  --partitions 1 --topic example
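Optionally, before producing, confirm the topic was created and inspect its partition and replica assignment:
kafka-topics --bootstrap-server kafka.<providerdomain>:9092 \
  --command-config kafka.properties \
  --describe --topic example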
Using the Confluent CLI on your local machine, produce to the new topic using the bootstrap endpoint kafka.<providerdomain>:9092. Note that the bootstrap server load balancer is the only Kafka broker endpoint required because it provides gateway access to the load balancers for all Kafka brokers.
seq 10000 | kafka-console-producer \
  --topic example --broker-list kafka.<providerdomain>:9092 \
  --producer.config kafka.properties
In a new terminal on your local machine, use the Confluent CLI to consume from the new topic.
kafka-console-consumer --from-beginning \
  --topic example --bootstrap-server kafka.<providerdomain>:9092 \
  --consumer.config kafka.properties
Successful completion of these steps validates external communication with your cluster.
Note
Deleting components: If you are installing components for testing purposes, you may want to delete components soon after deploying them. See Deleting a Cluster for instructions; otherwise, continue to the next step to configure external access to Confluent Control Center.
The following step is an optional activity you can complete to gain additional knowledge of how upgrades work and how to access your environment using Control Center.
Step 10: Configure external access to Confluent Control Center¶
Complete the following steps to perform a rolling upgrade to your configuration, enable external access, and launch Control Center.
Upgrade the configuration¶
Enter the following Helm upgrade command to add an external load balancer for the Control Center instance. Replace <provider-domain> with your platform environment domain. This upgrades your cluster configuration and adds a bootstrap load balancer for Control Center.
helm upgrade -f ./providers/gcp.yaml \
  --set controlcenter.enabled=true \
  --set controlcenter.loadBalancer.enabled=true \
  --set controlcenter.loadBalancer.domain=<provider-domain> \
  controlcenter \
  ./confluent-operator
Get the Control Center bootstrap load balancer public IP. In the example, namespace operator is used. Change this to the namespace for your cluster.
kubectl get services -n operator
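Because the namespace contains many services, you may want to narrow the output to the Control Center entries; the exact load balancer service name can vary by release:
kubectl get services -n operator | grep controlcenter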
Add the bootstrap load balancer DNS entry and public IP to the DNS table for your platform environment.
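How you add the record depends on your DNS provider. As an illustrative sketch only, using Google Cloud DNS with a hypothetical managed zone named my-zone, an A record for a host name such as controlcenter.<provider-domain> (confirm the exact name from the load balancer service and NOTES output) could be added like this:
# start a record-set transaction in the hypothetical zone
gcloud dns record-sets transaction start --zone=my-zone
# map the Control Center host name (placeholder) to the load balancer public IP (placeholder)
gcloud dns record-sets transaction add <load-balancer-public-ip> \
  --name=controlcenter.<provider-domain>. --ttl=300 --type=A --zone=my-zone
# apply the change
gcloud dns record-sets transaction execute --zone=my-zone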
Launch Confluent Control Center¶
Complete the following steps to launch Confluent Control Center in your cluster.
Start a new terminal session on your local machine. Enter the following command to set up port forwarding to the default Confluent Control Center endpoint. In the example, namespace operator is used.
kubectl port-forward svc/controlcenter 9021:9021 -n operator
Connect to Control Center using http://localhost:9021/.
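With the port-forward running, you can optionally check from another terminal that the endpoint responds before opening a browser; with basic authentication enabled you may see a 401 status until you supply credentials:
# print only the HTTP status code returned by the forwarded Control Center endpoint
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9021/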
Log in to Control Center. Basic authorization credentials are configured in the default provider YAML file. In the example below, the user ID is admin and the password is Developer1.
##
## C3 authentication
##
auth:
  basic:
    enabled: true
    ##
    ## map with key as user and value as password and role
    property:
      admin: Developer1,Administrators
      disallowed: no_access
Important
Basic authentication to Confluent Control Center can be used for development testing. Typically, this authentication type is disabled for production environments and LDAP is configured for user access. LDAP parameters are provided in the Control Center YAML file.
Note
Deleting components: If you are installing components for testing purposes, you may want to delete components soon after deploying them. See Deleting a Cluster for instructions.
See also
To get started with Confluent Operator on Kubernetes, try out the Kubernetes Demos.