Confluent Operator Quick Start
This quick start shows you how to use Confluent Operator to get up and running with Confluent Platform and its main components.
Prerequisites
- A Kubernetes cluster conforming to one of the supported environments.
- Cluster size based on the sizing recommendations.
- kubectl installed, initialized, with the context set. You also must have the kubeconfig file configured for your cluster.
- Helm 3 installed.
- Access to the Confluent Operator bundle.
- For this quick start guide, your Kubernetes cluster is assumed to have a default dynamic storage provisioner. This will be the case for managed Kubernetes services like GKE, AKS, and EKS. In general, Confluent Operator can be used to deploy Confluent Platform on Kubernetes clusters that do not have a default dynamic storage provisioner.
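Before moving on, you can optionally verify these prerequisites from a terminal. This is a quick sanity check rather than part of the official steps, and it assumes kubectl and Helm are already on your PATH:

# Confirm kubectl is pointed at the intended cluster and can reach it
kubectl config current-context
kubectl get nodes

# Confirm Helm 3 is installed
helm version --short

# Confirm a default dynamic storage provisioner exists
# (one StorageClass should show "(default)" next to its name)
kubectl get storageclass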
Step 1. Download Operator bundle for Confluent Platform
Download the Confluent Operator bundle as the first step of installing and configuring Confluent Operator 5.5.15.
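As a rough sketch of this step, extracting the bundle produces the helm/providers and scripts directories used later in this guide. The archive name below (confluent-operator.tar.gz) is a hypothetical placeholder; substitute the file you actually downloaded:

mkdir -p confluent-operator
# Hypothetical archive name; use the actual bundle file name
tar -xzf confluent-operator.tar.gz -C confluent-operator
ls confluent-operator/helm/providers confluent-operator/scripts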
Step 2. Configure the default provider YAML file
Make the following configuration changes to your provider YAML file.
1. Go to the helm/providers directory under the directory where you downloaded the Confluent Platform bundle.

2. Make a copy of the provider file corresponding to your provider environment. For example, copy your provider file to my-values.yaml.

3. Set an environment variable pointing to your copy of the provider file. For example:

   export VALUES_FILE="/path/to/my-values.yaml"

4. Validate or change your region and zone or zones (if your Kubernetes cluster spans multiple availability zones). The example below uses region: us-central1 and zones: - us-central1-a.

   Note

   The example below uses GCE persistent disk storage (gce-pd) and solid-state drives (pd-ssd).

   global:
     provider:
       name: gcp
       region: us-central1
       kubernetes:
         deployment:
           ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
           ## If kubernetes is deployed in single availability zone then specify appropriate values
           zones:
             - us-central1-a
       storage:
         ## https://kubernetes.io/docs/concepts/storage/storage-classes/#gce
         ##
         provisioner: kubernetes.io/gce-pd
         reclaimPolicy: Retain
         parameters:
           type: pd-ssd
5. Enable load balancing for external access to the Kafka cluster. The domain name is the domain name you use (or that you create) for your cloud project in the provider environment. See Configure Networking with Confluent Operator for more information.

   ## Kafka Cluster
   ##
   kafka:
     name: kafka
     replicas: 3
     resources:
       requests:
         cpu: 200m
         memory: 1Gi
     loadBalancer:
       enabled: true
       domain: "<provider-domain>"
     tls:
       enabled: false
       fullchain: |-
       privkey: |-
       cacerts: |-
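As an optional sanity check (not part of the official procedure), you can confirm that your copy of the provider file contains the edits made above before deploying:

# Confirm the values file exists and show the settings edited in this step
test -f "$VALUES_FILE" && echo "Using $VALUES_FILE"
grep -n -E 'region:|zones:|provisioner:|loadBalancer:|domain:' "$VALUES_FILE"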
Step 3. Deploy Confluent Operator
The operator-util.sh script deploys and starts Confluent Operator and Confluent Platform.
Important
Do not use operator-util.sh for production environments. The script is designed for quick installs and testing.
To deploy all components, enter the following command from the scripts directory under the directory where you downloaded the Operator bundle.
bash ./operator-util.sh -n <namespace> -r <release-prefix> -f $VALUES_FILE
The following options are used in the command:
- -n or --namespace: If you do not enter a new namespace, the namespace used is the default Kubernetes namespace. Typically, you should enter a new simple namespace. operator is used in the example below.
- -r or --release: A release prefix to use. This creates a unique release name for each component. co1 is used in the example below.
- -f or --helm-file: The path to the provider YAML file. The $VALUES_FILE environment variable is used in this tutorial.
The following shows an example using namespace operator and prefix co1.
bash ./operator-util.sh -n operator -r co1 -f $VALUES_FILE
It takes a few minutes for the script to completely deploy all components. You may see messages like the following when the script is running. This is normal and typically occurs because Apache Kafka® pods take a while to start up.
Error from server (NotFound): statefulsets.apps "ksql" not found
Retry 1/10 exited 1, retrying in 1 seconds...
Error from server (NotFound): statefulsets.apps "ksql" not found
Retry 2/10 exited 1, retrying in 2 seconds...
Error from server (NotFound): statefulsets.apps "ksql" not found
Retry 3/10 exited 1, retrying in 4 seconds...
NAME DESIRED CURRENT AGE
ksql 2 2 4s
Run Command:
kubectl --context gke-platform-develop -n operator rollout status sts/ksql -w
When the script completes, enter the following command:
kubectl get pods -n operator
Output similar to the following should be displayed.
NAME READY STATUS RESTARTS AGE
cc-operator-76c54d65cd-vgm5w 1/1 Running 0 7m28s
connectors-0 1/1 Running 0 2m17s
connectors-1 1/1 Running 0 2m17s
controlcenter-0 1/1 Running 0 2m14s
kafka-0 1/1 Running 0 6m23s
kafka-1 1/1 Running 0 4m58s
kafka-2 1/1 Running 0 3m40s
ksql-0 1/1 Running 0 2m10s
ksql-1 1/1 Running 0 2m10s
replicator-0 1/1 Running 0 2m7s
replicator-1 1/1 Running 0 2m7s
schemaregistry-0 1/1 Running 0 2m4s
schemaregistry-1 1/1 Running 0 2m4s
zookeeper-0 1/1 Running 0 7m15s
zookeeper-1 1/1 Running 0 7m15s
zookeeper-2 1/1 Running 0 7m15s
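Beyond listing the pods, a few optional follow-up checks can be run against the operator namespace and the co1 release prefix used in this quick start:

# List the Helm releases the script created (one release per component)
helm list -n operator

# Wait for a specific component to finish rolling out, for example Kafka
kubectl rollout status sts/kafka -n operator

# If load balancing was enabled in Step 2, inspect the external addresses
kubectl get services -n operator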
Note
See Step 9. Test the deployment in the manual instructions for internal and external deployment testing steps.
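For a quick internal check of the web interface, you can also port-forward to the Control Center pod. This sketch assumes Control Center is listening on its default port, 9021:

# Forward the Control Center HTTP port to localhost (9021 is the default)
kubectl port-forward controlcenter-0 9021:9021 -n operator
# Then open http://localhost:9021 in a browser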
Tip
If you want to delete components, enter the following command:

bash ./operator-util.sh --delete -n <namespace> -r <release> -f $VALUES_FILE
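For example, to remove everything created in this quick start (namespace operator, release prefix co1):

bash ./operator-util.sh --delete -n operator -r co1 -f $VALUES_FILE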