Confluent Operator Quick Start

Use the following instructions to quickly get up and running with Confluent Operator and Confluent Platform using the bundled shell script. The script deploys and starts Operator and Confluent Platform in the supported provider environments.

Before you begin, verify the following prerequisites:

  • A Kubernetes cluster conforming to one of the supported environments.
  • Cluster size based on the Sizing Recommendations.
  • The provider CLI or Cloud SDK is installed and initialized.
  • kubectl is installed and initialized, with the context set. You must also have the kubeconfig file configured for your cluster.
  • Access to the Confluent Operator bundle (Step 1 below).

Step 1. Download the Operator bundle for Confluent Platform 5.4

Confluent provides a bundle of Helm charts, templates, and scripts for deploying Confluent Operator and Confluent Platform components to your Kubernetes cluster. The bundle extracts to a directory named helm. Run Helm install commands from within this directory.

Bundle download link:
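The unpacking step can be sketched as follows. The bundle contents are simulated here with a scratch directory so the commands are runnable as-is; substitute the archive file you actually downloaded:

```shell
# Simulate a downloaded bundle with a scratch directory (stand-in for
# the real archive; use the file you actually downloaded instead).
workdir=$(mktemp -d)
mkdir -p "$workdir/src/helm/providers"
touch "$workdir/src/helm/providers/gcp.yaml"
tar -czf "$workdir/bundle.tar.gz" -C "$workdir/src" helm

# Extracting creates the helm/ directory; run Helm install commands
# from inside it.
tar -xzf "$workdir/bundle.tar.gz" -C "$workdir"
found=$(ls "$workdir/helm/providers")
echo "$found"
rm -rf "$workdir"
```

The provider YAML files used in Step 3 live under helm/providers after extraction.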

Step 2. Install Helm 3

Use the following instructions to install Helm 3. Confluent recommends Helm 3 for use with Confluent Operator. If you need to migrate from Helm 2 to Helm 3, see Migrating from Helm 2 to Helm 3.


For information about the differences between Helm 2 and Helm 3, see the Helm 2 to Helm 3 changes.

Complete the following steps to install Helm 3.

  1. Install Helm using the Helm installation instructions.

  2. Verify that your $PATH is pointing to the Helm 3 binary file. Enter the following command:

    helm version

    This command should return Helm 3 version output similar to the following:

    version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

    If you see Helm 2 version information in the output, you must update your $PATH so it points to the Helm 3 binary file.
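If you want to check the Helm major version from a script, the version string can be parsed as in this sketch. The sample output is copied from above; in practice, pipe the real `helm version` output instead:

```shell
# Sample `helm version` output (copied from the doc); substitute the
# real command's output in practice.
out='version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}'

# Extract the major version number; "3" confirms the PATH resolves to
# a Helm 3 binary.
major=$(printf '%s' "$out" | sed -n 's/.*Version:"v\([0-9]*\)\..*/\1/p')
echo "$major"
```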

Step 3. Configure the default provider YAML file

Make the following configuration changes to your provider YAML file.

  1. Go to the helm/providers directory on your local machine.

  2. Open the <provider>.yaml file.

  3. Validate or change your region and zone or zones (if your cluster spans multiple availability zones). The example below uses region: us-central1 and zones: - us-central1-a.

  4. Validate or change your storage provisioner. See Storage Class Provisioners for configuration examples.


    The example below uses GCE persistent disk storage (gce-pd) and solid-state drives (pd-ssd).

        global:
          provider:
            name: gcp
            region: us-central1
            kubernetes:
              deployment:
                ## If kubernetes is deployed in multi zone mode then specify availability-zones as appropriate
                ## If kubernetes is deployed in single availability zone then specify appropriate values
                zones:
                  - us-central1-a
            storage:
              provisioner: kubernetes.io/gce-pd
              reclaimPolicy: Delete
              parameters:
                type: pd-ssd
  5. Enter the image registry endpoint.

    The following example shows the default public image registry endpoint for Confluent images. If you are installing from images downloaded and stored locally or located elsewhere, you need to enter your unique endpoint. If the endpoint you use requires basic authentication, you need to change the credential parameter to required: true and enter a username and password.

    ## Docker registry endpoint where Confluent images are available.
    registry:
      fqdn: docker.io
      credential:
        required: false
  6. Enable load balancing for external access to the Kafka cluster. The domain name is the domain name you use (or that you create) for your cloud project in the provider environment. See Configuring the network for more information.

    ## Kafka Cluster
    kafka:
      name: kafka
      replicas: 3
      resources:
        requests:
          cpu: 200m
          memory: 1Gi
      loadBalancer:
        enabled: true
        domain: "<provider-domain>"
      tls:
        enabled: false
        fullchain: |-
        privkey: |-
        cacerts: |-
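As a hypothetical pre-flight check before deploying, you could verify that the keys edited in this step are present in the provider file. The file body below is a stand-in for your real <provider>.yaml:

```shell
# Hypothetical sanity check: confirm the keys edited above exist in the
# provider file before handing it to the deploy script. The file body
# is a stand-in for a real provider YAML.
f=$(mktemp)
cat > "$f" <<'EOF'
name: gcp
region: us-central1
reclaimPolicy: Delete
type: pd-ssd
EOF

missing=0
for key in region reclaimPolicy type; do
  grep -q "^[[:space:]]*$key:" "$f" || { echo "missing: $key"; missing=1; }
done
rm -f "$f"
echo "missing=$missing"
```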

Step 4. Deploy Confluent Operator

The deployment script is located in the scripts directory. To deploy all components, enter the following command from within the scripts directory:

./ -n <namespace> -r <release-prefix> -f <path-to-yaml-file>

The following options are used in the command:

  • -n or --namespace: If you do not enter a new namespace, the default Kubernetes namespace is used. Typically, you should enter a new, simple namespace. operator is used in the example below.
  • -r or --release: A release prefix to use. This creates a unique release name for each component. co1 is used in the example below.
  • -f or --helm-file: The path to the provider YAML file. The path to the gcp.yaml file is shown in the example below.

The following shows an example using namespace operator and prefix co1.

./ -n operator -r co1 -f ../helm/providers/gcp.yaml
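The three flags could be parsed with getopts as in this sketch. This is a hypothetical illustration of the flag handling, not the bundled script's actual internals:

```shell
# Hypothetical flag parsing for -n / -r / -f, as a wrapper script might
# do it with getopts (the bundled script's internals may differ).
deploy_args() {
  namespace=default release="" yaml=""
  while getopts 'n:r:f:' opt; do
    case "$opt" in
      n) namespace=$OPTARG ;;   # Kubernetes namespace
      r) release=$OPTARG ;;     # release prefix
      f) yaml=$OPTARG ;;        # path to provider YAML
    esac
  done
  echo "$namespace $release $yaml"
}

result=$(deploy_args -n operator -r co1 -f ../helm/providers/gcp.yaml)
echo "$result"
```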

It takes a few minutes for the script to completely deploy all components. You may see messages like the following when the script is running. This is normal and typically occurs because Apache Kafka® pods take a while to start up.

Error from server (NotFound): statefulsets.apps "ksql" not found
Retry 1/10 exited 1, retrying in 1  seconds...
Error from server (NotFound): statefulsets.apps "ksql" not found
Retry 2/10 exited 1, retrying in 2  seconds...
Error from server (NotFound): statefulsets.apps "ksql" not found
Retry 3/10 exited 1, retrying in 4  seconds...
ksql   2         2         4s
Run Command:

     kubectl --context gke-platform-develop -n operator rollout status sts/ksql -w
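The doubling delays in the log above (1s, 2s, 4s) are a simple exponential backoff. A minimal sketch of that retry schedule, without the actual sleeps or kubectl probe:

```shell
# Exponential-backoff schedule matching the log's 1s/2s/4s pattern;
# the real script would sleep between attempts and retry its probe.
delay=1
schedule=""
for attempt in 1 2 3; do
  schedule="$schedule $delay"
  delay=$((delay * 2))
done
schedule=${schedule# }
echo "delays: $schedule"
```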

When the script completes, enter the following command:

kubectl get pods -n operator

Output similar to the following should be displayed:

NAME                           READY   STATUS    RESTARTS   AGE
cc-operator-76c54d65cd-vgm5w   1/1     Running   0          7m28s
connectors-0                   1/1     Running   0          2m17s
connectors-1                   1/1     Running   0          2m17s
controlcenter-0                1/1     Running   0          2m14s
kafka-0                        1/1     Running   0          6m23s
kafka-1                        1/1     Running   0          4m58s
kafka-2                        1/1     Running   0          3m40s
ksql-0                         1/1     Running   0          2m10s
ksql-1                         1/1     Running   0          2m10s
replicator-0                   1/1     Running   0          2m7s
replicator-1                   1/1     Running   0          2m7s
schemaregistry-0               1/1     Running   0          2m4s
schemaregistry-1               1/1     Running   0          2m4s
zookeeper-0                    1/1     Running   0          7m15s
zookeeper-1                    1/1     Running   0          7m15s
zookeeper-2                    1/1     Running   0          7m15s
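To quickly spot pods that are not yet Running, the STATUS column can be filtered with awk. The sample data below is inlined so the sketch runs as-is; in practice you would pipe `kubectl get pods -n operator` into the same filter:

```shell
# Sample pod listing shaped like the output above (inlined stand-in for
# the real `kubectl get pods -n operator` output).
pods='NAME          READY   STATUS    RESTARTS   AGE
kafka-0       1/1     Running   0          6m23s
ksql-0        0/1     Pending   0          2m10s
zookeeper-0   1/1     Running   0          7m15s'

# Count pods whose STATUS column is not "Running", skipping the header.
not_running=$(printf '%s\n' "$pods" | awk 'NR>1 && $3!="Running"{n++} END{print n+0}')
echo "not_running=$not_running"
```

A count of 0 means every pod in the namespace has reached the Running state.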


See Step 9. Test the deployment in the manual instructions for internal and external deployment testing steps.


If you want to delete components, enter the command ./ --delete -n <namespace> -r <release> -f <path-to-yaml-file>.