Install Confluent Operator and Confluent Platform

This topic describes the steps to deploy Confluent Operator and Confluent Platform.

The examples in this guide make the following assumptions:

  • $VALUES_FILE refers to the configuration file you set up in Create the global configuration file.

  • To keep the examples in the Operator documentation simple and clear, all the configuration parameters are specified in the config file ($VALUES_FILE). In your production deployments, however, use the --set or --set-file option when applying sensitive data with Helm (see the --set-file sketch after this list). For example:

    helm upgrade --install kafka \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds\,dc=test\,dc=com" \
      --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
      --set kafka.enabled=true
    
  • operator is the namespace that Confluent Platform is deployed in.

  • All commands are executed from the helm directory under the directory where you downloaded the Confluent Operator bundle.
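
For sensitive values, --set-file reads the value from a local file so that it never appears on the command line or in your shell history. A minimal sketch, assuming the LDAP credential is stored in a local file named mds-credentials.txt (a hypothetical file name):

    helm upgrade --install kafka \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set-file kafka.services.mds.ldap.authentication.simple.credentials=mds-credentials.txt \
      --set kafka.enabled=true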

Step 1. Install Confluent Operator

  1. Go to the helm directory under the directory where you downloaded the Confluent Operator bundle.

  2. Install Operator using the following command (using the example operator namespace):

    helm upgrade --install \
      operator \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set operator.enabled=true
    

    The --set operator.enabled=true flag in the above command overrides the default value of false. Setting a component's enabled flag to true enables and starts that component immediately after it is installed.

  3. To verify that Operator is successfully installed and running, enter the following command:

    kubectl get pods -n operator
    

    You should see output similar to the following, with the Operator pod showing a status of Running.

    NAME                           READY   STATUS    RESTARTS   AGE
    cc-operator-76c54d65cd-vgm5w   1/1     Running   0          7m28s
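
    If you are scripting the installation, you can block until the Operator Deployment finishes rolling out instead of polling manually. A minimal sketch, assuming the Deployment is named cc-operator (inferred from the pod name prefix above):

    # Wait up to two minutes for the Operator Deployment to become available
    kubectl rollout status deployment/cc-operator -n operator --timeout=120s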
    

Step 2. Install ZooKeeper

  1. After verifying that Operator is running, enter the following command to install ZooKeeper:

    helm upgrade --install \
      zookeeper \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set zookeeper.enabled=true
    

    You should see output similar to the following:

    NAME: zookeeper
    LAST DEPLOYED: Wed Jan  8 14:51:26 2020
    NAMESPACE: operator
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Zookeeper Cluster Deployment
    
    Zookeeper cluster is deployed through CR.
    
      1. Validate if Zookeeper Custom Resource (CR) is created
    
         kubectl get zookeeper -n operator | grep zookeeper
    
      2. Check the status/events of CR: zookeeper
    
         kubectl describe zookeeper zookeeper -n operator
    
      3. Check if Zookeeper cluster is Ready
    
         kubectl get zookeeper zookeeper -ojson -n operator
    
         kubectl get zookeeper zookeeper -ojsonpath='{.status.phase}' -n operator
    
      4. Update/Upgrade Zookeeper Cluster
    
         The upgrade can be done either through the ``helm upgrade`` command or by editing the CR directly as below;
    
         kubectl edit zookeeper zookeeper  -n operator
    
  2. To verify that ZooKeeper is successfully installed and running, enter the following command:

    kubectl get pods -n operator
    

    You should see output similar to the following, with the ZooKeeper pods showing a status of Running.

    NAME                           READY   STATUS    RESTARTS   AGE
    zookeeper-0                    1/1     Running   0          4h30m
    zookeeper-1                    1/1     Running   0          4h30m
    zookeeper-2                    1/1     Running   0          4h30m
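
    You can also script this check against the ZooKeeper Custom Resource, as the Helm NOTES above suggest. A minimal sketch, assuming the phase reported for a healthy cluster is RUNNING (verify the exact string on your cluster first):

    # Poll the ZooKeeper CR until Operator reports the cluster as running
    until [ "$(kubectl get zookeeper zookeeper -ojsonpath='{.status.phase}' -n operator)" = "RUNNING" ]; do
      echo "Waiting for ZooKeeper cluster..."
      sleep 10
    done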
    

Step 3. Install Kafka brokers

  1. To install Kafka, enter the following command:

    helm upgrade --install \
      kafka \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set kafka.enabled=true
    

    You should see output similar to the following:

    NAME: kafka
    LAST DEPLOYED: Wed Jan  8 15:07:46 2020
    NAMESPACE: operator
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Kafka Cluster Deployment
    
    Kafka Cluster is deployed to kubernetes through CR Object
    
    
      1. Validate if Kafka Custom Resource (CR) is created
    
         kubectl get kafka -n operator | grep kafka
    
      2. Check the status/events of CR: kafka
    
         kubectl describe kafka kafka -n operator
    
      3. Check if Kafka cluster is Ready
    
         kubectl get kafka kafka -ojson -n operator
    
         kubectl get kafka kafka -ojsonpath='{.status.phase}' -n operator
    
    ... output omitted
    
  2. To verify that Kafka is successfully installed and running, enter the following command:

    kubectl get pods -n operator
    

    You should see output similar to the following, with the Kafka broker pods showing a status of Running.

    NAME                           READY   STATUS    RESTARTS   AGE
    kafka-0                        1/1     Running   0          4h20m
    kafka-1                        1/1     Running   0          4h20m
    kafka-2                        1/1     Running   0          4h20m
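
    The brokers are managed as a StatefulSet, so the pods typically start one at a time in order. To watch them come up, add the --watch flag:

    # Stream pod status changes; press CTRL+C to stop
    kubectl get pods -n operator --watch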
    

Step 4. Install Schema Registry

  1. To install Schema Registry, enter the following command:

    helm upgrade --install \
      schemaregistry \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set schemaregistry.enabled=true
    

    You should see output similar to the following:

    NAME: schemaregistry
    LAST DEPLOYED: Thu Jan  9 15:51:21 2020
    NAMESPACE: operator
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    Schema Registry is deployed through PSC. Configure Schema Registry through REST Endpoint
    
      1. Validate if schema registry cluster is running
    
         kubectl get pods -n operator | grep schemaregistry
    
      2. Access
    
        Internal REST Endpoint : http://schemaregistry:8081  (Inside kubernetes)
    
        OR
    
        http://localhost:8081 (Inside Pod)
    
        More information about schema registry REST API can be found here,
    
        https://docs.confluent.io/platform/current/schema-registry/docs/api.html
    
  2. To verify that Schema Registry is successfully installed and running, enter the following command:

    kubectl get pods -n operator
    

    You should see output similar to the following, with the Schema Registry pods showing a status of Running.

    NAME                           READY   STATUS    RESTARTS   AGE
    schemaregistry-0               1/1     Running   0          4h18m
    schemaregistry-1               1/1     Running   0          4h18m
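
    As the Helm NOTES above mention, Schema Registry is reachable inside Kubernetes at http://schemaregistry:8081. A minimal smoke test, assuming curl is available in the Schema Registry image:

    # List registered subjects; a fresh install returns an empty list: []
    kubectl exec -n operator schemaregistry-0 -- curl -s http://localhost:8081/subjects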
    

Step 5. Install Kafka Connect

Note

If you plan to use connectors, follow the steps in Deploy Confluent connectors to extend the Connect image to include the connectors before you install Connect.

  1. To install Connect, enter the following command:

    helm upgrade --install \
      connectors \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set connect.enabled=true
    
  2. To verify that Connect is successfully installed and running, enter the following command:

    kubectl get pods -n operator
    

    You should see output similar to the following, with the Connect pods showing a status of Running.

    NAME                           READY   STATUS    RESTARTS   AGE
    connectors-0                   1/1     Running   0          4h15m
    connectors-1                   1/1     Running   0          4h15m
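
    You can also confirm that the Connect REST interface is up and see which connector plugins the image contains. A minimal sketch, assuming the default Connect REST port 8083 and that curl is available in the image:

    # List the connector plugins installed in the Connect image
    kubectl exec -n operator connectors-0 -- curl -s http://localhost:8083/connector-plugins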
    

Step 6. Install Confluent Replicator

  1. To install Replicator, enter the following command:

    helm upgrade --install \
      replicator \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set replicator.enabled=true
    
  2. To verify that Replicator is successfully installed and running, enter the following command:

    kubectl get pods -n operator
    

    You should see output similar to the following, with the Replicator pods showing a status of Running.

    NAME                           READY   STATUS    RESTARTS   AGE
    replicator-0                   1/1     Running   0          4h18m
    replicator-1                   1/1     Running   0          4h18m
    

Step 7. Install Confluent Control Center

  1. To install Confluent Control Center, enter the following command:

    helm upgrade --install \
      controlcenter \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set controlcenter.enabled=true
    
  2. To verify that Confluent Control Center is successfully installed and running, enter the following command:

    kubectl get pods -n operator
    

    You should see output similar to the following, with the Confluent Control Center pod showing a status of Running.

    NAME                           READY   STATUS    RESTARTS   AGE
    controlcenter-0                1/1     Running   0          4h18m
    

Step 8. Install ksqlDB

  1. To install ksqlDB, enter the following command:

    helm upgrade --install \
      ksql \
      ./confluent-operator \
      --values $VALUES_FILE \
      --namespace operator \
      --set ksql.enabled=true
    
  2. To verify that ksqlDB is successfully installed and running, enter the following command:

    kubectl get pods -n operator
    

    You should see output similar to the following, with the ksqlDB pods showing a status of Running.

    NAME                           READY   STATUS    RESTARTS   AGE
    ksql-0                         1/1     Running   0          21m
    ksql-1                         1/1     Running   0          21m
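
    To confirm that the ksqlDB server is responding, query its REST endpoint. A minimal sketch, assuming the default ksqlDB listener port 8088 and that curl is available in the image:

    # Return server metadata, including the ksqlDB server version
    kubectl exec -n operator ksql-0 -- curl -s http://localhost:8088/info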
    

Step 9. Test the deployment

Complete the following steps to test and validate your deployment.

Internal access validation

Complete the following steps to validate internal communication.

  1. On your local machine, enter the following command to display cluster namespace information (using the example namespace operator). This information contains the bootstrap endpoint you need to complete internal validation.

    kubectl get kafka -n operator -oyaml
    

    The bootstrap endpoint is shown on the bootstrap.servers line.

    ... omitted
    
       internalClient: |-
          bootstrap.servers=kafka:9071
    
  2. On your local machine, use kubectl exec to start a bash session on one of the pods in the cluster. The example uses the default pod name kafka-0 on a Kafka cluster with the default name kafka.

    kubectl -n operator exec -it kafka-0 -- bash
    
  3. On the pod, create and populate a file named kafka.properties. No text editor is installed in the containers, so use the cat command as shown below to create this file. Press CTRL+D to save the file.

    Note

    The example shows default SASL/PLAIN security parameters. A production environment requires additional security. See Configure Security with Confluent Operator for additional information.

    cat << EOF > kafka.properties
    bootstrap.servers=kafka:9071
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test123";
    sasl.mechanism=PLAIN
    security.protocol=SASL_PLAINTEXT
    EOF
    
  4. On the pod, query the bootstrap server using the following command:

    kafka-broker-api-versions --command-config kafka.properties --bootstrap-server kafka:9071
    

    For each of the three Kafka brokers, you should see output that resembles the following:

    kafka-1.kafka.operator.svc.cluster.local:9071 (id: 1 rack: 0) -> (
       Produce(0): 0 to 7 [usable: 7],
       Fetch(1): 0 to 10 [usable: 10],
       ListOffsets(2): 0 to 4 [usable: 4],
       Metadata(3): 0 to 7 [usable: 7],
       LeaderAndIsr(4): 0 to 1 [usable: 1],
       StopReplica(5): 0 [usable: 0],
       UpdateMetadata(6): 0 to 4 [usable: 4],
       ControlledShutdown(7): 0 to 1 [usable: 1],
       OffsetCommit(8): 0 to 6 [usable: 6],
       OffsetFetch(9): 0 to 5 [usable: 5],
       FindCoordinator(10): 0 to 2 [usable: 2],
       JoinGroup(11): 0 to 3 [usable: 3],
       Heartbeat(12): 0 to 2 [usable: 2],
    
    ... omitted
    

    This output validates internal communication within your cluster.
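
    The same properties file works with the other Kafka command line tools on the pod. For example, to list the topics in the cluster:

    kafka-topics --bootstrap-server kafka:9071 --command-config kafka.properties --list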

External access validation

Take the following steps to validate external communication after you have enabled external access to Kafka and added DNS entries as described in External access to Kafka.

Note

The examples use default Confluent Platform component names and the default Kafka bootstrap prefix, kafka.

  1. On your local machine, download Confluent Platform. To use the Confluent CLI, you only need to download the platform and set the PATH and required environment variables; you do not need to start Confluent Platform on your local machine.

    You use the Confluent CLI running on your local machine to complete external validation. The Confluent CLI is included with the Confluent Platform.

  2. On your local machine, run the following command to get the bootstrap server endpoint for external clients.

    kubectl get kafka -n operator -oyaml
    

    In the example output below, the bootstrap server endpoint is kafka.mydomain:9092.

    ... omitted
    
    externalClient: |-
       bootstrap.servers=kafka.mydomain:9092
       sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test123";
       sasl.mechanism=PLAIN
       security.protocol=SASL_PLAINTEXT
    
  3. On your local machine where you downloaded Confluent Platform, create and populate a file named kafka.properties with the following content. Assign the external endpoint you retrieved in the previous step to bootstrap.servers.

    bootstrap.servers=<kafka bootstrap endpoint>
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="test" password="test123";
    sasl.mechanism=PLAIN
    security.protocol=SASL_PLAINTEXT
    

    Note

    The example shows default SASL/PLAIN security parameters. A production environment requires additional security. See Configure Security with Confluent Operator for additional information.

  4. On your local machine, create a topic using the bootstrap endpoint <kafka bootstrap endpoint>. The example below creates a topic with 1 partition and 3 replicas.

    kafka-topics --bootstrap-server <kafka bootstrap endpoint> \
    --command-config kafka.properties \
    --create --replication-factor 3 \
    --partitions 1 --topic example
    
  5. On your local machine, produce to the new topic using the bootstrap endpoint <kafka bootstrap endpoint>. Note that the bootstrap server endpoint is the only Kafka broker endpoint required because it provides gateway access to all Kafka brokers.

    seq 10000 | kafka-console-producer \
    --topic example --broker-list <kafka bootstrap endpoint> \
    --producer.config kafka.properties
    
  6. In a new terminal on your local machine, from the directory where you saved kafka.properties, run the following command to consume from the new topic.

    kafka-console-consumer --from-beginning \
    --topic example --bootstrap-server <kafka bootstrap endpoint> \
    --consumer.config kafka.properties
    

Successful completion of these steps validates external communication with your cluster.
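
When you are done, you can delete the test topic with the same tooling:

    kafka-topics --bootstrap-server <kafka bootstrap endpoint> \
    --command-config kafka.properties \
    --delete --topic example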

External access validation of Confluent Control Center

Complete the following steps to access your Confluent Platform cluster using Control Center. Before you begin, enable an external load balancer for Confluent Control Center and add a DNS entry as described in Configure External Load Balancer.

  1. On your local machine, enter the following command to set up port forwarding to the default Confluent Control Center endpoint.

    kubectl port-forward svc/controlcenter 9021:9021 -n operator
    
  2. Connect to Control Center in a browser:

    http://localhost:9021/
    
  3. Log in to Control Center. Basic authentication credentials are set in the configuration file ($VALUES_FILE). In the example below, the user ID is admin and the password is Developer1.

    ##
    ## C3 authentication
    ##
    auth:
      basic:
        enabled: true
        ##
        ## map with key as user and value as password and role
        property:
          admin: Developer1,Administrators
          disallowed: no_access
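
    With the port forwarding from step 1 still active, you can verify the credentials from the command line before logging in through the browser. A minimal sketch using the example credentials above (the expected 200 response is an assumption about the endpoint's behavior):

    # Print the HTTP status code; 200 indicates the basic auth credentials were accepted
    curl -s -o /dev/null -w "%{http_code}\n" -u admin:Developer1 http://localhost:9021/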
    

Important

Basic authentication to Confluent Control Center can be used for development testing. Typically, this authentication type is disabled for production environments and LDAP is configured for user access. LDAP parameters are provided in the Control Center values file.