Important

You are viewing documentation for an older version of Confluent Platform.

Tutorial: Confluent Cloud CLI

Overview

This tutorial shows you how to use the Confluent Cloud CLI to interact with your Confluent Cloud cluster. It uses real resources in Confluent Cloud: as you work through the tutorial, you create and delete topics, service accounts, credentials, and ACLs.

Prerequisites

  • Access to Confluent Cloud.
  • Confluent Cloud user credentials saved in ~/.netrc. (Use ccloud login --save when logging in to the Confluent Cloud CLI; the --save flag saves your login credentials to the ~/.netrc file.)
  • Local install of Confluent Cloud CLI (v1.7.0 or later)
  • Docker and Docker Compose for the local Connect worker
  • timeout: used by the bash scripts to terminate a consumer process after a certain period of time. timeout is available on most Linux distributions but not on macOS; macOS users can install it from the GNU coreutils package, which provides the command as gtimeout.
  • mvn installed on your host
  • jq installed on your host
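
The bash scripts rely on timeout to stop consumer processes. A minimal sketch of a portable invocation, assuming macOS users have installed GNU coreutils (which ships the command as gtimeout), might look like:

```shell
# Pick whichever timeout command is available; `gtimeout` is the name GNU
# coreutils installs on macOS (an assumption about your install).
if command -v gtimeout >/dev/null 2>&1; then
  TIMEOUT_CMD=gtimeout
else
  TIMEOUT_CMD=timeout
fi

# Stop a long-running process after 1 second; `sleep 5` stands in for a
# consumer here. GNU timeout exits with status 124 when it kills the process.
status=0
"$TIMEOUT_CMD" 1 sleep 5 || status=$?
echo "timeout exit status: $status"
```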

Confluent Cloud Promo Code

The first 20 users to sign up for Confluent Cloud and use promo code C50INTEG will receive an additional $50 of free usage (details).

Run the tutorial

To run this tutorial, complete the following steps:

  1. Clone the Confluent examples repository:

    git clone https://github.com/confluentinc/examples.git
    
  2. Navigate to the examples/ccloud/beginner-cloud/ directory and switch to the Confluent Platform release branch:

    cd examples/ccloud/beginner-cloud/
    git checkout 5.5.15-post
    
  3. If you want to step through the tutorial manually, which is advised for new users who want to gain familiarity with the Confluent Cloud CLI, skip ahead to the next section. Alternatively, you can run the full tutorial end-to-end with the start.sh script, which automates all the steps in the tutorial:

    ./start.sh
    

Create a new Confluent Cloud environment

  1. Run the following command to create a new Confluent Cloud environment demo-script-env:

    ccloud environment create demo-script-env -o json
    
  2. Verify your output resembles:

    {
      "id": "env-5qz2q",
      "name": "demo-script-env"
    }
    

    The value of the environment ID, in this case env-5qz2q, may differ in your output. In this tutorial, the values of certain variables, including your environment ID, Kafka cluster ID, and API key, are unique and will not match the output shown.

  3. Specify env-5qz2q as the active environment by running the following command:

    ccloud environment use env-5qz2q
    
  4. Verify your output resembles:

    Now using "env-5qz2q" as the default (active) environment.
    
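Because later steps reference the environment ID, it can help to capture it from the -o json output with jq. A sketch, using a sample payload in place of the real ccloud output:

```shell
# Parse the environment ID out of the JSON output. The sample payload below
# stands in for the real `ccloud environment create demo-script-env -o json`
# output; your ID will differ.
OUTPUT='{"id": "env-5qz2q", "name": "demo-script-env"}'
ENV_ID=$(echo "$OUTPUT" | jq -r '.id')
echo "$ENV_ID"   # env-5qz2q
```

With the real CLI output captured this way, you could then run ccloud environment use "$ENV_ID" instead of typing the ID by hand.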

Create a new Confluent Cloud cluster

  1. Run the following command to create a new Confluent Cloud cluster demo-kafka-cluster. It takes up to 5 minutes for the Kafka cluster to be ready.

    ccloud kafka cluster create demo-kafka-cluster --cloud aws --region us-west-2
    

    Tip

    You may choose any provider or region from the list generated by running ccloud kafka region list.

  2. Verify your output resembles:

    +--------------+---------------------------------------------------------+
    | Id           | lkc-x6m01                                               |
    | Name         | demo-kafka-cluster                                      |
    | Type         | BASIC                                                   |
    | Ingress      |                                                     100 |
    | Egress       |                                                     100 |
    | Storage      |                                                    5000 |
    | Provider     | aws                                                     |
    | Availability | LOW                                                     |
    | Region       | us-west-2                                               |
    | Status       | UP                                                      |
    | Endpoint     | SASL_SSL://pkc-4kgmg.us-west-2.aws.confluent.cloud:9092 |
    | ApiEndpoint  | https://pkac-ldgj1.us-west-2.aws.confluent.cloud        |
    +--------------+---------------------------------------------------------+
    

    The value of the Kafka cluster ID, in this case lkc-x6m01, and Kafka cluster endpoint, in this case pkc-4kgmg.us-west-2.aws.confluent.cloud:9092, may differ in your output.

  3. Specify lkc-x6m01 as the active Kafka cluster by running the following command:

    ccloud kafka cluster use lkc-x6m01
    
  4. Verify your output resembles:

    Set Kafka cluster "lkc-x6m01" as the active cluster for environment "env-5qz2q".
    

Create a new API key/secret pair for user

  1. Run the following command to create a user API key/secret pair for your Kafka cluster lkc-x6m01:

    ccloud api-key create --description "Demo credentials" --resource lkc-x6m01 -o json
    
  2. Verify your output resembles:

    {
       "key": "QX7X4VA4DFJTTOIA",
       "secret": "fjcDDyr0Nm84zZr77ku/AQqCKQOOmb35Ql68HQnb60VuU+xLKiu/n2UNQ0WYXp/D"
    }
    

    The value of the API key, in this case QX7X4VA4DFJTTOIA, and API secret, in this case fjcDDyr0Nm84zZr77ku/AQqCKQOOmb35Ql68HQnb60VuU+xLKiu/n2UNQ0WYXp/D, may differ in your output.

  3. Specify the API key QX7X4VA4DFJTTOIA for the Kafka cluster lkc-x6m01:

    ccloud api-key use QX7X4VA4DFJTTOIA --resource lkc-x6m01
    

    Your output should resemble:

    Set the API Key "QX7X4VA4DFJTTOIA" as the active API key for "lkc-x6m01".
    
    Waiting for Confluent Cloud cluster to be ready and for credentials to propagate
    ....
    
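As with the environment ID, the key and secret from the -o json output can be captured with jq for reuse in later steps. A sketch, with a sample payload (and a placeholder secret) standing in for the real ccloud api-key create output:

```shell
# Extract the API key and secret from the JSON output. The payload below is
# a stand-in; the secret is a placeholder, not a real credential.
OUTPUT='{"key": "QX7X4VA4DFJTTOIA", "secret": "<your-api-secret>"}'
API_KEY=$(echo "$OUTPUT" | jq -r '.key')
API_SECRET=$(echo "$OUTPUT" | jq -r '.secret')
echo "$API_KEY"   # QX7X4VA4DFJTTOIA
```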

Produce and consume records with Confluent Cloud CLI

  1. Run the following command to create a new Kafka topic demo-topic-1:

    ccloud kafka topic create demo-topic-1
    
  2. Produce 10 messages to topic demo-topic-1 by running the following commands:

    (for i in `seq 1 10`; do echo "${i}" ; done) | \
      ccloud kafka topic produce demo-topic-1
    
  3. Verify your output resembles:

    Starting Kafka Producer. ^C or ^D to exit
    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    
  4. Run the following command to consume messages from topic demo-topic-1. The -b flag tells the consumer to read from the beginning of the topic.

    ccloud kafka topic consume demo-topic-1 -b
    
  5. Verify your output resembles the following. The order may differ from the produce order because messages are read from multiple partitions:

    Starting Kafka Consumer. ^C or ^D to exit
    2
    3
    9
    4
    5
    7
    10
    1
    6
    8
    
  6. Press CTRL-C to stop the consumer.

Create a new service account with an API key/secret pair

  1. Run the following command to create a new service account:

    ccloud service-account create demo-app-3288 --description demo-app-3288 -o json
    
  2. Verify your output resembles:

    {
       "id": 104349,
       "name": "demo-app-3288",
       "description": "demo-app-3288"
    }
    

    The value of the service account ID, in this case 104349, may differ in your output.

  3. Create an API key and secret for the service account 104349 for the Kafka cluster lkc-x6m01 by running the following command:

    ccloud api-key create --service-account 104349 --resource lkc-x6m01 -o json
    
  4. Verify your output resembles:

    {
      "key": "ESN5FSNDHOFFSUEV",
      "secret": "nzBEyC1k7zfLvVON3vhBMQrNRjJR7pdMc2WLVyyPscBhYHkMwP6VpPVDTqhctamB"
    }
    

    The value of the service account’s API key, in this case ESN5FSNDHOFFSUEV, and API secret, in this case nzBEyC1k7zfLvVON3vhBMQrNRjJR7pdMc2WLVyyPscBhYHkMwP6VpPVDTqhctamB, may differ in your output.

  5. Create a local configuration file /tmp/client.config with Confluent Cloud connection information using the newly created Kafka cluster and the API key and secret for the service account:

    ssl.endpoint.identification.algorithm=https
    sasl.mechanism=PLAIN
    security.protocol=SASL_SSL
    bootstrap.servers=pkc-4kgmg.us-west-2.aws.confluent.cloud:9092
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username\="ESN5FSNDHOFFSUEV" password\="nzBEyC1k7zfLvVON3vhBMQrNRjJR7pdMc2WLVyyPscBhYHkMwP6VpPVDTqhctamB";
    
  6. Wait about 90 seconds for the Confluent Cloud cluster to be ready and for the service account credentials to propagate.
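
A sketch of generating /tmp/client.config from shell variables, assuming the bootstrap endpoint, API key, and secret were captured in earlier steps (placeholder values are used here; the secret is not a real credential):

```shell
# Placeholder connection values; substitute the endpoint and service-account
# credentials from your own cluster.
BOOTSTRAP_SERVERS="pkc-4kgmg.us-west-2.aws.confluent.cloud:9092"
API_KEY="ESN5FSNDHOFFSUEV"
API_SECRET="<your-api-secret>"

# Write the client configuration; note the escaped `=` characters required
# inside the sasl.jaas.config value.
cat > /tmp/client.config <<EOF
ssl.endpoint.identification.algorithm=https
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
bootstrap.servers=${BOOTSTRAP_SERVERS}
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username\="${API_KEY}" password\="${API_SECRET}";
EOF
```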

Run a Java producer without ACLs

  1. By default, no ACLs are configured for the service account, which means the service account has no access to any Confluent Cloud resources. Run the following command to verify no ACLs are configured:

    ccloud kafka acl list --service-account 104349
    

    Your output should resemble:

      ServiceAccountId | Permission | Operation | Resource | Name | Type
    +------------------+------------+-----------+----------+------+------+
    
  2. Run a Java producer to demo-topic-1 before configuring ACLs (expected to fail). Note that you pass in /tmp/client.config as an argument, which provides the Confluent Cloud connection information:

    mvn -q -f ../../clients/cloud/java/pom.xml exec:java -Dexec.mainClass="io.confluent.examples.clients.cloud.ProducerExample" -Dexec.args="/tmp/client.config demo-topic-1" -Dlog4j.configuration=file:log4j.properties > /tmp/log.1 2>&1
    
  3. Verify you see org.apache.kafka.common.errors.TopicAuthorizationException in the log file /tmp/log.1 as shown in the following example (expected because there are no ACLs to allow this client application):

    [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2.1:java (default-cli) on project clients-example: An exception occured while executing the Java class. null: InvocationTargetException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TopicAuthorizationException: Authorization failed. -> [Help 1]
    
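Scripts like start.sh typically confirm this expected failure by searching the log. A minimal sketch, with a sample log line standing in for the real /tmp/log.1 contents:

```shell
# Create a stand-in log file containing the error line the producer emits
# when no ACLs are configured (a sample, not the full Maven output).
LOG_FILE=$(mktemp)
echo 'org.apache.kafka.common.errors.TopicAuthorizationException: Authorization failed.' > "$LOG_FILE"

# Confirm the authorization failure is present, as this step expects.
if grep -q 'TopicAuthorizationException' "$LOG_FILE"; then
  echo "authorization error found (expected: no ACLs configured yet)"
fi
```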

Run a Java producer with ACLs

  1. Run the following commands to create ACLs for the service account:

    ccloud kafka acl create --allow --service-account 104349 --operation CREATE --topic demo-topic-1
    ccloud kafka acl create --allow --service-account 104349 --operation WRITE --topic demo-topic-1
    
  2. Verify your output resembles:

      ServiceAccountId | Permission | Operation | Resource |     Name     |  Type
    +------------------+------------+-----------+----------+--------------+---------+
      User:104349      | ALLOW      | CREATE    | TOPIC    | demo-topic-1 | LITERAL
    
      ServiceAccountId | Permission | Operation | Resource |     Name     |  Type
    +------------------+------------+-----------+----------+--------------+---------+
      User:104349      | ALLOW      | WRITE     | TOPIC    | demo-topic-1 | LITERAL
    
  3. Run the following command and verify the ACLs were configured:

    ccloud kafka acl list --service-account 104349
    

    Your output should resemble the following. Observe that the ACL Type is LITERAL.

      ServiceAccountId | Permission | Operation | Resource |     Name     |  Type
    +------------------+------------+-----------+----------+--------------+---------+
      User:104349      | ALLOW      | CREATE    | TOPIC    | demo-topic-1 | LITERAL
      User:104349      | ALLOW      | WRITE     | TOPIC    | demo-topic-1 | LITERAL
    
  4. Run the Java producer to demo-topic-1 after configuring the ACLs (expected to pass):

    mvn -q -f ../../clients/cloud/java/pom.xml exec:java -Dexec.mainClass="io.confluent.examples.clients.cloud.ProducerExample" -Dexec.args="/tmp/client.config demo-topic-1" -Dlog4j.configuration=file:log4j.properties > /tmp/log.2 2>&1
    
  5. Verify you see the message 10 messages were produced to topic demo-topic-1 in the log file /tmp/log.2, as shown in the following example:

    [2020-08-29 13:52:10,836] WARN The configuration 'sasl.jaas.config' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2020-08-29 13:52:10,837] WARN The configuration 'ssl.endpoint.identification.algorithm' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    Producing record: alice        {"count":0}
    Producing record: alice        {"count":1}
    Producing record: alice        {"count":2}
    Producing record: alice        {"count":3}
    Producing record: alice        {"count":4}
    Producing record: alice        {"count":5}
    Producing record: alice        {"count":6}
    Producing record: alice        {"count":7}
    Producing record: alice        {"count":8}
    Producing record: alice        {"count":9}
    Produced record to topic demo-topic-1 partition [3] @ offset 0
    Produced record to topic demo-topic-1 partition [3] @ offset 1
    Produced record to topic demo-topic-1 partition [3] @ offset 2
    Produced record to topic demo-topic-1 partition [3] @ offset 3
    Produced record to topic demo-topic-1 partition [3] @ offset 4
    Produced record to topic demo-topic-1 partition [3] @ offset 5
    Produced record to topic demo-topic-1 partition [3] @ offset 6
    Produced record to topic demo-topic-1 partition [3] @ offset 7
    Produced record to topic demo-topic-1 partition [3] @ offset 8
    Produced record to topic demo-topic-1 partition [3] @ offset 9
    10 messages were produced to topic demo-topic-1
    
  6. Delete the ACLs:

    ccloud kafka acl delete --allow --service-account 104349 --operation CREATE --topic demo-topic-1
    ccloud kafka acl delete --allow --service-account 104349 --operation WRITE --topic demo-topic-1
    

    You should see two Deleted ACLs. messages.

Run a Java producer with a prefixed ACL

  1. Create a new Kafka topic demo-topic-2:

    ccloud kafka topic create demo-topic-2
    

    Verify you see the Created topic "demo-topic-2" message.

  2. Run the following command to create ACLs for the producer using a prefixed ACL which matches any topic that starts with the prefix demo-topic:

    ccloud kafka acl create --allow --service-account 104349 --operation CREATE --topic demo-topic --prefix
    ccloud kafka acl create --allow --service-account 104349 --operation WRITE --topic demo-topic --prefix
    
  3. Verify your output resembles:

    ServiceAccountId | Permission | Operation | Resource |    Name    |   Type
    +------------------+------------+-----------+----------+------------+----------+
    User:104349      | ALLOW      | CREATE    | TOPIC    | demo-topic | PREFIXED
    
    ServiceAccountId | Permission | Operation | Resource |    Name    |   Type
    +------------------+------------+-----------+----------+------------+----------+
    User:104349      | ALLOW      | WRITE     | TOPIC    | demo-topic | PREFIXED
    
  4. Verify the ACLs were configured by running the following command:

    ccloud kafka acl list --service-account 104349
    

    Your output should resemble the following. Observe that the ACL Type is PREFIXED.

      ServiceAccountId | Permission | Operation | Resource |    Name    |   Type
    +------------------+------------+-----------+----------+------------+----------+
      User:104349      | ALLOW      | WRITE     | TOPIC    | demo-topic | PREFIXED
      User:104349      | ALLOW      | CREATE    | TOPIC    | demo-topic | PREFIXED
    
  5. Run the Java producer to demo-topic-2, which should match the newly created prefixed ACLs.

    mvn -q -f ../../clients/cloud/java/pom.xml exec:java -Dexec.mainClass="io.confluent.examples.clients.cloud.ProducerExample" -Dexec.args="/tmp/client.config demo-topic-2" -Dlog4j.configuration=file:log4j.properties > /tmp/log.3 2>&1
    
  6. Verify you see the message 10 messages were produced to topic demo-topic-2 in the log file /tmp/log.3, as shown in the following example:

    [2020-08-29 13:52:39,012] WARN The configuration 'sasl.jaas.config' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    [2020-08-29 13:52:39,013] WARN The configuration 'ssl.endpoint.identification.algorithm' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
    Producing record: alice   {"count":0}
    Producing record: alice   {"count":1}
    Producing record: alice   {"count":2}
    Producing record: alice   {"count":3}
    Producing record: alice   {"count":4}
    Producing record: alice   {"count":5}
    Producing record: alice   {"count":6}
    Producing record: alice   {"count":7}
    Producing record: alice   {"count":8}
    Producing record: alice   {"count":9}
    Produced record to topic demo-topic-2 partition [3] @ offset 0
    Produced record to topic demo-topic-2 partition [3] @ offset 1
    Produced record to topic demo-topic-2 partition [3] @ offset 2
    Produced record to topic demo-topic-2 partition [3] @ offset 3
    Produced record to topic demo-topic-2 partition [3] @ offset 4
    Produced record to topic demo-topic-2 partition [3] @ offset 5
    Produced record to topic demo-topic-2 partition [3] @ offset 6
    Produced record to topic demo-topic-2 partition [3] @ offset 7
    Produced record to topic demo-topic-2 partition [3] @ offset 8
    Produced record to topic demo-topic-2 partition [3] @ offset 9
    10 messages were produced to topic demo-topic-2
    
  7. Run the following commands to delete ACLs:

    ccloud kafka acl delete --allow --service-account 104349 --operation CREATE --topic demo-topic --prefix
    ccloud kafka acl delete --allow --service-account 104349 --operation WRITE --topic demo-topic --prefix
    

    You should see two Deleted ACLs. messages.
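
A prefixed ACL applies to every topic whose name starts with the given string. As an illustration of the matching semantics only (not the broker's implementation), the rule can be sketched as:

```shell
# A prefixed ACL on "demo-topic" allows any topic name beginning with that
# string and nothing else; hypothetical helper for illustration.
matches_prefix() {
  case "$1" in
    demo-topic*) echo "allowed" ;;
    *)           echo "denied"  ;;
  esac
}

matches_prefix demo-topic-2   # allowed
matches_prefix other-topic    # denied
```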

Run kafka-connect-datagen connector with wildcard ACLs

  1. Create a new Kafka topic demo-topic-3:

    ccloud kafka topic create demo-topic-3
    

    You should see a Created topic "demo-topic-3" message.

  2. Run the following command to create an ACL that allows creation of any topic:

    ccloud kafka acl create --allow --service-account 104349 --operation CREATE --topic '*'
    
  3. Verify your output resembles:

      ServiceAccountId | Permission | Operation | Resource | Name |  Type
    +------------------+------------+-----------+----------+------+---------+
      User:104349      | ALLOW      | CREATE    | TOPIC    | *    | LITERAL
    
  4. Run the following command to allow service account ID 104349 to write to any topic:

    ccloud kafka acl create --allow --service-account 104349 --operation WRITE --topic '*'
    
  5. Verify your output resembles:

      ServiceAccountId | Permission | Operation | Resource | Name |  Type
    +------------------+------------+-----------+----------+------+---------+
      User:104349      | ALLOW      | WRITE     | TOPIC    | *    | LITERAL
    
  6. Run the following command to allow service account ID 104349 to read from any topic:

    ccloud kafka acl create --allow --service-account 104349 --operation READ --topic '*'
    
  7. Verify your output resembles:

      ServiceAccountId | Permission | Operation | Resource | Name |  Type
    +------------------+------------+-----------+----------+------+---------+
      User:104349      | ALLOW      | READ      | TOPIC    | *    | LITERAL
    
  8. Run the following command to allow service account ID 104349 to have a consumer group called connect:

    ccloud kafka acl create --allow --service-account 104349 --operation READ --consumer-group connect
    

    Your output should resemble:

    ServiceAccountId | Permission | Operation | Resource |  Name   |  Type
    +------------------+------------+-----------+----------+---------+---------+
    User:104349      | ALLOW      | READ      | GROUP    | connect | LITERAL
    
  9. Verify the ACLs were configured by running the following command:

    ccloud kafka acl list --service-account 104349
    

    Your output should resemble:

      ServiceAccountId | Permission | Operation | Resource |  Name   |  Type
    +------------------+------------+-----------+----------+---------+---------+
      User:104349      | ALLOW      | WRITE     | TOPIC    | *       | LITERAL
      User:104349      | ALLOW      | CREATE    | TOPIC    | *       | LITERAL
      User:104349      | ALLOW      | READ      | TOPIC    | *       | LITERAL
      User:104349      | ALLOW      | READ      | GROUP    | connect | LITERAL
    
  10. Generate environment variables with Confluent Cloud connection information for Connect to use:

    ../../ccloud/ccloud-generate-cp-configs.sh /tmp/client.config &>/dev/null
    source delta_configs/env.delta
    
  11. Bring up the provided docker-compose.yml file, which runs a Connect container with the kafka-connect-datagen plugin (https://www.confluent.io/hub/confluentinc/kafka-connect-datagen):

    docker-compose up -d
    

    Your output should resemble:

    Creating connect-cloud ... done
    Waiting up to 180 seconds for Docker container for connect to be up
    ............
    
  12. Post the configuration for the kafka-connect-datagen connector that produces pageviews data to Confluent Cloud topic demo-topic-3:

    DATA=$( cat << EOF
    {
       "name": "datagen-demo-topic-3",
       "config": {
         "connector.class": "io.confluent.kafka.connect.datagen.DatagenConnector",
         "kafka.topic": "demo-topic-3",
         "quickstart": "pageviews",
         "key.converter": "org.apache.kafka.connect.storage.StringConverter",
         "value.converter": "org.apache.kafka.connect.json.JsonConverter",
         "value.converter.schemas.enable": "false",
         "max.interval": 5000,
         "iterations": 1000,
         "tasks.max": "1"
       }
    }
    EOF
    )
    
    curl --silent --output /dev/null -X POST -H "Content-Type: application/json" --data "${DATA}" http://localhost:8083/connectors
    
  13. Wait about 20 seconds for kafka-connect-datagen to start producing messages.

  14. Run the following command to verify the connector is running:

    curl --silent http://localhost:8083/connectors/datagen-demo-topic-3/status | jq -r '.'
    

    Your output should resemble:

    {
       "name": "datagen-demo-topic-3",
       "connector": {
         "state": "RUNNING",
         "worker_id": "connect:8083"
       },
       "tasks": [
         {
           "id": 0,
           "state": "RUNNING",
           "worker_id": "connect:8083"
         }
       ],
       "type": "source"
    }
    
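In a script, the connector state can be checked programmatically with jq rather than read by eye. A sketch, with a sample payload standing in for the real response from the /connectors/datagen-demo-topic-3/status endpoint:

```shell
# Sample status payload; in practice you would capture this with
# `curl --silent http://localhost:8083/connectors/datagen-demo-topic-3/status`.
STATUS='{"name":"datagen-demo-topic-3","connector":{"state":"RUNNING","worker_id":"connect:8083"},"tasks":[{"id":0,"state":"RUNNING","worker_id":"connect:8083"}],"type":"source"}'

# Both the connector and its task should report RUNNING.
CONNECTOR_STATE=$(echo "$STATUS" | jq -r '.connector.state')
TASK_STATE=$(echo "$STATUS" | jq -r '.tasks[0].state')
echo "$CONNECTOR_STATE $TASK_STATE"   # RUNNING RUNNING
```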

Run a Java consumer with a wildcard ACL

  1. Create ACLs for the consumer using a wildcard by running the following commands:

    ccloud kafka acl create --allow --service-account 104349 --operation READ --consumer-group demo-beginner-cloud-1
    ccloud kafka acl create --allow --service-account 104349 --operation READ --topic '*'
    
  2. Verify your output resembles:

      ServiceAccountId | Permission | Operation | Resource |         Name          |  Type
    +------------------+------------+-----------+----------+-----------------------+---------+
      User:104349      | ALLOW      | READ      | GROUP    | demo-beginner-cloud-1 | LITERAL
    
      ServiceAccountId | Permission | Operation | Resource | Name |  Type
    +------------------+------------+-----------+----------+------+---------+
      User:104349      | ALLOW      | READ      | TOPIC    | *    | LITERAL
    
  3. Verify the ACLs were configured by running the following command:

    ccloud kafka acl list --service-account 104349
    

    Your output should resemble:

      ServiceAccountId | Permission | Operation | Resource |         Name          |  Type
    +------------------+------------+-----------+----------+-----------------------+---------+
      User:104349      | ALLOW      | READ      | GROUP    | connect               | LITERAL
      User:104349      | ALLOW      | CREATE    | TOPIC    | *                     | LITERAL
      User:104349      | ALLOW      | WRITE     | TOPIC    | *                     | LITERAL
      User:104349      | ALLOW      | READ      | TOPIC    | *                     | LITERAL
      User:104349      | ALLOW      | READ      | GROUP    | demo-beginner-cloud-1 | LITERAL
    
  4. Run the Java consumer from topic demo-topic-3, which is populated by kafka-connect-datagen:

    mvn -q -f ../../clients/cloud/java/pom.xml exec:java -Dexec.mainClass="io.confluent.examples.clients.cloud.ConsumerExamplePageviews" -Dexec.args="/tmp/client.config demo-topic-3" -Dlog4j.configuration=file:log4j.properties > /tmp/log.4 2>&1
    
  5. Verify you see Consumed record with messages in the log file /tmp/log.4, as shown in the following example:

    Consumed record with key 1 and value {"viewtime":1,"userid":"User_6","pageid":"Page_82"}
    Consumed record with key 71 and value {"viewtime":71,"userid":"User_6","pageid":"Page_11"}
    Consumed record with key 51 and value {"viewtime":51,"userid":"User_7","pageid":"Page_24"}
    Consumed record with key 31 and value {"viewtime":31,"userid":"User_7","pageid":"Page_68"}
    Consumed record with key 81 and value {"viewtime":81,"userid":"User_5","pageid":"Page_25"}
    Consumed record with key 41 and value {"viewtime":41,"userid":"User_2","pageid":"Page_88"}
    Consumed record with key 91 and value {"viewtime":91,"userid":"User_2","pageid":"Page_74"}
    
  6. Delete the ACLs by running the following command:

    ccloud kafka acl delete --allow --service-account 104349 --operation READ --consumer-group demo-beginner-cloud-1
    ccloud kafka acl delete --allow --service-account 104349 --operation READ --topic '*'
    

    You should see two Deleted ACLs. messages.

  7. Stop Docker:

    docker-compose down
    
  8. Verify you see the following output:

    Stopping connect-cloud ... done
    Removing connect-cloud ... done
    Removing network beginner-cloud_default
    
  9. Delete the ACLs:

    ccloud kafka acl delete --allow --service-account 104349 --operation CREATE --topic '*'
    ccloud kafka acl delete --allow --service-account 104349 --operation WRITE --topic '*'
    ccloud kafka acl delete --allow --service-account 104349 --operation READ --topic '*'
    ccloud kafka acl delete --allow --service-account 104349 --operation READ --consumer-group connect
    

    You should see a Deleted ACLs. message after running each of the previous commands.

Clean up your Confluent Cloud resources

  1. Run the following command to delete the service account:

    ccloud service-account delete 104349
    
  2. Complete the following steps to delete all the Kafka topics:

    1. Delete demo-topic-1:

      ccloud kafka topic delete demo-topic-1
      

      You should see: Deleted topic "demo-topic-1".

    2. Delete demo-topic-2:

      ccloud kafka topic delete demo-topic-2
      

      You should see: Deleted topic "demo-topic-2".

    3. Delete demo-topic-3:

      ccloud kafka topic delete demo-topic-3
      

      You should see: Deleted topic "demo-topic-3".

    4. Delete connect-configs, one of the 3 topics created by the Connect worker:

      ccloud kafka topic delete connect-configs
      

      You should see: Deleted topic "connect-configs".

    5. Delete connect-offsets, one of the 3 topics created by the Connect worker:

      ccloud kafka topic delete connect-offsets
      

      You should see: Deleted topic "connect-offsets".

    6. Delete connect-status, one of the 3 topics created by the Connect worker:

      ccloud kafka topic delete connect-status
      

      You should see: Deleted topic "connect-status".

  3. Run the following commands to delete the API keys:

    ccloud api-key delete ESN5FSNDHOFFSUEV
    ccloud api-key delete QX7X4VA4DFJTTOIA
    
  4. Delete the Kafka cluster:

    ccloud kafka cluster delete lkc-x6m01
    
  5. Delete the environment:

    ccloud environment delete env-5qz2q
    

    You should see: Deleted environment "env-5qz2q".

If a demo run ends prematurely, you may receive the following error message when you try to run the demo again (at the ccloud environment create demo-script-env step):

Error: 1 error occurred:
   * error creating account: Account name is already in use

Failed to create environment demo-script-env. Please troubleshoot and run again

In this case, run the following script to delete the demo’s topics, Kafka cluster, and environment:

./cleanup.sh

Advanced usage

The demo script provides variables that allow you to alter the default Kafka cluster name, cloud provider, and region. For example:

CLUSTER_NAME=my-demo-cluster CLUSTER_CLOUD=aws CLUSTER_REGION=us-west-2 ./start.sh

Here are the variables and their default values:

Variable         Default
--------------   ------------------
CLUSTER_NAME     demo-kafka-cluster
CLUSTER_CLOUD    aws
CLUSTER_REGION   us-west-2
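
A sketch of how such overridable defaults are commonly implemented in shell (the exact mechanism inside start.sh is an assumption):

```shell
# Use the caller's value when the variable is set, otherwise fall back to
# the documented default.
CLUSTER_NAME=${CLUSTER_NAME:-demo-kafka-cluster}
CLUSTER_CLOUD=${CLUSTER_CLOUD:-aws}
CLUSTER_REGION=${CLUSTER_REGION:-us-west-2}
echo "$CLUSTER_NAME $CLUSTER_CLOUD $CLUSTER_REGION"
```

Running the script as CLUSTER_NAME=my-demo-cluster ./start.sh would then override only the cluster name while the other defaults still apply.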

Additional Resources