
Confluent Cloud Quick Start

This quick start shows you how to get up and running with Confluent Cloud, covering the basics: creating topics, and producing and consuming messages with a Kafka cluster in Confluent Cloud.

Confluent Cloud is a resilient, scalable streaming data service based on Apache Kafka, delivered as a fully managed service. Confluent Cloud has a web interface and a local command line interface (CLI). You can manage cluster resources, settings, and billing with the web interface, and use the Confluent Cloud CLI to create and manage Kafka topics.

For more information about Confluent Cloud, see the Confluent Cloud documentation.


Step 1: Create Kafka Cluster in Confluent Cloud


This step is for Confluent Cloud Professional users only. Confluent Cloud Enterprise users can skip to Step 2: Install and Configure the Confluent Cloud CLI.

  1. Log in to Confluent Cloud.

  2. Click Create cluster.

  3. Specify a cluster name, choose a cloud provider, and click Continue. Optionally, you can specify read and write throughput, storage, region, and durability.

  4. Confirm your cluster subscription details, payment information, and click Save and launch cluster.


Step 2: Install and Configure the Confluent Cloud CLI

After you have a working Kafka cluster in Confluent Cloud, you can use the Confluent Cloud command line tool to interact with your cluster from your laptop. This quick start assumes you are configuring Confluent Cloud for Java clients. You can also use Confluent Cloud with librdkafka-based clients. For more information about installing the Confluent Cloud CLI, see Install the Confluent Cloud CLI.

  1. From the Management -> Clusters page, click the ellipsis (...) on the right-hand side of your cluster name and click Client config.

  2. Follow the on-screen Confluent Cloud installation instructions.
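
Following those instructions produces a client configuration file, which the Java examples in Step 4 expect at $HOME/.ccloud/config. As a rough sketch, the generated file is a standard Java properties file with this shape; the broker endpoint, API key, and API secret below are placeholders, not real values:

```properties
# Placeholder values -- Confluent Cloud generates the real ones for your cluster.
bootstrap.servers=<BROKER_ENDPOINT>:9092
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="<API_KEY>" password="<API_SECRET>";
```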


Step 3: Create Topics and Produce and Consume to Kafka

  1. Create a topic named page_visits with default options.

    $ ccloud topic create page_visits


    By default the Confluent Cloud CLI creates topics with a replication factor of 3.

  2. Optional: describe the page_visits topic.

    $ ccloud topic describe page_visits

    Your output should resemble:

    Topic:page_visits       PartitionCount:12       ReplicationFactor:3     Configs:min.insync.replicas=2
        Topic: page_visits  Partition: 0    Leader: 4       Replicas: 4,5,6 Isr: 4,5,6
        Topic: page_visits  Partition: 1    Leader: 5       Replicas: 5,6,7 Isr: 5,6,7
        Topic: page_visits  Partition: 2    Leader: 6       Replicas: 6,7,8 Isr: 6,7,8
        Topic: page_visits  Partition: 3    Leader: 7       Replicas: 7,8,9 Isr: 7,8,9
        Topic: page_visits  Partition: 4    Leader: 8       Replicas: 8,9,10        Isr: 8,9,10
        Topic: page_visits  Partition: 5    Leader: 9       Replicas: 9,10,11       Isr: 9,10,11
        Topic: page_visits  Partition: 6    Leader: 10      Replicas: 10,11,12      Isr: 10,11,12
        Topic: page_visits  Partition: 7    Leader: 11      Replicas: 11,12,13      Isr: 11,12,13
        Topic: page_visits  Partition: 8    Leader: 12      Replicas: 12,13,14      Isr: 12,13,14
        Topic: page_visits  Partition: 9    Leader: 13      Replicas: 13,14,15      Isr: 13,14,15
        Topic: page_visits  Partition: 10   Leader: 14      Replicas: 14,15,16      Isr: 14,15,16
        Topic: page_visits  Partition: 11   Leader: 15      Replicas: 15,16,17      Isr: 15,16,17
  3. Modify the page_visits topic to set a retention period of 259200000 milliseconds (3 days).

    $ ccloud topic alter page_visits --config="retention.ms=259200000"

    Your output should resemble:

    Topic configuration for "page_visits" altered.
  4. Produce items into the page_visits topic.

    1. Run this command.

      $ ccloud produce -t page_visits
    2. Enter a few lines of text in your terminal, pressing Enter after each line, then press Ctrl + C to exit.

  5. Consume items from the page_visits topic. Press Ctrl + C to exit.

    $ ccloud consume -b -t page_visits

    Your output should show the items that you entered in the production step, followed by a summary line:

    Processed a total of 3 messages.

    The order of the consumed messages does not match the order in which they were produced. This is because the producer spread them over the 12 partitions in the page_visits topic and the consumer reads from all 12 partitions in parallel.
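
Why the order differs can be sketched with a simplified version of keyed partitioning. This is an illustration, not the Kafka producer's actual code: the real default partitioner hashes the serialized key with murmur2, and records without keys are spread across partitions round-robin.

```java
public class PartitionSketch {
    // Simplified stand-in for Kafka's default partitioner: records with the
    // same key always land in the same partition, so ordering is guaranteed
    // only within a partition, not across the whole topic.
    // (String.hashCode() is used here purely for illustration.)
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 12; // page_visits was created with 12 partitions

        // Two records with the same key go to the same partition...
        System.out.println(partitionFor("user-42", partitions));
        System.out.println(partitionFor("user-42", partitions));

        // ...but different keys can land in different partitions, and a
        // consumer reading all 12 partitions in parallel interleaves them
        // in arbitrary order.
        System.out.println(partitionFor("user-7", partitions));
    }
}
```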

Step 4: Run Java Examples

In this step you clone the Examples repository from GitHub and run Confluent Cloud Java examples. The examples repository contains demo applications and code examples for Confluent Platform and Apache Kafka.

  1. Clone the Confluent Cloud examples repository from GitHub.

    $ git clone
  2. Navigate to the /examples/ccloud/java-clients directory.

  3. Build the client examples.

    $ mvn clean package
  4. Run the producer.

    $ mvn exec:java -Dexec.mainClass="io.confluent.examples.clients.ProducerExample" \
      -Dexec.args="$HOME/.ccloud/config page_visits 10"
  5. Run the Kafka consumer application to read the records that were just published to the Kafka cluster, and to display the records in the console.

    1. Build the client examples, if you have not already done so.

      $ mvn clean package
    2. Run the consumer.

      $ mvn exec:java -Dexec.mainClass="io.confluent.examples.clients.ConsumerExample" \
        -Dexec.args="$HOME/.ccloud/config page_visits"

      Press Ctrl + C to stop.

  6. Delete your page_visits topic.


    Use this command carefully as data loss can occur.

    $ ccloud topic delete page_visits

    Your output should resemble:

    Topic "page_visits" marked for deletion.
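
The Java examples above receive the client configuration file as their first command-line argument ($HOME/.ccloud/config). Because that file is a standard java.util.Properties file, loading it needs no Kafka dependencies. A stdlib-only sketch of that first step (the class name and printed key are illustrative, not the repository's actual code):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class LoadCcloudConfig {
    // Loads a client configuration file such as $HOME/.ccloud/config.
    // The producer and consumer examples receive this path as their first
    // command-line argument and build their Kafka client settings from it.
    static Properties load(String path) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        Properties props = load(args[0]);
        // bootstrap.servers is one of the keys the generated config contains.
        System.out.println("bootstrap.servers = " + props.getProperty("bootstrap.servers"));
    }
}
```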

Next Steps