Quick Start for Confluent Cloud

Confluent Cloud is a resilient, scalable streaming data service based on Apache Kafka®, delivered as a fully managed service. It provides a web interface called the Cloud Console, the Confluent CLI, and REST APIs. Use the Cloud Console to manage cluster resources, settings, and billing, and use the Confluent CLI and REST APIs to create and manage Kafka topics and more.
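
For example, once the Confluent CLI is installed, you can sign in and inspect your account from a terminal. This is a minimal sketch; the output depends on your organization:

    # Authenticate with your Confluent Cloud credentials
    confluent login

    # List the environments in your organization
    confluent environment list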

Get started for free

Sign up for a Confluent Cloud trial and get $400 of free credit.

This quick start gets you up and running with Confluent Cloud using a Basic Kafka cluster.

  • The first section shows how to use Confluent Cloud to create topics, and produce and consume data to and from the cluster.
  • The second section walks you through how to use Confluent Cloud for Apache Flink® to run queries on the data using SQL syntax.

Prerequisites

The quick start workflows assume you already have a working Confluent Cloud environment. Stream Governance is enabled automatically when an environment is created, so this quick start requires no additional governance setup. To learn more about Stream Governance packages, features, and environment setup workflows, see Stream Governance Packages, Features, and Limits in Confluent Cloud.

Section 1: Create a cluster and add a topic

Follow the steps in this section to set up a Kafka cluster on Confluent Cloud and produce data to Kafka topics on the cluster.

Step 1: Create a Kafka cluster in Confluent Cloud

In this step, you create and launch a basic Kafka cluster inside your default environment.

  1. Sign in to Confluent Cloud at https://confluent.cloud.

  2. Click Add cluster.

  3. On the Create cluster page, for the Basic cluster, select Begin configuration.

    Screenshot of Confluent Cloud showing the Create cluster page

    This example creates a Basic Kafka cluster, which supports single zone availability. For information about other cluster types, see Kafka Cluster Types in Confluent Cloud.

  4. On the Region/zones page, choose a cloud provider and region, and select a single availability zone.

    Screenshot of Confluent Cloud showing the Create Cluster workflow
  5. Select Continue.

    Note

    If you haven’t set up a payment method, the Set payment page appears. Enter a payment method and select Review, or select Skip payment.

  6. Specify a cluster name, review the configuration and cost information, and then select Launch cluster.

    Screenshot of Confluent Cloud showing the Create Cluster workflow

Depending on the cloud provider and other settings, provisioning may take a few minutes. After the cluster is provisioned, the Cluster Overview page displays.

Screenshot of Confluent Cloud showing Cluster Overview page

Now you can get started configuring apps and data on your new cluster.
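
If you prefer a terminal, a comparable cluster can be created with the Confluent CLI. This is a hedged sketch: the cluster name, cloud, and region are example values, and the cluster ID (lkc-...) comes from the command output:

    # Create a Basic cluster named "quickstart" on AWS in us-east-1
    confluent kafka cluster create quickstart --cloud aws --region us-east-1 --type basic

    # Make the new cluster the active one for later commands (substitute your ID)
    confluent kafka cluster use lkc-xxxxxx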

Step 2: Create a Kafka topic

In this step, you create a users Kafka topic by using the Cloud Console. A topic is a unit of organization for a cluster, and is essentially an append-only log. For more information about topics, see What is Apache Kafka.

  1. From the navigation menu, click Topics, and then click Create topic.

    Screenshot of Confluent Cloud showing the Create topic page
  2. In the Topic name field, type “users” and then select Create with defaults.

    Topic page in Confluent Cloud showing a newly created topic

The users topic is created on the Kafka cluster and is available for use by producers and consumers.
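
The same topic can also be created with the Confluent CLI, assuming the cluster from Step 1 is set as the active cluster:

    # Create the users topic with default settings on the active cluster
    confluent kafka topic create users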

The success message may prompt you to take an action, but you should continue with Step 3: Create a sample producer.

Step 3: Create a sample producer

You can produce example data to your Kafka cluster by using the hosted Datagen Source Connector for Confluent Cloud.

  1. From the navigation menu, select Connectors.

    To go directly to the Connectors page, open https://confluent.cloud/go/connectors.

  2. In the Search box, type “datagen”.

  3. From the search results, select the Datagen Source connector.

    Screenshot that shows searching for the datagen connector

    Tip

    If you see the Launch Sample Data box, select Additional configuration.

  4. On the Topic selection pane, select the users topic you created in the previous section and then select Continue.

  5. In the Kafka credentials pane, leave Global access selected, and click Generate API key & download. This creates an API key and secret that allows the connector to access your cluster, and downloads the key and secret to your computer.

    The key and secret are required for the connector and also for the Confluent CLI and ksqlDB CLI to access your cluster.

    Note

    An API key and associated secret apply to the active Kafka cluster. If you add a new cluster, you must create a new API key for producers and consumers on the new cluster. For more information, see Use API Keys to Authenticate to Confluent Cloud. A CLI sketch for creating a key appears after these steps.

  6. Enter “users” as the description for the key, and click Continue.

  7. On the Configuration page, select JSON_SR for the output record value format, select Users for the template, and then click Continue.

  8. For Connector sizing, leave the slider at the default of 1 task and click Continue.

  9. On the Review and launch page, select the text in the Connector name box and replace it with “DatagenSourceConnector_users”.

  10. Click Continue to start the connector.

    The status of your new connector should read Provisioning, which lasts for a few seconds. When the status changes to Running, your connector is producing data to the users topic.

    Screenshot of Confluent Cloud showing a running Datagen Source Connector
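
The API key and connector steps also have CLI equivalents. The following is a sketch only: the resource ID and file name are placeholders, and the example configuration mirrors the choices made above (Datagen Source, the users topic, JSON_SR output, the Users template, one task):

    # Create an API key scoped to your cluster (substitute your cluster ID)
    confluent api-key create --resource lkc-xxxxxx

    # datagen-users.json -- an example connector configuration:
    # {
    #   "name": "DatagenSourceConnector_users",
    #   "connector.class": "DatagenSource",
    #   "kafka.api.key": "<API_KEY>",
    #   "kafka.api.secret": "<API_SECRET>",
    #   "kafka.topic": "users",
    #   "output.data.format": "JSON_SR",
    #   "quickstart": "USERS",
    #   "tasks.max": "1"
    # }

    # Provision the connector from the configuration file
    confluent connect cluster create --config-file datagen-users.json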

Step 4: View messages

Your new users topic is now receiving messages. Use the Cloud Console to see the data.

  1. From the navigation menu, select Topics to show the list of topics in your cluster.

    Screenshot of Confluent Cloud showing the Topics page
  2. Select the users topic.

  3. On the users topic detail page, select the Messages tab to view the messages being produced to the topic. A CLI alternative is sketched after these steps.

    Screenshot of Confluent Cloud showing the Messages page
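
You can also tail the topic from a terminal with the Confluent CLI. This is a hedged sketch: flags can vary by CLI version, and reading Schema Registry formats like JSON_SR may require Schema Registry credentials:

    # Read messages from the beginning of the users topic
    confluent kafka topic consume users --from-beginning --value-format jsonschema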

Step 5: Inspect the data stream

Use Stream Lineage to track data movement through your cluster.

  1. Click Stream Lineage in the navigation menu.

  2. Click the node labeled DatagenSourceConnector_users, which is the connector that you created in Step 3. The details view opens, showing graphs for total production and other data.

    Screenshot of Confluent Cloud showing details for a source connector
  3. Dismiss the details view and select the topic labeled users. The details view opens, showing graphs for total throughput and other data.

    Screenshot of Confluent Cloud showing details for a topic
  4. Click the arrow on the left border of the canvas to open the navigation menu.

Step 6: Delete resources (optional)

Skip this step if you plan to move on to Section 2: Query streaming data with Flink SQL and learn how to use Flink SQL statements to query your data.

If you don’t plan to complete Section 2 and you’re ready to quit the Quick Start, delete the resources you created to avoid unexpected charges to your account. CLI equivalents for these steps are sketched after the list.

  • Delete the connector:
    1. From the navigation menu, select Connectors.
    2. Click DatagenSourceConnector_users and choose the Settings tab.
    3. Click Delete connector, enter the connector name (DatagenSourceConnector_users), and click Confirm.
  • Delete the topic:
    1. From the navigation menu, click Topics, select the users topic, and then choose the Configuration tab.
    2. Click Delete topic, enter the topic name (users), and select Continue.
  • Delete the cluster:
    1. From the navigation menu, select Cluster Settings.
    2. Click Delete cluster, enter the cluster name, and click Continue.
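
The same cleanup can be scripted with the Confluent CLI. The IDs below are placeholders; the list commands show the real ones:

    # Find and delete the connector (connector IDs look like lcc-...)
    confluent connect cluster list
    confluent connect cluster delete lcc-xxxxxx

    # Delete the topic on the active cluster
    confluent kafka topic delete users

    # Find and delete the cluster (cluster IDs look like lkc-...)
    confluent kafka cluster list
    confluent kafka cluster delete lkc-xxxxxx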