Quick Start for Apache Kafka using Confluent Cloud¶
This quick start gets you up and running with a Basic cluster in Confluent Cloud. It shows how to use Confluent Cloud to create topics and to produce and consume messages on an Apache Kafka® cluster. The quick start introduces both the web UI and the Confluent Cloud CLI for managing clusters and topics in Confluent Cloud; the two can be used interchangeably for most tasks.
Follow these steps to set up a Kafka cluster on Confluent Cloud and produce data to Kafka topics on the cluster.
Confluent Cloud is a resilient, scalable streaming data service based on Apache Kafka®, delivered as a fully managed service. Confluent Cloud provides a web interface and a local command-line interface. You can manage cluster resources, settings, and billing with the web interface, and you can use the Confluent Cloud CLI to create and manage Kafka topics. Sign up for Confluent Cloud to get started.
For more information about Confluent Cloud, see the Confluent Cloud documentation.
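If you prefer the command line, a typical first session with the Confluent Cloud CLI looks like the following sketch. The commands assume the `ccloud` binary; newer CLI releases may ship the same functionality under a different binary name, so check your installed version's help output.

```bash
# Log in with your Confluent Cloud credentials (prompts for email and password).
ccloud login

# List the Kafka clusters available in the current environment.
ccloud kafka cluster list
```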
Step 1: Create a Kafka Cluster in Confluent Cloud¶
This step is for Confluent Cloud users only. Confluent Cloud Enterprise users can skip to Install the Confluent Cloud CLI.
Sign in to Confluent Cloud at https://confluent.cloud.
Click Create cluster.
Specify a cluster name, choose a cloud provider and region, select the Basic cluster type, and click Launch.
This quick start creates a Basic cluster, which supports single-zone availability only. For information about other cluster types, including Standard and Dedicated, see Confluent Cloud Cluster Types.
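As a rough CLI equivalent of this step, you can create a cluster from the command line. The cluster name, provider, and region below are example values, and depending on your CLI version you may need an additional flag to select the Basic cluster type:

```bash
# Create a cluster on an example provider and region.
ccloud kafka cluster create quickstart-cluster \
  --cloud gcp \
  --region us-central1

# Make the new cluster (ID lkc-xxxxxx in the command output) the active one.
ccloud kafka cluster use lkc-xxxxxx
```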
Step 2: Create a Topic¶
In this step, you create the users topic by using the Confluent Cloud UI.
You can also create topics by using the Confluent Cloud CLI, as sketched after these steps.
In the navigation bar, click Topics.
Click Add a topic, and in the New Topic page, enter “users” for the topic name.
Click Create with defaults to create the users topic.
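As a CLI alternative, you can create the same topic against the active cluster. This sketch assumes you have already logged in and selected a cluster:

```bash
# Create the users topic with default settings on the active cluster.
ccloud kafka topic create users

# Confirm that the topic exists.
ccloud kafka topic list
```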
Step 3: Create a Sample Producer¶
You can produce example data to your Kafka cluster by using the hosted Datagen Source Connector for Confluent Cloud.
In the navigation bar, click Connectors.
Click Add connector to open the Connectors menu.
Find the Datagen Source tile and click Select. The Add Datagen Source Connector form opens.
The connector requires an API key and secret to access your cluster. In the Kafka Cluster Credentials section, click Generate Kafka API key & secret.
Copy the key and secret to a local file and check I have saved my API key and secret and am ready to continue. The key and secret are also required for the Confluent Cloud CLI and ksqlDB CLI to access your cluster.
An API key and associated secret apply to the active Kafka cluster. If you add a new cluster, you must create a new API key for producers and consumers on the new Kafka cluster. For more information, see Confluent Cloud API Keys.
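If you also want a key for CLI access, you can create and register one from the command line. This is a sketch; `lkc-xxxxxx`, `<API_KEY>`, and `<API_SECRET>` are placeholders for your own cluster ID and credentials:

```bash
# Create an API key and secret scoped to your Kafka cluster.
ccloud api-key create --resource lkc-xxxxxx

# Store the secret locally and make the key active for later produce/consume commands.
ccloud api-key store <API_KEY> <API_SECRET> --resource lkc-xxxxxx
ccloud api-key use <API_KEY> --resource lkc-xxxxxx
```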
Fill in the following fields to configure your connector.
| Field | Value |
| --- | --- |
| Name | Enter “DatagenSourceConnector_users” |
| Which topic do you want to send data to? | Select users |
| Output Messages | Select JSON |
| Quickstart | Select USERS |
| Max interval between messages | Enter “1000” for a one-second interval |
| Number of tasks for this connector | Enter “1” |
At the bottom of the form, click Continue to review the details for your connector, and click Launch to start it. On the Connectors page, the status of your new connector displays Provisioning, which lasts for a few seconds. When the status changes to Running, your connector is producing data to the users topic.
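For reference, a connector like this can also be created from the CLI with a JSON configuration file. Treat the sketch below as illustrative: the exact property names and accepted values depend on the connector plugin and CLI version, so verify them against the Datagen Source documentation before relying on them.

```bash
# Illustrative config; property names may differ by plugin version.
cat > datagen-users.json <<'EOF'
{
  "name": "DatagenSourceConnector_users",
  "connector.class": "DatagenSource",
  "kafka.api.key": "<API_KEY>",
  "kafka.api.secret": "<API_SECRET>",
  "kafka.topic": "users",
  "output.data.format": "JSON",
  "quickstart": "USERS",
  "max.interval": "1000",
  "tasks.max": "1"
}
EOF

# Submit the connector config to Confluent Cloud.
ccloud connector create --config datagen-users.json
```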
Step 4: Consume Messages¶
Click Topics and click the users topic name.
Click the Messages tab in the topics page in the Confluent Cloud UI to view the messages being produced. The message viewer shows messages produced since the page was loaded, but it doesn’t show a historical view.
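The CLI offers an equivalent view, and unlike the message viewer it can replay history from the start of the topic. This assumes an API key has been stored for the cluster, as in Step 3:

```bash
# Read the users topic from the beginning; press Ctrl-C to stop.
ccloud kafka topic consume users --from-beginning
```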
Step 5: Inspect Data Flow¶
Track the movement of data through your cluster by using the Data Flow page, where you can see sources, sinks, and topics, and monitor messages as they move from one to another.
In the navigation bar, click Data flow. The topology of topics on your cluster is displayed.
Click the node labeled ..ctor-producer-lcc-, which is the Datagen connector that you created in Step 3. Click Inspect to open the details view, which shows graphs for total production and other data.
The lcc substring is an acronym for “logical Connect cluster”.
Click the node labeled users and click Inspect. The details view opens, showing graphs for total throughput and other data.
Click Show partitions to view details about consumption on each partition for the users topic.
Next Steps¶
- Create streaming queries in Confluent Cloud ksqlDB
- Quick Start for Schema Management on Confluent Cloud
- Connect Clients to Confluent Cloud
- Connect External Systems to Confluent Cloud
- Connect your components and data to Confluent Cloud
- Configure Multi-Node Environment
- Try out the Confluent Cloud Demos and Examples