Confluent Cloud Quick Start¶
This quick start shows you how to get up and running with Confluent Cloud. It covers the basics of using Confluent Cloud, including creating topics and producing and consuming messages with an Apache Kafka® cluster in Confluent Cloud.
Confluent Cloud is a resilient, scalable streaming data service based on Apache Kafka®, delivered as a fully managed service. Confluent Cloud has a web interface and a local command line interface (CLI). You can manage cluster resources, settings, and billing with the web interface, and use the Confluent Cloud CLI to create and manage Kafka topics.
For more information about Confluent Cloud, see the Confluent Cloud documentation.
- Prerequisites
- Access to Confluent Cloud
- Confluent Cloud Limits and Supported Features
- Maven to compile the client Java code
Step 1: Create Kafka Cluster in Confluent Cloud¶
Important
This step is for Confluent Cloud users only. Confluent Cloud Enterprise users can skip to Step 2: Install and Configure the Confluent Cloud CLI.
Log into Confluent Cloud at https://confluent.cloud.
Click Create cluster.
Specify a cluster name, choose a cloud provider, and click Continue. Optionally, you can specify read and write throughput, storage, region, and durability.
Confirm your cluster subscription details and payment information, and click Save and launch cluster.
Step 2: Install and Configure the Confluent Cloud CLI¶
After you have a working Kafka cluster in Confluent Cloud, you can use the Confluent Cloud command line tool to interact with your cluster from your laptop. This quick start assumes you are configuring Confluent Cloud for Java clients. You can also use Confluent Cloud with librdkafka-based clients. For more information about installing the Confluent Cloud CLI, see Install the Confluent Cloud CLI.
From the Environment overview page, click your cluster name.
Click Data In/Out in the sidebar and click CLI. Follow the on-screen Confluent Cloud CLI installation instructions.
Step 3: Configure Confluent Cloud Schema Registry¶
Important
- Confluent Cloud Schema Registry is currently available as a preview. For more information, see Confluent Cloud Schema Registry Preview.
- Your VPC must be able to communicate with the Confluent Cloud Schema Registry public internet endpoint. For more information, see Using Confluent Cloud Schema Registry in a VPC Peered Environment.
Enable Schema Registry for your environment¶
From the Environment Overview page, select SCHEMA REGISTRY and follow the on-screen instructions to enable Schema Registry for your environment.
Configure the Confluent Cloud CLI for Schema Registry¶
From the Environment Overview page, click CLUSTERS and select your cluster.
Tip
You can view Confluent Cloud Schema Registry usage and API access information from the Environment Overview -> SCHEMA REGISTRY page.
Select Data In/Out -> Clients and then select the JAVA tab. Follow the on-screen instructions to create the Schema Registry-specific Java configuration, including API key pairs for Schema Registry and your Kafka cluster.
Copy this information and paste it into your Confluent Cloud CLI configuration file (~/.ccloud/config). Your configuration file should resemble this:

cat ~/.ccloud/config
ssl.endpoint.identification.algorithm=https
sasl.mechanism=PLAIN
request.timeout.ms=20000
bootstrap.servers=<bootstrap-server-url>
retry.backoff.ms=500
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<kafka-api-key>" password="<kafka-api-secret>";
security.protocol=SASL_SSL

// Schema Registry specific settings
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info=<schema-registry-api-key>:<schema-registry-api-secret>
schema.registry.url=<schema-registry-url>

// Enable Avro serializer with Schema Registry (optional)
key.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
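The Java examples later in this quick start load this file into a java.util.Properties object and pass it directly to the Kafka client constructors. Here is a minimal sketch of that pattern; CcloudConfigLoader is a hypothetical helper name (not part of the examples repository), and the // lines are stripped first because java.util.Properties only treats # and ! as comment markers:

import java.io.IOException;
import java.io.StringReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;
import java.util.stream.Collectors;

public class CcloudConfigLoader {
    // Loads ~/.ccloud/config into a Properties object that can be passed
    // to a KafkaProducer, KafkaConsumer, or AdminClient constructor.
    public static Properties load(String path) throws IOException {
        String cleaned = Files.readAllLines(Paths.get(path)).stream()
                .filter(line -> !line.trim().startsWith("//")) // drop '//' comment lines
                .collect(Collectors.joining("\n"));
        Properties props = new Properties();
        props.load(new StringReader(cleaned));
        return props;
    }
}

The later sketches in this quick start reuse this helper wherever a Properties object is needed.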
Tip
If Schema Registry credentials are not properly configured, you will get error messages when you run a producer from the Java client examples. The messages will indicate a problem registering the schema even if the schema is valid. Make sure the Schema Registry credentials and URL are correct and properly formatted to match the ~/.ccloud/config example.

Optional: Verify that your Schema Registry credentials are properly configured, where the Schema Registry API key (<schema-registry-api-key>), API secret (<schema-registry-api-secret>), and endpoint (<schema-registry-url>) are specified. Run this command to authenticate with Schema Registry and list the registered subjects.
curl -u <schema-registry-api-key>:<schema-registry-api-secret> \
     <schema-registry-url>/subjects
If no subjects are created, your output will be empty ([]). If you have subjects, your output should resemble:

["test2-value"]
Here is an example command:
curl -u schemaregistry5000:alsdkjaslkdjqwemnoilbkjerlkqj123123opwrqpru \
     https://psrc-lq2dm.us-east-2.aws.confluent.cloud/subjects
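If you prefer to run the same check from Java instead of curl, here is a minimal sketch using only the JDK. The placeholder key, secret, and URL are the same as above; Schema Registry API keys are sent as HTTP Basic credentials:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ListSubjects {
    public static void main(String[] args) throws Exception {
        String key = "<schema-registry-api-key>";
        String secret = "<schema-registry-api-secret>";
        String endpoint = "<schema-registry-url>";

        HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint + "/subjects").openConnection();
        // Schema Registry authenticates API keys with HTTP Basic auth.
        String auth = Base64.getEncoder()
                .encodeToString((key + ":" + secret).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);

        try (BufferedReader in =
                new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine()); // e.g. [] or ["test2-value"]
        }
    }
}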
Step 4: Create Topics and Produce and Consume to Kafka¶
Create a topic named my_topic with default options.

ccloud topic create my_topic
Tip
By default the Confluent Cloud CLI creates topics with a replication factor of 3.
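The CLI is the path this quick start takes, but you can also create topics programmatically. Here is a minimal sketch using the Kafka AdminClient, assuming kafka-clients is on the classpath; CcloudConfigLoader is the hypothetical helper sketched in Step 3, and the partition count and replication factor shown match the describe output below:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = CcloudConfigLoader.load(
                System.getProperty("user.home") + "/.ccloud/config");

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions, replication factor 3
            NewTopic topic = new NewTopic("my_topic", 12, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}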
Optional: Describe the my_topic topic.

ccloud topic describe my_topic
Your output should resemble:
Topic:my_topic  PartitionCount:12  ReplicationFactor:3  Configs:message.format.version=1.0-IV0,max.message.bytes=2097164,min.insync.replicas=2
    Topic: my_topic  Partition: 0   Leader: 3  Replicas: 3,1,2  Isr: 3,1,2
    Topic: my_topic  Partition: 1   Leader: 0  Replicas: 0,2,3  Isr: 0,2,3
    Topic: my_topic  Partition: 2   Leader: 1  Replicas: 1,3,0  Isr: 1,3,0
    Topic: my_topic  Partition: 3   Leader: 2  Replicas: 2,0,1  Isr: 2,0,1
    Topic: my_topic  Partition: 4   Leader: 3  Replicas: 3,2,0  Isr: 3,2,0
    Topic: my_topic  Partition: 5   Leader: 0  Replicas: 0,3,1  Isr: 0,3,1
    Topic: my_topic  Partition: 6   Leader: 1  Replicas: 1,0,2  Isr: 1,0,2
    Topic: my_topic  Partition: 7   Leader: 2  Replicas: 2,1,3  Isr: 2,1,3
    Topic: my_topic  Partition: 8   Leader: 3  Replicas: 3,0,1  Isr: 3,0,1
    Topic: my_topic  Partition: 9   Leader: 0  Replicas: 0,1,2  Isr: 0,1,2
    Topic: my_topic  Partition: 10  Leader: 1  Replicas: 1,2,3  Isr: 1,2,3
    Topic: my_topic  Partition: 11  Leader: 2  Replicas: 2,3,0  Isr: 2,3,0
Modify the my_topic topic to have a retention period of 3 days (259200000 milliseconds).

ccloud topic alter my_topic --config="retention.ms=259200000"
Your output should resemble:
Topic configuration for "my_topic" altered.
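The same retention change can be made programmatically. Here is a minimal sketch using the AdminClient alterConfigs API, again reusing the hypothetical CcloudConfigLoader from Step 3:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AlterRetention {
    public static void main(String[] args) throws Exception {
        Properties props = CcloudConfigLoader.load(
                System.getProperty("user.home") + "/.ccloud/config");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource resource =
                    new ConfigResource(ConfigResource.Type.TOPIC, "my_topic");
            // 259200000 ms = 3 days, matching the CLI command above.
            Config retention = new Config(Collections.singletonList(
                    new ConfigEntry("retention.ms", "259200000")));
            admin.alterConfigs(Collections.singletonMap(resource, retention))
                    .all().get();
        }
    }
}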
Produce records to the my_topic topic.

ccloud produce --topic my_topic
You can type messages in as standard input. By default they are newline separated. Press Ctrl + C to exit.

foo
bar
baz
^C
Consume items from the my_topic topic and press Ctrl + C to exit.

ccloud consume -b -t my_topic
Your output should show the items that you entered in ccloud produce:

baz
foo
bar
^C
Processed a total of 3 messages.
The order of the consumed messages does not match the order in which they were produced. This is because the producer spread them over the 12 partitions of the my_topic topic and the consumer reads from all 12 partitions in parallel.
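Kafka guarantees ordering only within a single partition. If ordering matters for your application, produce records with a key: the default partitioner hashes the key, so records with the same key always land on the same partition and are consumed in the order they were produced. Here is a minimal sketch; the String serializers override the Avro serializers set in the config file, and CcloudConfigLoader is the hypothetical helper from Step 3:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedProduce {
    public static void main(String[] args) throws Exception {
        Properties props = CcloudConfigLoader.load(
                System.getProperty("user.home") + "/.ccloud/config");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // All three records share the key "k1", so they hash to the same
            // partition and a consumer sees them in exactly this order.
            producer.send(new ProducerRecord<>("my_topic", "k1", "foo"));
            producer.send(new ProducerRecord<>("my_topic", "k1", "bar"));
            producer.send(new ProducerRecord<>("my_topic", "k1", "baz"));
        }
    }
}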
Step 5: Run Java Examples¶
In this step you clone the Examples repository from GitHub and run Confluent Cloud Java examples with Avro. The examples repository contains demo applications and code examples for Confluent Platform and Kafka.
Clone the Confluent Cloud examples repository from GitHub and navigate to the Confluent Cloud Java directory.
git clone https://github.com/confluentinc/examples.git
cd examples/clients/cloud/java
Build the client example.
mvn clean package
Run the producer, passing arguments that specify the path to your Confluent Cloud configuration file and the topic name.

mvn exec:java -Dexec.mainClass="io.confluent.examples.clients.cloud.ProducerAvroExample" \
    -Dexec.args="$HOME/.ccloud/config my_topic_avro"
Tip
If Schema Registry credentials are not properly configured, you will get error messages when you run a producer from the Java client examples. The messages will indicate a problem registering the schema even if the schema is valid. Make sure the Schema Registry credentials and URL are correct and properly formatted to match the ~/.ccloud/config example.

Run the Kafka consumer application to read the records that were just published to the Kafka cluster and display them in the console.
Rebuild the example.
mvn clean package
Run the consumer, passing arguments that specify the path to your Confluent Cloud configuration file and the topic name.

mvn exec:java -Dexec.mainClass="io.confluent.examples.clients.cloud.ConsumerAvroExample" \
    -Dexec.args="$HOME/.ccloud/config my_topic_avro"
Press Ctrl+C to stop.
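For orientation, here is a minimal sketch of what an Avro consumer like ConsumerAvroExample does. This is an illustrative reduction rather than the example's actual source; it assumes kafka-clients 2.0 or later for poll(Duration) and reuses the hypothetical CcloudConfigLoader from Step 3:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AvroConsumerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = CcloudConfigLoader.load(
                System.getProperty("user.home") + "/.ccloud/config");
        // The config file sets producer serializers; a consumer needs
        // deserializers and a consumer group instead. The Avro deserializer
        // fetches schemas from the Schema Registry URL in the same file.
        props.put("group.id", "quickstart-consumer");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
                "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("value.deserializer",
                "io.confluent.kafka.serializers.KafkaAvroDeserializer");

        try (KafkaConsumer<Object, Object> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my_topic_avro"));
            while (true) { // press Ctrl+C to stop
                ConsumerRecords<Object, Object> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<Object, Object> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}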
View the schema information registered in Confluent Cloud Schema Registry, where the Schema Registry API key (<schema-registry-api-key>), API secret (<schema-registry-api-secret>), and endpoint (<schema-registry-url>) are specified. This command retrieves version 1 of the schema registered under the subject my_topic_avro-value.

curl -u <schema-registry-api-key>:<schema-registry-api-secret> \
     <schema-registry-url>/subjects/my_topic_avro-value/versions/1
Your output should resemble:
{"subject":"my_topic_avro","version":1,"id":100001,"schema":"{\"name\":\"io.confluent.examples.clients.cloud.DataRecordAvro\",\"type\":\"record\",\"fields\":[{\"name\":\"count\",\"type\":\"long\"}]}"}
View the list of topics.
ccloud topic list
Your output should show:
my_topic
my_topic_avro
Delete the topics my_topic and my_topic_avro.

Caution
Use this command carefully as data loss can occur.
ccloud topic delete my_topic
ccloud topic delete my_topic_avro
Your output should resemble:
Topic "my_topic" marked for deletion. Topic "my_topic_avro" marked for deletion.
Next Steps¶
- Connect your components and data to Confluent Cloud
- Configure Multi-Node Environment
- Learn more about Confluent Cloud in the documentation