Confluent Cloud Demos

Confluent Cloud is a resilient, scalable streaming data service based on Kafka, delivered as a fully managed service. It provides a web interface and a local command-line interface (CLI) that you can use to manage cluster resources, Kafka topics, Schema Registry, and other services.
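
For example, with the Confluent Cloud CLI (the v1 ccloud CLI that this documentation version targets) you can log in and inspect your clusters from the command line. A minimal sketch:

    # Log in to Confluent Cloud with your account credentials
    ccloud login

    # List the Kafka clusters in the current environment
    ccloud kafka cluster list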

This page describes a few resources to help you build and validate your solutions on Confluent Cloud.

Confluent Cloud Promo Code

The first 20 users to sign up for Confluent Cloud and use the promo code C50INTEG will receive an additional $50 of free usage (details).

Caution

All the following demos and examples use real Confluent Cloud resources. They create Confluent Cloud environments, clusters, topics, ACLs, service accounts, ksqlDB applications, and potentially other Confluent Cloud resources that are billable. To avoid unexpected charges, carefully evaluate the cost of resources before launching any demo and manually verify that all Confluent Cloud resources are destroyed after you are done.

Demos

Confluent Cloud Quickstart

The Confluent Cloud Quickstart is an automated version of the Confluent Platform Quickstart that runs in Confluent Cloud.

ccloud-stack Utility

The ccloud-stack Utility for Confluent Cloud creates a stack of fully managed services in Confluent Cloud. Executed with a single command, it is a quick way to create fully managed components in Confluent Cloud, which you can then use for learning and building other demos. Do not use this utility in a production environment. The script uses the Confluent Cloud CLI to dynamically do the following in Confluent Cloud (a sample invocation follows the list):

  • Create a new environment.
  • Create a new service account.
  • Create a new Kafka cluster and associated credentials.
  • Enable Confluent Cloud Schema Registry and associated credentials.
  • Create a new ksqlDB app and associated credentials.
  • Create wildcard ACLs for the service account.
  • Generate a local configuration file with all of the above connection information, useful for other demos and automation.
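
A minimal sketch of running the utility, assuming you have cloned the examples repository and are logged in to Confluent Cloud:

    # Create the full ccloud-stack; one of the outputs is a local configuration file
    # of the form stack-configs/java-service-account-<SERVICE_ACCOUNT_ID>.config
    ./ccloud_stack_create.sh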

Client Code Examples

If you are looking for code examples of producers writing to and consumers reading from Confluent Cloud, or of producers and consumers using Avro with Confluent Schema Registry, refer to the client examples, which are written in various programming languages.

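Whatever the language, every client needs the same handful of connection properties to reach Confluent Cloud. A minimal sketch using the Kafka command line tools (all angle-bracket values are placeholders):

    # Write the minimal client connection properties for Confluent Cloud
    cat > client.config <<EOF
    bootstrap.servers=<CCLOUD_BOOTSTRAP_SERVER>
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<CCLOUD_API_KEY>" password="<CCLOUD_API_SECRET>";
    EOF

    # Produce a test record to an existing topic with the console producer
    kafka-console-producer --broker-list <CCLOUD_BOOTSTRAP_SERVER> --producer.config client.config --topic test1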

Confluent Cloud CLI

The Tutorial: Confluent Cloud CLI is a fully scripted demo that shows users how to interact with Confluent Cloud using the Confluent Cloud CLI. It steps through the following workflow:

  • Create a new environment and specify it as the default.
  • Create a new Kafka cluster and specify it as the default.
  • Create a user key/secret pair and specify it as the default.
  • Produce and consume with Confluent Cloud CLI.
  • Create a service account key/secret pair.
  • Run a Java producer: before and after ACLs.
  • Run a Java producer: showcase a Prefix ACL.
  • Run Connect and kafka-connect-datagen connector with permissions.
  • Run a Java consumer: showcase a Wildcard ACL.
  • Delete the API key, service account, Kafka topics, Kafka cluster, environment, and the log files.
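
A few representative Confluent Cloud CLI commands from this workflow (resource names are illustrative):

    # Create a new environment and a new Kafka cluster
    ccloud environment create demo-env
    ccloud kafka cluster create demo-cluster --cloud aws --region us-west-2

    # Create an API key/secret pair for the cluster
    ccloud api-key create --resource <CLUSTER_ID>

    # Produce to and consume from a topic
    ccloud kafka topic produce test1
    ccloud kafka topic consume test1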

Cloud ETL

The cloud ETL demo showcases a cloud ETL solution built entirely from fully managed services on Confluent Cloud. Using the Confluent Cloud CLI, the demo creates a source connector that reads data from an AWS Kinesis stream into Confluent Cloud, a Confluent Cloud ksqlDB application that processes that data, and a sink connector that writes the output into cloud storage in the provider of your choice (GCP GCS, AWS S3, or Azure Blob).

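For instance, the demo's fully managed connectors are created from JSON configuration files via the Confluent Cloud CLI; a hedged sketch (the file name is illustrative, and its contents depend on your Kinesis stream and credentials):

    # Create a fully managed source connector from a JSON configuration file
    ccloud connector create --config kinesis-source.json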

Hybrid Cloud

The hybrid cloud demo and playbook showcase a hybrid Kafka deployment: one cluster is a self-managed cluster running locally, and the other is a Confluent Cloud cluster. Data streams into topics in both the local cluster and the Confluent Cloud cluster, and Replicator copies the on-prem data to Confluent Cloud so that stream processing can happen in the cloud.

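The idea behind the replication step, as a hedged sketch (these are standard Replicator property names, but the demo's actual configuration may differ; values are placeholders):

    # Replicator fragment: read from the local cluster, write to Confluent Cloud
    src.kafka.bootstrap.servers=localhost:9092
    dest.kafka.bootstrap.servers=<CCLOUD_BOOTSTRAP_SERVER>
    dest.kafka.security.protocol=SASL_SSL
    dest.kafka.sasl.mechanism=PLAIN
    dest.kafka.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<CCLOUD_API_KEY>" password="<CCLOUD_API_SECRET>";
    topic.whitelist=<TOPIC_TO_REPLICATE>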

Microservices in the Cloud

The microservices cloud demo showcases an order management workflow targeting Confluent Cloud. Microservices are deployed locally on Docker, and they are configured to use a Kafka cluster, ksqlDB, and Confluent Schema Registry in Confluent Cloud. Kafka Connect is also deployed locally on Docker, and it runs a SQL source connector to produce to Confluent Cloud and an Elasticsearch sink connector to consume from Confluent Cloud.

Confluent Operator with Cloud

The Kubernetes demo features a deployment of Confluent Platform on Google Kubernetes Engine (GKE) leveraging Confluent Operator and Replicator, highlighting a data replication strategy to Confluent Cloud. Upon running this demo, you will have a GKE-based Confluent Platform deployment with simulated data replicating to your Confluent Cloud cluster.

Build Your Own Cloud Demo

ccloud-stack Utility

As described under Demos above, the ccloud-stack Utility for Confluent Cloud creates, with a single command, a stack of fully managed services in Confluent Cloud: a new environment, a service account, a Kafka cluster and associated credentials, Confluent Cloud Schema Registry, a ksqlDB app, wildcard ACLs for the service account, and a local configuration file with all of the connection information for use by other demos and automation. It is a quick way to bootstrap fully managed components for learning and building other demos. Do not use this utility in a production environment.

Auto-generate Configurations to Connect to Confluent Cloud

The configuration generation script reads a Confluent Cloud configuration file and auto-generates delta configurations for all Confluent Platform components and clients. Use these per-component configurations when connecting the following Confluent Platform components and clients to Confluent Cloud (a sample fragment follows the list):

  • Confluent Platform Components:
    • Schema Registry
    • ksqlDB Data Generator
    • ksqlDB
    • Confluent Replicator
    • Confluent Control Center
    • Kafka Connect
    • Kafka connector
    • Kafka command line tools
  • Kafka Clients:
    • Java (Producer/Consumer)
    • Java (Streams)
    • Python
    • .NET
    • Go
    • Node.js
    • C++
  • OS:
    • ENV file
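
For example, the generated delta for a self-managed Schema Registry connecting to Confluent Cloud might resemble the following fragment (values are placeholders; the script produces the real file):

    # Schema Registry delta: point its internal Kafka store at Confluent Cloud
    kafkastore.bootstrap.servers=<CCLOUD_BOOTSTRAP_SERVER>
    kafkastore.security.protocol=SASL_SSL
    kafkastore.sasl.mechanism=PLAIN
    kafkastore.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<CCLOUD_API_KEY>" password="<CCLOUD_API_SECRET>";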

Self Managed Components to Confluent Cloud

This Docker-based environment can be used with Confluent Cloud. The docker-compose.yml file launches all Confluent Platform services (except for the Kafka brokers), runs them in containers on localhost, and automatically configures them to connect to Confluent Cloud. Using this as a foundation, you can then add any connectors or applications.

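For example, assuming the environment variables from the generated env file are exported in your shell, you can bring up all services, or just one (service names come from the docker-compose.yml file):

    # Bring up every Confluent Platform service defined in docker-compose.yml
    docker-compose up -d

    # Or bring up a single service, for example Kafka Connect
    docker-compose up -d connect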

Put It All Together

You can chain these utilities to build your own hybrid demos that span on-prem and Confluent Cloud, where some self-managed components run on-prem and fully-managed services run in Confluent Cloud.

For example, you may want an easy way to run a connector that is not yet available in Confluent Cloud. In this case, you can run a self-managed Kafka Connect worker and connector on-prem and connect them to your Confluent Cloud cluster. Or perhaps you want to build a Kafka demo in Confluent Cloud and run the REST Proxy client or Confluent Control Center against it.

You can build any demo with a mix of fully-managed services in Confluent Cloud and self-managed components on localhost, in a few easy steps.

  1. Create a ccloud-stack of fully managed services in Confluent Cloud. One of the outputs is a local configuration file with key-value pairs of the connection values required for Confluent Cloud. (If you have already provisioned your Confluent Cloud resources, you can skip this step.)

    ./ccloud_stack_create.sh
    
  2. Run the configuration generation script, passing in that local configuration file (created in previous step) as input. This script generates delta configuration files for all Confluent Platform components and clients, including information for bootstrap servers, endpoints, and credentials required to connect to Confluent Cloud.

    # stack-configs/java-service-account-<SERVICE_ACCOUNT_ID>.config is generated by step above
    ./ccloud-generate-cp-configs.sh stack-configs/java-service-account-<SERVICE_ACCOUNT_ID>.config
    

    One of the generated delta configuration files from this step is for environment variables, and it resembles this example, with credentials filled in.

    export BOOTSTRAP_SERVERS='<CCLOUD_BOOTSTRAP_SERVER>'
    export SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username\="<CCLOUD_API_KEY>" password\="<CCLOUD_API_SECRET>";'
    export SASL_JAAS_CONFIG_PROPERTY_FORMAT='org.apache.kafka.common.security.plain.PlainLoginModule required username="<CCLOUD_API_KEY>" password="<CCLOUD_API_SECRET>";'
    export REPLICATOR_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<CCLOUD_API_KEY>\" password=\"<CCLOUD_API_SECRET>\";'
    export BASIC_AUTH_CREDENTIALS_SOURCE=USER_INFO
    export SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO=<SCHEMA_REGISTRY_API_KEY>:<SCHEMA_REGISTRY_API_SECRET>
    export SCHEMA_REGISTRY_URL=https://<SCHEMA_REGISTRY_ENDPOINT>
    export CLOUD_KEY=<CCLOUD_API_KEY>
    export CLOUD_SECRET=<CCLOUD_API_SECRET>
    
  3. Source the above delta env file to export variables into the shell environment.

    # delta_configs/env.delta is generated by step above
    source delta_configs/env.delta
    
  4. Run the desired Confluent Platform services locally using this Docker-based example. The Docker Compose file launches Confluent Platform services on your localhost and uses environment variable substitution to populate the parameters with the connection values for your Confluent Cloud cluster, so that the services can connect to Confluent Cloud. If you want to run a single service, you can bring up just that service.

    docker-compose up -d <service>
    

    To run a self-managed connector locally that connects to Confluent Cloud, first add your desired connector to the base Kafka Connect Docker image as described in Add Connectors or Software, and then substitute that Docker image in your Docker Compose file.

  5. Refer to the library of bash functions for examples of how to interact with Confluent Cloud via the Confluent Cloud CLI.
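
    A minimal sketch of using the library, assuming the library file is ccloud_library.sh from the examples repository (function names and arguments are documented in the library source):

    # Make the library's helper bash functions available in the current shell
    source ./ccloud_library.sh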

Additional Resources