Build Your Own Apache Kafka® Demos¶
This page provides resources for you to build your own Apache Kafka® demo or test environment. You can run all the services with no pre-configured topics, connectors, data sources, or schemas. Once the services are running, you can connect to your clusters and provision topics, run Kafka producers or consumers, run ksqlDB or Kafka Streams applications, or load connectors.
Choose your Apache Kafka® deployment type:
On-Premises with Confluent Platform
On-Premises with Confluent Platform Community Components
On-Premises with KRaft (early access)
Confluent Cloud¶
Confluent Cloud provides managed services for key components such as:
- Apache Kafka® broker
- REST Proxy
- Confluent Schema Registry
- Connectors
- ksqlDB
To get started with Confluent Cloud, follow the quickstart and then use the Confluent Cloud Console tutorials to provision the other components.
To receive an additional $50 of free usage in Confluent Cloud, enter promo code C50INTEG in the Confluent Cloud Console Billing & payment section (details).
This promo code should be sufficient to cover up to one day of running this Confluent Cloud example. Beyond that, you may be billed for the services that have an hourly charge until you destroy the Confluent Cloud resources created by this example.
However, if you want to run just the Apache Kafka® broker in Confluent Cloud and run the remaining services locally, read the next few sections.
ccloud-stack Utility¶
Overview¶
The ccloud-stack Utility for Confluent Cloud creates a stack of fully managed services in Confluent Cloud. Executed with a single command, it is a quick way to create fully managed components in Confluent Cloud, which you can then use for learning and building other demos. Do not use this in a production environment. The script uses the Confluent Cloud CLI to dynamically do the following in Confluent Cloud:
- Create a new environment.
- Create a new service account.
- Create a new Kafka cluster and associated credentials.
- Enable Confluent Cloud Schema Registry and associated credentials.
- Create a new ksqlDB app and associated credentials.
- Create ACLs with wildcard for the service account.
- Generate a local configuration file with all above connection information, useful for other demos/automation.
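For example, an end-to-end run might look like the following sketch. It assumes you have cloned the confluentinc/examples GitHub repository, are already logged in to Confluent Cloud with the CLI, and that the script names and paths (ccloud_stack_create.sh, ccloud_stack_destroy.sh) still match the repository layout; check the repository for the exact invocation.
# Sketch only: paths and script names are assumptions -- verify them in the
# confluentinc/examples repository before running.
git clone https://github.com/confluentinc/examples.git
cd examples/ccloud/ccloud-stack

# Create a new stack: environment, service account, Kafka cluster, credentials, ACLs
./ccloud_stack_create.sh

# When you are done, destroy the stack using the generated configuration file
./ccloud_stack_destroy.sh stack-configs/java-service-account-<SERVICE_ACCOUNT_ID>.config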

To learn how to use ccloud-stack with Confluent Cloud, read more at ccloud-stack Utility for Confluent Cloud.
How do I connect to the services?¶
In addition to creating all the resources in Confluent Cloud with an associated service account and ACLs, ccloud-stack generates a local configuration file with all of the Confluent Cloud connection information. You can either pass this entire file to your client application, or parse out and set each parameter in your application.
After running ccloud-stack, view the configuration file at stack-configs/java-service-account-<SERVICE_ACCOUNT_ID>.config.
It resembles:
# ------------------------------
# ENVIRONMENT ID: <ENVIRONMENT ID>
# SERVICE ACCOUNT ID: <SERVICE ACCOUNT ID>
# KAFKA CLUSTER ID: <KAFKA CLUSTER ID>
# SCHEMA REGISTRY CLUSTER ID: <SCHEMA REGISTRY CLUSTER ID>
# KSQLDB APP ID: <KSQLDB APP ID>
# ------------------------------
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
bootstrap.servers=<BROKER ENDPOINT>
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<API KEY>' password='<API SECRET>';
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info=<SR API KEY>:<SR API SECRET>
schema.registry.url=https://<SR ENDPOINT>
ksql.endpoint=<KSQLDB ENDPOINT>
ksql.basic.auth.user.info=<KSQLDB API KEY>:<KSQLDB API SECRET>
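As a quick way to exercise the generated file, you can pass it directly to the Kafka command-line clients. The following is a minimal sketch, assuming the Confluent Platform tools (kafka-topics, kafka-console-producer) are on your PATH; substitute your broker endpoint, service account ID, and a topic name of your choosing.
CONFIG_FILE=stack-configs/java-service-account-<SERVICE_ACCOUNT_ID>.config

# Create a test topic in the Confluent Cloud cluster using the generated credentials
kafka-topics --bootstrap-server <BROKER ENDPOINT> \
  --command-config $CONFIG_FILE \
  --create --topic demo-topic --partitions 6

# Produce a few records with the same connection properties, then press Ctrl+C to exit
kafka-console-producer --bootstrap-server <BROKER ENDPOINT> \
  --producer.config $CONFIG_FILE \
  --topic demo-topic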
cp-all-in-one-cloud¶
Overview¶
Confluent Cloud provides many fully-managed services, including connectors, ksqlDB, Confluent Schema Registry, and REST Proxy. However, you may want to run and self-manage some of these services locally while connecting them to your Confluent Cloud cluster. For example, you may want to run your own source connector to write to Confluent Cloud, or a sink connector to read from Confluent Cloud. In this case, you could run the ccloud-stack Utility for Confluent Cloud to create your Confluent Cloud instance, and then run cp-all-in-one-cloud to selectively run self-managed services against it.
This Docker Compose file automatically launches self-managed Confluent Platform components (except for the Kafka brokers that are in Confluent Cloud) in containers on your local host, and configures them to connect to Confluent Cloud.
For an automated example that uses cp-all-in-one-cloud, refer to the cp-all-in-one-cloud automated quickstart, which follows the Quick Start for Apache Kafka using Confluent Cloud.

Setup¶
By default, the example uses Schema Registry running in a local Docker container. If you prefer to use Confluent Cloud Schema Registry instead, enable it before running the example.
By default, the example uses ksqlDB running in a local Docker container. If you prefer to use Confluent Cloud ksqlDB instead, provision a Confluent Cloud ksqlDB application before running the example.
Clone the confluentinc/cp-all-in-one GitHub repository.
git clone https://github.com/confluentinc/cp-all-in-one.git
Navigate to the cp-all-in-one-cloud directory.
cd cp-all-in-one/cp-all-in-one-cloud
Check out the 7.0.1-post branch.
git checkout 7.0.1-post
The docker-compose.yml has parameterized the values to connect to your Confluent Cloud instance, including the bootstrap servers and security configuration. You could fill these in manually, but a more programmatic method is to create a local file (for example, at $HOME/.confluent/java.config) with configuration parameters to connect to your Kafka cluster. Starting with one of the templates below, customize the file with connection information to your cluster. Substitute your values for {{ BROKER_ENDPOINT }}, {{ CLUSTER_API_KEY }}, and {{ CLUSTER_API_SECRET }}.
# Required connection configs for Kafka producer, consumer, and admin
bootstrap.servers={{ BROKER_ENDPOINT }}
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
sasl.mechanism=PLAIN
# Required for correctness in Apache Kafka clients prior to 2.6
client.dns.lookup=use_all_dns_ips
# Best practice for Kafka producer to prevent data loss
acks=all
If Schema Registry and ksqlDB are running locally (per docker-compose.yml), add these lines:
schema.registry.url=http://localhost:8081
ksql.endpoint=http://localhost:8089
If Schema Registry and ksqlDB are running in Confluent Cloud, add these lines, substituting your values for the endpoints and credentials:
basic.auth.credentials.source=USER_INFO
schema.registry.basic.auth.user.info={{ SR API KEY }}:{{ SR API SECRET }}
schema.registry.url={{ SR ENDPOINT }}
ksql.endpoint={{ KSQLDB ENDPOINT }}
ksql.basic.auth.user.info={{ KSQLDB API KEY }}:{{ KSQLDB API SECRET }}
Get a bash library of useful functions for interacting with Confluent Cloud. This library is community-supported and not supported by Confluent.
curl -sS -o ccloud_library.sh https://raw.githubusercontent.com/confluentinc/examples/latest/utils/ccloud_library.sh
Using ccloud_library.sh (which you downloaded in the previous step), auto-generate configuration files for downstream clients. One of the output files is delta_configs/env.delta.
source ./ccloud_library.sh
ccloud::generate_configs $HOME/.confluent/java.config
Source the ENV variables into your environment so they are available to the docker-compose.yml file.
source delta_configs/env.delta
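Before bringing up any containers, you can spot-check that the variables were exported; for example (variable names follow the ccloud_library.sh output, as shown later on this page):
# Quick sanity check that the Confluent Cloud connection variables are set
echo $BOOTSTRAP_SERVERS
echo $SCHEMA_REGISTRY_URL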
Bring up all local services¶
Make sure you completed the steps in the setup section before proceeding.
View the docker-compose.yml file.
To bring up all services locally at once:
docker-compose up -d
Bring up some local services¶
Make sure you completed the steps in the setup section before proceeding.
View the docker-compose.yml file.
To bring up Schema Registry locally (if you are not using Confluent Cloud Schema Registry):
docker-compose up -d schema-registry
To bring up Connect locally (if you are not using fully-managed connectors):
docker-compose up -d connect
The docker-compose.yml file has a container called connect that runs a custom Docker image, cnfldemos/cp-server-connect-datagen, which pre-bundles the kafka-connect-datagen connector. To run Connect with other connectors, see Run a self-managed connector to Confluent Cloud.
To bring up Confluent Control Center locally:
docker-compose up -d control-center
To bring up ksqlDB locally (if you are not using Confluent Cloud ksqlDB):
docker-compose up -d ksqldb-server
To bring up the ksqlDB CLI locally as a transient Docker container (assuming you are using Confluent Cloud ksqlDB):
docker run -it confluentinc/cp-ksqldb-cli:5.5.0 \
  -u $(echo $KSQLDB_BASIC_AUTH_USER_INFO | awk -F: '{print $1}') \
  -p $(echo $KSQLDB_BASIC_AUTH_USER_INFO | awk -F: '{print $2}') \
  $KSQLDB_ENDPOINT
If you want to run a Docker container for ksqlDB CLI from the Docker Compose file and connect to Confluent Cloud ksqlDB in a separate step:
docker-compose up -d ksqldb-cli
To bring up REST Proxy locally:
docker-compose up -d rest-proxy
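Once REST Proxy is running, a simple HTTP request confirms it can reach your Confluent Cloud cluster. The following sketch assumes the default port mapping of 8082 in docker-compose.yml:
# List the topics in the Confluent Cloud cluster through the local REST Proxy
curl -s http://localhost:8082/topics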
Run a self-managed connector to Confluent Cloud¶
Confluent Cloud provides many fully-managed connectors, but there may be use cases where you want to run self-managed connectors to your Confluent Cloud cluster.
If you want to run a Connect Docker image with a specific connector, you need to first build a custom Docker image that adds the desired connector’s jars to the base Kafka Connect Docker image. The general process is described in Add Connectors or Software, but here we elaborate with more targeted instructions for running Connect to Confluent Cloud.
Verify you have set up your environment so that it is customized to your specific Confluent Cloud instance, and that you have your environment variables set that will be used by the Docker Compose files. In particular, you must have variables defined in your environment that resemble the variables shown below:
export BOOTSTRAP_SERVERS="<CCLOUD_BOOTSTRAP_SERVER>"
export SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username='<CCLOUD_API_KEY>' password='<CCLOUD_API_SECRET>';"
export SASL_JAAS_CONFIG_PROPERTY_FORMAT="org.apache.kafka.common.security.plain.PlainLoginModule required username='<CCLOUD_API_KEY>' password='<CCLOUD_API_SECRET>';"
export REPLICATOR_SASL_JAAS_CONFIG="org.apache.kafka.common.security.plain.PlainLoginModule required username='<CCLOUD_API_KEY>' password='<CCLOUD_API_SECRET>';"
export BASIC_AUTH_CREDENTIALS_SOURCE="USER_INFO"
export SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO="<SCHEMA_REGISTRY_API_KEY>:<SCHEMA_REGISTRY_API_SECRET>"
export SCHEMA_REGISTRY_URL="https://<SCHEMA_REGISTRY_ENDPOINT>"
export CLOUD_KEY="<CCLOUD_API_KEY>"
export CLOUD_SECRET="<CCLOUD_API_SECRET>"
export KSQLDB_ENDPOINT=""
export KSQLDB_BASIC_AUTH_USER_INFO=""
Search through Confluent Hub to find the desired connector.
Set CONNECTOR_OWNER, CONNECTOR_NAME, and CONNECTOR_VERSION. For example, to use the Elasticsearch sink connector, set:
export CONNECTOR_OWNER=confluentinc
export CONNECTOR_NAME=kafka-connect-elasticsearch
export CONNECTOR_VERSION=11.0.2
Decide whether you want to run Connect in distributed mode or standalone mode. Distributed mode can be scaled up with multiple Connect workers and is more fault tolerant. However, you may want to run in standalone mode for rapid development: standalone mode does not create the three internal Connect topics in Kafka, you don’t need to clean up those topics if you restart the worker (because state is stored locally), and you don’t need to use the REST API to submit connectors.
Distributed mode¶
Run the following commands from the cp-all-in-one-cloud folder.
Build a new, custom Docker image with the connector jar files from Dockerfile:
docker build \
  --build-arg CONNECTOR_OWNER=${CONNECTOR_OWNER} \
  --build-arg CONNECTOR_NAME=${CONNECTOR_NAME} \
  --build-arg CONNECTOR_VERSION=${CONNECTOR_VERSION} \
  -t localbuild/connect_distributed_with_${CONNECTOR_NAME}:${CONNECTOR_VERSION} \
  -f ../Docker-connect/distributed/Dockerfile ../Docker-connect/distributed
Create a configuration file for the connector. The configuration parameters vary per connector; see the connector’s documentation. For Elasticsearch, it can resemble:
{ "name": "elasticsearch-orders", "config": { "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector", "topics": "orders", "connection.url": "$ELASTICSEARCH_URL", "type.name": "microservices", "key.ignore": true, "key.converter": "org.apache.kafka.connect.storage.StringConverter", "value.converter": "io.confluent.connect.avro.AvroConverter", "value.converter.schema.registry.url": "$SCHEMA_REGISTRY_URL", "value.converter.basic.auth.credentials.source": "$BASIC_AUTH_CREDENTIALS_SOURCE", "value.converter.schema.registry.basic.auth.user.info": "$SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO", "schema.ignore": true } }
Start Connect Docker Compose from docker-compose.connect.distributed.yml. Notice that the file references locally-defined environment parameters for connecting to your Confluent Cloud instance.
docker-compose -f docker-compose.connect.distributed.yml up -d
Use the Connect REST API to submit your connector, something similar to:
curl -XPOST \
  -H Accept:application/json \
  -H Content-Type:application/json \
  http://localhost:8083/connectors/ \
  -d @/path/to/connector.config
Verify the connector has started.
docker-compose logs connect
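You can also query the Connect REST API directly to confirm the connector and its tasks are RUNNING. For example, using the connector name from the sample configuration above (elasticsearch-orders):
# List registered connectors
curl -s http://localhost:8083/connectors

# Check the status of a specific connector and its tasks
curl -s http://localhost:8083/connectors/elasticsearch-orders/status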
Standalone mode¶
Run the following commands from the cp-all-in-one-cloud folder.
Build a new, custom Docker image with the connector jar files from Dockerfile:
docker build \
  --build-arg CONNECTOR_OWNER=${CONNECTOR_OWNER} \
  --build-arg CONNECTOR_NAME=${CONNECTOR_NAME} \
  --build-arg CONNECTOR_VERSION=${CONNECTOR_VERSION} \
  -t localbuild/connect_standalone_with_${CONNECTOR_NAME}:${CONNECTOR_VERSION} \
  -f ../Docker-connect/standalone/Dockerfile ../Docker-connect/standalone
Create a configuration for the connector within the docker-compose.connect.standalone.yml file. The configuration parameters vary per connector; see the connector’s documentation. For Elasticsearch, it can resemble:
CONNECTOR_NAME: elasticsearch-sink-connector
CONNECTOR_CONNECTOR_CLASS: io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
CONNECTOR_TYPE_NAME: _doc
CONNECTOR_TOPICS: test01
CONNECTOR_KEY_IGNORE: true
CONNECTOR_SCHEMA_IGNORE: false
CONNECTOR_CONNECTION_URL: http://elasticsearch:9200
Start Connect Docker Compose from docker-compose.connect.standalone.yml. Notice that the file references locally-defined environment parameters for connecting to your Confluent Cloud instance.
docker-compose -f docker-compose.connect.standalone.yml up -d
Verify the connector has started.
docker-compose logs connect
How do I connect to the services?¶
To connect to the services running locally in the Docker containers from cp-all-in-one-cloud:
- If you’re connecting from localhost, use localhost:<port>.
- If you’re connecting from another Docker container, use <container name>:<port>.
See the docker-compose.yml file for container names and ports.
Generate Test Data with Datagen¶
Read the blog post Creating a Serverless Environment for Testing Your Apache Kafka Applications: it provides a “Hello, World!” for getting started with Confluent Cloud, plus different ways to generate more interesting test data for the Kafka topics.
On-Premises¶
cp-all-in-one¶
Overview¶
Use cp-all-in-one to run the Confluent Platform stack on-premises. This Docker Compose file launches all services in Confluent Platform and runs them in containers on your local host.
For an automated example that uses cp-all-in-one, refer to the cp-all-in-one automated quickstart, which follows the Quick Start for Confluent Platform.
- Prerequisites:
  - Docker
    - Docker version 1.11 or later is installed and running.
    - Docker Compose is installed. Docker Compose is installed by default with Docker for Mac.
    - Docker memory is allocated minimally at 6 GB. When using Docker Desktop for Mac, the default Docker memory allocation is 2 GB. You can change the default allocation to 6 GB in Docker. Navigate to Preferences > Resources > Advanced.
  - Internet connectivity
  - Operating System currently supported by Confluent Platform
  - Networking and Kafka on Docker
    - Configure your hosts and ports to allow both internal and external components to the Docker network to communicate. For more details, see this article.

Clone the confluentinc/cp-all-in-one GitHub repository.
git clone https://github.com/confluentinc/cp-all-in-one.git
Navigate to the cp-all-in-one directory.
cd cp-all-in-one/cp-all-in-one
Check out the 7.0.1-post branch.
git checkout 7.0.1-post
To bring up all services:
docker-compose up -d
How do I connect to the services?¶
To connect to the services running locally in the Docker containers from cp-all-in-one:
- If you’re connecting from localhost, use localhost:<port>. For example, to connect to the Kafka broker, connect to localhost:9092.
- If you’re connecting from another Docker container, use <container name>:<port>. For example, to connect to the Kafka broker, connect to broker:29092.
See the docker-compose.yml file for container names and ports.
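For example, a quick smoke test from inside the broker container might look like the following sketch (the container name broker and the listeners match the Compose file; the topic name quickstart-test is just an example):
# Create a topic on the internal listener
docker-compose exec broker kafka-topics --bootstrap-server broker:29092 \
  --create --topic quickstart-test --partitions 1 --replication-factor 1

# Produce a single record
docker-compose exec broker bash -c \
  "echo 'hello, world' | kafka-console-producer --bootstrap-server broker:29092 --topic quickstart-test"

# Consume it back (exits after one message)
docker-compose exec broker kafka-console-consumer --bootstrap-server broker:29092 \
  --topic quickstart-test --from-beginning --max-messages 1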
cp-all-in-one-community¶
Overview¶
Use cp-all-in-one-community to run only the community services from Confluent Platform on-premises. This Docker Compose file launches all community services and runs them in containers on your local host.
For an automated example of how to use cp-all-in-one-community, refer to the cp-all-in-one-community automated quickstart.
- Prerequisites:
  - Docker
    - Docker version 1.11 or later is installed and running.
    - Docker Compose is installed. Docker Compose is installed by default with Docker for Mac.
    - Docker memory is allocated minimally at 6 GB. When using Docker Desktop for Mac, the default Docker memory allocation is 2 GB. You can change the default allocation to 6 GB in Docker. Navigate to Preferences > Resources > Advanced.
  - Internet connectivity
  - Operating System currently supported by Confluent Platform
  - Networking and Kafka on Docker
    - Configure your hosts and ports to allow both internal and external components to the Docker network to communicate. For more details, see this article.

Clone the confluentinc/cp-all-in-one GitHub repository.
git clone https://github.com/confluentinc/cp-all-in-one.git
Navigate to the cp-all-in-one-community directory.
cd cp-all-in-one/cp-all-in-one-community
Check out the 7.0.1-post branch.
git checkout 7.0.1-post
To bring up all services:
docker-compose up -d
How do I connect to the services?¶
To connect to the services running locally in the Docker containers from cp-all-in-one-community:
- If you’re connecting from localhost, use localhost:<port>. For example, to connect to the Kafka broker, connect to localhost:9092.
- If you’re connecting from another Docker container, use <container name>:<port>. For example, to connect to the Kafka broker, connect to broker:29092.
See the docker-compose.yml file for container names and ports.
KRaft¶
Overview¶
You can now experiment with running Kafka without ZooKeeper, also called Kafka Raft metadata mode (KRaft). To learn more, see these resources:
- Blog post: Apache Kafka Made Simple: A First Glimpse of a Kafka Without ZooKeeper
- Course: KRaft: Apache Kafka without ZooKeeper
Note that KRaft is in early access and should be used in development only. It is not suitable for production.
- Prerequisites:
  - Docker
    - Docker version 1.11 or later is installed and running.
    - Docker Compose is installed. Docker Compose is installed by default with Docker for Mac.
    - Docker memory is allocated minimally at 6 GB. When using Docker Desktop for Mac, the default Docker memory allocation is 2 GB. You can change the default allocation to 6 GB in Docker. Navigate to Preferences > Resources > Advanced.
  - Internet connectivity
  - Operating System currently supported by Confluent Platform
  - Networking and Kafka on Docker
    - Configure your hosts and ports to allow both internal and external components to the Docker network to communicate. For more details, see this article.

Clone the confluentinc/cp-all-in-one GitHub repository.
git clone https://github.com/confluentinc/cp-all-in-one.git
Navigate to the cp-all-in-one-kraft directory.
cd cp-all-in-one/cp-all-in-one-kraft
Check out the 7.0.1-post branch.
git checkout 7.0.1-post
To bring up the Kafka broker using this Docker Compose file:
docker-compose up -d
How do I connect to the services?¶
To connect to the services running locally in the Docker containers from cp-all-in-one-kraft:
- If you’re connecting from localhost, use localhost:<port>. For example, to connect to the Kafka broker, connect to localhost:9092.
- If you’re connecting from another Docker container, use <container name>:<port>. For example, to connect to the Kafka broker, connect to broker:29092.
See the docker-compose.yml file for container names and ports.
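To verify the KRaft-mode broker is healthy without ZooKeeper, listing topics over its client listener is usually enough; for example (a sketch, assuming the broker container is named broker as in the other Compose files):
# List topics against the KRaft broker
docker-compose exec broker kafka-topics --bootstrap-server broker:29092 --list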
Generate Test Data with kafka-connect-datagen¶
Read the blog post Easy Ways to Generate Test Data in Kafka: it provides a “Hello, World!” for launching Confluent Platform, plus different ways to generate more interesting test data for the Kafka topics.
Next Steps¶
- Run examples of Kafka client producers and consumers, with and without Avro, as documented at Code Examples for Apache Kafka®.
- Try out basic Kafka, Kafka Streams, and ksqlDB tutorials with step-by-step instructions at Kafka Tutorials.