Example: Create Fully-Managed Services in Confluent Cloud¶
The ccloud-stack utility creates a stack of fully managed services in Confluent Cloud.
It is a quick way to create resources in Confluent Cloud with the correct credentials and permissions, and serves as a starting point for learning, extending, and building other examples.
The utility uses the Confluent CLI under the hood to dynamically do the following in Confluent Cloud:
- Create a new environment
- Create a new service account
- Create a new Kafka cluster and associated credentials
- Enable Schema Registry and associated credentials
- (Optional) Create a new ksqlDB app and associated credentials
- Create role binding for the service account
In addition to creating these resources, ccloud-stack generates a local configuration file with connection information for all of the above services.
This file is particularly useful because it contains the connection information for your Confluent Cloud instance: any downstream application or Kafka client can use it, and you can reuse it in other demos or automation workflows.
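As a sketch of how a downstream script might consume this file, the following reads individual properties out of a key=value configuration. The file path and values here are illustrative placeholders, not a real generated file:

```shell
# Sketch: read one property from a ccloud-stack style config file.
# The path and contents below are placeholders, not real credentials.
CONFIG_FILE=/tmp/example-ccloud-stack.config
cat > "$CONFIG_FILE" <<'EOF'
# SERVICE_ACCOUNT_ID: sa-123456
bootstrap.servers=pkc-xxxxx.us-west-2.aws.confluent.cloud:9092
schema.registry.url=https://psrc-xxxxx.us-west-2.aws.confluent.cloud
EOF

# Extract a value by key, skipping comment lines.
get_prop() {
  grep -v '^#' "$CONFIG_FILE" | grep "^$1=" | cut -d'=' -f2-
}

BOOTSTRAP_SERVERS=$(get_prop bootstrap.servers)
echo "$BOOTSTRAP_SERVERS"
```

The same pattern works for any property in the generated file, such as `schema.registry.url` or `ksql.endpoint`.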
Cost to Run ccloud-stack¶
Any Confluent Cloud example uses real Confluent Cloud resources that may be billable. An example may create a new Confluent Cloud environment, Kafka cluster, topics, ACLs, and service accounts, as well as resources that have hourly charges like connectors and ksqlDB applications. To avoid unexpected charges, carefully evaluate the cost of resources before you start. After you are done running a Confluent Cloud example, destroy all Confluent Cloud resources to avoid accruing hourly charges for services and verify that they have been deleted.
This utility uses real Confluent Cloud resources. It is intended as a quick way to create resources in Confluent Cloud with the correct credentials and permissions, and serves as a starting point for learning, extending, and building other examples.
- If you run `ccloud-stack` without explicitly enabling Confluent Cloud ksqlDB, there is no billing charge until you create a topic, produce data to the Kafka cluster, or provision any other fully-managed service.
- If you run `ccloud-stack` with Confluent Cloud ksqlDB enabled (1 CSU), you begin to accrue charges immediately.
Here is a list of the Confluent CLI commands issued by the utility that create resources in Confluent Cloud (the source code of the function `ccloud::create_ccloud_stack()` is in ccloud_library).
By default, `ccloud-stack` does not create the Confluent Cloud ksqlDB app; you have to enable it explicitly.
```bash
confluent iam service-account create $SERVICE_NAME --description "SA for $EXAMPLE run by $CCLOUD_EMAIL" -o json
confluent environment create $ENVIRONMENT_NAME -o json
confluent kafka cluster create "$CLUSTER_NAME" --cloud $CLUSTER_CLOUD --region $CLUSTER_REGION
confluent api-key create --service-account $SERVICE_ACCOUNT_ID --resource $RESOURCE -o json  # for Kafka
confluent iam rbac role-binding create --principal User:$SERVICE_ACCOUNT_ID --role EnvironmentAdmin --environment $ENVIRONMENT -o json
confluent schema-registry cluster enable --cloud $SCHEMA_REGISTRY_CLOUD --geo $SCHEMA_REGISTRY_GEO -o json
confluent api-key create --service-account $SERVICE_ACCOUNT_ID --resource $RESOURCE -o json  # for Schema Registry

# By default, ccloud-stack does not enable Confluent Cloud ksqlDB, but if you explicitly enable it:
confluent ksql cluster create --cluster $CLUSTER --api-key "$KAFKA_API_KEY" --api-secret "$KAFKA_API_SECRET" --csu 1 -o json "$KSQLDB_NAME"
confluent api-key create --service-account $SERVICE_ACCOUNT_ID --resource $RESOURCE -o json  # for ksqlDB REST API
```
Prerequisites¶
- Create a user account in Confluent Cloud
- Local install of the Confluent CLI v3.0.0 or later.
ccloud-stack has been validated on macOS 10.15.3 with bash version 3.2.57.
If you encounter issues on any other operating systems or versions, please open a GitHub issue at confluentinc/examples.
Clone the confluentinc/examples GitHub repository and check out the current-post branch:

```bash
git clone https://github.com/confluentinc/examples
cd examples
git checkout current-post
```
Change directory to the ccloud-stack utility:
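The directory path is not shown above; in the confluentinc/examples repository the utility typically lives under `ccloud/ccloud-stack`. The path below is an assumption to verify against your checkout:

```shell
# Assumed path within the cloned confluentinc/examples repository;
# adjust if your checkout differs.
STACK_DIR="ccloud/ccloud-stack"
if [ -d "$STACK_DIR" ]; then
  cd "$STACK_DIR"
else
  echo "directory $STACK_DIR not found; check the repository layout"
fi
```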
Log in to Confluent Cloud with the `confluent login` command, using your Confluent Cloud username and password. The `--save` argument saves your Confluent Cloud user login credentials or refresh token (in the case of SSO) to the local `.netrc` file.

```bash
confluent login --save
```
Create a ccloud-stack¶
By default, the ccloud-stack utility creates resources in the cloud provider `aws` in the region `us-west-2`. If this is the target provider and region, create the stack by calling the bash script ccloud_stack_create.sh. For more options when configuring your ccloud-stack, see Advanced Options.
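For example, from the utility's directory, a minimal invocation looks like the following. The block is guarded so it is a no-op when run outside that directory; the script itself prompts before creating billable resources:

```shell
# Runs the create script with the default provider/region (aws / us-west-2).
CREATE_SCRIPT=./ccloud_stack_create.sh
if [ -x "$CREATE_SCRIPT" ]; then
  "$CREATE_SCRIPT"
else
  echo "$CREATE_SCRIPT not found; run this from the ccloud-stack directory"
fi
```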
You will be prompted twice. Note the second prompt, which is where you can optionally enable Confluent Cloud ksqlDB.
```
Do you still want to run this script? [y/n] y
Do you also want to create a Confluent Cloud ksqlDB app (hourly charges may apply)? [y/n] n
```
ccloud-stack assigns the EnvironmentAdmin role to the service account it creates. This permissive role is useful for development and learning environments. In production, configure a stricter role, and potentially use ACLs with RBAC as documented here.
In addition to creating all of the resources in Confluent Cloud with an associated service account, running ccloud-stack also generates a local configuration file with Confluent Cloud connection information, which is useful for creating demos or additional automation. View this file at stack-configs/java-service-account-&lt;SERVICE_ACCOUNT_ID&gt;.config. It resembles:
```
# ------------------------------
# ENVIRONMENT_ID: <ENVIRONMENT ID>
# SERVICE_ACCOUNT_ID: <SERVICE ACCOUNT ID>
# KAFKA_CLUSTER_ID: <KAFKA CLUSTER ID>
# SCHEMA_REGISTRY_CLUSTER_ID: <SCHEMA REGISTRY CLUSTER ID>
# KSQLDB_APP_ID: <KSQLDB APP ID>
# ------------------------------
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
bootstrap.servers=<BROKER ENDPOINT>
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='<API KEY>' password='<API SECRET>';
basic.auth.credentials.source=USER_INFO
basic.auth.user.info=<SR API KEY>:<SR API SECRET>
schema.registry.url=https://<SR ENDPOINT>
replication.factor=3
ksql.endpoint=<KSQLDB ENDPOINT>
ksql.basic.auth.user.info=<KSQLDB API KEY>:<KSQLDB API SECRET>
```
Destroy a ccloud-stack¶
To destroy a ccloud-stack created in the previous step, call the bash script ccloud_stack_destroy.sh and pass in the client properties file auto-generated in the step above. By default, this deletes all resources, including the Confluent Cloud environment specified by the service account ID in the configuration file.
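The invocation looks like the following sketch, where the config file path comes from the create step. The service account ID below is an illustrative placeholder, and the block is guarded so it only runs when the script and file exist:

```shell
# Replace with the service account ID from your generated config file name.
SERVICE_ACCOUNT_ID=sa-123456   # illustrative placeholder
CONFIG="stack-configs/java-service-account-${SERVICE_ACCOUNT_ID}.config"
if [ -f "$CONFIG" ] && [ -x ./ccloud_stack_destroy.sh ]; then
  ./ccloud_stack_destroy.sh "$CONFIG"
else
  echo "destroy script or config file not found"
fi
```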
Any Confluent Cloud example uses real Confluent Cloud resources. After you are done running a Confluent Cloud example, manually verify that all Confluent Cloud resources are destroyed to avoid unexpected charges.
Select Cloud Provider and Region¶
By default, the ccloud-stack utility creates resources in the cloud provider `aws` in the region `us-west-2`. To create resources in another cloud provider or region, complete the following steps:
View the available cloud providers and regions using the Confluent CLI:
```bash
confluent kafka region list
```
Create the ccloud-stack and override the parameters CLUSTER_CLOUD and CLUSTER_REGION, as shown in the following example:
```bash
CLUSTER_CLOUD=aws CLUSTER_REGION=us-west-2 ./ccloud_stack_create.sh
```
Reuse Existing Environment¶
By default, a new
ccloud-stack creates a new environment.
This means that, by default,
./ccloud_stack_create.sh creates a new environment and
./ccloud_stack_destroy.sh deletes the environment specified in the configuration file.
However, due to Confluent Cloud environment limits per organization, it may be desirable to work within an existing environment.
To reuse an existing environment when you create a new stack, set the parameter ENVIRONMENT to an existing environment ID, as shown in the following example:
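Following the same pattern as the other parameter overrides, this might look like the sketch below. The environment ID is a hypothetical placeholder; substitute one from `confluent environment list`:

```shell
# Hypothetical environment ID; substitute one from 'confluent environment list'.
TARGET_ENV=env-abc123
if [ -x ./ccloud_stack_create.sh ]; then
  ENVIRONMENT=$TARGET_ENV ./ccloud_stack_create.sh
else
  echo "ccloud_stack_create.sh not found; run this from the ccloud-stack directory"
fi
```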
When you destroy resources that were created by ccloud-stack, the default behavior is to delete the environment specified by the service account ID in the configuration file.
However, there are two additional options.
To preserve the environment when destroying all the other resources in the
ccloud-stack, set the parameter
PRESERVE_ENVIRONMENT=true, as shown in the following example.
If you do not specify
PRESERVE_ENVIRONMENT=true, then the environment specified by the service account ID in the configuration file is deleted.
```bash
PRESERVE_ENVIRONMENT=true ./ccloud_stack_destroy.sh stack-configs/java-service-account-<SERVICE_ACCOUNT_ID>.config
```
To destroy the environment along with all the other resources in the ccloud-stack when the service account is not part of the environment name (that is, multiple ccloud-stacks were created in the same environment), set the parameter ENVIRONMENT_NAME_PREFIX=ccloud-stack-&lt;SERVICE_ACCOUNT_ID&gt;, as shown in the following example.
Note that the service account ID in the environment name is not the same as the service account ID in the config name.
If you do not specify the environment name prefix, then the destroy function will not be able to identify the proper environment ID to delete.
```bash
ENVIRONMENT_NAME_PREFIX=ccloud-stack-<SERVICE_ACCOUNT_ID_original> ./ccloud_stack_destroy.sh stack-configs/java-service-account-<SERVICE_ACCOUNT_ID_current>.config
```
If you don't want to create and destroy a ccloud-stack using the provided bash scripts ccloud_stack_create.sh and ccloud_stack_destroy.sh, you can pull in ccloud_library and call its functions directly.
Get the ccloud_library:
```bash
curl -f -sS -o ccloud_library.sh https://raw.githubusercontent.com/confluentinc/examples/latest/utils/ccloud_library.sh
```
Source the library:
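Assuming the file downloaded in the previous step is in the current directory, sourcing it loads the `ccloud::` functions into your shell session:

```shell
LIB=./ccloud_library.sh
if [ -f "$LIB" ]; then
  source "$LIB"   # load the ccloud:: functions into the current shell
else
  echo "$LIB not found; download it first"
fi
```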
Optionally override the default parameters, for example CLUSTER_CLOUD and CLUSTER_REGION.
Run the bash function directly from the command line.
To create the ccloud-stack without Confluent Cloud ksqlDB:
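A sketch of the call, assuming the library has been sourced and you are logged in with `confluent login`. The `false` argument (skip ksqlDB) is inferred from the function's source and should be verified against ccloud_library.sh:

```shell
# Guarded so this is a no-op when ccloud_library.sh is not sourced.
FUNC=ccloud::create_ccloud_stack
if type "$FUNC" >/dev/null 2>&1; then
  "$FUNC" false   # false: skip Confluent Cloud ksqlDB
else
  echo "ccloud_library.sh is not sourced; source it first"
fi
```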
To create the ccloud-stack with Confluent Cloud ksqlDB:
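The same call with the boolean flipped, again an inference from the library source to verify against ccloud_library.sh; a ksqlDB app (1 CSU) accrues hourly charges immediately:

```shell
# Guarded so this is a no-op when ccloud_library.sh is not sourced.
FUNC=ccloud::create_ccloud_stack
if type "$FUNC" >/dev/null 2>&1; then
  "$FUNC" true   # true: also create Confluent Cloud ksqlDB (1 CSU, billable)
else
  echo "ccloud_library.sh is not sourced; source it first"
fi
```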
To destroy the ccloud-stack, run the following command. By default, it deletes all resources, including the Confluent Cloud environment specified by the service account ID in the configuration file.
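A sketch of the destroy call. The function name `ccloud::destroy_ccloud_stack` and its argument are assumptions inferred from ccloud_library and the generated file naming; verify them against the library source:

```shell
# SERVICE_ACCOUNT_ID comes from the generated config file name;
# the value below is an illustrative placeholder.
SERVICE_ACCOUNT_ID=sa-123456
if type ccloud::destroy_ccloud_stack >/dev/null 2>&1; then
  ccloud::destroy_ccloud_stack "$SERVICE_ACCOUNT_ID"
else
  echo "ccloud_library.sh is not sourced; source it first"
fi
```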
- For a practical guide to configuring, monitoring, and optimizing your Kafka client applications when using Confluent Cloud, see Developing Client Applications on Confluent Cloud.
- Read this blog post about using Confluent Cloud to manage data pipelines that use both on-premises and cloud deployments.
- For sample usage of
ccloud-stack, see Confluent Cloud Tutorials or Observability for Apache Kafka® Clients to Confluent Cloud demo.