.. _quickstart-demos-ccloud:

On-Prem Kafka to Cloud
======================

This |ccloud| demo showcases a hybrid Kafka deployment: one cluster is a self-managed Kafka cluster running locally, and the other is a |ccloud| cluster. The use case is "Bridge to Cloud," as customers migrate from on-premises deployments to the cloud.

.. figure:: images/services-in-cloud.jpg
   :alt: image

========
Overview
========

The major components of the demo are:

* Two Kafka clusters: one cluster is a self-managed cluster running locally, the other is a |ccloud| cluster.
* |c3|: manages and monitors the deployment. Use it for topic inspection, viewing schemas, viewing and creating ksqlDB queries, streams monitoring, and more.
* ksqlDB: Confluent Cloud ksqlDB running queries on the input topics ``users`` and ``pageviews`` in |ccloud|.
* Two Kafka Connect clusters: one connects to the local self-managed cluster and one connects to the |ccloud| cluster. Both Connect worker processes themselves run locally.
* One instance of ``kafka-connect-datagen``: a source connector that produces mock data to prepopulate the topic ``pageviews`` locally.
* One instance of ``kafka-connect-datagen``: a source connector that produces mock data to prepopulate the topic ``users`` in the |ccloud| cluster.
* Confluent Replicator: copies the topic ``pageviews`` from the local cluster to the |ccloud| cluster.
* |sr-long|: the demo runs with Confluent Cloud Schema Registry, and the Kafka data is written in Avro format.

.. note:: This is a demo environment and has many services running on one host. Do not use this demo in production, and do not use Confluent CLI in production. It is meant exclusively for easily demoing |cp| and |ccloud|.

.. include:: includes/ccloud-promo-code.rst

=======
Caution
=======

This demo uses real |ccloud| resources. To avoid unexpected charges, carefully evaluate the cost of resources before launching the demo, and ensure all resources are destroyed after you are done running it.
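One way to keep an eye on charges is to compare the |ccloud| resources that exist before and after the demo run. The following is a minimal sketch, assuming the ``ccloud`` CLI v1 subcommands ``environment list`` and ``kafka cluster list`` (verify the exact subcommands against ``ccloud --help`` for your CLI version):

```shell
# Hedged sketch: snapshot the Confluent Cloud resources visible to your account,
# so you can spot anything the demo created and confirm it is later destroyed.
# Assumes ccloud CLI v1 subcommand names; check `ccloud --help` for your version.
if command -v ccloud >/dev/null 2>&1; then
  ccloud environment list      # environments, including any the demo creates
  ccloud kafka cluster list    # Kafka clusters in the current environment
else
  echo "ccloud CLI not found; install it first"
fi
```

Running this once before ``start`` and once after ``stop`` makes leftover resources easy to spot.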
=============
Prerequisites
=============

1. The following are prerequisites for the demo:

   - An initialized `Confluent Cloud cluster `__
   - `Confluent Cloud CLI `__ v1.7.0 or later
   - `Download `__ |cp| if using the local install (not required for Docker)
   - jq

2. Create a |ccloud| configuration file with information on connecting to your Confluent Cloud cluster (see `Auto-Generating Configurations for Components to Confluent Cloud `__ for more information). By default, the demo looks for this configuration file at ``~/.ccloud/config``.

3. This demo has been validated with:

   - Docker version 17.06.1-ce
   - Docker Compose version 1.14.0 with Docker Compose file format 2.1
   - Java version 1.8.0_162
   - macOS 10.12

========
Run demo
========

Setup
-----

#. This demo creates a new |ccloud| environment with the resources required to run it. As a reminder, this demo uses real |ccloud| resources and you may incur charges.

#. Clone the `examples GitHub repository `__ and check out the :litwithvars:`|release|-post` branch.

   .. codewithvars:: bash

      git clone https://github.com/confluentinc/examples
      cd examples
      git checkout |release|-post

#. Change directory to the |ccloud| demo.

   .. sourcecode:: bash

      cd ccloud

Run
---

#. Log in to |ccloud| with the command ``ccloud login``, using your |ccloud| username and password.

   .. code:: shell

      ccloud login --url https://confluent.cloud --save

#. Start the entire demo by running a single command. You have two choices: Docker Compose or a |cp| local install. This takes several minutes to complete because it creates new resources in |ccloud|.

   .. sourcecode:: bash

      # For Docker Compose
      ./start-docker.sh

   .. sourcecode:: bash

      # For Confluent Platform local
      ./start.sh

#. As part of this script run, it creates a new |ccloud| stack of fully managed resources and generates a local configuration file with all connection information, cluster IDs, and credentials, which is useful for other demos/automation.
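Because the generated configuration file uses plain ``key=value`` properties, other scripts can read connection details back out of it with standard shell tools. The following is an illustrative sketch; the file below is a hypothetical stand-in for the real generated file, and all values are placeholders:

```shell
# Hedged sketch: extract connection properties from a generated stack
# configuration file for reuse in other scripts.
# /tmp/demo-stack.config is a hypothetical stand-in with placeholder values.
cat > /tmp/demo-stack.config <<'EOF'
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
bootstrap.servers=pkc-12345.us-west-2.aws.confluent.cloud:9092
schema.registry.url=https://psrc-67890.us-west-2.aws.confluent.cloud
EOF

# Take everything after the first '=' of the matching property line
BOOTSTRAP=$(grep '^bootstrap.servers=' /tmp/demo-stack.config | cut -d'=' -f2-)
echo "bootstrap.servers is $BOOTSTRAP"
```

The same pattern works for ``schema.registry.url`` or any other property in the file.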
#. View this local configuration file, where ``SERVICE ACCOUNT ID`` is auto-generated by the script.

   .. sourcecode:: bash

      cat stack-configs/java-service-account-<SERVICE ACCOUNT ID>.config

   Your output should resemble:

   ::

      # ------------------------------
      # Confluent Cloud connection information for demo purposes only
      # Do not use in production
      # ------------------------------
      # ENVIRONMENT ID:
      # SERVICE ACCOUNT ID:
      # KAFKA CLUSTER ID:
      # SCHEMA REGISTRY CLUSTER ID:
      # KSQLDB APP ID:
      # ------------------------------
      ssl.endpoint.identification.algorithm=https
      security.protocol=SASL_SSL
      sasl.mechanism=PLAIN
      bootstrap.servers=
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username\="" password\="";
      basic.auth.credentials.source=USER_INFO
      schema.registry.basic.auth.user.info=:
      schema.registry.url=https://
      ksql.endpoint=
      ksql.basic.auth.user.info=:

#. Log in to the Confluent Cloud UI at http://confluent.cloud.

#. Use Google Chrome to navigate to the |c3| GUI at http://localhost:9021.

========
Playbook
========

|ccloud| CLI
------------

#. Validate that you can list topics in your cluster.

   .. sourcecode:: bash

      ccloud kafka topic list

#. View the ACLs associated with the service account that was created for this demo at the start. The resource name corresponds to the respective cluster, Kafka topic name, or consumer group name. Note: in production, you would not use the wildcard ``*``; it is included here just for demo purposes.
   .. sourcecode:: bash

      ccloud kafka acl list --service-account <service account id>

   For example, if the service account ID were 69995, your output would resemble:

   ::

      ServiceAccountId | Permission | Operation        | Resource | Name                  | Type
      +------------------+------------+------------------+----------+-----------------------+----------+
      User:69995       | ALLOW      | WRITE            | TOPIC    | _confluent-monitoring | PREFIXED
      User:69995       | ALLOW      | READ             | TOPIC    | _confluent-monitoring | PREFIXED
      User:69995       | ALLOW      | READ             | TOPIC    | _confluent-command    | PREFIXED
      User:69995       | ALLOW      | WRITE            | TOPIC    | _confluent-command    | PREFIXED
      User:69995       | ALLOW      | READ             | TOPIC    | _confluent            | PREFIXED
      User:69995       | ALLOW      | CREATE           | TOPIC    | _confluent            | PREFIXED
      User:69995       | ALLOW      | WRITE            | TOPIC    | _confluent            | PREFIXED
      User:69995       | ALLOW      | CREATE           | GROUP    | *                     | LITERAL
      User:69995       | ALLOW      | WRITE            | GROUP    | *                     | LITERAL
      User:69995       | ALLOW      | READ             | GROUP    | *                     | LITERAL
      User:69995       | ALLOW      | WRITE            | TOPIC    | connect-demo-statuses | PREFIXED
      User:69995       | ALLOW      | READ             | TOPIC    | connect-demo-statuses | PREFIXED
      User:69995       | ALLOW      | READ             | TOPIC    | connect-demo-offsets  | PREFIXED
      User:69995       | ALLOW      | WRITE            | TOPIC    | connect-demo-offsets  | PREFIXED
      User:69995       | ALLOW      | DESCRIBE         | TOPIC    | pageviews             | LITERAL
      User:69995       | ALLOW      | DESCRIBE_CONFIGS | TOPIC    | pageviews             | LITERAL
      User:69995       | ALLOW      | CREATE           | TOPIC    | pageviews             | LITERAL
      User:69995       | ALLOW      | ALTER_CONFIGS    | TOPIC    | pageviews             | LITERAL
      User:69995       | ALLOW      | READ             | TOPIC    | pageviews             | LITERAL
      User:69995       | ALLOW      | WRITE            | TOPIC    | pageviews             | LITERAL
      User:69995       | ALLOW      | WRITE            | TOPIC    | users                 | LITERAL
      User:69995       | ALLOW      | WRITE            | TOPIC    | *                     | LITERAL
      User:69995       | ALLOW      | CREATE           | TOPIC    | *                     | LITERAL
      User:69995       | ALLOW      | READ             | TOPIC    | *                     | LITERAL
      User:69995       | ALLOW      | DESCRIBE         | TOPIC    | *                     | LITERAL
      User:69995       | ALLOW      | DESCRIBE_CONFIGS | TOPIC    | *                     | LITERAL
      User:69995       | ALLOW      | READ             | GROUP    | connect-cloud         | LITERAL
      User:69995       | ALLOW      | DESCRIBE         | CLUSTER  | kafka-cluster         | LITERAL
      User:69995       | ALLOW      | CREATE           | CLUSTER  | kafka-cluster         | LITERAL
      User:69995       | ALLOW      | READ             | GROUP    | connect-replicator    | LITERAL
      User:69995       | ALLOW      | WRITE            | TOPIC    | connect-demo-configs  | PREFIXED
      User:69995       | ALLOW      | READ             | TOPIC    | connect-demo-configs  | PREFIXED
      User:69995       | ALLOW      | WRITE            | GROUP    | _confluent            | PREFIXED
      User:69995       | ALLOW      | READ             | GROUP    | _confluent            | PREFIXED
      User:69995       | ALLOW      | CREATE           | GROUP    | _confluent            | PREFIXED

kafka-connect-datagen
---------------------

#. In the demo, view :devx-examples:`this code|ccloud/connectors/submit_datagen_pageviews_config.sh`, which automatically loads the ``kafka-connect-datagen`` connector for the Kafka topic ``pageviews`` into the ``connect-local`` cluster. This topic is later replicated by |crep| into |ccloud| (more on |crep| later).

   .. literalinclude:: ../connectors/submit_datagen_pageviews_config.sh
      :lines: 13-29

#. In |c3|, view the data in the ``pageviews`` topic in the local cluster.

   .. figure:: images/topic_pageviews.png
      :alt: image

#. In the demo, view :devx-examples:`this code|ccloud/connectors/submit_datagen_users_config.sh`, which automatically loads the ``kafka-connect-datagen`` connector for the Kafka topic ``users`` into the ``connect-cloud`` cluster.

   .. literalinclude:: ../connectors/submit_datagen_users_config.sh
      :lines: 13-29

#. In |c3|, view the data in the ``users`` topic in |ccloud|.

   .. figure:: images/topic_users.png
      :alt: image

ksqlDB
------

#. In the demo, the Confluent Cloud ksqlDB queries are created from :devx-examples:`statements.sql|ccloud/statements.sql` (for ksqlDB version 0.10.0) using the REST API in :devx-examples:`this code|ccloud/create_ksqldb_app.sh` with the proper credentials.

   .. literalinclude:: ../create_ksqldb_app.sh
      :lines: 31-52

#. From the Confluent Cloud UI, view the ksqlDB application flow.

   .. figure:: images/ksqlDB_flow.png
      :alt: image

#. Click on any stream to view its messages and its schema.
   .. figure:: images/ksqlDB_stream_messages.png
      :alt: image

Confluent Replicator
--------------------

Confluent Replicator copies data from a source Kafka cluster to a destination Kafka cluster. In this demo, the source cluster is a local install of a self-managed cluster, and the destination cluster is |ccloud|. |crep| replicates the Kafka topic ``pageviews`` from the local install to |ccloud|, and it runs with Confluent Monitoring Interceptors for |c3| streams monitoring.

#. In the demo, view :devx-examples:`this code|ccloud/connectors/submit_replicator_docker_config.sh`, which automatically loads the |crep| connector into the ``connect-cloud`` cluster. Notice that the |crep| configuration sets ``confluent.topic.replication.factor=3``, which is required because the source cluster has ``replication.factor=1`` and |ccloud| requires ``replication.factor=3``:

   .. literalinclude:: ../connectors/submit_replicator_docker_config.sh
      :lines: 13-41
      :emphasize-lines: 8

#. |c3| is configured to manage a locally running Connect cluster called ``connect-cloud`` on port 8087, which runs the ``kafka-connect-datagen`` connector (for the Kafka topic ``users``) and the |crep| connector. From the |c3| UI, view the Connect clusters.

   .. figure:: images/c3_clusters.png
      :alt: image

#. In the demo, view :devx-examples:`this code|ccloud/docker-compose.yml` to see the ``connect-cloud`` Connect cluster, which is connected to |ccloud|.

   .. literalinclude:: ../docker-compose.yml
      :lines: 168-237

#. Click on ``replicator`` to view the |crep| configuration. Notice that it is replicating the topic ``pageviews`` from the local Kafka cluster to |ccloud|.

   .. figure:: images/c3_replicator_config.png
      :alt: image

#. Validate that messages are replicated from the local ``pageviews`` topic to the Confluent Cloud ``pageviews`` topic. From the Confluent Cloud UI, view messages in this topic.

   .. figure:: images/cloud_pageviews_messages.png
      :alt: image
#. View the Consumer Lag for |crep| from the |ccloud| UI. In the ``Consumers`` view, click on ``connect-replicator``. Your output should resemble:

   .. figure:: images/replicator_consumer_lag.png
      :alt: image

Confluent Schema Registry
-------------------------

The connectors used in this demo are configured to automatically write Avro-formatted data, leveraging the |ccloud| |sr|.

#. View all the |sr| subjects.

   .. sourcecode:: bash

      # Confluent Cloud Schema Registry
      curl -u <SR API KEY>:<SR API SECRET> https://<SR ENDPOINT>/subjects

#. From the Confluent Cloud UI, view the schema for the ``pageviews`` topic. The topic value uses a schema registered with |sr| (the topic key is just a String).

   .. figure:: images/topic_schema.png
      :alt: image

#. If you need to migrate schemas from an on-prem |sr| to |ccloud| |sr|, follow this :ref:`step-by-step guide `. Refer to the file :devx-examples:`submit_replicator_schema_migration_config.sh|ccloud/connectors/submit_replicator_schema_migration_config.sh#L13-L33` for an example of a working Replicator configuration for schema migration.

==============================
Confluent Cloud Configurations
==============================

#. View the template delta configuration for Confluent Platform components and clients to connect to Confluent Cloud:

   .. sourcecode:: bash

      ls template_delta_configs/

#. Generate the per-component delta configuration parameters, automatically derived from your Confluent Cloud configuration file:

   .. sourcecode:: bash

      ./ccloud-generate-cp-configs.sh

#. If you ran this demo as ``start-docker.sh``, configurations for all the |cp| components are available in the :devx-examples:`docker-compose.yml file|ccloud/docker-compose.yml`.

   ::

      # For Docker Compose
      cat docker-compose.yml

#. If you ran this demo as ``start.sh``, which uses Confluent CLI, all configuration files and log files are saved in the respective component subfolders in the current Confluent CLI temp directory (requires the demo to be actively running):
   .. sourcecode:: bash

      # For Confluent Platform local install using Confluent CLI
      ls `confluent local current | tail -1`

========================
Troubleshooting the demo
========================

#. If you ran with Docker, run ``docker-compose logs | grep ERROR``.

#. To view log files, look in the current Confluent CLI temp directory (requires the demo to be actively running):

   .. sourcecode:: bash

      # View all files
      ls `confluent local current | tail -1`

      # View the log file per service, e.g. for the Kafka broker
      confluent local log kafka

=========
Stop Demo
=========

#. Stop the demo, destroying all local components and all resources in |ccloud|.

   .. sourcecode:: bash

      # For Docker Compose
      ./stop-docker.sh

   .. sourcecode:: bash

      # For Confluent Platform local install using Confluent CLI
      ./stop.sh

#. Always verify that resources in |ccloud| have been destroyed.

====================
Additional Resources
====================

- To find additional |ccloud| demos, see :ref:`Confluent Cloud Demos Overview`.