Confluent Platform Demo (cp-demo)
The cp-demo example builds a full Confluent Platform deployment with an |ak-tm| event streaming application that uses ksqlDB and Kafka Streams for stream processing, with security enabled end-to-end on all components.
The tutorial includes a module to extend it into a hybrid deployment that runs |crep| to copy data from a local on-prem |ak| cluster to Confluent Cloud, a fully-managed service for |ak-tm|.
Follow the accompanying step-by-step guided tutorial to learn how |ak| and Confluent Cloud work with |kconnect|, Confluent Schema Registry, Confluent Control Center, and |crep|, with security enabled end-to-end.
The use case is an |ak-tm| event streaming application that processes real-time edits to real Wikipedia pages.
The full event streaming platform based on Confluent Platform works as follows. Wikimedia's EventStreams service publishes a continuous stream of real-time edits happening to real wiki pages. A Kafka source connector, kafka-connect-sse, streams the server-sent events (SSE) from https://stream.wikimedia.org/v2/stream/recentchange, and a custom |kconnect| transform, kafka-connect-json-schema, extracts the JSON from these messages before they are written to a |ak| cluster. ksqlDB and a Kafka Streams application then process the data, and a Kafka sink connector, kafka-connect-elasticsearch, streams the data out of Kafka and materializes it into Elasticsearch for analysis with Kibana. |crep-full| also copies messages from one topic to another topic in the same cluster. All data is written as Avro with schemas managed by Confluent Schema Registry, and Confluent Control Center manages and monitors the deployment.
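To make the first hop of the pipeline concrete, the sketch below shows in plain Python what the kafka-connect-sse connector and the kafka-connect-json-schema transform conceptually do: take a raw server-sent event frame and extract its JSON payload. This is an illustrative sketch only, not the connectors' actual code, and the sample event fields are hypothetical stand-ins for the Wikimedia recentchange schema.

```python
import json

def parse_sse_event(raw_event: str) -> dict:
    """Extract the JSON payload from one server-sent event frame.

    Illustrative only: mimics in miniature what kafka-connect-sse plus
    the kafka-connect-json-schema transform do before records reach Kafka.
    """
    data_lines = []
    for line in raw_event.splitlines():
        # SSE frames are "field: value" lines; the payload lives in "data:".
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
    # Multi-line data fields are joined with newlines per the SSE spec.
    return json.loads("\n".join(data_lines))

# A trimmed-down event in roughly the shape the Wikimedia
# /v2/stream/recentchange endpoint emits (field names are illustrative).
raw = (
    "event: message\n"
    'data: {"title": "Example page", "type": "edit", "wiki": "enwiki"}\n'
)

change = parse_sse_event(raw)
print(change["title"])  # -> Example page
```

In the demo itself this extraction happens inside |kconnect|, so the |ak| topic receives clean JSON records rather than raw SSE frames.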
The data pattern is as follows:

| Component | Consumes From | Produces To |
|---|---|---|
| SSE source connector | Wikipedia | wikipedia.parsed |
| ksqlDB | wikipedia.parsed | ksqlDB streams and tables |
| Kafka Streams application | wikipedia.parsed | wikipedia.parsed.count-by-domain |
| Confluent Replicator | wikipedia.parsed | wikipedia.parsed.replica |
| Elasticsearch sink connector | wikipedia.parsed | Elasticsearch |
How to use this tutorial

We suggest following the cp-demo tutorial in order:
- Module 1: On-Prem Tutorial: bring up the on-prem |ak| cluster and explore the different technical areas of Confluent Platform
- Module 2: Hybrid Deployment to Confluent Cloud Tutorial: run |crep| to copy data from a local on-prem |ak| cluster to Confluent Cloud, and use the Metrics API to monitor both
- Teardown: clean up your on-prem and Confluent Cloud environment