Import a recipe into a pipeline¶
Stream Designer enables you to create a pipeline by importing a ksqlDB SQL file that defines the pipeline's components.
You can find example pipelines at Confluent Developer. The following list shows just a few of the recipes that you can import.
- Detect unusual credit card activity
- Understand user behavior with clickstream data
- Monitor security threats by analyzing and filtering audit logs
- Analyze political campaign fundraising performance
- Enrich orders with change data capture (CDC)
- Identify offline devices via IoT data
- Notify passengers of flight updates
- Integrate legacy messaging systems with Kafka
Note
The ksqlDB recipes provide templates to get you started on production pipelines. Production data sources require additional configuration for your specific use cases.
Step 1: Create a pipeline project¶
A Stream Designer pipeline project defines all the components that are deployed for an application. In this step, you create a pipeline project and a canvas for designing the component graph.
Log in to the Confluent Cloud Console and open the Cluster Overview page for the cluster you want to use for creating pipelines.
In the navigation menu, click Stream Designer.
Click Create pipeline, and in the Create new pipeline dialog, enter
my-pipeline
for the pipeline name.
In the ksqlDB Cluster dropdown, select the ksqlDB cluster to use for your pipeline logic.
Note
If you don’t have a ksqlDB cluster yet, click Create new ksqlDB cluster to open the ksqlDB Cluster page and click Add cluster. When you’ve finished creating the ksqlDB cluster, return to the Create new pipeline dialog and click the refresh button to see your new ksqlDB cluster in the dropdown.
Click Create pipeline.
The Create a new pipeline page opens.
Click Grant pipeline privileges.
In the Grant activation privileges dialog, type “confirm” and click Confirm.
The Activate pipeline and Deactivate pipeline buttons appear.
Step 2: Import the Clickstream recipe¶
The following steps show how to import the recipe for understanding user behavior with clickstream data.
Click Use ksqlDB recipe and click Start building.
The Browse ksqlDB recipes page opens.
In the CUSTOMER 360 section, click the Understand user behavior with clickstream data tile, and click Start with recipe.
The code editor opens and shows the SQL for the recipe; a representative sketch of this kind of SQL appears after these steps.
Click Apply changes to import the recipe.
The canvas is populated with components.
Click X to dismiss the code editor.
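The recipe's exact statements may differ, but imported recipe SQL generally follows the same pattern: register the connector-produced topics as streams, then derive aggregate tables from them. The following sketch is illustrative only; the stream, column, and topic names are assumptions, not the recipe's actual code.

```sql
-- Illustrative sketch, not the recipe's exact code: register the raw
-- clickstream topic as a stream, then count activity per user and IP.
CREATE STREAM clickstream (
  ip VARCHAR,
  userid INT,
  request VARCHAR,
  status INT
) WITH (
  KAFKA_TOPIC = 'clickstream',
  VALUE_FORMAT = 'JSON',
  PARTITIONS = 1
);

CREATE TABLE user_ip_activity WITH (KAFKA_TOPIC = 'user_ip_activity') AS
  SELECT userid,
         ip,
         COUNT(*) AS click_count
  FROM clickstream
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY userid, ip
  EMIT CHANGES;
```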
Step 3: Provision the Datagen source connectors¶
The pipeline starts with two Datagen source connectors that produce mock clickstream and user data. Before you activate the pipeline, the connectors require API keys to complete their provisioning. A sketch of an equivalent connector definition appears after the following steps.
Click the DATAGEN_CLICKSTREAM connector to open the configuration dialog.
In the Kafka credentials section, click the edit icon.
Click Global access, and then click Generate API key & download to create the API key for the Datagen connector.
A text file containing the newly generated API key and secret is downloaded to your local machine.
Click Save changes, and then click Save.
Repeat the previous steps for the DATAGEN_CLICKSTREAM_USERS connector.
Delete the sink connector, named RECIPE_ELASTICSEARCH_ANALYZED_CLICKSTREAM, which isn’t used in this example. Also, delete the corresponding USER_IP_ACTIVITY and ERRORS_PER_MIN_ALERT topics.
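For reference, each Datagen connector on the canvas corresponds to a fully managed connector configuration that includes the Kafka credentials you just generated. The following ksqlDB statement is a hedged sketch of such a definition; the property names follow the fully managed Datagen Source connector, and the placeholder key, secret, and topic values are assumptions you would replace with your own.

```sql
-- Hedged sketch of a Datagen source connector definition. Replace the
-- placeholder API key and secret with the values you downloaded.
CREATE SOURCE CONNECTOR "DATAGEN_CLICKSTREAM" WITH (
  'connector.class'    = 'DatagenSource',
  'kafka.auth.mode'    = 'KAFKA_API_KEY',
  'kafka.api.key'      = '<your-api-key>',
  'kafka.api.secret'   = '<your-api-secret>',
  'kafka.topic'        = 'clickstream',
  'output.data.format' = 'JSON',
  'quickstart'         = 'CLICKSTREAM',
  'tasks.max'          = '1'
);
```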
Step 4: Activate the pipeline¶
Click Activate to deploy and start the pipeline.
Pipeline activation begins. It may take a few minutes for all components to activate.
When all components show the Activated state, click the user_ip_activity topic, and in the details view, click Messages.
The Messages view shows the records that the pipeline writes to the user_ip_activity topic.
Click the other topics in the pipeline to see the output from different queries. You can also query the results directly in the ksqlDB editor, as sketched below.
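If you prefer to inspect results from the ksqlDB editor rather than a topic's Messages view, you can run a push query against the recipe's aggregate table. This is a minimal sketch that assumes the recipe created a table named USER_IP_ACTIVITY; adjust the name to match your pipeline.

```sql
-- Minimal sketch: stream new rows from the (assumed) USER_IP_ACTIVITY
-- table as the Datagen connectors continue producing clicks.
SELECT * FROM USER_IP_ACTIVITY EMIT CHANGES;
```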
Step 5: Deactivate the pipeline¶
To avoid incurring costs, click Deactivate pipeline to delete all resources created by the pipeline.
Step 6: Delete the pipeline¶
When all components have completed deactivation, you can delete the pipeline safely.