Import and export a pipeline¶
Stream Designer enables exporting a pipeline to a SQL file and importing it into another pipeline.
Step 1: Create a pipeline project¶
A Stream Designer pipeline project defines all the components that are deployed for an application. In this step, you create a pipeline project and a canvas for designing the component graph.
Log in to the Confluent Cloud Console and open the Cluster Overview page for the cluster you want to use for creating pipelines.
In the navigation menu, click Stream Designer.
Click Create pipeline, and in the Create new pipeline dialog, enter my-pipeline for the pipeline name.
In the ksqlDB Cluster dropdown, select the ksqlDB cluster to use for your pipeline logic.
If you don’t have a ksqlDB cluster yet, click Create new ksqlDB cluster to open the ksqlDB Cluster page and click Add cluster. When you’ve finished creating the ksqlDB cluster, return to the Create new pipeline dialog and click the refresh button to see your new ksqlDB cluster in the dropdown.
Click Create pipeline.
The Create a new pipeline page opens.
Click Grant pipeline privileges.
In the Grant activation privileges dialog, type “confirm” and click Confirm.
The Activate pipeline and Deactivate pipeline buttons appear.
Step 2: Create a table definition¶
In this step, you start creating your pipeline by using a CREATE TABLE
statement. The statement includes the schema definition for a
table that has the following columns.
ID STRING PRIMARY KEY
REGISTERTIME BIGINT
USERID STRING
REGIONID STRING
GENDER STRING
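For illustration, a single row conforming to this schema could be written with a ksqlDB INSERT statement like the sketch below. You don't run this as part of the tutorial; the Datagen connector you add later generates the rows, and the values shown here are hypothetical.

-- Hypothetical example row; not part of the tutorial steps.
INSERT INTO "users_table" (ID, REGISTERTIME, USERID, REGIONID, GENDER)
  VALUES ('User_1', 1517896912479, 'User_1', 'Region_3', 'FEMALE');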
Click Start with SQL and click Start building.
The source code editor opens.
Copy the following code into the source code editor.
CREATE OR REPLACE TABLE "users_table" (
  ID STRING PRIMARY KEY,
  REGISTERTIME BIGINT,
  USERID STRING,
  REGIONID STRING,
  GENDER STRING
) WITH (
  kafka_topic='users_topic',
  partitions=1,
  value_format='JSON_SR'
);
Click Apply changes.
Topic and Table components appear on the canvas.
In the source code editor, click X to dismiss the code view.
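After you activate the pipeline in Step 6, you can optionally confirm the table from the ksqlDB editor, outside Stream Designer. A minimal check, assuming the table name used in this tutorial:

-- Show the registered schema for the table.
DESCRIBE "users_table";

-- Watch rows arrive with a push query (stop the query when done).
SELECT * FROM "users_table" EMIT CHANGES;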
Step 3: Create and configure the connector¶
In the Components menu, click Source Connector.
Hover over the Datagen connector component, click + and drag the arrow to the Topic icon.
In the Source Connector component, click Configure.
The Source Connector page opens.
In the search box, enter “datagen”.
Click the Datagen Source tile to open the Configuration page.
In the Topic textbox, type “users_topic”, and click Continue.
The Kafka credentials page opens.
Ensure that the Global access tile is selected and click Generate a Kafka API key & secret to create the API key for the Datagen connector.
A text file containing the newly generated API key and secret is downloaded to your local machine.
In the Select output record value format section, click JSON_SR, and in the Select a template section, click Users.
In the Connector sizing section, leave the minimum number of tasks at 1 and click Continue.
In the Connector name textbox, enter “Datagen_users” and click Continue.
The Datagen source connector is configured and appears on the canvas with a corresponding topic component. The topic component is configured with the name you provided during connector configuration.
You have a Datagen Source connector ready to produce mock user data to a Kafka topic named users_topic, with a table registered on the topic.
Step 4: Export the pipeline definition¶
In this step, you export the SQL code that defines your pipeline to a file on your local machine.
Step 5: Import the pipeline code¶
In this step, you delete the pipeline components from the canvas and import the pipeline definition that you saved earlier.
Right-click on the Connector component and click Remove.
Right-click on the Topic component and click Remove.
The canvas resets to the Create a new pipeline page.
Click Start with SQL and Start building.
In the source code editor, paste the contents of the SQL file that you downloaded earlier.
From the saved API key file, copy the API key and secret into the corresponding kafka.api.key and kafka.api.secret fields of the CREATE SOURCE CONNECTOR statement.
Your output should resemble:
CREATE SOURCE CONNECTOR "Datagen_users" WITH (
  "connector.class"='DatagenSource',
  "kafka.api.key"='<your-api-key>',
  "kafka.api.secret"='<your-api-secret>',
  "kafka.auth.mode"='KAFKA_API_KEY',
  "kafka.topic"='users_topic',
  "output.data.format"='JSON_SR',
  "quickstart"='USERS',
  "tasks.max"='1'
);

CREATE OR REPLACE TABLE "users_table" (
  GENDER STRING,
  ID STRING PRIMARY KEY,
  REGIONID STRING,
  REGISTERTIME BIGINT,
  USERID STRING
) WITH (
  kafka_topic='users_topic',
  partitions=1,
  key_format='KAFKA',
  value_format='JSON_SR'
);
Click Apply changes.
The Datagen Source connector and Topic components are imported and appear on the canvas. Note that when you import code, the canvas resets, and any components already on it are deleted.
In the source code editor, click X to dismiss it.
Your pipeline is ready to activate.
Step 6: Activate the pipeline¶
Click Activate pipeline.
Pipeline activation begins. It may take a few minutes for all components to activate.
Click the Topic component, and in the details page, click Messages to confirm that the Datagen Source connector is producing messages to the topic.
You should see a stream of mock user records arriving in the topic.
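You can also inspect the raw topic from the ksqlDB editor with a PRINT statement. This sketch assumes the topic name used in this tutorial; field values are randomly generated, so yours will differ:

PRINT 'users_topic' FROM BEGINNING;

-- Example message value (sketch):
-- {"registertime": 1493819497170, "userid": "User_1", "regionid": "Region_2", "gender": "MALE"}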
Step 7: Deactivate the pipeline¶
To avoid incurring costs, deactivate the pipeline when you're done with it, which deletes all resources created by the pipeline.
When you deactivate a pipeline, you have the option of retaining or deleting topics in the pipeline.
Click Deactivate pipeline.
The Revert pipeline to draft? dialog appears. Click the dropdowns to delete or retain the listed topics. For this example, keep the Delete settings.
Click Confirm and revert to draft to deactivate the pipeline and delete topics.
Step 8: Delete the pipeline¶
When all components have completed deactivation, you can delete the pipeline safely.