Azure Service Bus Source Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see Azure Service Bus Source connector for Confluent Platform.

Azure Service Bus is a multi-tenant cloud messaging service you can use to send information between applications and services. The Kafka Connect Azure Service Bus Source connector for Confluent Cloud reads data from an Azure Service Bus queue or topic and persists it in a Kafka topic.

Features

The Azure Service Bus Source connector supports the following features:

  • Topics created automatically: The connector can automatically create Kafka topics.
  • At least once delivery: The connector guarantees that records are delivered at least once to the Kafka topic.
  • Supports multiple tasks: The connector supports running one or more tasks.
  • Supported data formats: The connector supports Avro, JSON Schema (JSON_SR), Protobuf, and JSON (schemaless) output formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf). See Environment Limitations for additional information.

Note

See Configuration Properties for configuration property descriptions.

Refer to Cloud connector limitations for additional information.

Quick Start

Use this quick start to get up and running with the Confluent Cloud Azure Service Bus Source connector. The quick start provides the basics of selecting the connector and configuring it to stream events.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
  • The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
  • You must have the Azure Service Bus connection details (namespace, SAS key name, SAS key, and entity name). For additional information, see the Azure Service Bus docs. An example of looking up these details with the Azure CLI follows this list.
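
If you use the Azure CLI, you can look up the SAS key name and key for a namespace with a command like the one below. This is a sketch that assumes the default RootManageSharedAccessKey authorization rule; substitute your own resource group, namespace, and rule name.

# Assumption: the default RootManageSharedAccessKey rule; substitute your own names
az servicebus namespace authorization-rule keys list \
  --resource-group <resource-group> \
  --namespace-name <namespace> \
  --name RootManageSharedAccessKey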

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the Azure Service Bus Source connector icon.

Step 4: Set up the connection.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Enter a connector Name.
  2. Select the way you want to provide Kafka Cluster credentials. You can either select a service account resource ID or enter an API key and secret (or generate these in the Cloud Console; a CLI sketch for creating an API key appears after this list).
  3. Enter the Kafka topic name where you want data sent. The connector can create a topic automatically if no topics exist.
  4. Enter the required Azure Service Bus connection details. For details about shared access authorization, see Shared Access Authorization Policies.
  5. Select the Output Kafka record value format (data going into the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
  6. Enter the number of tasks to use with the connector. More tasks may improve performance.
  7. Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.
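
As noted in step 2, you can also create the Kafka API key and secret from the CLI instead of the Cloud Console. A minimal sketch, assuming a hypothetical cluster ID lkc-xxxxx:

# lkc-xxxxx is a placeholder; use your own Kafka cluster ID
confluent api-key create --resource lkc-xxxxx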

See Configuration Properties for configuration property values and descriptions.

Step 5: Launch the connector.

Verify the connection details by previewing the running configuration. Once you’ve validated that the properties are configured to your satisfaction, click Launch.

Tip

For information about previewing your connector output, see Connector Data Previews.

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running.

Step 7: Check for records.

Verify that records are being produced at the Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

  • Make sure you have all your prerequisites completed.
  • The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.

Step 1: List the available connectors.

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

confluent connect plugin describe <connector-catalog-name>

For example:

confluent connect plugin describe AzureServiceBusSource

Example output:

Following are the required configs:
connector.class: AzureServiceBusSource
name
kafka.auth.mode
kafka.api.key
kafka.api.secret
kafka.topic
azure.servicebus.namespace
azure.servicebus.sas.keyname
azure.servicebus.sas.key
azure.servicebus.entity.name
output.data.format
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "AzureServiceBusSource",
  "name": "AzureServiceBusSource_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "****************",
  "kafka.api.secret": "************************************************",
  "kafka.topic": "<topic-name>",
  "azure.servicebus.namespace": "<namespace>",
  "azure.servicebus.sas.keyname": "<keyname>",
  "azure.servicebus.sas.key": "****************************************",
  "azure.servicebus.entity.name": "<entity>",
  "output.data.format": "AVRO",
  "tasks.max": "1",
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "kafka.topic": Enter the topic name where data is sent.

  • "azure.servicebus.<>": Enter the Azure Service Bus details. For additional information, see the Azure Service Bus docs. For details about shared access signature (SAS) authorization, see Shared Access Authorization Policies.

  • "output.data.format": Enter an output Kafka record value format (data going into the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.

  • "tasks.max": Enter the maximum number of tasks for the connector to use.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
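
For example, one common SMT pattern is to tag each record with a static field using the standard Kafka Connect InsertField transform. The following sketch shows properties you could add to the connector configuration file from Step 3; the transform alias (addSource) and the field name and value are illustrative placeholders, and you should confirm this transform is supported for this connector in your environment.

"transforms": "addSource",
"transforms.addSource.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.addSource.static.field": "origin",
"transforms.addSource.static.value": "azure-service-bus"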

See Configuration Properties for configuration property values and descriptions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

confluent connect create --config <file-name>.json

For example:

confluent connect create --config azure-service-bus-source-config.json

Example output:

Created connector AzureServiceBusSource_0 lcc-do6vzd

Step 5: Check the connector status.

Enter the following command to check the connector status:

confluent connect list

Example output:

ID           |             Name              | Status  | Type   | Trace
+------------+-------------------------------+---------+--------+-------+
lcc-do6vzd   | AzureServiceBusSource_0       | RUNNING | source |       |
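
Depending on your Confluent CLI version, you may also be able to inspect a single connector by ID. A hedged sketch, using the connector ID from the example output above:

# The exact subcommand may vary across CLI versions
confluent connect describe lcc-do6vzd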

Step 6: Check for records.

Verify that records are being produced at the Kafka topic.
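
One way to verify from the CLI is to consume from the Kafka topic. A sketch, assuming the topic name from your configuration file and the AVRO output format used in the example configuration:

confluent kafka topic consume <topic-name> --from-beginning --value-format avro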

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
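
For instance, the following hedged curl sketch lists connectors through the Connect API, assuming a Cloud API key and secret and hypothetical environment (env-xxxxx) and cluster (lkc-xxxxx) IDs:

# env-xxxxx and lkc-xxxxx are placeholders; authenticate with a Cloud API key, not a Kafka API key
curl -u "<cloud-api-key>:<cloud-api-secret>" \
  https://api.confluent.cloud/connect/v1/environments/env-xxxxx/clusters/lkc-xxxxx/connectors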

Configuration Properties

The following connector configuration properties are used with the Azure Service Bus Source connector for Confluent Cloud.

kafka.topic

The name of the Kafka topic to publish data to. You can specify only one topic.

  • Type: string
  • Importance: high
azure.servicebus.namespace

The Azure Service Bus namespace that the messaging entity belongs to.

  • Type: string
  • Default: null
  • Importance: high
azure.servicebus.sas.keyname

Azure Service Bus Shared Access Signature (SAS) key name for the Service Bus queue/topic.

  • Type: string
  • Default: null
  • Importance: high
azure.servicebus.sas.key

Azure Service Bus SAS access key for the Service Bus queue/topic.

  • Type: password
  • Default: null
  • Importance: high
azure.servicebus.entity.name

The Azure Service Bus messaging entity name: the name of the queue or topic from which messages are polled.

  • Type: string
  • Default: null
  • Importance: high
azure.servicebus.subscription

The Azure Service Bus subscription name for the Service Bus topic. Configure this only when the messaging entity is a topic (see the example configuration after this property list).

  • Type: string
  • Default: “”
  • Importance: high
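
For example, when the messaging entity is a Service Bus topic rather than a queue, the configuration includes the subscription name. A minimal sketch using the properties above, with placeholder values:

{
  "connector.class": "AzureServiceBusSource",
  "name": "AzureServiceBusSource_topic_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "<api-key>",
  "kafka.api.secret": "<api-secret>",
  "kafka.topic": "<kafka-topic-name>",
  "azure.servicebus.namespace": "<namespace>",
  "azure.servicebus.sas.keyname": "<keyname>",
  "azure.servicebus.sas.key": "<sas-key>",
  "azure.servicebus.entity.name": "<service-bus-topic>",
  "azure.servicebus.subscription": "<subscription-name>",
  "output.data.format": "JSON",
  "tasks.max": "1"
}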

Note

For details about Azure Service Bus SAS authorization, see Shared Access Authorization Policies.

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent CLI to manage your resources in Confluent Cloud.
