Azure Event Hubs Source Connector for Confluent Cloud¶
Note
If you are installing the connector locally for Confluent Platform, see Azure Event Hubs Source Connector for Confluent Platform.
The Kafka Connect Azure Event Hubs Source connector for Confluent Cloud is used to poll data from Azure Event Hubs and persist the data to an Apache Kafka® topic. For additional information about Azure Event Hubs, see the Azure Event Hubs documentation. The connector fetches records from Azure Event Hubs through a subscription.
Features¶
The Azure Event Hubs Source connector provides the following features:
- Topics created automatically: The connector can automatically create Kafka topics.
- Select configuration properties:
  - azure.eventhubs.partition.starting.position
  - azure.eventhubs.consumer.group
  - azure.eventhubs.transport.type
  - azure.eventhubs.offset.type
  - max.events (defaults to 50 with a maximum of 499 events)
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see Azure Event Hubs Source Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Environment Limitations.
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud Azure Event Hubs Source connector.
- Prerequisites
- Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
- At least one topic must exist before creating the connector.
- The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
- An Azure account with an existing Event Hubs Namespace, Event Hub, and Consumer Group.
- An Azure Event Hubs Shared Access Policy with its policy name and key.
- Kafka cluster credentials. The following lists the different ways you can provide credentials.
- Enter an existing service account resource ID.
- Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
- Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
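For example, a minimal sketch of creating an API key with the Confluent CLI, assuming a placeholder cluster ID of lkc-123456 (substitute your own cluster resource ID):
confluent api-key create --resource lkc-123456
The command returns an API key and secret that you can supply when configuring the connector.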
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster.¶
See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.
Step 2: Add a connector.¶
In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector.¶
Click the Azure Event Hubs Source connector card.
Step 4: Set up the connection.¶
Complete the following and click Continue.
Note
- Make sure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
- Enter a connector name.
- Select the way you want to provide Kafka Cluster credentials. You can either select a service account resource ID or you can enter an API key and secret (or generate these in the Cloud Console).
- Enter the Kafka topic name where you want data sent. The connector can create a topic automatically if no topics exist.
- Enter your Azure Event Hubs details.
- Enter your Connection details.
- Select the starting position in the Event Hub if no offsets are stored and a reset occurs.
- Select the transport type for communicating with Event Hubs. Event Hubs supports the following two types:
  - AMQP: AMQP over TCP (uses port 5671)
  - AMQP_WEB_SOCKETS: AMQP over web sockets (uses port 443)
- Select the offset type used to keep track of events. Event Hubs supports the following two types:
  - OFFSET: The Azure Event Hubs offset for the event.
  - SEQ_NUM: The sequence number of the event.
- Enter the maximum number of events to read when polling an Event Hub partition. The default is 50 events; 499 is the maximum. These selections map to connector configuration properties, as shown in the sketch after this list.
- Enter the maximum number of tasks for the connector. Refer to Confluent Cloud connector limitations for additional information.
- Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details about adding SMTs. See Unsupported transformations for a list of SMTs that are not supported with this connector.
See Configuration Properties for all property values and definitions.
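For reference, a minimal sketch of how the Connection details selections above map to the connector's configuration properties; the values shown are the documented defaults:
"azure.eventhubs.partition.starting.position": "START_OF_STREAM",
"azure.eventhubs.transport.type": "AMQP",
"azure.eventhubs.offset.type": "OFFSET",
"max.events": "50"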
Step 5: Launch the connector.¶
Verify the connection details and click Launch.
Step 6: Check the connector status.¶
The status for the connector should go from Provisioning to Running. It may take a few minutes.
Step 7: Check the Kafka topic.¶
After the connector is running, verify that messages are populating your Kafka topic.
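As a quick check, you can consume events from the topic with the Confluent CLI; the topic name below is whatever you entered when setting up the connection:
confluent kafka topic consume --from-beginning <my-topic-name>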
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
Using the Confluent CLI¶
Complete the following steps to set up and run the connector using the Confluent CLI.
Important
Make sure you have all your prerequisites completed.
The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.
You must create the Kafka topic before creating and launching this connector. Use the following command to create a topic using the Confluent CLI.
confluent kafka topic create <topic-name>
Step 1: List the available connectors.¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: Show the required connector configuration properties.¶
Enter the following command to show the required connector properties:
confluent connect plugin describe <connector-catalog-name>
For example:
confluent connect plugin describe AzureEventHubsSource
Example output:
Following are the required configs:
connector.class: AzureEventHubsSource
name
kafka.auth.mode
kafka.api.key
kafka.api.secret
azure.eventhubs.sas.keyname
azure.eventhubs.sas.key
azure.eventhubs.namespace
azure.eventhubs.hub.name
kafka.topic
tasks.max
Step 3: Create the connector configuration file.¶
Create a JSON file that contains the connector configuration properties. The following example shows required and optional connector properties.
{
"connector.class": "AzureEventHubsSource",
"name": "azure-eventhubs-source",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "<my-kafka-api-key>",
"kafka.api.secret": "<my-kafka-api-secret>",
"azure.eventhubs.sas.keyname": "<-my-shared-access-policy name->",
"azure.eventhubs.sas.key": "<my-shared-access-key>",
"azure.eventhubs.namespace": "<my-eventhubs-namespace>",
"azure.eventhubs.hub.name": "<my-eventhub-name>",
"azure.eventhubs.consumer.group": "<my-eventhub-consumer-group>",
"kafka.topic": "<my-topic-name>",
"azure.eventhubs.partition.starting.position": "START_OF_STREAM",
"azure.eventhubs.transport.type": "AMQP",
"azure.eventhubs.offset.type": "OFFSET",
"max.events": "50",
"tasks.max": "1"
}
Note the following property definitions:
"name"
: Sets a name for your new connector."connector.class"
: Identifies the connector plugin name.
"kafka.auth.mode"
: Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNT
orKAFKA_API_KEY
(the default). To use an API key and secret, specify the configuration propertieskafka.api.key
andkafka.api.secret
, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>
. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list Id | Resource ID | Name | Description +---------+-------------+-------------------+------------------- 123456 | sa-l1r23m | sa-1 | Service account 1 789101 | sa-l4d56p | sa-2 | Service account 2
"azure.eventhubs.partition.starting.position"
: (Optional) Sets the starting position in the Event Hub if no offsets are stored and a reset occurs. The value can beSTART_OF_STREAM
orEND_OF_STREAM
. If no property is entered, the configuration defaults toSTART_OF_STREAM
."azure.eventhubs.transport.type"
: (Optional) Sets the transport type for communicating with Azure Event Hubs. The value can beAMQP
orAMQP_WEB_SOCKETS
. AMQP (over TCP) uses port 5671. AMQP over web sockets uses port 443. If no property is entered, the configuration defaults toAMQP
."azure.eventhubs.offset.type"
: (Optional) Sets the offset type used to keep track of events. The value can beOFFSET
(the Azure Event Hubs offset for the event) orSEQ_NUM
(the sequence number of the event). If no property is entered, the configuration defaults toOFFSET
."max.events"
: (Optional) The maximum number of events to read from an Event Hub partition when polling. If no property is entered, the configuration defaults to50
.499
is the maximum number of events.
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs. See Unsupported transformations for a list of SMTs that are not supported with this connector.
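As an illustration, a minimal sketch of adding one SMT to the connector configuration file from Step 3; the transform alias insertSourceHub and the field name source_hub are hypothetical names chosen for this example, and InsertField is a standard Kafka Connect transform:
"transforms": "insertSourceHub",
"transforms.insertSourceHub.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.insertSourceHub.static.field": "source_hub",
"transforms.insertSourceHub.static.value": "<my-eventhub-name>"
Add these properties alongside the others in the JSON file, and confirm the transform is supported for this connector before relying on it.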
See Configuration Properties for all property values and definitions.
Step 4: Load the properties file and create the connector.¶
Enter the following command to load the configuration and start the connector:
confluent connect create --config <file-name>.json
For example:
confluent connect create --config az-event-hubs.json
Example output:
Created connector azure-eventhubs-source lcc-ix4dl
Step 5: Check the connector status.¶
Enter the following command to check the connector status:
confluent connect list
Example output:
     ID     |          Name          |  Status |  Type
+-----------+------------------------+---------+--------+
  lcc-ix4dl | azure-eventhubs-source | RUNNING | source
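To drill into a single connector, a hedged sketch using the describe subcommand, which follows the same pattern as the list and create commands above; the connector ID comes from the example output:
confluent connect describe lcc-ix4dl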
Step 6: Check the Kafka topic.¶
After the connector is running, verify that messages are populating your Kafka topic.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
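As a hedged sketch of the REST approach, creating the same connector through the Confluent Cloud API for Connect could look like the following; <env-id> and <kafka-cluster-id> identify your environment and Kafka cluster, and the Cloud API key used for authentication is distinct from the Kafka API key inside the connector config:
curl -s -u "<cloud-api-key>:<cloud-api-secret>" \
  -H "Content-Type: application/json" \
  -X POST https://api.confluent.cloud/connect/v1/environments/<env-id>/clusters/<kafka-cluster-id>/connectors \
  -d '{
    "name": "azure-eventhubs-source",
    "config": {
      "connector.class": "AzureEventHubsSource",
      "name": "azure-eventhubs-source",
      "kafka.auth.mode": "KAFKA_API_KEY",
      "kafka.api.key": "<my-kafka-api-key>",
      "kafka.api.secret": "<my-kafka-api-secret>",
      "azure.eventhubs.sas.keyname": "<my-shared-access-policy-name>",
      "azure.eventhubs.sas.key": "<my-shared-access-key>",
      "azure.eventhubs.namespace": "<my-eventhubs-namespace>",
      "azure.eventhubs.hub.name": "<my-eventhub-name>",
      "kafka.topic": "<my-topic-name>",
      "tasks.max": "1"
    }
  }'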
Configuration Properties¶
Use the following configuration properties with this connector.
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
- Type: password
- Importance: high
kafka.service.account.id
The service account that will be used to generate the API keys to communicate with the Kafka cluster.
- Type: string
- Importance: high
kafka.api.secret
- Type: password
- Importance: high
Which topic do you want to send data to?¶
kafka.topic
Identifies the topic name to write the data to.
- Type: string
- Importance: high
How should we connect to your Event Hub?¶
azure.eventhubs.sas.keyname
Shared access policy name to use for access authentication.
- Type: string
- Importance: high
azure.eventhubs.sas.key
Shared access key to use for access authentication.
- Type: password
- Importance: high
azure.eventhubs.namespace
Event Hubs namespace to connect to.
- Type: string
- Importance: high
azure.eventhubs.hub.name
Event Hub to read from.
- Type: string
- Importance: high
azure.eventhubs.consumer.group
Specific consumer group to read from.
- Type: string
- Default: $Default
- Importance: low
Connection details¶
azure.eventhubs.partition.starting.position
Default reset position if no offsets are stored.
- Type: string
- Default: START_OF_STREAM
- Importance: medium
azure.eventhubs.transport.type
Event Hubs communication transport type.
- Type: string
- Default: AMQP
- Importance: low
azure.eventhubs.offset.type
Offset type to use to keep track of events.
- Type: string
- Default: OFFSET
- Importance: medium
max.events
Maximum number of events to read when polling an Event Hub partition.
- Type: int
- Default: 50
- Valid Values: [1,…,500]
- Importance: low
Number of tasks for this connector¶
tasks.max
- Type: int
- Valid Values: [1,…]
- Importance: high
Next Steps¶
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.