Azure Data Lake Storage Gen2 Sink Connector for Confluent Cloud

Note

This is a Quick Start for the managed cloud connector. If you are installing the connector locally for Confluent Platform, see Azure Data Lake Storage Gen2 Sink connector for Confluent Platform.

You can use the Azure Data Lake Storage Gen2 Sink connector for Confluent Cloud to export Avro, JSON Schema, Protobuf, JSON (schemaless), or Bytes data from Apache Kafka® topics to Azure storage in Avro, Parquet, JSON, or Bytes format. Depending on its configuration, the Azure Data Lake Storage Gen2 (ADLS Gen2) Sink connector can export data with exactly-once delivery semantics to consumers of the ADLS Gen2 files it produces.

The ADLS Gen2 Sink connector periodically polls data from Kafka and, in turn, uploads it to Azure Data Lake storage. A partitioner is used to split the data of every Kafka partition into chunks. Each chunk of data is represented as an Azure Data Lake Storage Gen2 file. The key name encodes the topic, the Kafka partition, and the start offset of this data chunk. If no partitioner is specified in the configuration, the default partitioner which preserves Kafka partitioning is used. The size of each data chunk is determined by the number of records written to Azure Data Lake storage and by schema compatibility.
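
For example, with the default partitioner, the default topics.dir of topics, a hypothetical topic named pageviews, and Avro output, the chunk for Kafka partition 0 starting at offset 0 would be written to a path similar to the following (the exact layout depends on your partitioner configuration):

topics/pageviews/partition=0/pageviews+0+0000000000.avro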

Features

The Azure Data Lake Storage Gen2 (ADLS Gen2) Sink connector provides the following features:

  • Exactly Once Delivery: Records that are exported using a deterministic partitioner are delivered with exactly-once semantics regardless of the eventual consistency of Azure Data Lake storage.

  • Data formats with or without a schema: The connector supports Avro, JSON Schema, Protobuf, JSON (schemaless), or Bytes input data formats and Avro, Parquet, JSON, or Bytes output formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf). See Schema Registry Enabled Environments for additional information.

  • Schema Evolution: schema.compatibility is set to NONE.

  • Scheduled Rotation and Rotation Interval: The connector supports a regularly scheduled interval for closing and uploading files to storage. See Scheduled Rotation for details.

  • Time-Based Partitioner: The connector supports the TimeBasedPartitioner class, which partitions data based on the Kafka record timestamp. Time-based partitioning options are daily or hourly.

  • Flush size: Defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm, because the arrival of records stamped after 3:00pm closes the hourly partition for 2:00pm to 3:00pm even though flush.size was not reached.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. Whichever condition is met first triggers a file commit to storage.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10-minute rotate.schedule.interval.ms condition was met before the flush.size=1000 condition.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Limitations

Be sure to review the following information.

Azure Storage Authentication

The following provides basic information about setting up connector access to Azure storage. Before continuing, you should be familiar with accessing blobs using Azure Active Directory.

There are two ways to set up authentication for your connector to access Azure: Basic authentication or Active Directory authentication.

Basic authentication

Basic authentication requires a storage account name and a storage access key, which you provide in the connector configuration. Basic authentication is typically used to test the connector in development.

Active Directory authentication

Authentication with Active Directory (AD) requires a client ID, a client key, and an OAuth 2.0 token endpoint, each described below.

When authenticating with AD, you first need to establish the relationship between the connector and the Microsoft identity platform. You do this by registering the connector as a trusted application. Once you have registered the application, you can get or create the following items from the Azure portal or using the Azure CLI:

  • Client ID: In Azure, this is the Application (client) ID created when registering the application. It is displayed in the Azure portal, or you can get it using the Azure CLI.

  • Client Key: In Azure, this is the Client secret you create after registering the AD application in the Azure portal.

  • Azure Token Endpoint: The OAuth 2.0 token endpoint is created automatically when you register the application with AD. You can find it in the dashboard for the AD application. Copy and paste the OAuth 2.0 token endpoint (v1) into the connector property.
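
If you use the Azure CLI, the following commands are one possible way to look up these values (the display name is a placeholder for the name you registered):

az ad app list --display-name <Connector-AD-Application-Name> --query "[0].appId" -o tsv

az account show --query tenantId -o tsv

With the tenant ID returned by the second command, the v1 token endpoint takes the form https://login.microsoftonline.com/<tenant-id>/oauth2/token.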

Minimum role permission

You must create a role assignment for the connector to be able to access Azure Data Lake storage. Minimally, the connector requires the role Storage Blob Data Contributor. For information about assigning this role to the AD application, see Assign roles.

Azure CLI commands

The following are a few example commands to use when setting up authentication for the connector. Be sure to use the latest version of the Azure CLI.

To create the AD application for the connector:

az ad app create --display-name <Connector-AD-Application-Name> \
--is-fallback-public-client false --sign-in-audience AzureADandPersonalMicrosoftAccount --query appId -o tsv

To create a service principal:

az ad sp create --id <AD-Application-Client-ID>

To assign the role Storage Blob Data Contributor to the service principal:

az role assignment create --assignee <Service-Principal-ID> \
--role "Storage Blob Data Contributor"

Quick Start

Use this quick start to get up and running with the ADLS Gen2 Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to Azure storage.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Microsoft Azure.
  • The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
  • Azure Data Lake storage should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Data Lake storage in different regions.
  • Public inbound traffic access (0.0.0.0/0) must be allowed for this connector. Add access from All networks in Firewalls and virtual networks for the storage account. For more information about public Internet access to resources, see Networking, DNS, and service endpoints.
  • Authentication configured for Azure storage. For details, see Azure Storage Authentication.
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
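
      For example, to create an API key for a hypothetical cluster with the ID lkc-123456 (replace with your own cluster ID):

      confluent api-key create --resource lkc-123456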

Caution

You can’t mix schema and schemaless records in storage using kafka-connect-storage-common. Attempting this causes a runtime exception. If you are using the self-managed version of this connector, this issue will be evident when you review the log files (only available for the self-managed connector).

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the Azure Data Lake Storage Gen2 Sink connector card.

Step 4: Enter the connector details.

Note

  • Ensure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

At the Add Azure Data Lake Storage Gen2 Sink Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topic(s) you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check the Azure storage container.

  1. From the Azure portal, go to your Azure storage account.

  2. Open each folder until you see your messages displayed.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

  • Make sure you have all your prerequisites completed.
  • The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.

Step 1: List the available connectors.

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

confluent connect plugin describe <connector-catalog-name>

For example:

confluent connect plugin describe AzureDataLakeGen2Sink

Example output:

Following are the required configs:
connector.class: AzureDataLakeGen2Sink
name
kafka.auth.mode
kafka.api.key
kafka.api.secret
topics
input.data.format
azure.datalake.gen2.account.name
azure.datalake.gen2.access.key
output.data.format
time.interval
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
    "name": "adls-sink-connector",
    "connector.class": "AzureDataLakeGen2Sink",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "<my-kafka-api-key>",
    "kafka.api.secret": "<my-kafka-api-secret>",
    "topics": "pageviews",
    "input.data.format": "AVRO",
    "azure.datalake.gen2.account.name": "<account-name>",
    "azure.datalake.gen2.access.key": "<access-key>",
    "topics.dir": "topics",
    "output.data.format": "AVRO",
    "time.interval": "HOURLY",
    "flush.size": "1000",
    "tasks.max": "1"
}

Note the following property definitions:

  • "name": Sets a name for your new connector.
  • "connector.class": Identifies the connector plugin name.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • The configuration example above shows basic Azure authentication properties. For Active Directory (AD) authentication details, see Azure Storage Authentication. For the configuration properties to use, see Configuration Properties.
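
    As a sketch, AD authentication replaces the azure.datalake.gen2.access.key property with the AD properties listed under Configuration Properties (the values shown are placeholders):

        "azure.datalake.gen2.account.name": "<account-name>",
        "azure.datalake.gen2.client.id": "<client-ID>",
        "azure.datalake.gen2.client.key": "<client-secret>",
        "azure.datalake.gen2.token.endpoint": "<OAuth-2.0-token-endpoint-v1>",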

  • "topics": Identifies the topic name or a comma-separated list of topic names.

  • "input.data.format": Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

    Note

    Input format JSON to output format AVRO does not work for the connector.

  • "topics.dir": The example above shows the default entry topics. In this example, the directory hierarchy created is topics/pageviews. Each Kafka topic will have a separate subdirectory based on the Kafka topic name. topics.dir shouldn’t start with /.

  • "output.data.format": Sets the output Kafka record value format (data coming from the connector). Valid entries are AVRO, PARQUET, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based output format (for example, Avro).

  • (Optional) flush.size: Defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm, because the arrival of records stamped after 3:00pm closes the hourly partition for 2:00pm to 3:00pm even though flush.size was not reached.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. Whichever condition is met first triggers a file commit to storage.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10-minute rotate.schedule.interval.ms condition was met before the flush.size=1000 condition.

  • "time.interval": Sets how your messages are grouped in the GCS bucket. Valid entries are DAILY or HOURLY.

Tip

The time.interval property above and the following optional properties topics.dir and path.format can be used to build a directory structure for stored data. For example: You set "time.interval" : "HOURLY", "topics.dir" : "json_logs/hourly", and "path.format" : "'dt'=YYYY-MM-dd/'hr'=HH". The result is the directory structure: <container-name>/json_logs/hourly/<Topic-Name>/dt=2020-02-06/hr=09/<files>. A configuration fragment combining these properties appears after the list below.

  • "topics.dir": A top-level directory path to use for stored data. Defaults to topics if not used.
  • "path.format": Configures the time-based partitioning path created. The property converts the UNIX timestamp to a date format string. If not used, this property defaults to 'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH if an Hourly time.interval was selected or 'year'=YYYY/'month'=MM/'day'=dd if a Daily Time interval was selected.
  • rotate.schedule.interval.ms and rotate.interval.ms: See Scheduled Rotation for details about using these properties.
  • "tasks.max": Enter the maximum number of connector tasks to use.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. See Unsupported transformations for a list of SMTs that are not supported with this connector.

See Configuration Properties for all property values and definitions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

confluent connect create --config <file-name>.json

For example:

confluent connect create --config adls-sink-config.json

Example output:

Created connector adls-sink-connector lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

confluent connect list

Example output:

ID          |       Name                | Status  | Type
+-----------+---------------------------+---------+------+
lcc-ix4dl   | adls-sink-connector       | RUNNING | sink

Step 6: Check the Azure storage container.

  1. From the Azure portal, go to your Azure storage account.

  2. Open each folder until you see your messages displayed.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

Scheduled Rotation

Two optional properties are available that allow you to set up a rotation schedule. These properties are provided in the Cloud Console (shown below) and in the Confluent CLI.

  • rotate.schedule.interval.ms (Scheduled rotation): This property allows you to configure a regular schedule for when files are closed and uploaded to storage. The default value is -1 (disabled). For example, when this property is set to 600000 ms, you will see files available in the storage bucket at least every 10 minutes. rotate.schedule.interval.ms does not require a continuous stream of data.

    Note

    Using the rotate.schedule.interval.ms property results in a non-deterministic environment and invalidates exactly-once guarantees.

  • rotate.interval.ms (Rotation interval): This property allows you to specify the maximum time span (in milliseconds) that a file can remain open for additional records. When using this property, the time span for the file starts with the timestamp of the first record added to the file. The connector closes and uploads the file to storage when the timestamp of a subsequent record falls outside the time span set by the file’s first record. The minimum value is 600000 ms (10 minutes). This property defaults to the interval set by the time.interval property. rotate.interval.ms requires a continuous stream of data.

    Important

    The start and end of the time span is determined using record timestamps. For this reason, a file could potentially remain open for a long time if a record does not arrive with a timestamp falling outside the time span set by the file’s first record.
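
As an illustration, a hedged configuration fragment (not a complete connector config) that commits files at least every 10 minutes using wall-clock time, while still committing early once 1000 records accumulate in a partition:

    "flush.size": "1000",
    "rotate.schedule.interval.ms": "600000",

Keep in mind that, as noted above, using rotate.schedule.interval.ms invalidates exactly-once guarantees.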

Configuration Properties

Use the following configuration properties with this connector.

Note

These are properties for the managed cloud connector. If you are installing the connector locally for Confluent Platform, see Azure Data Lake Storage Gen2 Sink connector for Confluent Platform.

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Input messages

input.data.format

Sets the input message format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.

  • Type: string
  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key
  • Type: password
  • Importance: high
kafka.service.account.id

The service account that is used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret
  • Type: password
  • Importance: high

Destination

topics.dir

Top-level directory where ingested data is stored. The topics.dir entry should not start with /.

  • Type: string
  • Default: topics
  • Importance: high

How should we connect to your ADLS Gen2 storage account?

azure.datalake.gen2.account.name

Must be 3 to 23 alphanumeric characters long.

  • Type: string
  • Importance: high
azure.datalake.gen2.access.key

Access Key for the storage account.

  • Type: password
  • Importance: high

How should we connect to your Active Directory?

azure.datalake.gen2.client.id

The client ID (GUID) of the client obtained from Azure Active Directory configuration.

  • Type: string
  • Importance: high
azure.datalake.gen2.client.key

The secret key of the client.

  • Type: password
  • Importance: high
azure.datalake.gen2.token.endpoint

The OAuth 2.0 token endpoint associated with the user’s directory (obtained from the Active Directory configuration).

  • Type: string
  • Importance: high

Output messages

output.data.format

Sets the output message format for values. Valid entries are AVRO, JSON, PARQUET, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO. If no value is provided, this property defaults to the value of the input.data.format property; if PROTOBUF or JSON_SR is the input message format, you must select an output format explicitly.

  • Type: string
  • Importance: high
parquet.codec

Compression type for parquet files written to Azure.

  • Type: string
  • Importance: high

Organize my data by…

path.format

This configuration is used to set the format of the data directories when partitioning with TimeBasedPartitioner. The format set in this configuration converts the Unix timestamp to a valid directory string. To organize files like this example, https://<storage-account-name>.blob.core.windows.net/<container-name>/json_logs/daily/<Topic-Name>/dt=2020-02-06/hr=09/<files>, use the properties topics.dir=json_logs/daily, path.format='dt'=YYYY-MM-dd/'hr'=HH, and time.interval=HOURLY.

  • Type: string
  • Default: 'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
  • Importance: high
time.interval

Partitioning interval of data, according to the time ingested to storage.

  • Type: string
  • Importance: high
rotate.schedule.interval.ms

Scheduled rotation uses rotate.schedule.interval.ms to close the file and upload to storage on a regular basis using the current time, rather than the record time. Setting rotate.schedule.interval.ms is nondeterministic and invalidates exactly-once guarantees. The minimum value is 600000 ms (10 minutes).

  • Type: int
  • Default: -1
  • Importance: medium
rotate.interval.ms

The connector’s rotation interval specifies the maximum time span (in milliseconds) a file can remain open and ready for additional records. When using rotate.interval.ms, the time span for each file starts with the timestamp of the first record inserted in the file. The connector closes and uploads a file to the blob store when the next record’s timestamp does not fit into the file’s rotate.interval time span measured from the first record’s timestamp. If the connector has no more records to process, it may keep the file open until it can process another record (which can be a long time). The minimum value is 600000 ms (10 minutes). If no value for this property is provided, the value specified for the time.interval property is used.

  • Type: int
  • Importance: high
flush.size

Number of records written to storage before invoking file commits.

  • Type: int
  • Default: 1000
  • Importance: high
timestamp.field

Sets the field that contains the timestamp used for the TimeBasedPartitioner.

  • Type: string
  • Default: “”
  • Importance: high
timezone

Sets the timezone used by the TimeBasedPartitioner.

  • Type: string
  • Default: UTC
  • Importance: high
locale

Sets the locale to use with TimeBasedPartitioner.

  • Type: string
  • Default: en
  • Importance: high
value.converter.connect.meta.data

Toggles whether the Connect converter adds its metadata to the output schema.

  • Type: boolean
  • Default: true
  • Importance: medium

Number of tasks for this connector

tasks.max
  • Type: int
  • Valid Values: [1,…]
  • Importance: high

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.