Azure Data Lake Storage Gen2 Sink Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see Azure Data Lake Storage Gen2 Sink Connector for Confluent Platform.

You can use the Connect Azure Data Lake Storage Gen2 sink connector for Confluent Cloud to export Avro, JSON Schema, Protobuf, JSON (schemaless), or Bytes data from Apache Kafka® topics to Azure storage in Avro, JSON, or Bytes format. Depending on your configuration, the Azure Data Lake Storage Gen2 connector can export data by guaranteeing exactly-once delivery semantics to consumers of the Azure Data Lake Storage Gen2 files it produces.

The Azure Data Lake Storage Gen2 sink connector periodically polls data from Kafka and, in turn, uploads it to Azure Data Lake Storage Gen2. A partitioner is used to split the data of every Kafka partition into chunks. Each chunk of data is represented as an Azure Data Lake Storage Gen2 file. The key name encodes the topic, the Kafka partition, and the start offset of this data chunk. If no partitioner is specified in the configuration, the default partitioner which preserves Kafka partitioning is used. The size of each data chunk is determined by the number of records written to Azure Data Lake Storage Gen2 and by schema compatibility.
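For example, with the default partitioner, a chunk of records read from partition 0 of a topic named pageviews, starting at offset 0 and written in Avro format, is stored under a key similar to the following (the topic name, partition, and offset are illustrative):

topics/pageviews/partition=0/pageviews+0+0000000000.avro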

Important

After this connector moves from Preview to General Availability (GA), Confluent Cloud Enterprise customers must have a Confluent Cloud annual commitment to use this connector. Contact your Confluent Account Executive to learn more and to update your subscription, if necessary.

Features

The Azure Data Lake Storage Gen2 Sink connector provides the following features:

  • Exactly Once Delivery: Records that are exported using a deterministic partitioner are delivered with exactly-once semantics regardless of the eventual consistency of Azure Data Lake Storage Gen2.

  • Data formats with or without a schema: The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) input data formats and Avro, JSON, and Bytes output formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf).

  • Schema Evolution: schema.compatibility is set to NONE.

  • Time-Based Partitioner: The connector supports the TimeBasedPartitioner class, which partitions data based on Kafka record timestamps. Time-based partitioning options are daily or hourly (see the example layout after this list).

  • Flush size: flush.size defaults to 1000. You can increase the default value if needed, or lower it if you are running a Dedicated Confluent Cloud cluster.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see the initial 500 records in storage at 3:00pm, because the records from the earlier hour are flushed when records for the new hour arrive, even though the flush size was not reached.
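For example, with hourly time-based partitioning, the connector groups files into one folder per hour. Assuming the default topics directory and a topic named pageviews, the resulting layout is similar to the following (the dates, partition, and offsets are illustrative):

topics/pageviews/year=2021/month=03/day=15/hour=14/pageviews+0+0000000000.avro
topics/pageviews/year=2021/month=03/day=15/hour=15/pageviews+0+0000000500.avro

With daily partitioning, files are grouped one level higher, in one folder per day.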

Refer to Cloud connector limitations for additional information.

Caution

Preview connectors are not currently supported and are not recommended for production use.

Quick Start

Use this quick start to get up and running with the Confluent Cloud Azure Data Lake Storage Gen2 Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to Azure storage.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Microsoft Azure.
  • The Confluent Cloud CLI installed and configured for the cluster. See Install and Configure the Confluent Cloud CLI.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • Azure Data Lake storage should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Data Lake storage in different regions.
  • Public inbound traffic access (0.0.0.0/0) must be allowed for the preview version of this connector. Add access from All networks in Firewalls and virtual networks for the storage account.
  • An available access key for the storage account.
  • Kafka cluster credentials. You can use one of the following ways to get credentials:
    • Create a Confluent Cloud API key and secret. To create a key and secret, go to Kafka API keys in your cluster, or autogenerate the API key and secret directly in the UI when setting up the connector. A Confluent Cloud CLI example is shown after this list.
    • Create a Confluent Cloud service account for the connector.
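For example, if you use the Confluent Cloud CLI, you can create an API key and secret for your cluster with a command similar to the following sketch (the cluster ID lkc-a1b2c is illustrative):

ccloud api-key create --resource lkc-a1b2c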

Caution

You can’t mix schema and schemaless records in storage using kafka-connect-storage-common. Attempting this causes a runtime exception. If you are using the self-managed version of this connector, the issue is evident in the log files (which are available only for the self-managed connector).

Using the Confluent Cloud GUI

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

Click Connectors. If you already have connectors in your cluster, click Add connector.

Step 3: Select your connector.

Click the Azure Data Lake Storage Gen2 Sink connector icon.


Step 4: Set up the connection.

Complete the following and click Continue.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

  1. Select one or more topics.

  2. Enter a Connector Name.

  3. Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.

  4. Select an Input message format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, JSON (schemaless), or BYTES. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

    Note

    Input format JSON to output format AVRO does not work for the preview connector.

  5. Provide your Azure Data Lake Gen2 details. The Topics Directory is a directory in Data Lake storage where messages are stored. This entry defaults to topics. The connector creates subdirectories in topics based on the Kafka topic names. topics.dir shouldn’t start with /.

  6. Select an Output message format (data coming from the connector): AVRO, BYTES, or JSON. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro).

  7. Select the Time interval that sets how you want your messages grouped in the topics directory. For example, if you select Hourly, messages are grouped into folders for each hour data is streamed to storage.

  8. Enter the Flush size. This value defaults to 1000. The default value can be increased or lowered if needed.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see the initial 500 records in storage at 3:00pm, because the records from the earlier hour are flushed when records for the new hour arrive, even though the flush size was not reached.

  9. Enter the maximum number of tasks for the connector. Refer to Cloud connector limitations for additional information.

Configuration properties that are not shown in the Confluent Cloud UI use the default values. See Azure Data Lake Storage Gen2 Sink Configuration Properties for default values and property definitions.

Step 5: Launch the connector.

Verify the connection details and click Launch.


Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running.

Step 7: Check the Azure storage container.

  1. From the Azure portal, go to your Azure storage account.

  2. Open each folder until you see your messages displayed.


Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.

For additional information about this connector see Azure Data Lake Storage Gen2 Sink Connector for Confluent Platform. Note that not all Confluent Platform connector features are provided in the Confluent Cloud connector.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL example. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.


Using the Confluent Cloud CLI

Complete the following steps to set up and run the connector using the Confluent Cloud CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors.

Enter the following command to list available connectors:

ccloud connector-catalog list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

ccloud connector-catalog describe <connector-catalog-name>

For example:

ccloud connector-catalog describe AzureDataLakeGen2Sink

Example output:

Following are the required configs:
connector.class: AzureDataLakeGen2Sink
name
kafka.api.key
kafka.api.secret
topics
input.data.format
azure.datalake.gen2.account.name
azure.datalake.gen2.sas.key
output.data.format
time.interval
tasks.max

Configuration properties that are not listed use the default values. See Azure Data Lake Storage Gen2 Sink Configuration Properties for default values and property definitions.

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
    "name": "adls-sink-connector",
    "connector.class": "AzureDataLakeGen2Sink",
    "kafka.api.key": "<my-kafka-api-key>",
    "kafka.api.secret": "<my-kafka-api-secret>",
    "topics": "pageviews",
    "input.data.format": "AVRO",
    "azure.datalake.gen2.account.name": "<account-name>",
    "azure.datalake.gen2.sas.key": "<shared-access-key>",
    "topics.dir": "topics",
    "output.data.format": "AVRO",
    "time.interval": "HOURLY",
    "flush.size": "1000",
    "tasks.max": "1"
}

Note the following property definitions:

  • "name": Sets a name for your new connector.

  • "connector.class": Identifies the connector plugin name.

  • "topics": Identifies the topic name or a comma-separated list of topic names.

  • "input.data.format": Sets the input message format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

    Note

    Input format JSON to output format AVRO does not work for the preview connector.

  • "topics.dir": The example above shows the default entry topics. In this example, the directory hierarchy created is topics/pageviews. Each Kafka topic will have a separate subdirectory based on the Kafka topic name. topics.dir shouldn’t start with /.

  • "output.data.format": Sets the output message format (data coming from the connector). Valid entries are AVRO, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based output message format (for example, Avro).

  • "time.interval": Sets how your messages are grouped in the Azure container. Valid entries are DAILY or HOURLY.

  • (Optional) flush.size: This value defaults to 1000. The default value can be increased or lowered if needed.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see the initial 500 records in storage at 3:00pm, because the records from the earlier hour are flushed when records for the new hour arrive, even though the flush size was not reached.

  • "tasks.max": Enter the maximum number of connector tasks to use. Refer to Cloud connector limitations for additional information.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

ccloud connector create --config <file-name>.json

For example:

ccloud connector create --config adls-sink-config.json

Example output:

Created connector adls-sink-connector lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

ccloud connector list

Example output:

ID          |       Name                | Status  | Type
+-----------+---------------------------+---------+------+
lcc-ix4dl   | adls-sink-connector       | RUNNING | sink
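To view more detail about the connector, such as its configuration, you can describe it by ID. For example, using the connector ID from the example output above:

ccloud connector describe lcc-ix4dl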

Step 6: Check the Azure storage container.

  1. From the Azure portal, go to your Azure storage account.

  2. Open each folder until you see your messages displayed.


Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.

For additional information about this connector see Azure Data Lake Storage Gen2 Sink Connector for Confluent Platform. Note that not all Confluent Platform connector features are provided in the Confluent Cloud connector.

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL example. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.
