Databricks Delta Lake Sink Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see Databricks Delta Lake Sink Connector for Confluent Platform.

The Databricks Delta Lake Sink connector periodically polls data from Apache Kafka® and copies the data into an Amazon S3 staging bucket, and then commits these records to a Databricks Delta Lake instance.

Note the following considerations:

  • The connector is available only on Amazon Web Services (AWS).
  • The connector appends data only.
  • Data is staged in an Amazon S3 bucket. If you delete any files in this bucket, you will lose exactly-once semantics (EOS).
  • The Amazon S3 bucket, the Delta Lake instance, and the Kafka cluster must be in the same region.
  • The connector adds a field named partition. Your Delta Lake table must include a field named partition of type INT (partition INT); see the example following this list.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.
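
For example, a compatible Delta Lake table for the pageviews topic used later in this quick start might be created with a statement like the following. This is a minimal sketch: the columns other than partition are hypothetical and should match your topic's record schema.

CREATE TABLE IF NOT EXISTS pageviews (
  viewtime BIGINT,
  userid STRING,
  pageid STRING,
  partition INT)
USING DELTA;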

Refer to the Cloud connector limitations for additional information.

Features

The Databricks Delta Lake Sink connector provides the following features:

  • Exactly once delivery with a flush interval: Records exported using a partitioner are delivered with exactly-once semantics. The timing for commits is based on the flush interval configuration property (flush.interval.ms).
  • Supported data formats: The connector supports input data from Kafka topics in Avro, JSON Schema, and Protobuf formats. You must enable Schema Registry to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
  • Automatically creates tables: If you do not provide a table name, the connector can create a table using the originating Kafka topic name (that is, the configuration property defaults to ${topic}); see the configuration fragment following this list.
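
The following configuration fragment is a minimal sketch of how table auto-creation and the table name format work together; the kafka_ prefix is hypothetical. With these settings, records from a topic named orders are written to an automatically created table named kafka_orders.

"delta.lake.table.auto.create": "true",
"delta.lake.table.format": "kafka_${topic}"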

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

See Configuration Properties for configuration property values and descriptions.

Quick Start

Important

Be sure to review and complete the tasks in Set up Databricks Delta Lake (AWS) before configuring the connector.

Use this quick start to get up and running with the Confluent Cloud Databricks Delta Lake Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream data.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector. A CLI example follows this list.
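
For example, you can create a dedicated service account and an API key for it using the Confluent CLI. This is a sketch; the account name, description, and resource IDs are placeholders for your own values.

confluent iam service-account create "delta-lake-sink-sa" --description "Service account for the Databricks Delta Lake Sink connector"
confluent api-key create --resource <kafka-cluster-id> --service-account <service-account-resource-id>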

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the Databricks Delta Lake Sink connector card.

Databricks Delta Lake Sink Connector Card

Step 4: Enter the connector details.

Note

  • Ensure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

At the Add Databricks Delta Lake Sink Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topic(s) you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check the S3 bucket.

Check that records are populating the staging Amazon S3 bucket and then populating the Databricks Delta Lake table.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

  • Make sure you have all your prerequisites completed.
  • The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.

Step 1: List the available connectors.

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

confluent connect plugin describe <connector-catalog-name>

For example:

confluent connect plugin describe DatabricksDeltaLakeSink

Example output:

Following are the required configs:
connector.class: DatabricksDeltaLakeSink
topics
input.data.format
name
kafka.auth.mode
kafka.api.key
kafka.api.secret
delta.lake.host.name
delta.lake.http.path
delta.lake.token
staging.s3.access.key.id
staging.s3.secret.access.key
staging.bucket.name
flush.interval.ms

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows required and optional connector properties.

{
  "name": "DatabricksDeltaLakeSinkConnector_0",
  "config": {
    "topics": "clickstreams, pageviews",
    "input.data.format": "AVRO",
    "connector.class": "DatabricksDeltaLakeSink",
    "name": "DatabricksDeltaLakeSinkConnector_0",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "****************",
    "kafka.api.secret": "**************************************************",
    "delta.lake.host.name": "dbc-e12345cd-e12345ed.cloud.databricks.com",
    "delta.lake.http.path": "sql/protocolv1/o/1234567891811460/0000-01234-str6jlpz",
    "delta.lake.token": "************************************",
    "delta.lake.topic2table.map": "pageviews:pageviews,clickstreams:clickstreams-test",
    "delta.lake.table.auto.create": "false",
    "staging.s3.access.key.id": "********************",
    "staging.s3.secret.access.key": "****************************************",
    "staging.bucket.name": "databricks0",
    "flush.interval.ms": "100",
    "tasks.max": "1"
  }
}

Note the following required property definitions:

  • "name": Sets a name for your new connector.
  • "connector.class": Identifies the connector plugin name.
  • "topics": Enter the topic name or a comma-separated list of topic names.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, and PROTOBUF. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  • "delta.lake...."”: See the Databricks Delta Lake setup procedure for where you can get this information. See Configuration Properties for additional property values and descriptions.

  • "staging....": These properties use information you get from Databricks and AWS. See the Databricks Delta Lake setup procedure.

  • "flush.interval.ms": The time interval in milliseconds (ms) to periodically invoke file commits. This property ensures the connector invokes file commits at every configured interval. The commit time is adjusted to 00:00 UTC. The commit is performed at the scheduled time, regardless of the last commit time or number of messages. This configuration is useful when you have to commit your data based on current server time, like at the beginning of each hour. The default value used is 10000 ms (10 seconds).

  • "tasks.max": Enter the maximum number of tasks for the connector to use. The connector supports running one task per connector instance.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. See Unsupported transformations for a list of SMTs that are not supported with this connector.

See Configuration Properties for configuration property values and descriptions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

confluent connect create --config <file-name>.json

For example:

confluent connect create --config databricks-delta-lake-sink-config.json

Example output:

Created connector DatabricksDeltaLakeSinkConnector_0 lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

confluent connect list

Example output:

ID          |       Name                         | Status  | Type
+-----------+------------------------------------+---------+------+
lcc-ix4dl   | DatabricksDeltaLakeSinkConnector_0 | RUNNING | sink

Step 6: Check the S3 bucket.

Check that records are populating the staging Amazon S3 bucket and then populating the Databricks Delta Lake table.
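
For example, assuming the staging bucket name from the earlier configuration (databricks0), you can list the staged objects with the AWS CLI and confirm that new files appear after each flush interval:

aws s3 ls s3://databricks0/ --recursive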

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

Configuration Properties

The following configuration properties are used with this connector.

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, or PROTOBUF. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format such as AVRO, JSON_SR, or PROTOBUF.

  • Type: string
  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key
  • Type: password
  • Importance: high
kafka.service.account.id

The service account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret
  • Type: password
  • Importance: high

How should we connect to your Databricks Delta Lake?

delta.lake.host.name

The host name used to connect to Delta Lake.

  • Type: string
  • Importance: high
delta.lake.http.path

The HTTP path used to connect to Delta Lake.

  • Type: string
  • Importance: high
delta.lake.token

The personal access token used to authenticate the user when connecting to Delta Lake via JDBC.

  • Type: password
  • Importance: high
delta.lake.catalog

The destination catalog under which the destination database and tables are located.

  • Type: string
  • Default: “”
  • Importance: low
delta.lake.database

The destination database under which the destination tables are located.

  • Type: string
  • Default: default
  • Importance: low
delta.lake.table.format

A format string for the destination table name, which may contain ‘${topic}’ as a placeholder for the originating topic name. For example, kafka_${topic} for the topic ‘orders’ will map to the table name ‘kafka_orders’.

  • Type: string
  • Default: ${topic}
  • Importance: medium
delta.lake.topic2table.map

Map of topics to tables (optional). Format: comma-separated tuples, e.g. <topic-1>:<table-1>,<topic-2>:<table-2>,…

  • Type: string
  • Default: “”
  • Importance: low
delta.lake.table.auto.create

Whether to automatically create the destination table based on record schema if it does not exist.

  • Type: boolean
  • Default: false
  • Importance: medium
delta.lake.tables.location

The underlying location where the data in the Delta Lake table(s) is stored. If you set s3://<your-s3-bucket>/tmp/, Delta Lake data is stored under s3://<your-s3-bucket>/tmp/. Make sure the AWS IAM role for the Databricks Delta Lake instance has permission to write records to the specified directory, and that the directory (for example, tmp) exists.

  • Type: string
  • Default: “”
  • Importance: medium
delta.lake.table2partition.map

Map of tables to partition fields (optional). Format: comma-separated tuples, e.g. <table-1>:<partition-1>,<table-2>:<partition-2>,…

  • Type: string
  • Default: “”
  • Importance: low

Amazon S3 details

staging.s3.access.key.id
  • Type: password
  • Importance: high
staging.s3.secret.access.key
  • Type: password
  • Importance: high
flush.interval.ms

The time interval in milliseconds to periodically invoke file commits. This configuration ensures that file commits are invoked at every configured interval. The commit time is adjusted to 00:00 of the selected timezone. The commit is performed at the scheduled time, regardless of the previous commit time or number of messages. This configuration is useful when you have to commit your data based on current server time, for example at the beginning of every hour.

  • Type: long
  • Default: 10000 (10 seconds)
  • Importance: medium
staging.bucket.name

The S3 staging bucket where files get written to from Kafka and subsequently copied into the Databricks Delta Lake table. Must be in the same region as your Confluent Cloud cluster.

  • Type: string
  • Importance: high

Number of tasks for this connector

tasks.max
  • Type: int
  • Valid Values: [1,…,1]
  • Importance: high

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
