Configure and Launch the Databricks Delta Lake Sink Connector

The Databricks Delta Lake Sink connector periodically polls data from Apache Kafka® and copies the data into an Amazon S3 staging bucket, and then commits these records to a Databricks Delta Lake instance.

Note the following considerations:

  • The connector is available only on Amazon Web Services (AWS).
  • The connector appends data only.
  • Data is staged in an Amazon S3 bucket. If you delete any files in this bucket, you will lose exactly-once semantics (EOS).
  • The Amazon S3 bucket, the Delta Lake instance, and the Kafka cluster must be in the same region.
  • The connector adds a field named partition. Your Delta Lake table must therefore include a column named partition of type INT (partition INT).
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Refer to the Cloud connector limitations for additional information.

Caution

Preview connectors are not currently supported and are not recommended for production use.

Features

The Databricks Delta Lake Sink connector provides the following features:

  • Exactly-once delivery with a flush interval: Records exported using a partitioner are delivered with exactly-once semantics. Data is committed using a flush interval configuration property (flush.interval.ms); see the example after this list.
  • Supported data formats: The connector supports input data from Kafka topics in Avro, JSON Schema, and Protobuf formats. You must enable Schema Registry to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
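
For example, because the commit time is aligned to 00:00 UTC, a flush interval of one hour produces commits at the top of every hour. The following is a minimal sketch of how that value would appear in the connector configuration (all other properties omitted):

"flush.interval.ms": "3600000"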

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

See Configuration Properties for configuration property values and descriptions.

Quick Start

Important

Be sure to review and complete the tasks in Set up Databricks Delta Lake (AWS) before configuring the connector.

Use this quick start to get up and running with the Confluent Cloud Databricks Delta Lake Sink connector. The quick start provides the basics of selecting the connector and configuring it to stage events in an Amazon S3 bucket and commit them to a Databricks Delta Lake table.

Prerequisites
  • Kafka cluster credentials. Use one of the following ways to provide credentials:
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use the Confluent Cloud CLI or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector. A minimal CLI sketch follows this list.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
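
The following is a minimal sketch of the API key route using the ccloud CLI. The cluster ID (lkc-12345) is a placeholder, and the exact flags can vary by CLI version:

# Find the Kafka cluster ID (for example, lkc-12345).
ccloud kafka cluster list

# Create an API key and secret scoped to that cluster for the connector to use.
ccloud api-key create --resource lkc-12345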

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the Databricks Delta Lake Sink connector icon.

Step 4: Enter the connector details.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

Complete the following and click Continue.

  1. Select one or more topics.
  2. Enter a Connector Name.
  3. Select an Input message format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  4. Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.
  5. Enter the Databricks Delta Lake connection details. These fields use information you get from Databricks and AWS. See the Databricks Delta Lake setup procedure. Note that the Delta Lake Table Format field takes the Delta Lake table name.
  6. Enter the Amazon S3 connection details. These fields use information you get from Databricks and AWS. See the Databricks Delta Lake setup procedure. Flush interval (ms): The time interval in milliseconds (ms) at which the connector periodically invokes file commits. This property ensures that file commits are invoked at every configured interval. The commit time is adjusted to 00:00 UTC. The commit is performed at the scheduled time, regardless of the previous commit time or number of messages. This configuration is useful when you have to commit your data based on current server time, like at the beginning of each hour.
  7. Enter the maximum number of tasks for the connector to use. For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.
  8. Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.

See Configuration Properties for configuration property values and descriptions.

Step 5: Launch the connector.

Verify that the configuration is correct and click Launch.

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running.

Step 7: Check the S3 bucket.

Check that records are populating the staging Amazon S3 bucket and then populating the Databricks Delta Lake table.
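
One way to spot-check the staging bucket is with the AWS CLI; the bucket name below is the example value used later in this topic, so substitute your own:

# List the objects the connector has staged so far.
aws s3 ls s3://confluent-databricks/ --recursive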

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.

Using the Confluent Cloud CLI

Complete the following steps to set up and run the connector using the Confluent Cloud CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors.

Enter the following command to list available connectors:

ccloud connector-catalog list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

ccloud connector-catalog describe <connector-catalog-name>

For example:

ccloud connector-catalog describe DatabricksDeltaLakeSink

Example output:

Following are the required configs:
connector.class: DatabricksDeltaLakeSink
input.data.format
name
kafka.api.key
kafka.api.secret
delta.lake.host.name
delta.lake.http.path
delta.lake.token
delta.lake.table.format
staging.s3.access.key.id
staging.s3.secret.access.key
staging.bucket.name
flush.interval.ms
topics

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "name": "DatabricksDeltaLakeSinkConnector_0",
  "config": {
    "topics": "pageviews",
    "input.data.format": "AVRO",
    "connector.class": "DatabricksDeltaLakeSink",
    "name": "DatabricksDeltaLakeSinkConnector_0",
    "kafka.api.key": "****************",
    "kafka.api.secret": "************************************************",
    "delta.lake.host.name": "dbc-1d234d5df67-0e12.cloud.databricks.com",
    "delta.lake.http.path": "sql/protocolv1/o/1246002/1234-3456789-candy444",
    "delta.lake.token": "************************************",
    "delta.lake.table.format": "pageviews",
    "staging.s3.access.key.id": "********************",
    "staging.s3.secret.access.key": "*************************************",
    "staging.bucket.name": "confluent-databricks",
    "flush.interval.ms": "10000",
    "tasks.max": "1"
  }
}

Note the following required property definitions:

  • "name": Sets a name for your new connector.
  • "connector.class": Identifies the connector plugin name.
  • "topics": Enter the topic name or a comma-separated list of topic names.
  • "input.data.format": Sets the input message format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, and PROTOBUF. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • "delta.lake...."”: These properties use information you get from Databricks and AWS. See the Databricks Delta Lake setup procedure. Delta Lake Table Format is the Delta Lake table name. "delta.lake.table.format" is the table name.
  • "staging.s3....": These properties use information you get from Databricks and AWS. See the Databricks Delta Lake setup procedure.
  • "flush.interval.ms": The time interval in milliseconds (ms) to periodically invoke file commits. This property ensures that file commits are invoked at every configured interval. The commit time is adjusted to 00:00 UTC. The commit is performed at the scheduled time, regardless of the previous commit time or number of messages. This configuration is useful when you have to commit your data based on current server time, like at the beginning of each hour.
  • "tasks.max": Enter the maximum number of tasks for the connector to use. For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
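
As a sketch, an SMT is added to the same JSON configuration file as extra transforms.* properties. The hypothetical maskPII transform below uses the standard MaskField SMT to blank out a userid field before records are staged; check the SMT documentation to confirm that the transform you choose is supported for this connector:

"transforms": "maskPII",
"transforms.maskPII.type": "org.apache.kafka.connect.transforms.MaskField$Value",
"transforms.maskPII.fields": "userid"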

See Configuration Properties for configuration property values and descriptions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

ccloud connector create --config <file-name>.json

For example:

ccloud connector create --config databricks-delta-lake-sink-config.json

Example output:

Created connector DatabricksDeltaLakeSinkConnector_0 lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

ccloud connector list

Example output:

ID          |       Name                         | Status  | Type
+-----------+------------------------------------+---------+------+
lcc-ix4dl   | DatabricksDeltaLakeSinkConnector_0 | RUNNING | sink
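
Depending on your CLI version, you can also inspect a single connector with the describe subcommand (the connector ID below comes from the example output above):

ccloud connector describe lcc-ix4dl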

Step 6: Check the S3 bucket.

Check that records are populating the staging Amazon S3 bucket and then populating the Databricks Delta Lake table.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
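
As a rough sketch, the same connectors can be managed over HTTPS with a Confluent Cloud API key and secret; the environment ID (env-abc123) and cluster ID (lkc-12345) below are placeholders, and the exact paths are documented in the Confluent Cloud API for Connect reference:

# List the connectors in a Kafka cluster (basic auth with a Cloud API key and secret).
curl --user "<cloud-api-key>:<cloud-api-secret>" \
  https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-12345/connectors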

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.
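
To find the automatically created topic from the CLI, one option is to list topics and look for the name tied to the connector ID (typically dlq-<connector ID>, for example dlq-lcc-ix4dl):

ccloud kafka topic list | grep dlq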

Configuration Properties

The following connector configuration properties are used with the Databricks Delta Lake Sink connector for Confluent Cloud.

delta.lake.host.name

The Databricks Delta Lake server hostname.

  • Type: String
  • Importance: High
delta.lake.http.path

The HTTP path used to connect to the Databricks Delta Lake instance.

  • Type: String
  • Importance: High
delta.lake.token

The personal access token used to authenticate the user when connecting to the Databricks Delta Lake instance using JDBC.

  • Type: Password
  • Importance: High
delta.lake.table.format

The target Databricks Delta Lake table name.

  • Type: String
  • Importance: High
staging.s3.access.key.id

The AWS access key used to connect to the S3 staging bucket.

  • Type: Password
  • Importance: High
staging.s3.secret.access.key

The AWS secret access key used to connect to the S3 staging bucket.

  • Type: Password
  • Importance: High
staging.bucket.name

The S3 staging bucket where files get written to from Kafka, and that subsequently get copied into the Databricks Delta Lake table. The S3 staging bucket must be in the same region as your Confluent Cloud cluster.

  • Type: String
  • Importance: High
flush.interval.ms

The time interval in milliseconds (ms) to periodically invoke file commits. This property ensures file commits are invoked at every configured interval. The commit time is adjusted to 00:00 UTC. The commit is performed at the scheduled time, regardless of the previous commit time or number of messages. This configuration is useful when you have to commit your data based on current server time, like at the beginning of each hour.

  • Type: Long
  • Importance: Medium
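
Because the staging bucket must be in the same region as your Confluent Cloud cluster, a quick sanity check is to confirm the bucket region with the AWS CLI (the bucket name below is the example value used earlier in this topic):

# Prints the bucket region as LocationConstraint (a null value means us-east-1).
aws s3api get-bucket-location --bucket confluent-databricks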

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.
