Configuration Reference for Azure Data Lake Storage Gen2 Sink Connector for Confluent Platform

To use this connector, specify the name of the connector class in the connector.class configuration property.

connector.class=io.confluent.connect.azure.datalake.gen2.AzureDataLakeGen2SinkConnector

Connector-specific configuration properties are described below.

Note

These are properties for the self-managed connector. If you are using Confluent Cloud, see Azure Data Lake Storage Gen2 Sink Connector for Confluent Cloud.

Connector

format.class

The format class to use when writing data to the store.

  • Type: class
  • Importance: high
flush.size

The number of records written to store before invoking file commits. More specifically, the maximum number of records to write in each output object before rolling over and writing a new object. The following section gives a more detailed description of the rotation process:

Rotation strategy logic: A file flush to storage is triggered when a new record arrives after the defined interval (rotate.interval.ms) or scheduled interval (rotate.schedule.interval.ms) has elapsed. Flushing files is also triggered periodically by the offset.flush.interval.ms setting defined in the Connect worker configuration, which defaults to 60000 ms (60 seconds). If you enable rotate.interval.ms or rotate.schedule.interval.ms and the ingestion rate is low, set offset.flush.interval.ms to a smaller value so that records are flushed at (or close to) the rotation interval. Leaving offset.flush.interval.ms at the default value may cause records to stay in an open file longer than expected if no new records arrive to trigger rotation.

  • Type: int
  • Importance: high
rotate.interval.ms

The time interval in milliseconds to invoke file commits. This setting ensures that file commits are invoked every configured interval. This setting is useful when the data ingestion rate is low and the connector didn’t write enough messages to commit files. The default value of -1 means that this feature is disabled.

  • Type: long
  • Default: -1
  • Importance: high
rotate.schedule.interval.ms

The time interval to invoke file commits periodically, in milliseconds. This setting ensures that file commits are invoked every configured interval. Time of commit is adjusted to 00:00 in the selected timezone. The commit is performed at the scheduled time, regardless of the previous time or number of messages. This configuration is useful when you have to commit your data based on current server time, like at the beginning of every hour. The default value of -1 means that this feature is disabled.

  • Type: long
  • Default: -1
  • Importance: medium
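
For example, a minimal illustrative combination of these properties (values chosen for illustration only) commits an output object after 1000 records have been written to it, or rolls over to a new object once ten minutes have elapsed for an open file, whichever comes first:

flush.size=1000
rotate.interval.ms=600000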

The following Avro converter properties can be used in the connector configuration:

schema.cache.config

The size of the schema cache used in the Avro converter.

  • Type: int
  • Default: 1000
  • Importance: low
enhanced.avro.schema.support

Enable enhanced Avro schema support in the Avro Converter. When set to true, this property preserves Avro schema package information and Enums when going from Avro schema to Connect schema. This information is added back in when going from Connect schema to Avro schema.

  • Type: boolean
  • Default: false
  • Importance: low
connect.meta.data

Allow the Connect converter to add its metadata to the output schema.

  • Type: boolean
  • Default: true
  • Importance: low

The connect.meta.data property preserves the following Connect schema metadata when going from Connect schema to Avro schema. The following metadata is added back in when going from Avro schema to Connect schema.

  • doc
  • version
  • parameters
  • default value
  • name
  • type

For detailed information and configuration examples for Avro converters listed above, see Using Kafka Connect with Schema Registry.
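
For example, when these converter properties are set at the connector level, they are typically supplied with the value.converter. (or key.converter.) prefix. The following is an illustrative sketch only; the Schema Registry URL is a placeholder:

value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
value.converter.schema.cache.config=1000
value.converter.enhanced.avro.schema.support=true
value.converter.connect.meta.data=true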

retry.backoff.ms

The retry backoff in milliseconds. This setting is used to notify Kafka Connect to retry delivering a message batch or to perform a recovery in case of transient exceptions.

  • Type: long
  • Default: 5000
  • Importance: low
filename.offset.zero.pad.width

Width to zero-pad offsets in store’s filenames if offsets are too short, to provide fixed-width filenames that can be ordered by simple lexicographic sorting.

  • Type: int
  • Default: 10
  • Valid Values: [0,…]
  • Importance: low
avro.codec

The Avro compression codec to be used for output files. Available values are null, deflate, snappy, and bzip2. Codecs are provided by org.apache.avro.file.CodecFactory.

  • Type: string
  • Default: null
  • Valid Values: [null, deflate, snappy, bzip2]
  • Importance: low
parquet.codec

The Parquet compression codec to be used for output files.

  • Type: string
  • Default: snappy
  • Valid Values: [none, snappy, gzip, brotli, lz4, lzo, zstd]
  • Importance: low
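
For example, an illustrative sketch that pairs a format class with its matching codec property (the format class names below are placeholders; use the format classes documented for this connector):

format.class=<AvroFormat class>
avro.codec=snappy

# Or, for Parquet output:
format.class=<ParquetFormat class>
parquet.codec=gzip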

Azure Common

format.bytearray.extension

Output file extension for ByteArrayFormat. Defaults to .bin.

  • Type: string
  • Default: .bin
  • Importance: low
format.bytearray.separator

String inserted between records in the same file for ByteArrayFormat. Defaults to System.lineSeparator() and may contain escape sequences, like \n. An input record that contains the line separator appears as multiple records in the output Azure object.

  • Type: string
  • Default: null
  • Importance: low
az.compression.type

Compression type for files written to Azure. Applied when using JsonFormat or ByteArrayFormat. Available values are none and gzip.

  • Type: string
  • Default: none
  • Valid Values: [none, gzip]
  • Importance: low
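
For example, an illustrative raw-bytes configuration (the format class name is a placeholder) that separates records with a newline and gzips each output object:

format.class=<ByteArrayFormat class>
format.bytearray.extension=.bin
format.bytearray.separator=\n
az.compression.type=gzip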

Schema

schema.compatibility

The schema compatibility rule to use when the connector is observing schema changes. The supported configurations are NONE, BACKWARD, FORWARD, and FULL.

  • Type: string
  • Default: NONE
  • Valid Values: either one of [forward, backward, none, full], or one of [BACKWARD, FORWARD, NONE, FULL]
  • Importance: high

Azure

azure.datalake.gen2.client.id

The client ID (GUID) of the client obtained from the Azure Active Directory configuration. To establish a connection, set all of the following properties:

  • azure.datalake.gen2.client.id
  • azure.datalake.gen2.client.key
  • azure.datalake.gen2.token.endpoint

Note

SAS authentication is not currently supported. Only Access Key authentication is supported at this time.

  • Type: string
  • Default: null
  • Importance: high
azure.datalake.gen2.token.endpoint

The OAuth 2.0 token endpoint associated with the user’s directory (obtained from the Azure Active Directory configuration).

  • Type: string
  • Default: null
  • Importance: high
azure.datalake.gen2.account.name

The account name. Must be between 3 and 23 alphanumeric characters.

  • Type: string
  • Valid Values: Matches regex [a-z0-9]{3,23}
  • Importance: high
azure.datalake.gen2.client.key

The secret key of the client.

  • Type: password
  • Default: null
  • Valid Values: null or password (password must be non-blank)
  • Importance: high
azure.datalake.gen2.sas.key

The access key for the Azure storage account.

  • Type: password
  • Default: null
  • Valid Values: null or password (password must be non-blank)
  • Importance: high
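
For example, an illustrative configuration that authenticates with Azure Active Directory client credentials (all values are placeholders):

azure.datalake.gen2.account.name=<accountname>
azure.datalake.gen2.client.id=<client-guid>
azure.datalake.gen2.client.key=<client-secret>
azure.datalake.gen2.token.endpoint=https://login.microsoftonline.com/<tenant-id>/oauth2/token

Alternatively, authenticate with the storage account access key:

azure.datalake.gen2.account.name=<accountname>
azure.datalake.gen2.sas.key=<storage-account-access-key>
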
behavior.on.null.values

How to handle records with a null value (for example, Kafka tombstone records). Valid options are ignore and fail.

  • Type: string
  • Default: fail
  • Valid Values: [ignore, fail]
  • Importance: low

Storage

storage.class

The underlying storage layer. The default value works with Azure Data Lake Storage, but you can use this setting to specify an alternative custom storage implementation.

  • Type: class
  • Default: io.confluent.connect.azure.datalake.gen2.storage.AzureDataLakeGen2Storage
  • Importance: low
topics.dir

Top level directory to store the data ingested from Kafka.

  • Type: string
  • Default: topics
  • Importance: high
store.url

Storage URL for accessing blob data. For example: https://<accountname>.dfs.core.windows.net/<path> or abfs://<file_system>@<account_name>.dfs.core.windows.net/<path>.

  • Type: string
  • Default: null
  • Importance: low
directory.delim

Directory delimiter pattern.

  • Type: string
  • Default: /
  • Importance: medium
file.delim

File delimiter pattern.

  • Type: string
  • Default: +
  • Importance: medium
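
For illustration, with the default topics.dir, directory.delim, and file.delim values and the default partitioner, an object written for topic pageviews, partition 0, starting at offset 0 typically has a path of this general shape (the extension depends on format.class):

topics/pageviews/partition=0/pageviews+0+0000000000.avro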

Partitioner

partitioner.class

The partitioner to use when writing data to the store. You can use DefaultPartitioner, which preserves the Kafka partitions; FieldPartitioner, which partitions the data into different directories according to the value of the partitioning field specified in partition.field.name; or TimeBasedPartitioner, which partitions data according to ingestion time.

  • Type: class
  • Default: io.confluent.connect.storage.partitioner.DefaultPartitioner
  • Importance: high
  • Dependents: partition.field.name, partition.duration.ms, path.format, locale, timezone
partition.field.name

The name of the partitioning field when FieldPartitioner is used.

  • Type: list
  • Default: “”
  • Importance: medium
partition.duration.ms

The duration of a partition used by TimeBasedPartitioner, in milliseconds. The default value of -1 means that TimeBasedPartitioner isn’t used.

  • Type: long
  • Default: -1
  • Importance: medium
path.format

Sets the format of the data directories when partitioning with TimeBasedPartitioner. The format set in this configuration converts the Unix timestamp into directory path strings. For example, if you set path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH, the data directories have the format /year=2015/month=12/day=07/hour=15/.

  • Type: string
  • Default: “”
  • Importance: medium
locale

The locale to use when partitioning with TimeBasedPartitioner.

  • Type: string
  • Default: “”
  • Importance: medium
timezone

The timezone to use when partitioning with TimeBasedPartitioner. Used to format and compute dates and times. All timezone IDs must be specified in the long format, such as America/Los_Angeles, America/New_York, Europe/Paris, or UTC. Alternatively, a locale-independent, fixed-offset datetime zone can be specified in the form [+-]hh:mm. Support for these timezones may vary by Java version. See the available timezones within each locale, such as those within the US English locale.

  • Type: string
  • Default: “”
  • Importance: medium
timestamp.extractor

The extractor that gets the timestamp for records when partitioning with TimeBasedPartitioner. You can set it to Wallclock, Record or RecordField to use one of the built-in timestamp extractors. Also, you can use the fully-qualified class name of a user-defined class that extends the TimestampExtractor interface.

  • Type: string
  • Default: Wallclock
  • Importance: medium
timestamp.field

The record field to be used as a timestamp by the timestamp extractor.

  • Type: string
  • Default: timestamp
  • Importance: medium
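
For example, an illustrative hourly time-based partitioning configuration that takes the timestamp from a record field named ts (the field name is a placeholder):

partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
partition.duration.ms=3600000
path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
locale=en-US
timezone=UTC
timestamp.extractor=RecordField
timestamp.field=ts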

Confluent Platform license

confluent.topic.bootstrap.servers

A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form host1:port1,host2:port2,…. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

  • Type: list
  • Importance: high

confluent.topic

Name of the Kafka topic used for Confluent Platform configuration, including licensing information.

  • Type: string
  • Default: _confluent-command
  • Importance: low

confluent.topic.replication.factor

The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than 3 brokers, you must set this to the number of brokers (often 1).

  • Type: int
  • Default: 3
  • Importance: low

Confluent license properties

You can put license-related properties in the connector configuration, or starting with Confluent Platform version 6.0, you can put license-related properties in the Connect worker configuration instead of in each connector configuration.

This connector is proprietary and requires a license. The license information is stored in the _confluent-command topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.* properties as described below.

confluent.license

Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments.

If you are a subscriber, contact Confluent Support for more information.

  • Type: string
  • Default: “”
  • Valid Values: Confluent Platform license
  • Importance: high
confluent.topic.ssl.truststore.location

The location of the trust store file.

  • Type: string
  • Default: null
  • Importance: high
confluent.topic.ssl.truststore.password

The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.

  • Type: password
  • Default: null
  • Importance: high
confluent.topic.ssl.keystore.location

The location of the key store file. This is optional for the client and can be used for two-way authentication of the client.

  • Type: string
  • Default: null
  • Importance: high
confluent.topic.ssl.keystore.password

The store password for the key store file. This is optional for the client and is only needed if ssl.keystore.location is configured.

  • Type: password
  • Default: null
  • Importance: high
confluent.topic.ssl.key.password

The password of the private key in the key store file. This is optional for the client.

  • Type: password
  • Default: null
  • Importance: high
confluent.topic.security.protocol

Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

  • Type: string
  • Default: “PLAINTEXT”
  • Importance: medium
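
For example, if the Kafka cluster hosting the license topic requires SSL, an illustrative configuration (all paths and passwords are placeholders) might look like the following:

confluent.topic.bootstrap.servers=kafka1:9093
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>
confluent.topic.ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
confluent.topic.ssl.keystore.password=<keystore-password>
confluent.topic.ssl.key.password=<key-password>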

License topic configuration

A Confluent enterprise license is stored in the _confluent-command topic. This topic is created by default and contains the license that corresponds to the license key supplied through the confluent.license property. No public keys are stored in Kafka topics.

The following describes how the default _confluent-command topic is generated under different scenarios:

  • A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
  • Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.

Here is an example of the minimal properties for development and testing.

You can change the name of the _confluent-command topic using the confluent.topic property (for instance, if your environment has strict naming conventions). The example below shows this change and the configured Kafka bootstrap server.

confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092

The example above shows the minimally required bootstrap server property that you can use for development and testing. For a production environment, you add the normal producer, consumer, and topic configuration properties to the connector properties, prefixed with confluent.topic..

License topic ACLs

The _confluent-command topic contains the license that corresponds to the license key supplied through the confluent.license property. It is created by default. Connectors that access this topic require the following ACLs configured:

  • CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.

  • DESCRIBE, READ, and WRITE on the _confluent-command topic.

    Important

    You can also use DESCRIBE and READ without WRITE to restrict access to read-only for license topic ACLs. If a topic exists, the LicenseManager will not try to create the topic.

You can provide access either individually for each principal that will use the license or use a wildcard entry to allow all clients. The following examples show commands that you can use to configure ACLs for the resource cluster and _confluent-command topic.

  1. Set a CREATE and DESCRIBE ACL on the resource cluster:

    kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
    --add --allow-principal User:<principal> \
    --operation CREATE --operation DESCRIBE --cluster
    
  2. Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:

    kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
    --add --allow-principal User:<principal> \
    --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
    

Override Default Configuration Properties

You can override the replication factor using confluent.topic.replication.factor. For example, when using a Kafka cluster as a destination with fewer than three brokers (for development and testing) you should set the confluent.topic.replication.factor property to 1.

You can override producer-specific properties by using the producer.override.* prefix (for source connectors) and consumer-specific properties by using the consumer.override.* prefix (for sink connectors).

You can use the defaults or customize the other properties as well. For example, the confluent.topic.client.id property defaults to the name of the connector with a -licensing suffix. You can specify the configuration settings for brokers that require SSL or SASL for client connections using this prefix.

You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.
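
For example, an illustrative development override that lowers the replication factor for a single-broker cluster and passes additional client settings through the confluent.topic. prefix (all values are placeholders):

confluent.topic.replication.factor=1
confluent.topic.security.protocol=SASL_SSL
confluent.topic.sasl.mechanism=PLAIN
confluent.topic.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<user>" password="<password>";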