Azure Blob Storage Sink Connector for Confluent Cloud¶
You can use the fully-managed Azure Blob Storage Sink connector for Confluent Cloud to export Avro, JSON Schema, Protobuf, JSON (schemaless), or Bytes data from Apache Kafka® topics to Azure storage in Avro, JSON, or Bytes format. Additionally, for certain data layouts, the connector provides exactly-once delivery semantics to consumers of the objects it exports.
The Azure Blob Storage Sink connector periodically polls data from Kafka and then uploads the data to Azure Blob Storage. A partitioner is used to split the data of every Kafka partition into chunks. Each chunk of data is represented as an Azure Blob Storage object. The key name encodes the topic, the Kafka partition, and the start offset of this data chunk.
If no partitioner is specified in the configuration, the default partitioner which preserves Kafka partitioning is used. The size of each data chunk is determined by the number of records written to Azure Blob Storage and by schema compatibility.
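For example, with the default topics directory and an hourly time-based layout, the first chunk written from partition 0 of a topic named pageviews might be stored under a key like the following (a hypothetical name; the exact key depends on your topics.dir, path.format, and output format settings):

topics/pageviews/year=2020/month=02/day=06/hour=09/pageviews+0+0000000000.avro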
Note
- This Quick Start is for the fully-managed Confluent Cloud connector. If you are installing the connector locally for Confluent Platform, see Azure Blob Storage Sink connector for Confluent Platform.
- If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
Features¶
The Azure Blob Storage Sink connector provides the following features:
Exactly Once Delivery: Records that are exported using a deterministic partitioner are delivered with exactly-once semantics regardless of the eventual consistency of Azure Blob Storage.
Data formats with or without a schema: The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) input data formats and Avro, JSON, and Bytes output formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf). See Schema Registry Enabled Environments for additional information.
Schema Evolution: schema.compatibility is set to NONE.
Partitioner: The connector supports the TimeBasedPartitioner class based on the Kafka class TimeStamp. Time-based partitioning options are daily or hourly.
Scheduled Rotation and Rotation Interval: The connector supports a regularly scheduled interval for closing and uploading files to storage. See Scheduled Rotation for details.
Flush size: Defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.
The following scenarios describe a couple of ways records may be flushed to storage:
You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.
For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage container with 500 records in each file. This is because the 10-minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
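As a minimal sketch, the two settings from the example above would appear in a connector configuration as follows (merge them with the required properties shown later in this page):

{
  "flush.size": "1000",
  "rotate.schedule.interval.ms": "600000"
}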
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see Azure Blob Storage Sink Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud Azure Blob Storage Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to Azure storage.
- Prerequisites
- Authorized access to a Confluent Cloud cluster on Microsoft Azure.
- The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
- Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
- An Azure Blob Storage Container created in the same region as your Confluent Cloud cluster. Provisioning the connector in a different region from the one where the storage container is located is unsupported. If you need to use Confluent Cloud and Azure Blob storage in different regions, contact your Confluent representative.
- An Azure block blob storage account.
- An Azure storage account access key.
- Kafka cluster credentials. The following lists the different ways you can provide credentials.
- Enter an existing service account resource ID.
- Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
- Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
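For example, to create an API key and secret scoped to your Kafka cluster with the Confluent CLI (the cluster ID shown is a placeholder):

confluent api-key create --resource lkc-123456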
Caution
You can’t mix schema and schemaless records in storage using kafka-connect-storage-common. Attempting this causes a runtime exception.
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster¶
See the Quick Start for Confluent Cloud for installation instructions.
Step 2: Add a connector¶
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Set up the connection¶
Complete the following and click Continue.
Note
- Make sure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
At the Add Azure Blob Storage Sink Connector screen, complete the following:
- Select the way you want to provide Kafka Cluster credentials. You can either select a service account resource ID or you can enter an API key and secret (or generate these in the Cloud Console).
- Click Continue.
- Under the Azure Blob Storage details section, enter the following:
- Your storage account name in the Azure Blob Storage Name field.
- The Storage account key in the Azure Blob Storage Account Key field. For information about how to set these up, see Manage storage account access keys.
- The Azure Blob Storage container in the Container name field.
- Click Continue.
Note
Configuration properties that are not shown in the Cloud Console use the default values. See Configuration Properties for all property values and definitions.
Select an Input Kafka record value format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, JSON (schemaless), or BYTES. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR, or Protobuf).
Note
Input format JSON to output format AVRO does not work for the connector.
Select an Output message format (data coming from the connector): AVRO, JSON, or BYTES. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro).
Note
The following Topic directory, Path format, and Time interval properties can be used to build a directory structure for stored data. For example, you set Time interval to HOURLY, Topics directory to json_logs/hourly, and Path format to 'dt'=YYYY-MM-dd/'hr'=HH. The result is the directory structure: https://<storage-account-name>.blob.core.windows.net/<container-name>/json_logs/hourly/<Topic-Name>/dt=2020-02-06/hr=09/<files>.
Select the Time interval that sets how you want your messages grouped in the container. For example, if you select HOURLY, messages are grouped into folders for each hour data is streamed to the container.
Enter the Flush size. This value defaults to 1000. The value can be raised if needed, and lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster.
The following scenarios describe a couple of ways records may be flushed to storage:
You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.
For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage container with 500 records in each file. This is because the 10-minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
Show advanced configurations
Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.
Path format: This configures the time-based partitioning path created. The property converts the UNIX timestamp to a date format string. If not used, this property defaults to 'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH if an HOURLY Time interval was selected, or 'year'=YYYY/'month'=MM/'day'=dd if a DAILY Time interval was selected.
Topic directory: Top-level directory where ingested data is stored. Defaults to topics if not used.
Maximum span of record time (in ms before scheduled rotation): Field to configure a regular schedule for when files are closed and uploaded to storage. The default value is -1 (disabled). When this is set to 600000 ms, you will see files available in the storage container at least every 10 minutes. See Scheduled Rotation for details about Scheduled rotation properties.
Compression type: The type of compression to use when the connector writes files to Azure. Compression is applied for JSON and BYTES output message formats.
Maximum span of record time (in ms before rotation): Field to configure the maximum time span (in milliseconds) that a file can remain open for additional records. When using this property, the time span interval for the file starts with the timestamp of the first record added to the file. The connector closes and uploads the file to storage when the timestamp of a subsequent record falls outside the time span set by the first file’s timestamp. This property defaults to the interval set by the time.interval property. See Scheduled Rotation for details about Scheduled rotation properties.
File delimiter pattern: Character used to separate fields that make up the file name. This property defaults to +.
Timestamp field name: The record field used for the timestamp, which is then used with the time-based partitioner. If not used, this defaults to the timestamp when the Kafka record was produced or stored by the Kafka broker.
Behavior on null value: How to handle records with a non-null key and a null value (for example, Kafka tombstone records). Options are fail and ignore (default).
Timezone: Use a valid timezone. Defaults to UTC if not used.
Locale: Used to format dates and times. For example, you can use en-US for English (USA), en-GB for English (UK), en-IN for English (India), or fr-FR for French (France). Defaults to en. For a list of locale IDs, see Java locales.
Note
When using Parquet, only compression types PARQUET - none, PARQUET - gzip, and PARQUET - snappy are supported.
Value Converter Connect Metadata: Used to enable or disable adding Connect metadata to the output schema. Defaults to true (enabled).
Auto-restart policy
Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its task in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.
Consumer configuration
Max poll interval(ms): Set the maximum delay between subsequent consume requests to Kafka. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 300,000 milliseconds (5 minutes).
Max poll records: Set the maximum number of records to consume from Kafka in a single request. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 500 records.
Transforms
Single Message Transforms: To add a new SMT, see Add transforms. For more information about unsupported SMTs, see Unsupported transformations.
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks. One task can handle up to 100 partitions.
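For example, a topic with 250 partitions needs at least three tasks, because one task handles at most 100 partitions.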
To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.
For help with sizing your connector, click How many tasks do I need?.
Click Continue.
Note
See Single Message Transforms (SMT) for details. See Unsupported transformations for a list of SMTs that are not supported with this connector.
Step 4: Check the Azure storage container¶
From the Azure portal, go to the container in your Azure storage account.
Open each folder until you see your messages displayed.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
Using the Confluent CLI¶
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties¶
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
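For example, for this connector:

confluent connect plugin describe AzureBlobSink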
Step 3: Create the connector configuration file¶
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
"name" : "confluent-azure-blob-sink",
"connector.class" : "AzureBlobSink",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key" : "<my-kafka-api-key>",
"kafka.api.secret" : "<my-kafka-api-secret>",
"topics" : "pageviews",
"input.data.format" : "AVRO",
"azblob.account.name" : "<storage-account-name>",
"azblob.account.key" : "<storage-account-key>",
"azblob.container.name" : "<container-name>",
"output.data.format" : "AVRO",
"topics.dir" : "json_logs/daily",
"time.interval" : "DAILY",
"flush.size": "1000",
"tasks.max" : "1"
}
Note the following property definitions:
"name"
: Sets a name for your new connector."connector.class"
: Identifies the connector plugin name.
"kafka.auth.mode"
: Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNT
orKAFKA_API_KEY
(the default). To use an API key and secret, specify the configuration propertieskafka.api.key
andkafka.api.secret
, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>
. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list Id | Resource ID | Name | Description +---------+-------------+-------------------+------------------- 123456 | sa-l1r23m | sa-1 | Service account 1 789101 | sa-l4d56p | sa-2 | Service account 2
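For example, the same connector from the configuration above, switched to service account authentication, might look like this sketch (the resource ID comes from the sample listing and is a placeholder):

{
  "name" : "confluent-azure-blob-sink",
  "connector.class" : "AzureBlobSink",
  "kafka.auth.mode": "SERVICE_ACCOUNT",
  "kafka.service.account.id": "sa-l1r23m",
  "topics" : "pageviews",
  "input.data.format" : "AVRO",
  "azblob.account.name" : "<storage-account-name>",
  "azblob.account.key" : "<storage-account-key>",
  "azblob.container.name" : "<container-name>",
  "output.data.format" : "AVRO",
  "time.interval" : "DAILY",
  "flush.size": "1000",
  "tasks.max" : "1"
}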
"topics"
: Identifies the topic name or a comma-separated list of topic names."input.data.format"
: Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).Note
Input format JSON to output format AVRO does not work for the connector.
"output.data.format"
: Sets the output Kafka record value format (data coming from the connector). Valid entries are AVRO, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based output format (for example, Avro).(Optional)
flush.size
: Defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.The following scenarios describe a couple of ways records may be flushed to storage:
You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.
For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage container with 500 records in each file. This is because the 10-minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
"time.interval"
: Sets how your messages are grouped in the GCS bucket. Valid entries areDAILY
orHOURLY
.
Tip
The time.interval property above and the following optional properties topics.dir and path.format can be used to build a directory structure for stored data. For example: You set "time.interval" : "HOURLY", "topics.dir" : "json_logs/hourly", and "path.format" : "'dt'=YYYY-MM-dd/'hr'=HH". The result is the directory structure: https://<storage-account-name>.blob.core.windows.net/<container-name>/json_logs/hourly/<Topic-Name>/dt=2020-02-06/hr=09/<files>.
"topics.dir"
: A top-level directory path to use for stored data. Defaults totopics
if not used.""path.format"
: Configures the time-based partitioning path created. The property converts the UNIX timestamp to a date format string. If not used, this property defaults to'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
if an Hourlytime.interval
was selected or'year'=YYYY/'month'=MM/'day'=dd
if a Daily Time interval was selected.rotate.schedule.interval.ms
androtate.interval.ms
: See Scheduled Rotation for details about using these properties.
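Putting the optional directory and rotation properties together, a minimal example fragment that could be merged into the configuration file above (values mirror the Tip above):

{
  "time.interval" : "HOURLY",
  "topics.dir" : "json_logs/hourly",
  "path.format" : "'dt'=YYYY-MM-dd/'hr'=HH",
  "rotate.schedule.interval.ms" : "600000"
}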
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. See Unsupported transformations for a list of SMTs that are not supported with this connector.
See Configuration Properties for all property values and definitions.
Step 4: Load the properties file and create the connector¶
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file azure-blob-sink-config.json
Example output:
Created connector confluent-azure-blob-sink lcc-ix4dl
Step 5: Check the connector status¶
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
ID | Name | Status | Type
+-----------+---------------------------+---------+------+
lcc-ix4dl | confluent-azure-blob-sink | RUNNING | sink
Step 6: Check the Azure storage container¶
From the Azure portal, go to the container in your Azure storage account.
Open each folder until you see your messages displayed.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
Scheduled Rotation¶
Two optional properties are available that enable you to set up a rotation schedule. These properties are provided in the Cloud Console and in the Confluent CLI.
rotate.schedule.interval.ms (Scheduled rotation): This property allows you to configure a regular schedule for when files are closed and uploaded to storage. The default value is -1 (disabled). For example, when this is set to 600000 ms, you will see files available in the storage container at least every 10 minutes. rotate.schedule.interval.ms does not require a continuous stream of data.
Note
Using the rotate.schedule.interval.ms property results in a non-deterministic environment and invalidates exactly-once guarantees.
rotate.interval.ms (Rotation interval): This property allows you to specify the maximum time span (in milliseconds) that a file can remain open for additional records. When using this property, the time span interval for the file starts with the timestamp of the first record added to the file. The connector closes and uploads the file to storage when the timestamp of a subsequent record falls outside the time span set by the first file’s timestamp. This property defaults to the interval set by the time.interval property. rotate.interval.ms requires a continuous stream of data.
Important
The start and end of the time span interval is determined using file timestamps. For this reason, a file could potentially remain open for a long time if a record does not arrive with a timestamp falling outside the time span set by the first file’s timestamp.
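To make the trade-off concrete, here is a minimal sketch of each option as a configuration fragment (values are illustrative). Wall-clock rotation every 10 minutes, which does not require a continuous stream of data but invalidates exactly-once guarantees:

{
  "rotate.schedule.interval.ms": "600000"
}

Record-timestamp rotation over a 10-minute span, which preserves deterministic behavior but requires a continuous stream of data to trigger uploads:

{
  "rotate.interval.ms": "600000"
}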
Configuration Properties¶
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
Which topics do you want to get data from?¶
topics
Identifies the topic name or a comma-separated list of topic names.
- Type: list
- Importance: high
Schema Config¶
schema.context.name
Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.
- Type: string
- Default: default
- Importance: medium
Input messages¶
input.data.format
Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.
- Type: string
- Default: JSON
- Importance: high
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
kafka.service.account.id
The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.
- Type: string
- Importance: high
kafka.api.secret
Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
Azure Blob Storage details¶
azblob.account.name
The Azure Storage account name. Must be between 3 and 23 alphanumeric characters.
- Type: password
- Importance: high
azblob.account.key
The Azure Storage account key.
- Type: password
- Importance: high
azblob.container.name
An Azure Blob Storage Container should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent support if you need to use Confluent Cloud and Azure Blob storage in different regions.
- Type: string
- Importance: high
Output messages¶
output.data.format
Sets the output message format for values. Valid entries are AVRO, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO. If no value for this property is provided, the value specified for the ‘input.data.format’ property is used; if either PROTOBUF or JSON_SR is selected as the input message format, you should select an output format explicitly.
- Type: string
- Importance: high
Organize my data by…¶
path.format
This configuration is used to set the format of the data directories when partitioning with TimeBasedPartitioner. The format set in this configuration converts the Unix timestamp to a valid directory string. To organize files like this example, https://<storage-account-name>.blob.core.windows.net/<container-name>/json_logs/daily/<Topic-Name>/dt=2020-02-06/hr=09/<files>, use the properties topics.dir=json_logs/daily, path.format='dt'=YYYY-MM-dd/'hr'=HH, and time.interval=HOURLY.
- Type: string
- Default: ‘year’=YYYY/’month’=MM/’day’=dd/’hour’=HH
- Importance: high
time.interval
Partitioning interval of data, according to the time ingested to storage.
- Type: string
- Importance: high
topics.dir
Configures the directory to store the data ingested from Kafka. To organize files like the following example, https://<storage-account-name>.blob.core.windows.net/<container-name>/json_logs/daily/<Topic-Name>/dt=2020-02-06/hr=09/<files>, set topics.dir=json_logs/daily and time.interval=HOURLY.
- Type: string
- Default: topics
- Importance: low
rotate.schedule.interval.ms
Scheduled rotation uses rotate.schedule.interval.ms to close the file and upload to storage on a regular basis using the current time, rather than the record time. Setting rotate.schedule.interval.ms is nondeterministic and will invalidate exactly-once guarantees.
- Type: int
- Default: -1
- Importance: medium
az.compression.type
Compression type for file written to Azure. Applied when using JsonFormat or ByteArrayFormat.
- Type: string
- Default: none
- Valid Values: gzip, none
- Importance: low
rotate.interval.ms
The connector’s rotation interval specifies the maximum timespan (in milliseconds) a file can remain open and ready for additional records. In other words, when using rotate.interval.ms, the timestamp for each file starts with the timestamp of the first record inserted in the file. The connector closes and uploads a file to the blob store when the next record’s timestamp does not fit into the file’s rotate.interval time span from the first record’s timestamp. If the connector has no more records to process, the connector may keep the file open until the connector can process another record (which can be a long time). If no value for this property is provided, the value specified for the ‘time.interval’ property is used.
- Type: int
- Importance: high
file.delim
File delimiter pattern.
- Type: string
- Default: +
- Importance: low
flush.size
Number of records written to storage before invoking file commits.
- Type: int
- Default: 1000
- Valid Values: [1000,…] for non-dedicated clusters and [1,…] for dedicated clusters
- Importance: high
behavior.on.null.values
How to handle records with a null value (i.e. Kafka tombstone records).
- Type: string
- Default: ignore
- Valid Values: fail, ignore
- Importance: low
timestamp.field
Sets the field that contains the timestamp used for the TimeBasedPartitioner.
- Type: string
- Default: “”
- Importance: high
timezone
Sets the timezone used by the TimeBasedPartitioner.
- Type: string
- Default: UTC
- Importance: high
locale
Sets the locale to use with TimeBasedPartitioner.
- Type: string
- Default: en
- Importance: high
Consumer configuration¶
max.poll.interval.ms
The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).
- Type: long
- Default: 300000 (5 minutes)
- Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
- Importance: low
max.poll.records
The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.
- Type: long
- Default: 500
- Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
- Importance: low
Number of tasks for this connector¶
tasks.max
Maximum number of tasks for the connector.
- Type: int
- Valid Values: [1,…]
- Importance: high
Additional Configs¶
consumer.override.auto.offset.reset
Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the “earliest” offset or the “latest” offset (the default). You can also select “none” if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset
- Type: string
- Importance: low
consumer.override.isolation.level
Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level
- Type: string
- Importance: low
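For example, both consumer overrides above can be added to the connector configuration file alongside the other properties (the values shown are illustrative, not the defaults):

{
  "consumer.override.auto.offset.reset": "earliest",
  "consumer.override.isolation.level": "read_committed"
}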
header.converter
The converter class for the headers. This is used to serialize and deserialize the headers of the messages.
- Type: string
- Importance: low
value.converter.allow.optional.map.keys
Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.auto.register.schemas
Specify if the Serializer should attempt to register the Schema.
- Type: boolean
- Importance: low
value.converter.connect.meta.data
Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.avro.schema.support
Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.protobuf.schema.support
Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.flatten.unions
Whether to flatten unions (oneofs). Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.index.for.unions
Whether to generate an index suffix for unions. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.struct.for.nulls
Whether to generate a struct variable for null values. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.int.for.enums
Whether to represent enums as integers. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.latest.compatibility.strict
Verify latest subject version is backward compatible when use.latest.version is true.
- Type: boolean
- Importance: low
value.converter.object.additional.properties
Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.nullables
Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.proto2
Whether proto2 optionals are supported. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.scrub.invalid.names
Whether to scrub invalid names by replacing invalid characters with valid characters. Applicable for Avro and Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.use.latest.version
Use latest version of schema in subject for serialization when auto.register.schemas is false.
- Type: boolean
- Importance: low
value.converter.use.optional.for.nonrequired
Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.nullables
Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.raw.primitives
Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
errors.tolerance
Use this property if you would like to configure the connector’s error handling behavior. WARNING: This property should be used with CAUTION for SOURCE CONNECTORS as it may lead to dataloss. If you set this property to ‘all’, the connector will not fail on errant records, but will instead log them (and send to DLQ for Sink Connectors) and continue processing. If you set this property to ‘none’, the connector task will fail on errant records.
- Type: string
- Default: all
- Importance: low
key.converter.key.subject.name.strategy
How to construct the subject name for key schema registration.
- Type: string
- Default: TopicNameStrategy
- Importance: low
value.converter.decimal.format
Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:
BASE64 to serialize DECIMAL logical types as base64 encoded binary data and
NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.
- Type: string
- Default: BASE64
- Importance: low
value.converter.flatten.singleton.unions
Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.
- Type: boolean
- Default: false
- Importance: low
value.converter.ignore.default.for.nullables
When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.
- Type: boolean
- Default: false
- Importance: low
value.converter.reference.subject.name.strategy
Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.
- Type: string
- Default: DefaultReferenceSubjectNameStrategy
- Importance: low
value.converter.replace.null.with.default
Whether to replace null fields that have a default value with that default value. When set to true, the default value is used; otherwise, null is used. Applicable for JSON Converter.
- Type: boolean
- Default: true
- Importance: low
value.converter.schemas.enable
Include schemas within each of the serialized values. Input messages must contain schema and payload fields and may not contain additional fields. For plain JSON data, set this to false. Applicable for JSON Converter.
- Type: boolean
- Default: false
- Importance: low
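For example, when value.converter.schemas.enable is set to true, each Kafka record value must be an envelope containing schema and payload fields, along the lines of this minimal sketch (the field names are illustrative):

{
  "schema": {
    "type": "struct",
    "fields": [
      { "field": "page", "type": "string" }
    ],
    "optional": false
  },
  "payload": { "page": "/index.html" }
}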
value.converter.value.subject.name.strategy
Determines how to construct the subject name under which the value schema is registered with Schema Registry.
- Type: string
- Default: TopicNameStrategy
- Importance: low
Auto-restart policy¶
auto.restart.on.user.error
Enable connector to automatically restart on user-actionable errors.
- Type: boolean
- Default: true
- Importance: medium
Next Steps¶
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.