Azure Log Analytics Sink V2 Connector for Confluent Cloud

The fully-managed Azure Log Analytics Sink V2 connector for Confluent Cloud streams records from Apache Kafka® topics to an Azure Log Analytics workspace using the Azure Logs Ingestion API. The connector routes records to custom Log Analytics tables using Data Collection Rules (DCRs) and authenticates with Azure using Entra ID service principal credentials.

This quick start is for the fully-managed Confluent Cloud connector. If you are installing the connector locally for Confluent Platform, see Azure Log Analytics Sink Connector for Confluent Platform.

If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.

V2 improvements

The V2 connector includes the following improvements over the original Azure Log Analytics Sink connector:

  • Authenticates to Azure using Entra ID service principal credentials (OAuth2 client-credentials flow) instead of workspace shared keys.

  • Routes records to Log Analytics tables through Data Collection Rules (DCRs), supporting up to 20 tables per connector.

  • Supports the Dead Letter Queue (DLQ) for records that fail delivery after retries are exhausted.

  • Provides configurable retry behavior with exponential backoff and jitter, honoring Azure’s Retry-After header.

  • Supports Azure Egress Private Link via Azure Monitor Private Link Scope (AMPLS).

Features

The Azure Log Analytics Sink V2 connector for Confluent Cloud supports the following features:

  • At least once delivery: Guarantees that records from the Kafka topic are delivered to Azure Log Analytics at least once.

  • Multiple tasks: Supports running one or more tasks. More tasks may improve performance, bounded by the total partition count across subscribed topics.

  • Multi-table routing (topic-to-table mapping): Each configured table corresponds to a DCR stream named Custom-<table>_CL, which Azure maps to the destination table. Supports up to 20 tables per connector, matching Azure’s maximum of 20 streams per DCR. Multiple topics may map to the same table.

  • Multiple Data Collection Rule (DCR) support: Each Log Analytics table is mapped to its own DCR using the table.to.dcr.map property (for example, table1:dcr-id1,table2:dcr-id2). Your DCR must include a stream named Custom-<table>_CL for each table referenced by the connector. Multiple tables may share a DCR.

  • Azure Active Directory (Entra ID) authentication: Authenticates to Azure using the OAuth2 client-credentials flow with an app registration scoped to specific DCRs using the Monitoring Metrics Publisher role (see the role-assignment example after this list).

  • Multiple input data formats: Supports AVRO, JSON_SR (JSON Schema), and PROTOBUF (all using Schema Registry), as well as JSON and BYTES (schemaless), for the Kafka record value.

  • Azure Egress Private Link support: Allows traffic to the Azure Data Collection Endpoint to flow through a private endpoint using an Azure Monitor Private Link Scope (AMPLS).

  • Dead Letter Queue (DLQ) support: Routes records that fail HTTP delivery after retries are exhausted to a configurable error topic, with full HTTP request and response context preserved as headers.

  • Configurable retry behavior: Retries on 429 (throttling) and 5xx errors with exponential backoff and jitter. Honors Azure’s Retry-After header when present.
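
For reference, the following Azure CLI sketch shows one way to grant the Monitoring Metrics Publisher role to the connector's app registration on a DCR. The subscription, resource group, DCR name, and client ID are placeholders for your own values.

# Grant the Monitoring Metrics Publisher role on a DCR to the app registration
# used by the connector (all bracketed values below are placeholders).
az role assignment create \
  --assignee "<app-client-id>" \
  --role "Monitoring Metrics Publisher" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>"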

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Limitations

Be sure to review the following information.

Quick start

Use this quick start to get up and running with the fully-managed Azure Log Analytics Sink V2 connector. The quick start provides the basics of selecting the connector and configuring it to stream events to Azure Log Analytics.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.

    • Enter an existing service account resource ID.

    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.

    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Azure Log Analytics Sink V2 connector card.

Azure Log Analytics Sink V2 Connector Card

Step 4: Enter the connector details

At the Add Azure Log Analytics Sink V2 Connector screen, complete the steps under the following tabs.

Note

  • Ensure you have all your prerequisites completed.

  • An asterisk ( * ) designates a required entry.

If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.

To create a new topic, click +Add new topic.

  1. Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:

    • My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.

    • Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.

    • Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.

    Note

    Freight clusters support only service accounts for Kafka authentication.

  2. Click Continue.

  1. Configure the authentication properties:

    Authentication

    • Azure AD Tenant ID: Sets the directory (tenant) ID of the Azure Active Directory tenant used to authenticate ingestion requests.

    • Azure AD Client ID: Sets the Application (client) ID of the Azure AD app registration. The app must have the Monitoring Metrics Publisher role on each Data Collection Rule.

    • Azure AD Client Secret: Sets the client secret value for the Azure AD app registration.

    • Logs Ingestion Endpoint: Sets the Logs Ingestion endpoint URL (Data Collection Endpoint), for example, https://my-dce-5kyl.eastus-1.ingest.monitor.azure.com. Do not include a trailing slash. A lookup example follows this step.

  2. Click Continue.
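
If you need to look up the Logs Ingestion endpoint for the Data Collection Endpoint (DCE) used in the previous step, one option is the Azure CLI sketch below. The names are placeholders, and the exact output field layout (for example, logsIngestion.endpoint) can vary by CLI version, so inspect the full command output if the field is not where expected.

# Show the Data Collection Endpoint resource; the logs ingestion URL appears
# in the command output (names below are placeholders).
az monitor data-collection endpoint show \
  --name "<dce-name>" \
  --resource-group "<resource-group>"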

Configuration properties not shown in the Cloud Console use the default values. For all property values and definitions, see Configuration properties.

  • Input Kafka record value format: Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. You must configure Confluent Cloud Schema Registry if you use a schema-based message format such as AVRO, JSON_SR, or PROTOBUF.

Tables

  • Topic to Table Mapping: Comma-separated list of topic-to-table mappings, for example, topic1:table1,topic2:table2,topic3:table1. Multiple topics can map to the same table. The number of unique tables determines how many ingestion streams the connector writes to (maximum of 20 tables).

  • Table to DCR Mapping: Comma-separated list of table-to-DCR mappings, for example, table1:dcr-immutable-id1,table2:dcr-immutable-id2. Each table referenced in the topic-to-table mapping must have a corresponding DCR entry.
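
The DCR values in this mapping are DCR immutable IDs (dcr-...), not DCR resource names. As a hedged example, the Azure CLI sketch below shows one way to retrieve an immutable ID; the names are placeholders, and if your CLI version nests the field differently, inspect the full output instead.

# Print the immutable ID (dcr-...) of a Data Collection Rule; use this value
# in the table-to-DCR mapping (names below are placeholders).
az monitor data-collection rule show \
  --name "<dcr-name>" \
  --resource-group "<resource-group>" \
  --query "immutableId" --output tsv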

Show advanced configurations
  • Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.

Additional Configs

  • Value Converter Replace Null With Default: Whether to replace fields that have a default value and are null with the default value. When set to true, the default value is used, otherwise null is used. Applicable for JSON Converter.

  • Value Converter Schema ID Deserializer: The class name of the schema ID deserializer for values. This is used to deserialize schema IDs from the message headers.

  • Value Converter Reference Subject Name Strategy: Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Schema ID For Value Converter: The schema ID to use for deserialization when using ConfigSchemaIdDeserializer. This is used to specify a fixed schema ID to be used for deserializing message values. Only applicable when value.converter.value.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Value Converter Schemas Enable: Include schemas within each of the serialized values. Input messages must contain schema and payload fields and may not contain additional fields. For plain JSON data, set this to false. Applicable for JSON Converter.

  • Errors Tolerance: Use this property if you would like to configure the connector's error handling behavior. WARNING: This property should be used with CAUTION for SOURCE CONNECTORS as it may lead to data loss. If you set this property to 'all', the connector will not fail on errant records, but will instead log them (and send them to the DLQ for Sink Connectors) and continue processing. If you set this property to 'none', the connector task will fail on errant records.

  • Value Converter Ignore Default For Nullables: When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.

  • Key Converter Schema ID Deserializer: The class name of the schema ID deserializer for keys. This is used to deserialize schema IDs from the message headers.

  • Value Converter Decimal Format: Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals: BASE64 to serialize DECIMAL logical types as base64 encoded binary data and NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Schema GUID For Key Converter: The schema GUID to use for deserialization when using ConfigSchemaIdDeserializer. This is used to specify a fixed schema GUID to be used for deserializing message keys. Only applicable when key.converter.key.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Schema GUID For Value Converter: The schema GUID to use for deserialization when using ConfigSchemaIdDeserializer. This is used to specify a fixed schema GUID to be used for deserializing message values. Only applicable when value.converter.value.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Value Converter Connect Meta Data: Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Value Converter Value Subject Name Strategy: Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Key Converter Key Subject Name Strategy: How to construct the subject name for key schema registration.

  • Schema ID For Key Converter: The schema ID to use for deserialization when using ConfigSchemaIdDeserializer. This is used to specify a fixed schema ID to be used for deserializing message keys. Only applicable when key.converter.key.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

Auto-restart policy

  • Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its task in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.

Consumer configuration

  • Max poll interval (ms): Set the maximum delay between subsequent consume requests to Kafka. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 300,000 milliseconds (5 minutes).

  • Max poll records: Set the maximum number of records to consume from Kafka in a single request. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 500 records.

Behavior on error

  • Behavior On Errors: Error handling behavior for HTTP error responses. Valid values are FAIL and IGNORE.

Retry configurations

  • Retry Backoff Policy: The backoff policy to use for retries. Valid values are CONSTANT_VALUE or EXPONENTIAL_WITH_JITTER.

  • Retry Backoff (ms): The initial duration in milliseconds to wait before a retry attempt.

  • Retry HTTP Status Codes: Comma-separated list of HTTP status codes or ranges to retry on. Azure returns 429 for rate limiting. For example, 429,500- retries on rate-limit responses and on all 5xx server errors.

  • Maximum Retries: The maximum number of times to retry on errors before failing the task.

Batching

  • Batch Size: The number of records to batch per request for all tables. The Azure Logs Ingestion API enforces a 1 MB payload limit per request.

Transforms

For all property values and definitions, see Configuration properties.

  • Click Continue.

Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.

  1. To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.

  2. Click Continue.

  1. Verify the connection details.

  2. Click Launch.

    The connector status changes from Provisioning to Running.

Step 5: Check for records

Verify that data is exported from Kafka to the Azure Log Analytics workspace. There may be a slight delay due to data ingestion latency. For details, see Checking ingestion time.
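
As a quick, connector-independent check, you can query the destination table in the workspace directly. The sketch below assumes the Azure CLI with the log-analytics extension installed; the workspace GUID is a placeholder and the table name (Orders_CL) follows the examples used elsewhere in this document.

# Query the Log Analytics workspace for recently ingested rows.
az monitor log-analytics query \
  --workspace "<workspace-customer-id>" \
  --analytics-query "Orders_CL | sort by TimeGenerated desc | take 10"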

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "name": "AzureLogAnalyticsSinkV2_0",
  "config": {
    "connector.class": "AzureLogAnalyticsSinkV2",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "",
    "kafka.api.secret": "",
    "topics": "orders,events",
    "input.data.format": "AVRO",
    "tasks.max": "1",
    "azure.tenant.id": "",
    "azure.client.id": "",
    "azure.client.secret": "",
    "azure.logs.ingestion.endpoint": "",
    "topic.to.table.map": "orders:Orders_CL,events:Events_CL",
    "table.to.dcr.map": "Orders_CL:dcr-abc123,Events_CL:dcr-def456",
    "batch.size": "500"
  }
}

Note the following property definitions:

  • "name": Sets a name for your new connector.

  • "topics": Enter the topic name or a comma-separated list of topic names.

  • "input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, BYTES, JSON, JSON_SR (JSON Schema), PROTOBUF, or STRING. You must have Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  • "connector.class": Identifies the connector plugin name.

  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "azure.tenant.id": Enter the Azure Active Directory (Entra ID) tenant ID.

  • "azure.client.id": Enter the client ID (application ID) of the Azure app registration.

  • "azure.client.secret": Enter the client secret of the Azure app registration.

  • "azure.logs.ingestion.endpoint": Enter the Data Collection Endpoint URL for your Azure Monitor resource.

  • "table.to.dcr.map": Enter one or more comma-separated <table-name>:<dcr-immutable-id> mappings. For example: Orders:dcr-abc123,Events:dcr-def456. Each DCR must include a stream named Custom-<table>_CL.

  • "topic.to.table.map": Comma-separated list of topic-to-table mappings, for example, topic1:table1,topic2:table2,topic3:table1. Multiple topics can map to the same table. The number of unique tables determines how many APIs the connector creates (maximum 20).

  • "tasks.max": Enter the maximum number of tasks for the connector to use. More tasks may improve performance.

For information about adding SMTs using the CLI, see Single Message Transforms (SMT).

See Configuration properties for all property values and descriptions.

Step 4: Load the configuration file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file azure-log-analytics-sink-v2-config.json

Example output:

Created connector AzureLogAnalyticsSinkV2_0 lcc-ix4dl

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID          |          Name                    | Status  | Type
+-----------+----------------------------------+---------+------+
lcc-ix4dl   | AzureLogAnalyticsSinkV2_0        | RUNNING | sink

Step 6: Check for records

Verify that data is exported from Kafka to the Azure Log Analytics workspace. There may be a slight delay due to data ingestion latency. For details, see Checking ingestion time.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
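
To inspect failed records from the CLI, you can consume the DLQ topic directly. The topic name below is a placeholder; the default name follows the dlq-${connector} pattern described in the configuration reference.

confluent kafka topic consume <dlq-topic-name> --from-beginning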

Configuration properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

Which topics do you want to get data from?

topics.regex

A regular expression that matches the names of the topics to consume from. This is useful when you want to consume from multiple topics that match a certain pattern without having to list them all individually.

  • Type: string

  • Importance: low

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list

  • Importance: high

errors.deadletterqueue.topic.name

The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. Defaults to ‘dlq-${connector}’ if not set. The DLQ topic will be created automatically if it does not exist. You can provide ${connector} in the value to use it as a placeholder for the logical cluster ID.

  • Type: string

  • Default: dlq-${connector}

  • Importance: low

reporter.result.topic.name

The name of the topic to produce records to after successfully processing a sink record. Defaults to ‘success-${connector}’ if not set. You can provide ${connector} in the value to use it as a placeholder for the logical cluster ID.

  • Type: string

  • Default: success-${connector}

  • Importance: low

reporter.error.topic.name

The name of the topic to produce records to after each unsuccessful record sink attempt. Defaults to ‘error-${connector}’ if not set. You can provide ${connector} in the value to use it as a placeholder for the logical cluster ID.

  • Type: string

  • Default: error-${connector}

  • Importance: low

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string

  • Default: default

  • Importance: medium

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string

  • Default: JSON_SR

  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string

  • Valid Values: A string at most 64 characters long

  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode, whenever possible.

  • Type: string

  • Valid Values: SERVICE_ACCOUNT, KAFKA_API_KEY

  • Importance: high

kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

kafka.service.account.id

The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string

  • Importance: high

kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

Consumer configuration

max.poll.interval.ms

The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).

  • Type: long

  • Default: 300000 (5 minutes)

  • Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters

  • Importance: low

max.poll.records

The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.

  • Type: long

  • Default: 500

  • Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters

  • Importance: low

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int

  • Valid Values: [1,…]

  • Importance: high

Authentication

azure.tenant.id

The Directory (tenant) ID of the Azure Active Directory tenant used to authenticate ingestion requests.

  • Type: string

  • Importance: high

azure.client.id

The Application (client) ID of the Azure AD app registration. The app must have the Monitoring Metrics Publisher role on each Data Collection Rule.

  • Type: string

  • Importance: high

azure.client.secret

The client secret value for the Azure AD app registration.

  • Type: password

  • Importance: high

azure.logs.ingestion.endpoint

The Logs Ingestion endpoint URL (Data Collection Endpoint), for example, https://my-dce-5kyl.eastus-1.ingest.monitor.azure.com. Do not include a trailing slash.

  • Type: string

  • Importance: high
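
As a hedged, connector-independent sanity check, you can exercise the same endpoint, DCR, and app registration directly against the Azure Logs Ingestion API. The stream name and record fields below are illustrative and must match the stream schema declared in your DCR; all bracketed values are placeholders.

# 1. Get a token for the app registration (OAuth2 client-credentials flow).
curl -s -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=https://monitor.azure.com//.default"

# 2. Post a test record to the DCR stream for the target table.
curl -s -X POST "https://<dce-host>.ingest.monitor.azure.com/dataCollectionRules/<dcr-immutable-id>/streams/Custom-Orders_CL?api-version=2023-01-01" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '[{"TimeGenerated": "2024-01-01T00:00:00Z", "RawData": "connector smoke test"}]'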

Behavior on error

behavior.on.error

Error handling behavior for HTTP error responses. Valid values are FAIL and IGNORE.

  • Type: string

  • Default: FAIL

  • Importance: low

Tables

topic.to.table.map

Comma-separated list of topic-to-table mappings, for example, topic1:table1,topic2:table2,topic3:table1. Multiple topics can map to the same table. The number of unique tables determines how many ingestion streams the connector writes to (maximum of 20 tables).

  • Type: string

  • Importance: high

table.to.dcr.map

Comma-separated list of table-to-DCR mappings, for example, table1:dcr-immutable-id1,table2:dcr-immutable-id2. Each table referenced in the topic-to-table mapping must have a corresponding DCR entry.

  • Type: string

  • Importance: high

Batching

batch.size

The number of records to batch per request for all tables. The Azure Logs Ingestion API enforces a 1 MB payload limit per request.

  • Type: int

  • Default: 500

  • Valid Values: [1,…,1000]

  • Importance: medium

Retry configurations

retry.backoff.policy

The backoff policy to use for retries. Valid values are CONSTANT_VALUE or EXPONENTIAL_WITH_JITTER.

  • Type: string

  • Default: EXPONENTIAL_WITH_JITTER

  • Importance: medium

retry.backoff.ms

The initial duration in milliseconds to wait before a retry attempt.

  • Type: int

  • Default: 3000 (3 seconds)

  • Valid Values: [100,…]

  • Importance: medium

retry.on.status.codes

Comma-separated list of HTTP status codes or ranges to retry on. Azure returns 429 for rate limiting. For example, 429,500- retries on rate-limit responses and on all 5xx server errors.

  • Type: string

  • Default: 429,500-

  • Importance: medium

max.retries

The maximum number of times to retry on errors before failing the task.

  • Type: int

  • Default: 3

  • Valid Values: [1,…,10]

  • Importance: medium
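
As a hedged illustration (the values are examples, not recommendations), these retry properties can be combined in the connector configuration JSON shown earlier in this document:

"retry.backoff.policy": "EXPONENTIAL_WITH_JITTER",
"retry.backoff.ms": "3000",
"retry.on.status.codes": "429,500-",
"max.retries": "5"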

Additional Configs

consumer.override.auto.offset.reset

Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the “earliest” offset (the default) or the “latest” offset. You can also select “none” if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset

  • Type: string

  • Importance: low

consumer.override.isolation.level

Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level

  • Type: string

  • Importance: low

header.converter

The converter class for the headers. This is used to serialize and deserialize the headers of the messages.

  • Type: string

  • Importance: low

key.converter.use.schema.guid

The schema GUID to use for deserialization when using ConfigSchemaIdDeserializer. This allows you to specify a fixed schema GUID to be used for deserializing message keys. Only applicable when key.converter.key.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Type: string

  • Importance: low

key.converter.use.schema.id

The schema ID to use for deserialization when using ConfigSchemaIdDeserializer. This allows you to specify a fixed schema ID to be used for deserializing message keys. Only applicable when key.converter.key.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Type: int

  • Importance: low

value.converter.allow.optional.map.keys

Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.auto.register.schemas

Specify if the Serializer should attempt to register the Schema.

  • Type: boolean

  • Importance: low

value.converter.connect.meta.data

Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.avro.schema.support

Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.protobuf.schema.support

Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.flatten.unions

Whether to flatten unions (oneofs). Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.index.for.unions

Whether to generate an index suffix for unions. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.struct.for.nulls

Whether to generate a struct variable for null values. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.int.for.enums

Whether to represent enums as integers. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.latest.compatibility.strict

Verify latest subject version is backward compatible when use.latest.version is true.

  • Type: boolean

  • Importance: low

value.converter.object.additional.properties

Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.nullables

Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.proto2

Whether proto2 optionals are supported. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.scrub.invalid.names

Whether to scrub invalid names by replacing invalid characters with valid characters. Applicable for Avro and Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.use.latest.version

Use latest version of schema in subject for serialization when auto.register.schemas is false.

  • Type: boolean

  • Importance: low

value.converter.use.optional.for.nonrequired

Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.use.schema.guid

The schema GUID to use for deserialization when using ConfigSchemaIdDeserializer. This allows you to specify a fixed schema GUID to be used for deserializing message values. Only applicable when value.converter.value.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Type: string

  • Importance: low

value.converter.use.schema.id

The schema ID to use for deserialization when using ConfigSchemaIdDeserializer. This allows you to specify a fixed schema ID to be used for deserializing message values. Only applicable when value.converter.value.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Type: int

  • Importance: low

value.converter.wrapper.for.nullables

Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.wrapper.for.raw.primitives

Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

errors.tolerance

Use this property if you would like to configure the connector's error handling behavior. WARNING: This property should be used with CAUTION for SOURCE CONNECTORS as it may lead to data loss. If you set this property to 'all', the connector will not fail on errant records, but will instead log them (and send them to the DLQ for Sink Connectors) and continue processing. If you set this property to 'none', the connector task will fail on errant records.

  • Type: string

  • Default: all

  • Importance: low

key.converter.key.schema.id.deserializer

The class name of the schema ID deserializer for keys. This is used to deserialize schema IDs from the message headers.

  • Type: string

  • Default: io.confluent.kafka.serializers.schema.id.DualSchemaIdDeserializer

  • Importance: low

key.converter.key.subject.name.strategy

How to construct the subject name for key schema registration.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

value.converter.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

  • BASE64 to serialize DECIMAL logical types as base64 encoded binary data.

  • NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Type: string

  • Default: BASE64

  • Importance: low

value.converter.flatten.singleton.unions

Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.

  • Type: boolean

  • Default: false

  • Importance: low

value.converter.ignore.default.for.nullables

When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.

  • Type: boolean

  • Default: false

  • Importance: low

value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string

  • Default: DefaultReferenceSubjectNameStrategy

  • Importance: low

value.converter.replace.null.with.default

Whether to replace fields that have a default value and are null with the default value. When set to true, the default value is used, otherwise null is used. Applicable for JSON Converter.

  • Type: boolean

  • Default: true

  • Importance: low

value.converter.schemas.enable

Include schemas within each of the serialized values. Input messages must contain schema and payload fields and may not contain additional fields. For plain JSON data, set this to false. Applicable for JSON Converter.

  • Type: boolean

  • Default: false

  • Importance: low

value.converter.value.schema.id.deserializer

The class name of the schema ID deserializer for values. This is used to deserialize schema IDs from the message headers.

  • Type: string

  • Default: io.confluent.kafka.serializers.schema.id.DualSchemaIdDeserializer

  • Importance: low

value.converter.value.subject.name.strategy

Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

Auto-restart policy

auto.restart.on.user.error

Enable connector to automatically restart on user-actionable errors.

  • Type: boolean

  • Default: true

  • Importance: medium

Next steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
