Azure Log Analytics Sink Connector for Confluent Cloud¶
The Azure Log Analytics Sink connector extracts records from Apache Kafka® topics and sends the records as JSON to an Azure Log Analytics workspace.
Note
If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
Features¶
The Azure Log Analytics Sink connector supports the following features:
- At least once delivery: This connector guarantees that records from the Kafka topic are delivered at least once.
- Supports multiple topics-to-tables: The connector can process data from multiple topics and send the data to the respective tables in the Azure Log Analytics workspace.
- Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
- Supported input data formats: The connector supports Avro, JSON Schema (JSON-SR), Protobuf, JSON, STRING, and BYTES input formats. Schema Registry must be enabled to use these Schema Registry-based formats.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see Azure Log Analytics Sink Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
Quick Start¶
Use this quick start to get up and running with the fully-managed Azure Log Analytics Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events.
- Prerequisites
- Authorized access to a Confluent Cloud cluster on Microsoft Azure (Azure).
- The Azure Log Analytics workspace ID and shared keys.
- The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
- Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
- At least one Kafka topic must exist in your Confluent Cloud cluster before creating the sink connector.
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster¶
See the Quick Start for Confluent Cloud for installation instructions.
Step 2: Add a connector¶
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector¶
Click the Azure Log Analytics Sink connector card.
Step 4: Enter the connector details¶
Note
- Ensure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
At the Add Azure Log Analytics Sink Connector screen, complete the steps under the following tabs.
If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.
To create a new topic, click +Add new topic.
- Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:
- My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
- Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
- Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
Note
Freight clusters support only service accounts for Kafka authentication.
- Click Continue.
- Enter the Azure Log Analytics Workspace ID. For more information, see Workspaces.
- Enter the Azure Log Analytics Shared Key. For more information, see Workspace Shared Keys.
Select the Input Kafka record value format (data coming from the Kafka topic): AVRO, BYTES, JSON, JSON_SR (JSON Schema), PROTOBUF, or STRING. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema) or Protobuf).
Enter the Azure Log Analytics Topic2Table Map. This is an optional map of topics to tables. Use comma-separated tuples. For example:
<topic-1>:<table-1>,<topic-2>:<table-2>,...
If the topic2table map doesn't contain the topic for a record, the connector creates a table using the topic name. A valid table name must start with letters, must not exceed 100 characters, and can contain only letters, numbers, and the underscore character (_). Note that if this optional property is used, the topic name must not be modified using a Single Message Transform (SMT).
Enter a Timestamp field. This is the name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for TimeGenerated. If you don't specify a timestamp field, the default value used for TimeGenerated is the time that the message is ingested. The contents of the field must follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. Note that if the TimeGenerated value is older than two days before the received time, the row is dropped. An example of both fields follows.
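For example, assuming hypothetical topics named pageviews and orders, the following map routes each topic to its own table:
pageviews:PageViews,orders:Orders
A timestamp field value such as 2024-05-01T12:30:00Z satisfies the required ISO 8601 format.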
Show advanced configurations
Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.
Behavior on error: The connector's behavior if an error occurs when extracting data from the Kafka topic. Valid options are log (the default) and fail. The log option logs the error message in the error-<connector-id> topic and continues processing. The fail option stops the connector.
Maximum batch size: The maximum number of Kafka records to combine when sending a batch of records to the Azure Log Analytics workspace. Defaults to 500. The minimum value allowed is 1.
Maximum Pending Requests: The maximum number of concurrent pending requests the connector can make to Azure Log Analytics. Defaults to 1, which is the minimum value allowed. The maximum value allowed is 128.
Request Timeout (ms): The maximum time, in milliseconds, that the connector attempts to request Azure Log Analytics before timing out (socket timeout). Defaults to 10000 ms (10 seconds).
Retry Timeout (ms): The amount of time the connector retries a request if it receives a retriable response (for example, response codes 429, 500, or 503). Defaults to 10000 ms (10 seconds). Entering a value of -1 results in indefinite retries. If you configure the connector with the CLI or API, these options map to the connector properties shown in the sketch that follows.
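The following fragment is a minimal sketch of these advanced options as connector configuration properties (property names are from the Configuration Properties section; the values are hypothetical):
"behavior.on.error": "fail",
"max.batch.size": "1000",
"max.pending.requests": "4",
"request.timeout.ms": "10000",
"retry.timeout.ms": "30000"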
Auto-restart policy
Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its tasks in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.
Consumer configuration
Max poll interval (ms): Set the maximum delay between subsequent consume requests to Kafka. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 300,000 milliseconds (5 minutes).
Max poll records: Set the maximum number of records to consume from Kafka in a single request. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 500 records.
Transforms
Single Message Transforms: To add a new SMT, see Add transforms. For more information about unsupported SMTs, see Unsupported transformations.
See Configuration Properties for all property values and definitions.
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.
- To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.
- Click Continue.
Verify the connection details.
Click Launch.
The status for the connector should go from Provisioning to Running.
Step 5: Check for records¶
Verify that data is exported from Kafka to the Azure Log Analytics workspace. There may be a slight delay due to data ingestion latency. For details, see Checking ingestion time.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
Using the Confluent CLI¶
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties¶
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
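For example, the plugin name for this connector is AzureLogAnalyticsSink:
confluent connect plugin describe AzureLogAnalyticsSink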
Step 3: Create the connector configuration file¶
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
"name": "AzureLogAnalyticsSink_0",
"config": {
"topics": "orders",
"input.data.format": "AVRO",
"connector.class": "AzureLogAnalyticsSink",
"name": "AzureLogAnalyticsSink_0",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "<my-kafka-api-key>",
"kafka.api.secret": "<my-kafka-api-secret>",
"azure.loganalytics.workspace.id": "<log-analytics-workspace-ID>",
"azure.loganalytics.shared.key": "<log-analyticsshared-key>",
"tasks.max": "1"
}
}
Note the following property definitions:
"name"
: Sets a name for your new connector."topics"
: Enter the topic name or a comma-separated list of topic names."input.data.format"
: Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, BYTES, JSON, JSON_SR (JSON Schema), PROTOBUF, or STRING. You must have Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf)."connector.class"
: Identifies the connector plugin name.
"kafka.auth.mode"
: Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNT
orKAFKA_API_KEY
(the default). To use an API key and secret, specify the configuration propertieskafka.api.key
andkafka.api.secret
, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>
. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list Id | Resource ID | Name | Description +---------+-------------+-------------------+------------------- 123456 | sa-l1r23m | sa-1 | Service account 1 789101 | sa-l4d56p | sa-2 | Service account 2
"azure.loganalytics.workspace.id"
: Enter the workspace ID. For more information, see Workspaces."azure.loganalytics.shared.key"
: Enter with workspace shared key. For more information, see Workspace Shared Keys."tasks.max"
: Enter the maximum number of tasks for the connector to use. More tasks may improve performance.
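As a sketch, the example configuration above rewritten to use a service account instead of an API key (the resource ID sa-l1r23m is taken from the example listing and is hypothetical):
{
  "name": "AzureLogAnalyticsSink_0",
  "config": {
    "topics": "orders",
    "input.data.format": "AVRO",
    "connector.class": "AzureLogAnalyticsSink",
    "name": "AzureLogAnalyticsSink_0",
    "kafka.auth.mode": "SERVICE_ACCOUNT",
    "kafka.service.account.id": "sa-l1r23m",
    "azure.loganalytics.workspace.id": "<log-analytics-workspace-ID>",
    "azure.loganalytics.shared.key": "<log-analytics-shared-key>",
    "tasks.max": "1"
  }
}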
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
See Configuration Properties for all property values and descriptions.
Step 4: Load the properties file and create the connector¶
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file azure-log-analytics-sink-config.json
Example output:
Created connector AzureLogAnalyticsSink_0 lcc-do6vzd
Step 5: Check the connector status¶
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
ID | Name | Status | Type | Trace
+------------+----------------------------+---------+------+-------+
lcc-do6vzd | AzureLogAnalyticsSink_0 | RUNNING | sink | |
Step 6: Check for records¶
Verify that data is exported from Kafka to the Azure Log Analytics workspace. There may be a slight delay due to data ingestion latency. For details, see Checking ingestion time.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
Configuration Properties¶
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
Which topics do you want to get data from?¶
topics
Identifies the topic name or a comma-separated list of topic names.
- Type: list
- Importance: high
Schema Config¶
schema.context.name
Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.
- Type: string
- Default: default
- Importance: medium
Input messages¶
input.data.format
Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, STRING or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.
- Type: string
- Importance: high
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
kafka.service.account.id
The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.
- Type: string
- Importance: high
kafka.api.secret
Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
How should we connect to Azure Log Analytics Workspace?¶
azure.loganalytics.workspace.id
Workspace ID for Azure Log Analytics.
- Type: string
- Importance: high
azure.loganalytics.shared.key
Shared key for Azure Log Analytics.
- Type: password
- Importance: high
Azure Log Analytics Details¶
azure.loganalytics.topic2table.map
Map of topics to tables (optional). Format: comma-separated tuples, for example, <topic-1>:<table-1>,<topic-2>:<table-2>,… Note that the topic name must not be modified using a regex SMT while using this option. If the topic2table map doesn't contain the topic for a record, the connector creates a table with the same name as the topic. A valid table name must start with letters, must not exceed 100 characters, and can contain only letters, numbers, and the underscore character (_).
- Type: string
- Default: “”
- Importance: medium
azure.loganalytics.timestamp.field
The name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for TimeGenerated. If you don't specify this field, the default for TimeGenerated is the time that the message is ingested. The contents of the field must follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. Note that if the TimeGenerated value is older than two days before the received time, the row is dropped. See the example after this entry.
- Type: string
- Default: “”
- Importance: low
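For example, a minimal sketch of a Kafka record value with a hypothetical EventTime field:
{
  "order_id": 42,
  "EventTime": "2024-05-01T12:30:00Z"
}
With azure.loganalytics.timestamp.field set to EventTime, the connector uses 2024-05-01T12:30:00Z as TimeGenerated for this row.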
max.batch.size
The maximum number of records sent in a single request to the Azure Log Analytics workspace. Values must be at least 1.
- Type: int
- Default: 500
- Valid Values: [1,…]
- Importance: medium
max.pending.requests
The maximum number of pending requests allowed at a time. Values must be at least 1.
- Type: int
- Default: 1
- Valid Values: [1,…,128]
- Importance: low
request.timeout.ms
The maximum amount of time the connector attempts to request the Azure Log Analytics system before it stops trying (socket timeout). The default of 10 seconds matches the Azure Log Analytics default timeout.
- Type: long
- Default: 10000 (10 seconds)
- Valid Values: [0,…,120000]
- Importance: low
retry.timeout.ms
The amount of time the connector retries a request if it receives a retriable response (for example, 429, 500, or 503). A timeout of -1 results in indefinite retries.
- Type: long
- Default: 10000 (10 seconds)
- Valid Values: [-1,…,120000]
- Importance: low
How should we handle errors?¶
behavior.on.error
Error-handling behavior when an error occurs while extracting data from the Kafka record value. Valid options are 'log' and 'fail'. 'log' logs the error message in the error-<connector-id> topic and continues processing; 'fail' stops the connector when an error occurs.
- Type: string
- Default: log
- Valid Values: fail, log
- Importance: low
Consumer configuration¶
max.poll.interval.ms
The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).
- Type: long
- Default: 300000 (5 minutes)
- Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
- Importance: low
max.poll.records
The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.
- Type: long
- Default: 500
- Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
- Importance: low
Number of tasks for this connector¶
tasks.max
Maximum number of tasks for the connector.
- Type: int
- Valid Values: [1,…]
- Importance: high
Auto-restart policy¶
auto.restart.on.user.error
Enable connector to automatically restart on user-actionable errors.
- Type: boolean
- Default: true
- Importance: medium
Additional Configs¶
consumer.override.auto.offset.reset
Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the “earliest” offset or the “latest” offset (the default). You can also select “none” if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset
- Type: string
- Importance: low
consumer.override.isolation.level
Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level
- Type: string
- Importance: low
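Both consumer overrides are set in the connector configuration alongside the other properties. A minimal sketch with hypothetical values:
"consumer.override.auto.offset.reset": "earliest",
"consumer.override.isolation.level": "read_committed"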
header.converter
The converter class for the headers. This is used to serialize and deserialize the headers of the messages.
- Type: string
- Importance: low
value.converter.allow.optional.map.keys
Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.auto.register.schemas
Specify if the Serializer should attempt to register the Schema.
- Type: boolean
- Importance: low
value.converter.connect.meta.data
Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.avro.schema.support
Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.protobuf.schema.support
Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.flatten.unions
Whether to flatten unions (oneofs). Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.index.for.unions
Whether to generate an index suffix for unions. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.struct.for.nulls
Whether to generate a struct variable for null values. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.int.for.enums
Whether to represent enums as integers. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.latest.compatibility.strict
Verify latest subject version is backward compatible when use.latest.version is true.
- Type: boolean
- Importance: low
value.converter.object.additional.properties
Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.nullables
Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.proto2
Whether proto2 optionals are supported. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.use.latest.version
Use latest version of schema in subject for serialization when auto.register.schemas is false.
- Type: boolean
- Importance: low
value.converter.use.optional.for.nonrequired
Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.nullables
Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.raw.primitives
Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
key.converter.key.subject.name.strategy
How to construct the subject name for key schema registration.
- Type: string
- Default: TopicNameStrategy
- Importance: low
value.converter.decimal.format
Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:
BASE64 to serialize DECIMAL logical types as base64 encoded binary data and
NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.
- Type: string
- Default: BASE64
- Importance: low
value.converter.flatten.singleton.unions
Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.
- Type: boolean
- Default: false
- Importance: low
value.converter.reference.subject.name.strategy
Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.
- Type: string
- Default: DefaultReferenceSubjectNameStrategy
- Importance: low
value.converter.value.subject.name.strategy
Determines how to construct the subject name under which the value schema is registered with Schema Registry.
- Type: string
- Default: TopicNameStrategy
- Importance: low
Next Steps¶
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.