Azure Synapse Analytics Sink Connector for Confluent Cloud¶
The fully-managed Azure Synapse Analytics Sink connector for Confluent Cloud allows you to export data from Apache Kafka® topics to Azure Synapse Analytics. The connector polls data from Kafka and writes it to the data warehouse based on a topic subscription. Auto-creation of tables and limited auto-evolution are also supported. This connector is compatible with Azure Synapse Analytics SQL pools.
Note
- This Quick Start is for the fully-managed Confluent Cloud connector. If you are installing the connector locally for Confluent Platform, see Azure Synapse Analytics Sink connector for Confluent Platform.
- If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
Features¶
The Azure Synapse Analytics Sink connector supports the following features:
At least once delivery: This connector guarantees that records from the Kafka topic are delivered at least once.
Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
Supports auto-creation and auto-evolution:
- If Auto create table (auto.create) is enabled, the connector can create the destination table if it is missing. The connector uses the record schema as the basis for the table definition, and the table is created with records consumed from the topic.
- If Auto add columns (auto.evolve) is enabled, the connector can perform limited auto-evolution by issuing an ALTER command on the destination table for a new record with a missing column. The connector only adds a column for new records. Existing records will have "null" as the value for the new column.
Important
For backward-compatible schema evolution, new fields in record schemas must be optional or have a default value (see the Avro sketch after this list).
Supported data formats: The connector supports Avro, JSON Schema (JSON_SR), and Protobuf input formats. Schema Registry must be enabled to use these Schema Registry-based formats.
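For example, a backward-compatible Avro value schema change adds the new field as optional with a default, so rows written before the column existed read back as null. A minimal sketch (the record name and the email field are illustrative):
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "id", "type": "int"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}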
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see Azure Synapse Analytics Sink Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
Quick Start¶
Use this quick start to get up and running with the fully-managed Azure Synapse Analytics Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events.
- Prerequisites
- Authorized access to a Confluent Cloud cluster on Microsoft Azure (Azure).
- An authorized SQL data warehouse user and password for the connector configuration.
- The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
- Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
- At least one source Kafka topic must exist in your Confluent Cloud cluster before creating the sink connector.
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster¶
See the Quick Start for Confluent Cloud for installation instructions.
Step 2: Add a connector¶
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector¶
Click the Azure Synapse Analytics Sink connector card.
Step 4: Enter the connector details¶
Note
- Ensure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
At the Add Azure Synapse Analytics Sink Connector screen, complete the following:
If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.
To create a new topic, click +Add new topic.
- Select the way you want to provide Kafka Cluster credentials. You can
choose one of the following options:
- My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
- Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
- Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
- Click Continue.
- Enter your Azure SQL Server Name. The SQL data warehouse server name is in the following format: <my_server_name>.database.windows.net.
- Enter the login for the dedicated SQL pool (or SQL database) in the SQL login field.
- For Login password, enter the password associated with the SQL login.
- Enter the name of the dedicated SQL pool in the Dedicated SQL pool field.
Note
Configuration properties that are not shown in the Cloud Console use the default values. See Configuration Properties for all property values and definitions.
Select the Input Kafka record value format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema) or Protobuf).
Show advanced configurations
Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a source connector registers its schemas only in that context, and a sink connector reads schemas only from that context. For more information about setting up a schema context, see What are schema contexts and when should you use them?.
Table name format: A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name. For example, to create a table named kafka-orders based on a Kafka topic named orders, you would enter kafka-${topic} in this field.
Database timezone: Name of the JDBC timezone that should be used in the connector when inserting time-based values.
Batch size: Specifies how many records to attempt to batch together for insertion into the destination table, when possible.
Auto create table: Whether to automatically create the destination table if it is found to be missing, by issuing CREATE.
Auto add columns: Whether to automatically add columns to the table schema when they are found to be missing relative to the record schema, by issuing ALTER.
When to quote SQL identifiers: When to quote table names, column names, and other identifiers in SQL statements.
Fields included: List of comma-separated record value field names. If empty, all fields from the record value are used. Otherwise, used to filter to the desired fields.
See Configuration Properties for all property values and definitions.
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.
- To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.
- Click Continue.
Verify the connection details.
Click Launch.
The status for the connector should go from Provisioning to Running.
Step 5: Check for records¶
Verify that data is exported from Kafka to the data warehouse.
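For example, assuming the connector consumes the example topic pageviews with the default table name format ${topic}, a query like the following run against the dedicated SQL pool shows the newly written rows (the table name is illustrative):
SELECT TOP 10 * FROM pageviews;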
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
Using the Confluent CLI¶
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties¶
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
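For example, the plugin name for this connector is AzureSqlDwSink (the same value used for connector.class in the configuration file in the next step):
confluent connect plugin describe AzureSqlDwSink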
Step 3: Create the connector configuration file¶
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
  "name": "AzureSqlDwSinkConnector_0",
  "config": {
    "topics": "pageviews",
    "input.data.format": "AVRO",
    "connector.class": "AzureSqlDwSink",
    "name": "AzureSqlDwSinkConnector_0",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "<my-kafka-api-key>",
    "kafka.api.secret": "<my-kafka-api-secret>",
    "azure.sql.dw.server.name": "azure-sql-dw-sink.database.windows.net",
    "azure.sql.dw.user": "<db_user>",
    "azure.sql.dw.password": "**************",
    "azure.sql.dw.database.name": "<db_name>",
    "db.timezone": "UTC",
    "auto.create": "true",
    "auto.evolve": "true",
    "tasks.max": "1"
  }
}
Note the following property definitions:
"name": Sets a name for your new connector.
"topics": Enter the topic name or a comma-separated list of topic names.
"input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, and PROTOBUF. You must have Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
"connector.class": Identifies the connector plugin name.
"kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:
confluent iam service-account list
For example:
confluent iam service-account list

   Id     | Resource ID |       Name        |    Description
+---------+-------------+-------------------+--------------------
  123456  | sa-l1r23m   | sa-1              | Service account 1
  789101  | sa-l4d56p   | sa-2              | Service account 2

"azure.sql.dw.<...>": Enter the Azure SQL data warehouse connection details. Note that the Azure SQL data warehouse server name is in this format: <my_server_name>.database.windows.net.
"db.timezone": Enter a valid database timezone. Defaults to UTC.
"auto.create": If set to true, the connector creates the destination table if it is missing. The connector uses the record schema as the basis for the table definition. The table is created with records consumed from the topic.
"auto.evolve": If set to true, the connector can perform limited auto-evolution. The connector issues an ALTER command on the destination table for a new record with a missing column. The connector only adds a column for new records. Existing records will have "null" as the value for the new column.
"tasks.max": Enter the maximum number of tasks for the connector to use. More tasks may improve performance.
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
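For example, to mask a sensitive value field before it is written to the warehouse, you could add an SMT to the connector configuration file. This is a sketch using the Kafka Connect MaskField transform; the transform alias and the email field name are illustrative, and you should confirm the transform is supported for this connector in the SMT documentation:
"transforms": "maskEmail",
"transforms.maskEmail.type": "org.apache.kafka.connect.transforms.MaskField$Value",
"transforms.maskEmail.fields": "email"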
See Configuration Properties for all property values and descriptions.
Step 4: Load the properties file and create the connector¶
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file azure-synapse-analytics-sink-config.json
Example output:
Created connector AzureSqlDwSinkConnector_0 lcc-do6vzd
Step 5: Check the connector status¶
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
ID | Name | Status | Type | Trace
+------------+----------------------------+---------+------+-------+
lcc-do6vzd | AzureSqlDwSinkConnector_0 | RUNNING | sink | |
Step 6: Check for records¶
Verify that data is exported from Kafka to the data warehouse.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
Configuration Properties¶
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
Which topics do you want to get data from?¶
topics
Identifies the topic name or a comma-separated list of topic names.
- Type: list
- Importance: high
Schema Config¶
schema.context.name
Add a schema context name. A schema context represents an independent scope in Schema Registry. It works like a separate sub-registry tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema context configured for Schema Registry in your Confluent Cloud environment.
- Type: string
- Default: default
- Importance: medium
Input messages¶
input.data.format
Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, or PROTOBUF. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.
- Type: string
- Importance: high
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
kafka.service.account.id
The service account that is used to generate the API keys to communicate with the Kafka cluster.
- Type: string
- Importance: high
kafka.api.secret
Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
Azure SQL Data Warehouse¶
azure.sql.dw.server.name
Full Azure SQL server name in a valid format. For example, <server-name>.database.windows.net.
- Type: string
- Importance: high
azure.sql.dw.user
Login for the dedicated SQL pool (or SQL database).
- Type: string
- Importance: high
azure.sql.dw.password
Password associated with the SQL login.
- Type: password
- Importance: high
azure.sql.dw.database.name
Name of the dedicated SQL pool (or SQL database).
- Type: string
- Importance: high
Data Mapping¶
table.name.format
A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name.
For example, kafka_${topic} for the topic 'orders' will map to the table name 'kafka_orders'.
- Type: string
- Default: ${topic}
- Importance: medium
fields.whitelist
List of comma-separated record value field names. If empty, all fields from the record value are used. Otherwise, used to filter to the desired fields.
- Type: list
- Importance: medium
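For example, to write only the user_id and page_url value fields to the destination table (the field names are illustrative), add the following to the connector configuration file:
"fields.whitelist": "user_id,page_url"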
db.timezone
Name of the JDBC timezone that should be used in the connector when inserting time-based values. Defaults to UTC.
- Type: string
- Default: UTC
- Importance: medium
Writes¶
batch.size
Specifies how many records to attempt to batch together for insertion into the destination table, when possible.
- Type: int
- Default: 3000
- Valid Values: [1,…,3000]
- Importance: medium
SQL/DDL Support¶
auto.create
Whether to automatically create the destination table based on the record schema if it is found to be missing, by issuing CREATE.
- Type: boolean
- Default: false
- Importance: medium
auto.evolve
Whether to automatically add columns to the table schema when they are found to be missing relative to the record schema, by issuing ALTER.
- Type: boolean
- Default: false
- Importance: medium
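To illustrate both settings, assume records for the topic orders initially have an int32 id field and a string name field, and an optional email field is added later. The DDL the connector issues is roughly equivalent to the following sketch; the exact type mapping and identifier quoting depend on the record schema and the quote.sql.identifiers setting:
-- auto.create=true: create the missing destination table from the record schema (sketch)
CREATE TABLE [orders] (
  [id] INT NOT NULL,
  [name] NVARCHAR(255) NULL
);

-- auto.evolve=true: add a column for a new optional field (sketch)
ALTER TABLE [orders] ADD [email] NVARCHAR(255) NULL;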
quote.sql.identifiers
When to quote table names, column names, and other identifiers in SQL statements. For backward compatibility, the default is ‘always’.
- Type: string
- Default: ALWAYS
- Valid Values: ALWAYS, NEVER
- Importance: medium
Consumer configuration¶
max.poll.interval.ms
The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).
- Type: long
- Default: 300000 (5 minutes)
- Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
- Importance: low
max.poll.records
The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.
- Type: long
- Default: 500
- Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
- Importance: low
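For example, if the warehouse ingests batches slowly, you might raise max.poll.interval.ms and lower max.poll.records by adding both properties to the connector configuration file (the values are illustrative and must stay within the valid ranges above):
"max.poll.interval.ms": "900000",
"max.poll.records": "100"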
Number of tasks for this connector¶
tasks.max
Maximum number of tasks for the connector.
- Type: int
- Valid Values: [1,…]
- Importance: high
Next Steps¶
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.