Azure Cosmos DB Source V2 Connector for Confluent Cloud¶
The fully-managed Azure Cosmos DB Source V2 connector for Confluent Cloud reads records from an Azure Cosmos database and writes data to Apache Kafka® topics in Confluent Cloud.
Note
If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
V2 improvements¶
The V2 connector includes the following improvements:
- Supports multiple containers per task, making read performance more efficient.
- Utilizes the change feed pull model, which is simpler and improves efficiency in the Kafka environment.
- Supports enhanced throughput control for managing data ingestion rates.
- Integrated metrics collection to enable better monitoring and debugging, utilizing the capabilities of the Azure SDK.
- Offers improved metadata handling for accurate offset tracking and seamless scalability.
- Supports service principal authentication using client secrets.
Features¶
The Azure Cosmos DB Source V2 connector supports the following features:
- Topic to Container mapping: The connector can map a container (table) to an individual Kafka topic (for example, topic1#con1,topic2#con2).
- At least once delivery: This connector guarantees that records are delivered to the Kafka topic at least once.
- Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance. Note that one container (table) can be handled by one task.
- Offset management capabilities: Supports offset management. For more information, see Manage custom offsets.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see Azure Cosmos DB Source Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
Manage custom offsets¶
You can manage the offsets for this connector. Offsets provide information on the point in the system from which the connector is accessing data. For more information, see Manage Offsets for Fully-Managed Connectors in Confluent Cloud.
To manage offsets:
- Manage offsets using Confluent Cloud APIs. For more information, see Cluster API reference.
Note
The Azure Cosmos DB Source V2 connector allows reading from multiple containers using a single connector. In the following examples, the connector is reading from two different containers and writing to two different topics. Therefore, the offset is an array with two elements, each of which specifies a container and database name.
To get the current offset, make a GET
request that specifies the environment, Kafka cluster, and connector name.
GET /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets
Host: https://api.confluent.cloud
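For example, the request can be issued with curl. This is a minimal sketch: it assumes a Confluent Cloud API key and secret that are authorized for the Connect API (passed with HTTP Basic authentication) and uses placeholder environment, cluster, and connector identifiers.
# Fetch the current offsets for the connector (all IDs below are placeholders)
curl --request GET \
  --url "https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-abc123/connectors/CosmosDbSourceV2Connector_0/offsets" \
  --user "$CLOUD_API_KEY:$CLOUD_API_SECRET"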
Response:
Successful calls return HTTP 200
with a JSON payload that describes the offset.
{
"id": "lcc-example123",
"name": "{connector_name}",
"offsets": [
{
"partition": {
"Container": "container2",
"DatabaseName": "my-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"24764\""
}
},
{
"partition": {
"Container": "container1",
"DatabaseName": "my-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"18460\""
}
}
],
"metadata": {
"observed_at": "2024-03-28T17:57:48.139635200Z"
}
}
Responses include the following information:
- The position of the latest offset.
- The observed time of the offset in the metadata portion of the payload. The observed_at time indicates a snapshot in time for when the API retrieved the offset. A running connector is always updating its offsets. Use observed_at to get a sense of the gap between real time and the time at which the request was made. By default, offsets are observed every minute. Calling GET repeatedly fetches more recently observed offsets.
- Information about the connector.
To update the offset, make a POST
request that specifies the environment, Kafka cluster, and connector
name. Include a JSON payload that specifies the new offsets and a patch type.
POST /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets/request
Host: https://api.confluent.cloud
{
"type": "PATCH",
"offsets": [
{
"partition": {
"Container": "container2",
"DatabaseName": "my-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"20000\""
}
},
{
"partition": {
"Container": "container1",
"DatabaseName": "my-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"18000\""
}
}
]
}
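For example, you can save the payload above to a file (offsets-patch.json is a hypothetical name) and submit it with curl. The same assumptions apply as in the earlier sketch: a Confluent Cloud API key and secret authorized for the Connect API and placeholder resource IDs.
# Submit the PATCH request with the JSON payload shown above
curl --request POST \
  --url "https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-abc123/connectors/CosmosDbSourceV2Connector_0/offsets/request" \
  --user "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  --header "Content-Type: application/json" \
  --data @offsets-patch.json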
Considerations:
- You can only make one offset change at a time for a given connector.
- This is an asynchronous request. To check the status of this request, you must use the check offset status API. For more information, see Get the status of an offset request.
- For source connectors, the connector attempts to read from the position defined by the requested offsets.
Response:
Successful calls return HTTP 202 Accepted
with a JSON payload that describes the offset.
{
"id": "lcc-example123",
"name": "{connector_name}",
"offsets": [
{
"partition": {
"Container": "container2",
"DatabaseName": "my-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"20000\""
}
},
{
"partition": {
"Container": "container1",
"DatabaseName": "my-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"18000\""
}
}
],
"requested_at": "2024-03-28T17:58:45.606796307Z",
"type": "PATCH"
}
Responses include the following information:
- The requested position of the offsets in the source.
- The time of the request to update the offset.
- Information about the connector.
To delete the offset, make a POST
request that specifies the environment, Kafka cluster, and connector
name. Include a JSON payload that specifies the delete type.
POST /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets/request
Host: https://api.confluent.cloud
{
"type": "DELETE"
}
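As a sketch with curl, under the same assumptions as the earlier examples (a Confluent Cloud API key and secret authorized for the Connect API, placeholder resource IDs):
# Request deletion of this connector's offsets
curl --request POST \
  --url "https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-abc123/connectors/CosmosDbSourceV2Connector_0/offsets/request" \
  --user "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  --header "Content-Type: application/json" \
  --data '{"type": "DELETE"}'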
Considerations:
- This is an asynchronous request. To check the status of this request, you must use the check offset status API. For more information, see Get the status of an offset request.
- Do not issue delete and patch requests at the same time.
- If the offset you intend to delete is not found, the connector continues from where it left off.
- If you want to start reading from the beginning with the Azure Cosmos DB Source V2 connector, you must update the offset to set recordContinuationToken to 0.
Response:
Successful calls return HTTP 202 Accepted
with a JSON payload that describes the result.
{
"id": "lcc-example123",
"name": "{connector_name}",
"offsets": [],
"requested_at": "2024-03-28T17:59:45.606796307Z",
"type": "DELETE"
}
Responses include the following information:
- Empty offsets.
- The time of the request to delete the offset.
- Information about Kafka cluster and connector.
- The type of request.
To get the status of a previous offset request, make a GET
request that specifies the environment, Kafka cluster, and connector
name.
GET /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets/request/status
Host: https://api.confluent.cloud
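As a sketch with curl (same assumptions as the earlier examples):
# Check the status of the most recent offset PATCH or DELETE request
curl --request GET \
  --url "https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-abc123/connectors/CosmosDbSourceV2Connector_0/offsets/request/status" \
  --user "$CLOUD_API_KEY:$CLOUD_API_SECRET"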
Considerations:
- The status endpoint always shows the status of the most recent PATCH/DELETE operation.
Response:
Successful calls return HTTP 200
with a JSON payload that describes the result. The following is an example
of an applied patch.
{
"request": {
"id": "lcc-example123",
"name": "{connector_name}",
"offsets": [
{
"partition": {
"Container": "container2",
"DatabaseName": "my-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"20000\""
}
},
{
"partition": {
"Container": "container1",
"DatabaseName": "smy-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"18000\""
}
}
],
"requested_at": "2024-03-28T17:58:45.606796307Z",
"type": "PATCH"
},
"status": {
"phase": "APPLIED",
"message": "The Connect framework-managed offsets for this connector have been altered successfully. However, if this connector manages offsets externally, they will need to be manually altered in the system that the connector uses."
},
"previous_offsets": [
{
"partition": {
"Container": "container2",
"DatabaseName": "my-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"24764\""
}
},
{
"partition": {
"Container": "container1",
"DatabaseName": "my-cosmos-db"
},
"offset": {
"recordContinuationToken": "\"18460\""
}
}
],
"applied_at": "2024-03-28T17:58:48.079141883Z"
}
Responses include the following information:
- The original request, including the time it was made.
- The status of the request: applied, pending, or failed.
- The time you issued the status request.
- The previous offsets. These are the offsets that the connector last updated prior to updating the offsets. Use these to try to restore the state of your connector if a patch update causes your connector to fail or to return a connector to its previous state after rolling back.
JSON payload¶
The table below offers a description of the unique fields in the JSON payload for managing offsets of the CosmosDB Source V2 connector.
Field | Definition | Required/Optional |
---|---|---|
Container | The value from azure.cosmos.source.containers.topicMap in the connector configuration, which uses the format topic#container. For example, topic2#container2. | Required |
DatabaseName | The value from azure.cosmos.source.database.name in the connector configuration. | Required |
recordContinuationToken | The last processed change feed continuation token, or the point in the change feed from which to begin processing. | Required |
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud Azure Cosmos DB Source V2 connector. The quick start provides the basics of selecting the connector and configuring it to stream events from a database to Kafka.
- Prerequisites
Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud.
The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
Authorized access to read data from Azure Cosmos DB. For more information, see Secure access to data in Azure Cosmos DB.
The Azure Cosmos DB database is configured to use the Core (SQL) API.
Core (SQL) API selection¶
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster¶
See the Quick Start for Confluent Cloud for installation instructions.
Step 2: Add a connector¶
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Enter the connector details¶
Note
- Make sure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
At the Add Azure Cosmos DB Source V2 Connector screen, complete the following:
- Select the way you want to provide Kafka Cluster credentials. You can
choose one of the following options:
- My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
- Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
- Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
Note
Freight clusters support only service accounts for Kafka authentication.
- Click Continue.
- Add the following database connection details:
  - Cosmos Endpoint: The Azure Cosmos database endpoint URL. For example, https://confluent-azure-cosmosdb.documents.azure.com:443/.
  - Cosmos Database name: The name of your Cosmos database.
  - Cosmos Connection Auth Type: Authentication details for the Cosmos endpoint. Defaults to MasterKey, which authenticates using the Cosmos DB Account Key. If you use ServicePrincipal, enter the following details:
    - ClientID/ApplicationID: Enter the clientID/applicationID of the account.
    - Client secret/password: Enter the client secret/password of the account.
    - TenantID: Enter the tenantID of the account.
  - Cosmos DB Account Key: The Cosmos connection master (primary) key.
- Click Continue.
Add the following details:
- Select the output record value format (data going to the Kafka topic): AVRO, JSON, JSON_SR (JSON Schema), or PROTOBUF. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf). See Schema Registry Enabled Environments for additional information.
- Topic-Container map: A comma-delimited list of Kafka topics mapped to Cosmos containers. For example: topic1#con1,topic2#con2. The field accepts the regex pattern \s*[\w.-]+ *#[^,]+(, *[\w.-]+ *#[^,]+)*.
Show advanced configurations
Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.
Whether to ignore the default for nullable fields when the value is null: When set to True, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value.
Auto-restart policy
Enable Connector Auto-restart: When set to True, the connector restarts automatically on user-actionable errors.
Schema Configuration
Value Subject Name Strategy: Select a value from the dropdown to determine how to construct the subject name under which the value schema is registered with Schema Registry.
Account details
The Azure environment of the Cosmos DB account: Specify the particular Azure cloud environment in which your Cosmos DB account resides. Defaults to azure.
Use gateway mode: A flag to indicate whether to use gateway mode. The default value is false, which means the SDK uses direct mode.
Preferred regions list: Enter the preferred regions list to be used for a multi-region Cosmos DB account. This is a comma-separated value (for example, [East US, West US] or East US, West US); the provided preferred regions are used as a hint. You should use a Kafka cluster colocated with your Cosmos DB account and pass the Kafka cluster region as the preferred region. For more information, see the list of Azure regions.
Container details
Containers included: Enter the names of the containers that you want the connector to include. This property is ignored if the Include all containers property is set to True.
Include all containers: A flag to indicate whether to read from all containers. The default value is False. If set to True, the Containers included property is ignored.
Metadata details
Metadata polling delay in ms: Enter the duration (in milliseconds) that sets how often the connector checks for metadata changes (including container splits/merges and added, removed, or recreated containers). When changes are detected, the connector reconfigures the tasks. Default is 5 minutes (300000 ms).
The metadata storage name: Enter the resource name of the metadata storage. If the metadata storage type is Kafka topic, this configuration refers to the Kafka topic name; the metadata topic is created if it does not already exist, otherwise the pre-created topic is used. If the metadata storage type is Cosmos, this configuration refers to the container name; for MasterKey auth, the container is created with AutoScale and 4000 RU if it does not already exist, while ServicePrincipal auth requires the container to be created ahead of time.
The storage source of the metadata: Choose the storage type of the metadata. Two types are supported: Cosmos and Kafka.
Message key details
Kafka record message key enabled: Whether or not to set a Kafka record message key. Defaults to True.
Kafka message key field: The document field to use for the Kafka message key if the default key id is not used.
ChangeFeed details
ChangeFeed mode (LatestVersion or AllVersionsAndDeletes): Select the ChangeFeed mode: LatestVersion or AllVersionsAndDeletes.
Change feed start from: Enter the time to start the ChangeFeed (Now, Beginning, or a certain point in time (UTC), for example 2020-02-10T14:15:03). The default value is Beginning.
The maximum number hint of documents returned in a single request: Enter the maximum number of documents to be returned in a single change feed request. The number of items received might be higher than the specified value if multiple items are changed by the same transaction. The default is 1000.
Throughput control details
A flag to indicate whether throughput control is enabled: When set to True, throughput control is enabled. The default value is False.
For all property and value definitions, see Configuration Properties.
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.
- To change the number of tasks, use the Range Slider to select the desired number of tasks.
- Click Continue.
Verify the connection details by previewing the running configuration.
After you’ve validated that the properties are configured to your satisfaction, click Launch.
The status for the connector should go from Provisioning to Running.
Step 4: Check for records¶
Verify that data is being produced in Kafka.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Using the Confluent CLI¶
To set up and run the connector using the Confluent CLI, complete the following steps.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties¶
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
Step 3: Create the connector configuration file¶
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
"name": "CosmosDbSourceV2Connector_0",
"config": {
"connector.class": "CosmosDbSourceV2",
"name": "CosmosDbSourceV2Connector_0",
"tasks.max": "1",
"output.data.format": "JSON_SR",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "****************",
"kafka.api.secret": "**********************************",
"azure.cosmos.account.endpoint":"{endpoint}",
"azure.cosmos.account.key":"{masterKey}",
"azure.cosmos.source.database.name":"{database}",
"azure.cosmos.source.containers.includedList":"{container}",
"azure.cosmos.source.containers.includeAll": "false",
"azure.cosmos.source.containers.topicMap":"{topic}#{container}"
}
}
Note the following property definitions:
"connector.class"
: Identifies the connector plugin name."name"
: Sets a name for your new connector."connect.cosmos.containers.topicmap"
: Enter a comma-delimited list of Kafka topics mapped to Cosmos containers. For example:topic1#con1,topic2#con2
. The field accepts regex pattern*[\\w.-]+ *#[^,]+(, *[\\w.-]+ *#[^,]+)*
."output.data.format"
(data going to the Kafka topic): Supports AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information."connect.cosmos.messagekey.enabled"
: Whether or not to set a Kafka message key. Defaults toid
. To set a different field for the message key, add the configuration propertyconnect.cosmos.messagekey.field
.
"kafka.auth.mode"
: Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNT
orKAFKA_API_KEY
(the default). To use an API key and secret, specify the configuration propertieskafka.api.key
andkafka.api.secret
, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>
. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list

     Id   | Resource ID | Name |    Description
+---------+-------------+------+--------------------+
   123456 | sa-l1r23m   | sa-1 | Service account 1
   789101 | sa-l4d56p   | sa-2 | Service account 2
"tasks.max"
: Enter the maximum number of tasks for the connector to use. More tasks may improve performance.
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
See Configuration Properties for all property values and descriptions.
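If you authenticate to Cosmos DB with a service principal rather than an account key, the configuration uses the azure.cosmos.auth.* and azure.cosmos.account.tenantId properties described in Configuration Properties. The following is a minimal sketch only: the file name, connector name, and all brace-delimited values are placeholders, and it assumes a service account for Kafka credentials.
# Sketch: write a connector configuration that uses ServicePrincipal authentication,
# then load it as shown in the next step. All placeholder values must be replaced.
cat > azure-cosmos-source-v2-sp-config.json <<'EOF'
{
  "name": "CosmosDbSourceV2Connector_1",
  "config": {
    "connector.class": "CosmosDbSourceV2",
    "name": "CosmosDbSourceV2Connector_1",
    "tasks.max": "1",
    "output.data.format": "JSON_SR",
    "kafka.auth.mode": "SERVICE_ACCOUNT",
    "kafka.service.account.id": "{service-account-resource-ID}",
    "azure.cosmos.account.endpoint": "{endpoint}",
    "azure.cosmos.auth.type": "ServicePrincipal",
    "azure.cosmos.auth.aad.clientId": "{clientId}",
    "azure.cosmos.auth.aad.clientSecret": "{clientSecret}",
    "azure.cosmos.account.tenantId": "{tenantId}",
    "azure.cosmos.source.database.name": "{database}",
    "azure.cosmos.source.containers.includedList": "{container}",
    "azure.cosmos.source.containers.includeAll": "false",
    "azure.cosmos.source.containers.topicMap": "{topic}#{container}"
  }
}
EOF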
Step 4: Load the properties file and create the connector¶
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file azure-cosmos-source-v2-config.json
Example output:
Created connector CosmosDbSourceV2Connector_0 lcc-do6vzd
Step 5: Check the connector status¶
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
ID | Name | Status | Type | Trace
+------------+----------------------------+---------+--------+-------+
lcc-do6vzd |CosmosDbSourceV2Connector_0 | RUNNING | Source | |
Step 6: Check for records¶
Verify that data is being produced in Kafka.
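One quick way to spot-check the output, sketched below with the Confluent CLI, is to consume the topic mapped to your container; the topic name is a placeholder, and schema-based formats may require an additional value-format option for deserialization.
# Consume records from the beginning of the topic mapped to your container
confluent kafka topic consume --from-beginning {topic}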
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Configuration Properties¶
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Schema Config¶
schema.context.name
Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.
- Type: string
- Default: default
- Importance: medium
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
kafka.service.account.id
The Service Account that will be used to generate the API keys to communicate with Kafka Cluster.
- Type: string
- Importance: high
kafka.api.secret
Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
Connect to your Cosmos DB database¶
azure.cosmos.account.endpoint
Cosmos endpoint URL. For example: https://connect-cosmosdb.documents.azure.com:443/.
- Type: string
- Importance: high
azure.cosmos.source.database.name
Name of the database to read from.
- Type: string
- Importance: high
Account details¶
azure.cosmos.account.environment
The Azure environment of the Cosmos DB account: Azure, AzureChina, AzureUsGovernment, AzureGermany.
- Type: string
- Default: AZURE
- Valid Values: AZURE, AZURE_CHINA, AZURE_GERMANY, AZURE_US_GOVERNMENT
- Importance: medium
azure.cosmos.mode.gateway
Flag to indicate whether to use gateway mode. By default it is false, which means the SDK uses direct mode. For more information, see https://learn.microsoft.com/azure/cosmos-db/nosql/sdk-connection-modes.
- Type: boolean
- Default: false
- Importance: low
azure.cosmos.preferredRegionList
Preferred regions list to be used for a multi-region Cosmos DB account. This is a comma-separated value (for example, [East US, West US] or East US, West US); the provided preferred regions are used as a hint. You should use a Kafka cluster colocated with your Cosmos DB account and pass the Kafka cluster region as the preferred region. See the list of Azure regions: https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.locationnames?view=azure-dotnet&preserve-view=true.
- Type: string
- Importance: low
azure.cosmos.auth.type
Cosmos connection auth type
- Type: string
- Default: MasterKey
- Valid Values: MasterKey, ServicePrincipal
- Importance: high
azure.cosmos.account.key
Cosmos DB account key (only required in case of auth.type as MasterKey).
- Type: password
- Importance: medium
azure.cosmos.auth.aad.clientId
The clientId/ApplicationId of the service principal. Required for ServicePrincipal authentication.
- Type: string
- Importance: medium
azure.cosmos.auth.aad.clientSecret
The client secret/password of the service principal. Required for ServicePrincipal authentication.
- Type: password
- Importance: medium
azure.cosmos.account.tenantId
The tenantId of the Cosmos DB account. Required for ServicePrincipal authentication.
- Type: string
- Default: “”
- Importance: medium
Output messages¶
output.data.format
Sets the output Kafka record value format. Valid entries are AVRO, JSON_SR, or PROTOBUF. Note that you need to have Confluent Cloud Schema Registry configured when using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.
- Type: string
- Default: JSON_SR
- Valid Values: AVRO, JSON_SR, PROTOBUF
- Importance: high
Container details¶
azure.cosmos.source.containers.topicMap
A comma delimited list of Kafka topics mapped to Cosmos containers. For example: topic1#con1,topic2#con2. By default, the container name is used as the name of the Kafka topic to publish data to, but you can use this property to override the default configuration.
- Type: string
- Valid Values: Must match the regex
\s*[\w.-]+ *#[^,]+(, *[\w.-]+ *#[^,]+)*
- Importance: medium
azure.cosmos.source.containers.includeAll
Flag to indicate whether to read from all containers.
- Type: boolean
- Default: false
- Importance: high
azure.cosmos.source.containers.includedList
Containers included. This config is ignored if azure.cosmos.source.containers.includeAll is true.
- Type: string
- Importance: medium
Throughput control details¶
azure.cosmos.throughputControl.enabled
A flag to indicate whether throughput control is enabled.
- Type: boolean
- Default: false
- Importance: medium
azure.cosmos.throughputControl.auth.type
Two auth types are currently supported: MasterKey (PrimaryReadWriteKeys, SecondReadWriteKeys, PrimaryReadOnlyKeys, SecondReadOnlyKeys) and ServicePrincipal.
- Type: string
- Default: MasterKey
- Valid Values: MasterKey, ServicePrincipal
- Importance: low
azure.cosmos.throughputControl.account.key
Cosmos DB throughput control account key (only required in case of throughputControl.auth.type as MasterKey)
- Type: password
- Importance: low
azure.cosmos.throughputControl.auth.aad.clientId
The clientId/applicationId of the service principal. Required for ServicePrincipal authentication.
- Type: string
- Importance: low
azure.cosmos.throughputControl.auth.aad.clientSecret
The client secret/password of the service principal. Required for ServicePrincipal authentication.
- Type: password
- Importance: low
azure.cosmos.throughputControl.account.tenantId
The tenantId of the Cosmos DB account. Required for ServicePrincipal authentication.
- Type: string
- Importance: low
azure.cosmos.throughputControl.account.environment
The Azure environment of the Cosmos DB account: Azure, AzureChina, AzureUsGovernment, AzureGermany.
- Type: string
- Default: AZURE
- Valid Values: AZURE, AZURE_CHINA, AZURE_GERMANY, AZURE_US_GOVERNMENT
- Importance: low
azure.cosmos.throughputControl.account.endpoint
Cosmos DB throughput control account endpoint uri.
- Type: string
- Importance: low
azure.cosmos.throughputControl.mode.gateway
Flag to indicate whether to use gateway mode. By default it is false, which means the SDK uses direct mode. For more information, see https://learn.microsoft.com/azure/cosmos-db/nosql/sdk-connection-modes.
- Type: boolean
- Default: false
- Importance: low
azure.cosmos.throughputControl.preferredRegionList
Preferred regions list to be used for a multi-region Cosmos DB account. This is a comma-separated value (for example, [East US, West US] or East US, West US); the provided preferred regions are used as a hint. You should use a Kafka cluster colocated with your Cosmos DB account and pass the Kafka cluster region as the preferred region. See the list of Azure regions: https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.locationnames?view=azure-dotnet&preserve-view=true.
- Type: string
- Importance: low
azure.cosmos.throughputControl.group.name
Throughput control group name. Since a customer is allowed to create many groups for a container, the name should be unique.
- Type: string
- Importance: medium
azure.cosmos.throughputControl.targetThroughput
Throughput control group target throughput. The value should be larger than 0.
- Type: int
- Valid Values: [1,…]
- Importance: medium
azure.cosmos.throughputControl.targetThroughputThreshold
Throughput control group target throughput threshold. The value should be between (0,1].
- Type: double
- Importance: medium
azure.cosmos.throughputControl.priorityLevel
Throughput control group priority level. The value can be None, High or Low.
- Type: string
- Default: None
- Valid Values: High, Low, None
- Importance: medium
azure.cosmos.throughputControl.globalControl.database.name
Database which will be used for throughput global control.
- Type: string
- Importance: medium
azure.cosmos.throughputControl.globalControl.container.name
Container which will be used for throughput global control.
- Type: string
- Importance: medium
azure.cosmos.throughputControl.globalControl.renewIntervalInMS
This controls how often the client updates its own throughput usage and adjusts its throughput share based on the throughput usage of other clients. Default is 5s; the allowed minimum value is 5s.
- Type: int
- Default: 5000
- Valid Values: [5000,…]
- Importance: low
azure.cosmos.throughputControl.globalControl.expireIntervalInMS
This controls how quickly an offline client is detected, so that its throughput share can be taken by other clients. Default is 11s; the allowed minimum value is 2 * renewIntervalInMS + 1.
- Type: int
- Importance: low
Number of tasks for this connector¶
tasks.max
Maximum number of tasks for the connector.
- Type: int
- Valid Values: [1,…]
- Importance: high
Metadata details¶
azure.cosmos.source.metadata.poll.delay.ms
Indicates how often to check for metadata changes (including container splits/merges and added, removed, or recreated containers). When changes are detected, the connector reconfigures the tasks. Default is 5 minutes.
- Type: int
- Default: 300000 (5 minutes)
- Valid Values: [1,…]
- Importance: medium
azure.cosmos.source.metadata.storage.name.prefix
The resource name prefix of the metadata storage. If the metadata storage type is Kafka topic, this config refers to the Kafka topic name; the metadata topic is created if it does not already exist, otherwise the pre-created topic is used. If the metadata storage type is Cosmos, this config refers to the container name; for MasterKey auth, the container is created with AutoScale and 4000 RU if it does not already exist, while ServicePrincipal auth requires the container to be created ahead of time.
- Type: string
- Default: cosmos.metadata.topic
- Importance: medium
azure.cosmos.source.metadata.storage.type
The storage type of the metadata. Two types are supported: Cosmos, Kafka.
- Type: string
- Default: Kafka
- Valid Values: Cosmos, Kafka
- Importance: medium
Message key details¶
azure.cosmos.source.messageKey.enabled
Whether to set the Kafka record message key.
- Type: boolean
- Default: true
- Importance: medium
azure.cosmos.source.messageKey.field
The document field to use as the message key.
- Type: string
- Default: id
- Importance: high
ChangeFeed details¶
azure.cosmos.source.changeFeed.mode
ChangeFeed mode (LatestVersion or AllVersionsAndDeletes)
- Type: string
- Default: LatestVersion
- Valid Values: AllVersionsAndDeletes, LatestVersion
- Importance: high
azure.cosmos.source.changeFeed.startFrom
ChangeFeed start from settings (Now, Beginning, or a certain point in time (UTC), for example 2020-02-10T14:15:03). The default value is Beginning.
- Type: string
- Default: Beginning
- Importance: high
azure.cosmos.source.changeFeed.maxItemCountHint
The maximum number of documents returned in a single change feed request. The number of items received might be higher than the specified value if multiple items are changed by the same transaction. The default is 1000.
- Type: int
- Default: 1000
- Valid Values: [1,…]
- Importance: medium
Additional Configs¶
header.converter
The converter class for the headers. This is used to serialize and deserialize the headers of the messages.
- Type: string
- Importance: low
producer.override.compression.type
The compression type for all data generated by the producer.
- Type: string
- Importance: low
producer.override.linger.ms
The producer groups together any records that arrive in between request transmissions into a single batched request. More details can be found in the documentation: https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#linger-ms.
- Type: long
- Valid Values: [100,…,1000]
- Importance: low
value.converter.allow.optional.map.keys
Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.auto.register.schemas
Specify if the Serializer should attempt to register the Schema.
- Type: boolean
- Importance: low
value.converter.connect.meta.data
Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.avro.schema.support
Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.protobuf.schema.support
Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.flatten.unions
Whether to flatten unions (oneofs). Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.index.for.unions
Whether to generate an index suffix for unions. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.struct.for.nulls
Whether to generate a struct variable for null values. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.int.for.enums
Whether to represent enums as integers. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.latest.compatibility.strict
Verify latest subject version is backward compatible when use.latest.version is true.
- Type: boolean
- Importance: low
value.converter.object.additional.properties
Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.nullables
Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.proto2
Whether proto2 optionals are supported. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.scrub.invalid.names
Whether to scrub invalid names by replacing invalid characters with valid characters. Applicable for Avro and Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.use.latest.version
Use latest version of schema in subject for serialization when auto.register.schemas is false.
- Type: boolean
- Importance: low
value.converter.use.optional.for.nonrequired
Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.nullables
Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.raw.primitives
Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
errors.tolerance
Use this property if you would like to configure the connector's error handling behavior. WARNING: This property should be used with CAUTION for SOURCE CONNECTORS as it may lead to data loss. If you set this property to 'all', the connector will not fail on errant records, but will instead log them (and send to DLQ for Sink Connectors) and continue processing. If you set this property to 'none', the connector task will fail on errant records.
- Type: string
- Default: none
- Importance: low
key.converter.key.subject.name.strategy
How to construct the subject name for key schema registration.
- Type: string
- Default: TopicNameStrategy
- Importance: low
value.converter.decimal.format
Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:
BASE64 to serialize DECIMAL logical types as base64 encoded binary data and
NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.
- Type: string
- Default: BASE64
- Importance: low
value.converter.flatten.singleton.unions
Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.
- Type: boolean
- Default: false
- Importance: low
value.converter.ignore.default.for.nullables
When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.
- Type: boolean
- Default: false
- Importance: low
value.converter.reference.subject.name.strategy
Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.
- Type: string
- Default: DefaultReferenceSubjectNameStrategy
- Importance: low
value.converter.replace.null.with.default
Whether to replace fields that have a default value and that are null to the default value. When set to true, the default value is used, otherwise null is used. Applicable for JSON Converter.
- Type: boolean
- Default: true
- Importance: low
value.converter.schemas.enable
Include schemas within each of the serialized values. Input messages must contain schema and payload fields and may not contain additional fields. For plain JSON data, set this to false. Applicable for JSON Converter.
- Type: boolean
- Default: false
- Importance: low
value.converter.value.subject.name.strategy
Determines how to construct the subject name under which the value schema is registered with Schema Registry.
- Type: string
- Default: TopicNameStrategy
- Importance: low
Auto-restart policy¶
auto.restart.on.user.error
Enable connector to automatically restart on user-actionable errors.
- Type: boolean
- Default: true
- Importance: medium
Next Steps¶
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.