Microsoft SQL Server Source (JDBC) Connector for Confluent Cloud¶
Note
If you are installing the connector locally for Confluent Platform, see JDBC Connector (Source and Sink) for Confluent Platform.
The Kafka Connect Microsoft SQL Server Source connector for Confluent Cloud can obtain a snapshot of the existing data in a Microsoft SQL Server database and then monitor and record all subsequent row-level changes to that data. The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data formats. All of the events for each table are recorded in a separate Apache Kafka® topic. The events can then be easily consumed by applications and services. Note that deleted records are not captured.
Features¶
The Microsoft SQL Server Source connector provides the following features:
Topics created automatically: The connector can automatically create Kafka topics. When creating topics, the connector uses the naming convention <topic.prefix><tableName>. Topics are created with the properties topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3.
Insert modes:
timestamp mode is enabled when only a timestamp column is specified when you enter database details.
timestamp+incrementing mode is enabled when both a timestamp column and incrementing column are specified when you enter database details.
Important
- A timestamp column must not be nullable.
- A timestamp column must use datetime2 and not datetime. If the timestamp column uses datetime, the topic may receive numerous duplicates.
Database authentication: password authentication.
SSL support: Supports one-way SSL.
Data formats: The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
Select configuration properties:
db.timezone
poll.interval.ms
batch.max.rows
timestamp.delay.interval.ms
topic.prefix
schema.pattern
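For reference, the following is a minimal sketch of how these properties might appear in a connector configuration. The values shown are the documented defaults or illustrative examples, and the schema pattern dbo is an assumption.
{
  "topic.prefix": "microsoftsql_",
  "db.timezone": "UTC",
  "poll.interval.ms": "5000",
  "batch.max.rows": "100",
  "timestamp.delay.interval.ms": "0",
  "schema.pattern": "dbo"
}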
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see Microsoft SQL Server Source Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Environment Limitations.
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud Microsoft SQL Server Source connector. The quick start provides the basics of selecting the connector and configuring it to obtain a snapshot of the existing data in a Microsoft SQL Server database and then monitoring and recording all subsequent row-level changes.
- Prerequisites
Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
The connector automatically creates Kafka topics using the naming convention <prefix>.<table-name>. Topics are created with the properties topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3. If you want topics with specific settings, create the topics before running this connector (see the example commands after this list).
Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
Public access may be required for your database. See Network Access for details. The example below shows the AWS Management Console when setting up a Microsoft SQL Server database.
Public access enabled¶
Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. For networking considerations, see Networking and DNS Considerations. To use static egress IPs, see Static Egress IP Addresses. The example below shows the AWS Management Console when setting up security group rules for the VPC.
Open inbound traffic¶
Note
See your specific cloud platform documentation for how to configure security rules for your VPC.
A database table timestamp column must not be nullable and must use datetime2 and not datetime.
- Kafka cluster credentials. The following lists the different ways you can provide credentials.
- Enter an existing service account resource ID.
- Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
- Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
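For example, the following Confluent CLI commands sketch how you might pre-create a topic with custom settings and create an API key before configuring the connector. The topic name, partition count, and cluster ID are placeholders for illustration.
# Pre-create a topic with custom settings (the name must match the connector's topic naming convention)
confluent kafka topic create microsoftsql_passengers --partitions 6
# Create an API key and secret for the Kafka cluster (replace lkc-123456 with your cluster ID)
confluent api-key create --resource lkc-123456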
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster.¶
See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.
Step 2: Add a connector.¶
In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector.¶
Click the Microsoft SQL Server Source connector card.
Step 4: Set up the connection.¶
Complete the following and click Continue.
Note
- Make sure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
Enter a connector name.
Select the way you want to provide Kafka Cluster credentials. You can either select a service account resource ID or you can enter an API key and secret (or generate these in the Cloud Console).
Enter a topic prefix. The connector automatically creates Kafka topics using the naming convention <prefix>.<table-name>. Topics are created with the properties topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3. If you want topics with specific settings, create the topics before running this connector.
Add the connection details for the database.
Important
Do not include jdbc:xxxx:// in the connection hostname property. The example below shows a sample host address.
Note that the default option prefer is enabled for SSL mode if no option is selected. When prefer is enabled, the connector attempts to use an encrypted connection to the database server. Options include:
- prefer and require: The connector uses a secure (encrypted) connection. The connector fails if a secure connection cannot be established. These modes do not do Certification Authority (CA) validation.
- verify-ca: Similar to require, but additionally verifies the server TLS certificate against the configured Certificate Authority (CA) certificates. Fails if no valid matching CA certificates are found.
- verify-full: Similar to verify-ca, but also verifies that the server certificate matches the host to which the connection is attempted.
- You use the Trust store button to upload the truststore file that contains the CA information. You must add the Truststore password.
Add the Database details for your database. Review the following notes for more information about field selections.
- Enter a Timestamp column name to enable timestamp mode. This mode uses a timestamp (or timestamp-like) column to detect new and modified rows. This assumes the column is updated with each write, and that values are monotonically incrementing, but not necessarily unique.
- Enter both a Timestamp column name and an Incrementing column name to enable timestamp+incrementing mode. This mode uses two columns, a timestamp column that detects new and modified rows, and a strictly incrementing column which provides a globally unique ID for updates so each row can be assigned a unique stream offset.
- By default, the connector only detects tables with type TABLE from the source database. Use VIEW for virtual tables created from joining one or more tables. Use ALIAS for tables with a shortened or temporary name.
- If you define a schema pattern in your database, you need to enter the Schema pattern to fetch table metadata from the database. Entering "" retrieves table metadata for tables not using a schema; null (default) indicates that the schema name is not used to narrow the search, and all table metadata is fetched regardless of the schema.
Select the Output Kafka record value format (data coming from the connector): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
Enter the number of tasks in use by the connector. Refer to Confluent Cloud connector limitations for additional information.
Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.
See Configuration Properties for all property values and definitions.
Step 5: Launch the connector.¶
Verify the connection details by previewing the running configuration. Once you’ve validated that the properties are configured to your satisfaction, click Launch.
Tip
For information about previewing your connector output, see Connector Data Previews.
Step 6: Check the connector status.¶
The status for the connector should go from Provisioning to Running. It may take a few minutes.
Step 7: Check the Kafka topic.¶
After the connector is running, verify that messages are populating your Kafka topic.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
Using the Confluent CLI¶
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
- Make sure you have all your prerequisites completed.
- The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.
Step 1: List the available connectors.¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: Show the required connector configuration properties.¶
Enter the following command to show the required connector properties:
confluent connect plugin describe <connector-catalog-name>
For example:
confluent connect plugin describe MicrosoftSqlServerSource
Example output:
Following are the required configs:
connector.class
name
kafka.auth.mode
kafka.api.key
kafka.api.secret
topic.prefix
connection.host
connection.port
connection.user
connection.password
db.name
table.whitelist
timestamp.column.name
output.data.format
tasks.max
Step 3: Create the connector configuration file.¶
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
"name" : "confluent-microsoft-sql-source",
"connector.class": "MicrosoftSqlServerSource",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "<my-kafka-api-key>",
"kafka.api.secret" : "<my-kafka-api-secret>",
"topic.prefix" : "microsoftsql_",
"connection.host" : "<my-database-endpoint>",
"connection.port" : "1433",
"connection.user" : "<database-username>",
"connection.password": "<database-password>",
"db.name": "ms-sql-test",
"table.whitelist": "passengers",
"timestamp.column.name": "created_at",
"output.data.format": "JSON",
"db.timezone": "UCT",
"tasks.max" : "1"
}
Note the following property definitions:
"kafka.auth.mode"
: Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNT
orKAFKA_API_KEY
(the default). To use an API key and secret, specify the configuration propertieskafka.api.key
andkafka.api.secret
, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>
. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list Id | Resource ID | Name | Description +---------+-------------+-------------------+------------------- 123456 | sa-l1r23m | sa-1 | Service account 1 789101 | sa-l4d56p | sa-2 | Service account 2
"topic.prefix"
: Enter a topic prefix. The connector automatically creates Kafka topics using the naming convention:<prefix>.<table-name>
. The tables are created with the properties:topic.creation.default.partitions=1
andtopic.creation.default.replication.factor=3
. If you want to create topics with specific settings, please create the topics before running this connector."topic.prefix"
: Enter a topic prefix. The connector automatically creates Kafka topics using the naming convention:<prefix>.<table-name>
. The tables are created with the properties:topic.creation.default.partitions=1
andtopic.creation.default.replication.factor=3
. If you want to create topics with specific settings, please create the topics before running this connector.The following provides more information about how to use the
ssl.mode
property:- The default option
prefer
is enabled ifssl.mode
is not added to the connector configuration. Whenprefer
is enabled, the connector attempts to use an encrypted connection to the database server. prefer
andrequire
: use a secure (encrypted) connection. The connector fails if a secure connection cannot be established. These modes do not do Certification Authority (CA) validation.verify-ca
: similar torequire
, but also verifies the server TLS certificate against the configured Certificate Authority (CA) certificates. Fails if no valid matching CA certificates are found.verify-full
: similar toverify-ca
, but also verifies that the server certificate matches the host to which the connection is attempted.
If you choose
verify-ca
orverify-full
, use the propertyssl.rootcertfile
and add the contents of the text certificate file for the property value. For example,"ssl.rootcertfile": "<certificate-text>"
.- The default option
The following provides more information about how to use the timestamp.column.name and incrementing.column.name properties:
- Enter a timestamp.column.name to enable timestamp mode. This mode uses a timestamp (or timestamp-like) column to detect new and modified rows. This assumes the column is updated with each write, and that values are monotonically incrementing, but not necessarily unique.
- Enter both a timestamp.column.name and an incrementing.column.name to enable timestamp+incrementing mode. This mode uses two columns, a timestamp column that detects new and modified rows, and a strictly incrementing column which provides a globally unique ID for updates so each row can be assigned a unique stream offset.
By default, the connector only detects table.types with type TABLE from the source database. Enter VIEW for virtual tables created from joining one or more tables. Enter ALIAS for tables with a shortened or temporary name.
If you define a schema pattern in your database, you need to enter the schema.pattern property to fetch table metadata from the database. Entering "" retrieves table metadata for tables not using a schema; null (default) indicates that the schema name is not used to narrow the search, and all table metadata is fetched regardless of the schema.
"output.data.format": Sets the output Kafka record value format (data coming from the connector). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
"db.timezone": Identifies the database timezone. This can be any valid database timezone. The default is UTC. For more information, see this list of database timezones.
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
See Configuration Properties for all property values and definitions.
Step 4: Load the properties file and create the connector.¶
Enter the following command to load the configuration and start the connector:
confluent connect create --config <file-name>.json
For example:
confluent connect create --config microsoft-sql-source.json
Example output:
Created connector confluent-microsoft-sql-source lcc-ix4dl
Step 5: Check the connector status.¶
Enter the following command to check the connector status:
confluent connect list
Example output:
ID | Name | Status | Type
+-----------+--------------------------------+---------+-------+
lcc-ix4dl | confluent-microsoft-sql-source | RUNNING | source
Step 6: Check the Kafka topic.¶
After the connector is running, verify that messages are populating your Kafka topic.
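For example, you can consume records with the Confluent CLI. The topic name below is a placeholder that assumes the prefix and table used in the example configuration; if you selected a schema-based output format, pass the matching --value-format flag (for example, --value-format avro).
confluent kafka topic consume microsoftsql_passengers --from-beginning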
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
Configuration Properties¶
Use the following configuration properties with this connector.
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
- Type: password
- Importance: high
kafka.service.account.id
The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.
- Type: string
- Importance: high
kafka.api.secret
- Type: password
- Importance: high
How do you want to prefix table names?¶
topic.prefix
Prefix to prepend to table names to generate the name of the Apache Kafka® topic to publish data to.
- Type: string
- Importance: high
How should we connect to your database?¶
connection.host
While this connector is in preview, public access must be enabled for your database and public inbound traffic (0.0.0.0/0) must be allowed to the database VPC, unless the environment is configured for VPC peering.
- Type: string
- Importance: high
connection.port
JDBC connection port.
- Type: int
- Valid Values: [0,…,65535]
- Importance: high
connection.user
JDBC connection user.
- Type: string
- Importance: high
connection.password
JDBC connection password.
- Type: password
- Importance: high
db.name
JDBC database name.
- Type: string
- Importance: high
ssl.mode
What SSL mode should we use to connect to your database. prefer and require allow the connection to be encrypted but do not do certificate validation on the server. verify-ca and verify-full require a file containing the SSL CA certificate to be provided. The server’s certificate will be verified to be signed by one of these authorities. verify-ca will verify that the server certificate is issued by a trusted CA. verify-full will verify that the server certificate is issued by a trusted CA and that the server hostname matches that in the certificate. Client authentication is not performed.
- Type: string
- Default: prefer
- Importance: high
ssl.truststorefile
The trust store containing server CA certificate. Only required if using verify-ca or verify-full ssl mode.
- Type: password
- Default: [hidden]
- Importance: low
ssl.truststorepassword
The trust store password containing server CA certificate. Only required if using verify-ca or verify-full ssl mode.
- Type: password
- Default: [hidden]
- Importance: low
Database details¶
table.whitelist
List of tables to include in copying. Use a comma-separated list to specify multiple tables (for example: “User, Address, Email”).
- Type: list
- Importance: medium
timestamp.column.name
Comma-separated list of one or more timestamp columns to detect new or modified rows using the COALESCE SQL function. Rows whose first non-null timestamp value is greater than the largest previous timestamp value seen will be discovered with each poll. At least one column should not be nullable.
- Type: list
- Importance: medium
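For example, a configuration fragment using two timestamp columns (the column names are hypothetical); because the columns are coalesced in the order listed, updated_at is used when it is not null and created_at is the fallback:
"timestamp.column.name": "updated_at,created_at"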
incrementing.column.name
The name of the strictly incrementing column to use to detect new rows. An empty value indicates the column should be autodetected by looking for an auto-incrementing column. This column may not be nullable.
- Type: string
- Default: “”
- Importance: medium
table.types
By default, the JDBC connector will only detect tables with type TABLE from the source database. This configuration allows a comma-separated list of table types to extract.
- Type: list
- Default: TABLE
- Importance: medium
schema.pattern
Schema pattern to fetch table metadata from the database.
- Type: string
- Importance: medium
db.timezone
Name of the JDBC timezone used in the connector when querying with time-based criteria. Defaults to UTC.
- Type: string
- Default: UTC
- Importance: medium
numeric.mapping
Map NUMERIC values by precision and optionally scale to integral or decimal types. Use none if all NUMERIC columns are to be represented by Connect’s DECIMAL logical type. Use best_fit if NUMERIC columns should be cast to Connect’s INT8, INT16, INT32, INT64, or FLOAT64 based upon the column’s precision and scale. Use best_fit_eager_double if, in addition to the properties of best_fit described above, it is desirable to always cast NUMERIC columns with scale to Connect FLOAT64 type, despite potential of loss in accuracy. Use precision_only to map NUMERIC columns based only on the column’s precision, assuming that column’s scale is 0. The none option is the default, but may lead to serialization issues with Avro since Connect’s DECIMAL type is mapped to its binary representation, and best_fit will often be preferred since it maps to the most appropriate primitive type.
- Type: string
- Default: none
- Importance: low
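For example, to cast NUMERIC columns to the most appropriate primitive Connect types rather than the binary-encoded DECIMAL logical type (a sketch; the right choice depends on your columns' precision and scale):
"numeric.mapping": "best_fit"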
Mode¶
mode
The mode for updating a table each time it is polled. BULK: perform a bulk load of the entire table each time it is polled. TIMESTAMP: use a timestamp (or timestamp-like) column to detect new and modified rows. This assumes the column is updated with each write, and that values are monotonically incrementing, but not necessarily unique. INCREMENTING: use a strictly incrementing column on each table to detect only new rows. Note that this will not detect modifications or deletions of existing rows. TIMESTAMP AND INCREMENTING: use two columns, a timestamp column that detects new and modified rows and a strictly incrementing column which provides a globally unique ID for updates so each row can be assigned a unique stream offset.
- Type: string
- Default: “”
- Importance: medium
quote.sql.identifiers
When to quote table names, column names, and other identifiers in SQL statements. For backward compatibility, the default value is ALWAYS.
- Type: string
- Default: ALWAYS
- Valid Values: ALWAYS, NEVER
- Importance: medium
timestamp.initial
The epoch timestamp used for initial queries that use timestamp criteria. The value -1 sets the initial timestamp to the current time. If not specified, the connector retrieves all data. Once the connector has managed to successfully record a source offset, this property has no effect even if changed to a different value later on.
- Type: long
- Valid Values: [-1,…]
- Importance: medium
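For example, to skip existing rows and only capture rows with timestamps after the connector starts (an illustrative setting):
"timestamp.initial": "-1"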
Connection details¶
poll.interval.ms
Frequency in ms to poll for new data in each table.
- Type: int
- Default: 5000 (5 seconds)
- Valid Values: [100,…]
- Importance: high
batch.max.rows
Maximum number of rows to include in a single batch when polling for new data. This setting can be used to limit the amount of data buffered internally in the connector.
- Type: int
- Default: 100
- Valid Values: [1,…,5000]
- Importance: low
timestamp.delay.interval.ms
How long to wait after a row with a certain timestamp appears before we include it in the result. You may choose to add some delay to allow transactions with an earlier timestamp to complete. The first execution will fetch all available records (starting at timestamp 0) until current time minus the delay. Every following execution will get data from the last time we fetched until current time minus the delay.
- Type: int
- Default: 0
- Valid Values: [0,…]
- Importance: high
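For example, to wait 30 seconds before publishing rows with a given timestamp, giving transactions with earlier timestamps time to complete (an illustrative value):
"timestamp.delay.interval.ms": "30000"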
Output messages¶
output.data.format
Sets the output Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.
- Type: string
- Importance: high
Number of tasks for this connector¶
tasks.max
- Type: int
- Valid Values: [1,…]
- Importance: high
Next Steps¶
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.