SQL Server Source Connector (Debezium) Configuration Properties
The SQL Server Source Connector can be configured using a variety of configuration properties.
database.hostname
IP address or hostname of the SQL Server database server.
- Type: String
- Importance: High
database.port
Integer port number of the SQL Server database server.
- Type: Integer
- Importance: Low
- Default: 1433
database.user
Username to use when connecting to the SQL Server database server.
- Type: String
- Importance: High
database.password
Password to use when connecting to the SQL Server database server.
- Type: Password
- Importance: High
database.dbname
The name of the SQL Server database from which to stream the changes.
- Type: String
- Importance: High
database.server.name
Logical name that identifies and provides a namespace for the particular SQL Server database server being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names coming from this connector.
- Type: String
- Importance: High
database.history.kafka.topic
The full name of the Kafka topic where the connector will store the database schema history.
- Type: String
- Importance: High
database.history.kafka.bootstrap.servers
A list of host/port pairs that the connector will use for establishing an initial connection to the Kafka cluster. This connection will be used for retrieving database schema history previously stored by the connector and for writing each DDL statement read from the source database. This should point to the same Kafka cluster used by the Kafka Connect process.
- Type: List of Strings
- Importance: High
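Taken together, the required properties above form the core of a connector configuration. The following is a minimal sketch; the connector name, hostnames, credentials, and topic names are placeholders you would replace with your own values:

```json
{
  "name": "sqlserver-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "sqlserver.example.com",
    "database.port": "1433",
    "database.user": "cdc_user",
    "database.password": "cdc_password",
    "database.dbname": "testDB",
    "database.server.name": "server1",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.testDB"
  }
}
```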
Note

If the Kafka cluster is secured, you must add the security properties prefixed with `database.history.consumer.*` and `database.history.producer.*` to the connector configuration, as shown below:

```json
"database.history.consumer.security.protocol": "SASL_SSL",
"database.history.consumer.sasl.mechanism": "PLAIN",
"database.history.consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"key\" password=\"secret\";",
"database.history.producer.security.protocol": "SASL_SSL",
"database.history.producer.sasl.mechanism": "PLAIN",
"database.history.producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"key\" password=\"secret\";"
```
table.whitelist
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored. Any table not included in the whitelist is excluded from monitoring. Each identifier is of the form `schemaName.tableName`. By default the connector monitors every non-system table in each monitored schema. May not be used with `table.blacklist`.
- Type: List of Strings
- Importance: Low
table.blacklist
An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring. Any table not included in the blacklist is monitored. Each identifier is of the form `schemaName.tableName`. May not be used with `table.whitelist`.
- Type: List of Strings
- Importance: Low
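The whitelist and blacklist patterns above are regular expressions matched against the entire fully-qualified identifier, not substrings. A short sketch of that matching behavior (table names and patterns are hypothetical examples):

```python
import re

# Hypothetical patterns as they might appear in
# "table.whitelist": "dbo\\.customers,dbo\\.orders_.*"
whitelist = [r"dbo\.customers", r"dbo\.orders_.*"]

tables = ["dbo.customers", "dbo.orders_2019", "dbo.audit_log"]

# Each pattern must match the whole schemaName.tableName identifier,
# so re.fullmatch models the behavior here.
monitored = [t for t in tables if any(re.fullmatch(p, t) for p in whitelist)]
print(monitored)  # ['dbo.customers', 'dbo.orders_2019']
```

Note that the `.` separating schema and table is itself a regex metacharacter, so it should be escaped in patterns, as shown.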
column.blacklist
An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values. Fully-qualified names for columns are of the form `schemaName.tableName.columnName`. Note that primary key columns are always included in the event's key, even if blacklisted from the value.
- Type: List of Strings
- Importance: Low
- Default: Empty string
column.propagate.source.type
An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages. The schema parameters `__debezium.source.column.type`, `__debezium.source.column.length`, and `__debezium.source.column.scale` are used to propagate the original type name and length (for variable-width types), respectively. Useful to properly size corresponding columns in sink databases. Fully-qualified names for columns are of the form `schemaName.tableName.columnName`.
- Type: List of Strings
- Importance: Low
- Default: n/a
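For illustration, a field whose column matches one of these patterns would carry the source type information as schema parameters roughly along these lines (the field name and SQL type here are hypothetical, and the exact schema representation depends on the converter in use):

```json
{
  "field": "description",
  "type": "string",
  "parameters": {
    "__debezium.source.column.type": "NVARCHAR",
    "__debezium.source.column.length": "255"
  }
}
```

A sink connector can read these parameters to create a column of the matching type and width instead of a generic text column.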
More details can be found in the Debezium connector properties documentation.
Note
Portions of the information provided here derive from documentation originally produced by the Debezium Community. Work produced by Debezium is licensed under Creative Commons 3.0.