MySQL CDC Source V2 (Debezium) Connector for Confluent Cloud

The fully-managed MySQL Change Data Capture (CDC) Source V2 (Debezium) connector for Confluent Cloud can obtain a snapshot of the existing data in a MySQL database and then monitor and record all subsequent row-level changes to that data. The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data formats. All of the events for each table are recorded in a separate Apache Kafka® topic. The events can then be easily consumed by applications and services.

Note

V2 Improvements

Note the following improvements made to the V2 connector.

  • Supports reading binlog entries that were written with compression enabled.
  • Supports parsing JSON_TABLE table functions.
  • Can stop or pause an in-progress incremental snapshot, and can resume an incremental snapshot that was previously paused.
  • Supports regular expressions to specify table names for incremental snapshots.
  • Supports SQL-based predicates to control the subset of records to be included in the incremental snapshot.
  • Supports specifying a single column as a surrogate key for performing incremental snapshots.
  • Supports the additional-condition option of the signaling feature for incremental snapshots.
  • Can perform ad-hoc blocking snapshots.
  • Indices that rely on hidden, auto-generated columns, or columns wrapped in database functions are no longer considered primary key alternatives for tables that do not have a primary key defined.
  • Configuration options to specify how topic and schema names should be adjusted for compatibility.

Features

The MySQL CDC Source V2 (Debezium) connector provides the following features:

  • Topics created automatically: The connector automatically creates Kafka topics using the naming convention: <topic.prefix>.<databaseName>.<tableName>. The topics are created with the properties: topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3. For more information, see Maximum message size.
  • Database authentication: Uses password authentication.
  • SSL support: Supports SSL encryption.
  • Databases included and Databases excluded: Sets whether a database is or is not monitored for changes. By default, the connector monitors every database on the server.
  • Tables included and Tables excluded: Sets whether a table is or is not monitored for changes. By default, the connector monitors every non-system table.
  • Tombstones on delete: Sets whether a tombstone event is generated after a delete event. Default is true.
  • Output formats: The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output Kafka record value format. It supports Avro, JSON Schema, Protobuf, JSON (schemaless), and String output record key format. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
  • Incremental snapshot: Supports incremental snapshotting via signaling.
  • Offset management capabilities: Supports offset management. For more information, see Manage custom offsets.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Supported database versions

The MySQL CDC Source V2 (Debezium) connector is compatible with the following MySQL versions: 8.0.x.

Limitations

Be sure to review the following information.

Maximum message size

This connector creates topics automatically. When it creates topics, the internal connector configuration property max.message.bytes is set to the following:

  • Basic cluster: 8 MB
  • Standard cluster: 8 MB
  • Enterprise cluster: 8 MB
  • Dedicated cluster: 20 MB

For more information about Confluent Cloud clusters, see Kafka Cluster Types in Confluent Cloud.

Log retention during snapshot

When launched, the CDC connector creates a snapshot of the existing data in the database to capture the nominated tables. To do this, the connector executes a “SELECT *” statement. Completing the snapshot can take a while if one or more of the nominated tables is very large.

During the snapshot process, the database server must retain transaction logs so that when the snapshot is complete, the CDC connector can start processing database changes that were committed after the snapshot process began. These changes are recorded in the binary log (binlog) on the database server.

If one or more of the tables are very large, the snapshot process could run longer than the binlog retention time set on the database server (that is, expire_logs_days = <number-of-days>). To capture very large tables, you should temporarily retain the binlog for longer than normal by increasing the expire_logs_days value.
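
As a sketch of how to check and temporarily extend retention on a self-managed MySQL 8.0 server, you can use the mysql client as shown below. The host and user are placeholders, and on MySQL 8.0 the preferred variable is binlog_expire_logs_seconds (expire_logs_days is deprecated). On managed services such as Amazon RDS, adjust the equivalent setting through the parameter group instead.

# Check the current binlog retention (MySQL 8.0 expresses it in seconds).
mysql -h <host> -u admin -p -e "SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';"

# Temporarily extend retention to 7 days (604800 seconds) while the snapshot runs.
mysql -h <host> -u admin -p -e "SET GLOBAL binlog_expire_logs_seconds = 604800;"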

Manage custom offsets

You can manage the offsets for this connector. Offsets provide information on the point in the system from which the connector is accessing data. For more information, see Manage Offsets for Fully-Managed Connectors in Confluent Cloud.

To manage offsets:

To get the current offset, make a GET request that specifies the environment, Kafka cluster, and connector name.

GET /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets
Host: https://api.confluent.cloud

Response:

Successful calls return HTTP 200 with a JSON payload that describes the offset.

{
    "id": "lcc-example123",
    "name": "{connector_name}",
    "offsets": [
        {
            "partition": {
                "server": "server_01"
            },
            "offset": {
                "event": 2,
                "file": "mysql-bin.000598",
                "pos": 2326,
                "row": 1,
                "server_id": 1,
                "transaction_id": null,
                "ts_sec": 1711648627
            }
        }
    ],
    "metadata": {
        "observed_at": "2024-03-28T17:57:48.139635200Z"
    }
}

Responses include the following information:

  • The position of the latest offset.
  • The observed time of the offset in the metadata portion of the payload. The observed_at time indicates a snapshot in time for when the API retrieved the offset. A running connector is always updating its offsets. Use observed_at to get a sense of the gap between real time and the time at which the request was made. By default, offsets are observed every minute. Calling GET repeatedly fetches more recently observed offsets.
  • Information about the connector.
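
For reference, the request above can be issued with curl. The following is a sketch only: the environment ID, cluster ID, and connector name are placeholders, and it assumes a Confluent Cloud API key and secret that are authorized for the Connect API.

# Fetch the current offsets for the connector (Cloud API key passed with HTTP basic auth).
curl -s -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  "https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-xyz789/connectors/MySqlCdcSourceV2Connector_0/offsets"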

JSON payload

The following describes the unique fields in the JSON payload for managing offsets of the MySQL CDC Source V2 (Debezium) connector, and whether each field is required or optional.

event (Optional)
    The number of rows and events to skip while starting from this file and position. Use event and row together, and only use these fields if you understand which rows and events to skip. In most cases, you only need to provide file and pos.

file (Required)
    The file from the last processed binlog. Use file and pos together.

pos (Required)
    The position from the last processed binlog. Use pos and file together.

row (Optional)
    The number of rows and events to skip while starting from this file and position. Use event and row together, and only use these fields if you understand which rows and events to skip. In most cases, you only need to provide file and pos.

server_id (Optional)
    The ID of the server from which the event originated. For more information, see the MySQL documentation.

transaction_id (Optional)
    Mostly null; provided only when provide.transaction.metadata is set to true in the config. For more information, see the Debezium documentation.

ts_sec (Optional)
    The timestamp at which the event at this pos was executed in the database.

Important

Do not reset the offset to an arbitrary number. Use only offsets found in the binlog file. To find offsets in a binlog file, use the mysqlbinlog utility. Offsets appear in this format: # at <offset>
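
For example, the following sketch lists the event boundaries in the binlog file from the earlier offsets payload. Run it on the database host where the binlog files reside; the file name is a placeholder.

# Print only the "# at <offset>" lines, which mark valid offsets in the binlog.
mysqlbinlog --base64-output=decode-rows --verbose mysql-bin.000598 | grep '^# at '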

Migrate connectors

Considerations:

  • The configurations of the self-managed connector must match the configurations of the fully-managed connector.
  • The self-managed connector must be operating in streaming mode. If the self-managed connector is still in the process of taking a snapshot, you can either create a new connector on Confluent Cloud, which starts the snapshot process from the beginning, or wait for the snapshot process to complete and then follow the migration guidance.

Create fully-managed connectors with offsets

Considerations:

  • The schema history topic of the self-managed connector must be reused when creating the fully-managed connector. To do this, specify the Database schema history topic name config value in the Confluent Cloud Console, or the schema.history.internal.kafka.topic config value when using the Confluent CLI.
  • To reuse the schema history topic, the specified Topic prefix value must be the same as that of the self-managed connector.
  • If the schema history topic of the self-managed connector is not available or cannot be reused, you can start the connector with the schema_only_recovery snapshot mode. This populates the schema history topic first and then starts the connector from the specified offset. The connector will fail if there have been schema changes in the included tables after the specified offset.

Quick Start

Use this quick start to get up and running with the MySQL CDC Source V2 (Debezium) connector. The quick start provides the basics of selecting the connector and configuring it to obtain a snapshot of the existing data in a MySQL database and then monitoring and recording all subsequent row-level changes.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.

    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
  • Update the following settings for the MySQL database.

    1. Turn on backup for the database.

    2. Create a new parameter group and set the following parameters:

      binlog_format=ROW
      binlog_row_image=full
      
    3. Apply the new parameter group to the database.

    4. Reboot the database.

    The following example screens from Amazon RDS show these settings: Set database backup, Set database binlog, and Set database binlog row image.
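
    To verify the settings after the database reboot, you can query the server from the mysql client. This is a sketch only; the endpoint and user are placeholders for your environment.

      mysql -h <database-endpoint> -u admin -p -e \
        "SHOW VARIABLES WHERE Variable_name IN ('log_bin', 'binlog_format', 'binlog_row_image');"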

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the MySQL CDC Source V2 connector card.

Step 4: Enter the connector details

At the MySQL CDC Source V2 (Debezium) Connector screen, complete the following:

  1. Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:
    • My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
    • Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
    • Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
  2. Click Continue.

Step 5: Check the Kafka topic

After the connector is running, verify that messages are populating your Kafka topic.

Note

A topic named dbhistory.<topic.prefix>.<connect-id> is automatically created for schema.history.internal.kafka.topic with one partition.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.
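
For example, assuming the plugin name matches the connector.class value used in the configuration below:

confluent connect plugin describe MySqlCdcSourceV2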

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "MySqlCdcSourceV2",
  "name": "MySqlCdcSourceV2Connector_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "****************",
  "kafka.api.secret": "****************************************************************",
  "database.hostname": "database-2.<host-id>.us-west-2.rds.amazonaws.com",
  "database.port": "3306",
  "database.user": "admin",
  "database.password": "**********",
  "topic.prefix": "mysql",
  "table.include.list":"employees.departments,
  "output.data.format": "JSON",
  "tasks.max": "1"
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "database.hostname": IP address or hostname of the MySQL database server.

  • "database.port": Port number of the MySQL database server.

  • "database.user": The name of the MySQL database user that has the required authorization.

  • "database.password": Password of the MySQL database user that has the required authorization.

  • "topic.prefix": Provides a namespace for the particular database server/cluster that the connector is capturing changes from.

  • "table.include.list": An optional, comma-separated list of fully-qualified table identifiers for the connector to monitor. By default, the connector monitors all non-system tables. A fully-qualified table name is in the form databaseName.tableName. This property cannot be used with the property table.exclude.list.

  • "output.data.format": Sets the output Kafka record value format (data coming from the connector). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based record format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  • "tasks.max": Enter the number of tasks in use by the connector. Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. For additional information about the Debezium SMTs ExtractNewRecordState and EventRouter (Debezium), see Debezium transformations.

See Configuration Properties for all properties and definitions.

Step 4: Load the properties file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file mysql-cdc-source-v2.json

Example output:

Created connector MySqlCdcSourceV2Connector_0 lcc-ix4dl

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID          |            Name               | Status  |  Type
+-----------+-------------------------------+---------+-------+
lcc-ix4dl   | MySqlCdcSourceV2Connector_0   | RUNNING | source

Step 6: Check the Kafka topic

After the connector is running, verify that messages are populating your Kafka topic.

Note

A topic named dbhistory.<topic.prefix>.<connect-id> is automatically created for schema.history.internal.kafka.topic with one partition.
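
To spot-check the change events from the Confluent CLI, you can consume one of the connector's topics. This is a sketch only: the topic name assumes the topic.prefix and table from the example configuration, and for Avro, JSON Schema, or Protobuf output you would also pass a matching --value-format.

# Consume change events for the employees.departments table from the beginning of the topic.
confluent kafka topic consume mysql.employees.departments --from-beginning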

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Upgrading MySQL Database

When you upgrade the MySQL database that the connector uses, you must take specific steps to protect against data loss and to ensure that the connector continues to operate. In general, the connector is resilient to interruptions caused by network failures and other outages. For example, when a database server that a connector monitors stops or crashes, after the connector re-establishes communication with the database server, it continues to read from the last position recorded in the offset. The connector retrieves information about the last recorded offset from the Connect offsets topic and queries the configured MySQL server to get events from the position recorded in the offset.

There are various methods for upgrading a MySQL database. This section focuses on the in-place upgrade. For more details, refer to the MySQL Upgrade documentation. The Debezium MySQL CDC connector captures changes from the MySQL database by reading the binlog. During an in-place upgrade, the binlog files remain unaffected, so the connector can continue to capture changes from the upgraded server seamlessly.

For guidance about how to perform an in-place MySQL database upgrade so that the connector can continue to capture events while minimizing the risk of data loss, see the following procedure:

  1. Temporarily stop applications that write to the database, or put them into a read-only mode.
  2. Back up the database.
  3. Temporarily disable write access to the database.
  4. Provide the connector with enough time to capture all event records that are written to the binlog. To verify that the connector has finished consuming entries from the binlog, check that the latest offset in the Kafka offsets topic is the same as the latest binlog position. Use the SHOW MASTER STATUS; command to obtain the latest binlog position, and see Manage custom offsets to get the latest connector offset (a sketch of this comparison appears after this procedure). This step ensures that all change events that occurred before the downtime are accounted for and saved to Kafka.
  5. Pause the connector.
  6. Perform the in-place upgrade on the database.
  7. Log in to the upgraded server and verify that the binlog that the connector requires is present and that the server is set up for change data capture.
  8. Resume the connector.
  9. Restore write access to the database and restart any applications that write to the database.
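
The comparison in step 4 can be done with the mysql client and the Connect offsets API. This is a sketch only; the host, IDs, and connector name are placeholders, and it assumes a Confluent Cloud API key that is authorized for the Connect API.

# Latest binlog position on the database server.
mysql -h <host> -u admin -p -e "SHOW MASTER STATUS;"

# Latest offset (file/pos) recorded by the connector.
curl -s -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  "https://api.confluent.cloud/connect/v1/environments/<environment-id>/clusters/<kafka-cluster-id>/connectors/<connector-name>/offsets"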

Moving from V1 to V2

Version 2 of this connector supports new features and has breaking changes that are not backward compatible with version 1 of the connector. To understand these changes and to plan for moving to version 2, see Backward Incompatible Changes in Debezium CDC V2 Connectors.

Given the backward-incompatible changes between version 1 and 2 of the CDC connectors, version 2 is being provided in a new set of CDC connectors on Confluent Cloud. You can provision either version 1 or version 2. However, note that eventually version 1 will be deprecated and no longer supported.

Before exploring your options for moving from version 1 to 2, be sure to make the required changes documented in Backward Incompatible Changes in Debezium CDC V2 Connectors. To get the offset in the following section, use the Confluent Cloud APIs. For more information, see Cluster API reference, Manage custom offsets, and Manage Offsets for Fully-Managed Connectors in Confluent Cloud.

To move from version 1 to 2 (v1 to v2)

Use the following steps to migrate to version 2. Implement and validate any connector changes in a pre-production environment before promoting to production.

  1. Pause the v1 connector.

  2. Get the offset for the v1 connector.

  3. Create the v2 connector using the offset from the previous step.

    confluent connect cluster create [flags]
    

    For example:

    Create a configuration file with connector configs and offsets.

    {
      "name": "(connector-name)",
      "config": {
          ... // connector specific configuration
      },
      "offsets": [
          {
              "partition": {
                  ... // connector-specific source partition
              },
              "offset": {
                  ... // connector-specific offset
              }
          }
      ]
    }
    

    Create a connector in the current or specified Kafka cluster context.

    confluent connect cluster create --config-file config.json
    

    For connectors that maintain a schema history topic, you must configure the schema history topic name in v2 to match the schema history topic name from the v1 connector. A filled-in sketch of this step appears after this procedure.

  4. Delete the v1 connector.
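
As a filled-in sketch of step 3, the configuration file below reuses the file and pos values returned for the v1 connector. The values shown are illustrative and follow the earlier offsets payload; the partition's server value is typically the connector's topic prefix (logical server name), and the remaining connector configuration is elided, as in the skeleton above.

cat > config.json <<'EOF'
{
  "name": "MySqlCdcSourceV2Connector_0",
  "config": {
      ... // remaining connector configuration, including the v1 topic prefix and schema history topic
  },
  "offsets": [
      {
          "partition": {
              "server": "mysql"
          },
          "offset": {
              "file": "mysql-bin.000598",
              "pos": 2326
          }
      }
  ]
}
EOF

confluent connect cluster create --config-file config.json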

For more information, see Manage Offsets for Fully-Managed Connectors in Confluent Cloud.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high
kafka.service.account.id

The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high

How should we connect to your database?

database.hostname

IP address or hostname of the MySQL database server.

  • Type: string
  • Importance: high
database.port

Port number of the MySQL database server.

  • Type: int
  • Valid Values: [0,…,65535]
  • Importance: high
database.user

The name of the MySQL database user that has the required authorization.

  • Type: string
  • Importance: high
database.password

Password for the MySQL database user that has the required authorization.

  • Type: password
  • Importance: high
database.ssl.mode

Whether to use an encrypted connection to the MySQL server. Possible settings are: disabled, preferred, and required.

disabled specifies the use of an unencrypted connection.

preferred establishes an encrypted connection if the server supports secure connections. If the server does not support secure connections, falls back to an unencrypted connection.

required establishes an encrypted connection or fails if one cannot be made for any reason.

  • Type: string
  • Default: preferred
  • Importance: high

Output messages

output.data.format

Sets the output Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format such as AVRO, JSON_SR, or PROTOBUF.

  • Type: string
  • Importance: high
output.key.format

Sets the output Kafka record key format. Valid entries are AVRO, JSON_SR, PROTOBUF, STRING, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format such as AVRO, JSON_SR, or PROTOBUF.

  • Type: string
  • Default: JSON
  • Valid Values: AVRO, JSON, JSON_SR, PROTOBUF, STRING
  • Importance: high
json.output.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

BASE64 to serialize DECIMAL logical types as base64 encoded binary data and

NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Type: string
  • Default: BASE64
  • Importance: low
after.state.only

Controls whether the generated Kafka record should contain only the state of the row after the event occurred.

  • Type: boolean
  • Default: false
  • Importance: low
tombstones.on.delete

Controls whether a tombstone event should be generated after a delete event.

true - a delete operation is represented by a delete event and a subsequent tombstone event.

false - only a delete event is emitted.

After a source record is deleted, emitting the tombstone event (the default behavior) allows Kafka to completely delete all events that pertain to the key of the deleted row in case log compaction is enabled for the topic.

  • Type: boolean
  • Default: true
  • Importance: medium

How should we name your topic(s)?

topic.prefix

Topic prefix that provides a namespace (logical server name) for the particular MySQL database server or cluster in which Debezium is capturing changes. The prefix should be unique across all other connectors, since it is used as a topic name prefix for all Kafka topics that receive records from this connector. Only alphanumeric characters, hyphens, dots, and underscores may be used. The connector automatically creates Kafka topics using the naming convention: <topic.prefix>.<databaseName>.<tableName>.

  • Type: string
  • Importance: high
schema.history.internal.kafka.topic

The name of the topic for the database schema history. A new topic with the provided name is created if it doesn't already exist. If the topic already exists, ensure that it has a single partition, an infinite retention period, and is not in use by any other connector. If no value is provided, the name defaults to dbhistory.<topic-prefix>.<lcc-id>.

  • Type: string
  • Default: dbhistory.${topic.prefix}.{{.logicalClusterId}}
  • Importance: high
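
If you provide your own topic (for example, when reusing a self-managed connector's schema history topic), you can inspect it from the Confluent CLI before starting the connector. This is a sketch and the topic name is a placeholder; confirm that the topic has a single partition and infinite retention (retention.ms of -1).

# Inspect the schema history topic before reusing it.
confluent kafka topic describe dbhistory.mysql.lcc-example123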

Database config

signal.data.collection

Fully-qualified name of the data collection that needs to be used to send signals to the connector. Use the following format to specify the fully-qualified collection name: databaseName.tableName

  • Type: string
  • Importance: medium
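
As an illustration, the signaling table is an ordinary table that you create in the source database and then insert signal rows into. The following is a sketch only: the employees.debezium_signal table name, host, and user are placeholders, and the column layout and execute-snapshot signal follow the Debezium signaling documentation. Set signal.data.collection to the same fully-qualified table name.

# Create a signaling table (sketch; adjust the database and table name to your environment).
mysql -h <host> -u admin -p -e \
  "CREATE TABLE employees.debezium_signal (id VARCHAR(42) PRIMARY KEY, type VARCHAR(32) NOT NULL, data VARCHAR(2048));"

# Trigger an ad-hoc incremental snapshot of one table by inserting a signal row.
mysql -h <host> -u admin -p -e \
  "INSERT INTO employees.debezium_signal (id, type, data) VALUES ('ad-hoc-1', 'execute-snapshot', '{\"data-collections\": [\"employees.departments\"], \"type\": \"incremental\"}');"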

Connector config

snapshot.mode

Specifies the criteria for running a snapshot when the connector starts. Possible settings are: initial, initial_only, when_needed, never, schema_only, schema_only_recovery.

initial - the connector runs a snapshot only when no offsets have been recorded for the logical server name.

initial_only - the connector runs a snapshot only when no offsets have been recorded for the logical server name and then stops; i.e. it will not read change events from the binlog.

when_needed - the connector runs a snapshot upon startup whenever it deems it necessary. That is, when no offsets are available, or when a previously recorded offset specifies a binlog location or GTID that is not available in the server.

never - the connector never uses snapshots. Upon first startup with a logical server name, the connector reads from the beginning of the binlog. Configure this behavior with care. It is valid only when the binlog is guaranteed to contain the entire history of the database.

schema_only - the connector runs a snapshot of the schemas and not the data. This setting is useful when you do not need the topics to contain a consistent snapshot of the data but need them to have only the changes since the connector was started.

schema_only_recovery - this is a recovery setting for a connector that has already been capturing changes. When you restart the connector, this setting enables recovery of a corrupted or lost database schema history topic. You might set it periodically to “clean up” a database schema history topic that has been growing unexpectedly. Database schema history topics require infinite retention.

  • Type: string
  • Default: initial
  • Valid Values: initial, initial_only, never, schema_only, schema_only_recovery, when_needed
  • Importance: medium
snapshot.locking.mode

Controls whether and how long the connector holds the global MySQL read lock, which prevents any updates to the database, while the connector is performing a snapshot. Possible settings are: minimal, minimal_percona, extended, and none.

minimal - the connector holds the global read lock for just the initial portion of the snapshot, while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table. This is accomplished using a REPEATABLE READ transaction, even when the lock is no longer held and other MySQL clients are updating the database.

minimal_percona - similar to minimal mode except the connector uses a (Percona-specific) backup lock. This mode does not flush tables to disk, is not blocked by long-running reads, and is available only in Percona Server.

extended - blocks all writes for the duration of the snapshot. Use this setting if there are clients that are submitting operations that MySQL excludes from REPEATABLE READ semantics.

none - prevents the connector from acquiring any table locks during the snapshot. While this setting is allowed with all snapshot modes, it is safe to use only if no schema changes are happening while the snapshot is running. Note that tables defined with the MyISAM engine are still locked despite this property being set, because MyISAM acquires a table lock; this differs from the InnoDB engine, which acquires row-level locks.

  • Type: string
  • Default: minimal
  • Valid Values: extended, minimal, minimal_percona, none
  • Importance: low
database.include.list

An optional, comma-separated list of regular expressions that match the names of the databases for which to capture changes. The connector does not capture changes in any database whose name is not in this list. By default, the connector captures changes in all databases.

To match the name of a database, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the database; it does not match substrings that might be present in a database name.

  • Type: list
  • Importance: medium
table.include.list

An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you want to capture. When this property is set, the connector captures changes only from the specified tables. Each identifier is of the form database.tableName. By default, the connector captures changes in every non-system table in each schema whose changes are being captured.

To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the table; it does not match substrings that might be present in a table name.

If you include this property in the configuration, do not also set the table.exclude.list property.

  • Type: list
  • Importance: medium
table.exclude.list

An optional, comma-separated list of regular expressions that match fully-qualified table identifiers for tables whose changes you do not want to capture. Each identifier is of the form database.tableName. When this property is set, the connector captures changes from every table that you do not specify.

To match the name of a table, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire identifier for the table; it does not match substrings that might be present in a table name.

If you include this property in the configuration, do not set the table.include.list property.

  • Type: list
  • Importance: medium
column.exclude.list

An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. Fully-qualified names for columns are of the form databaseName.tableName.columnName.

To match the name of a column, Debezium applies the regular expression that you specify as an anchored regular expression. That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name.

  • Type: list
  • Importance: medium
event.processing.failure.handling.mode

Specifies how the connector should react to exceptions during processing of events. Possible settings are: fail, skip, and warn.

fail propagates the exception, indicates the offset of the problematic event, and causes the connector to stop.

warn logs the offset of the problematic event, skips that event, and continues processing.

skip skips the problematic event and continues processing.

  • Type: string
  • Default: fail
  • Valid Values: fail, skip, warn
  • Importance: low
schema.name.adjustment.mode

Specifies how schema names should be adjusted for compatibility with the message converter used by the connector. Possible settings are: none, avro, and avro_unicode.

none does not apply any adjustment.

avro replaces the characters that cannot be used in the Avro type name with underscore.

avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java.

  • Type: string
  • Default: none
  • Valid Values: avro, avro_unicode, none
  • Importance: medium
field.name.adjustment.mode

Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings are: none, avro, and avro_unicode.

none does not apply any adjustment.

avro replaces the characters that cannot be used in the Avro type name with underscore.

avro_unicode replaces the underscore or characters that cannot be used in the Avro type name with corresponding unicode like _uxxxx. Note: _ is an escape sequence like backslash in Java.

  • Type: string
  • Default: none
  • Valid Values: avro, avro_unicode, none
  • Importance: medium
heartbeat.interval.ms

Controls how frequently the connector sends heartbeat messages to a Kafka topic. The behavior of default value 0 is that the connector does not send heartbeat messages. Heartbeat messages are useful for monitoring whether the connector is receiving change events from the database. Heartbeat messages might help decrease the number of change events that need to be re-sent when a connector restarts. To send heartbeat messages, set this property to a positive integer, which indicates the number of milliseconds between heartbeat messages.

  • Type: int
  • Default: 0
  • Valid Values: [0,…]
  • Importance: low
inconsistent.schema.handling.mode

Specifies how the connector should react to binlog events that belong to a table missing from internal schema representation. Possible settings are: fail, skip, and warn.

fail - throws an exception that indicates the problematic event and its binlog offset, and causes the connector to stop.

skip - passes over the problematic event and does not log anything.

warn - logs the problematic event and its binlog offset and skips the event.

  • Type: string
  • Default: fail
  • Valid Values: fail, skip, warn
  • Importance: medium
schema.history.internal.skip.unparseable.ddl

A Boolean value that specifies whether the connector should ignore malformed or unknown database statements (true), or stop processing so a human can fix the issue (false). Defaults to false. Consider setting this to true to ignore unparseable statements.

  • Type: boolean
  • Default: false
  • Importance: low
schema.history.internal.store.only.captured.tables.ddl

A Boolean value that specifies whether the connector records schema structures from all tables in a schema or database, or only from tables that are designated for capture. Defaults to false.

false - During a database snapshot, the connector records the schema data for all non-system tables in the database, including tables that are not designated for capture. It’s best to retain the default setting. If you later decide to capture changes from tables that you did not originally designate for capture, the connector can easily begin to capture data from those tables, because their schema structure is already stored in the schema history topic.

true - During a database snapshot, the connector records the table schemas only for the tables from which Debezium captures change events. If you change the default value, and you later configure the connector to capture data from other tables in the database, the connector lacks the schema information that it requires to capture change events from the tables.

  • Type: boolean
  • Default: false
  • Importance: low

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string
  • Default: default
  • Importance: medium
value.converter.ignore.default.for.nullables

Specifies whether the default value should be ignored for nullable fields. If set to true, the default value is ignored for nullable fields. If set to false, the default value is used for nullable fields.

  • Type: boolean
  • Default: false
  • Importance: medium
key.converter.reference.subject.name.strategy

Set the subject reference name strategy for key. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string
  • Default: DefaultReferenceSubjectNameStrategy
  • Importance: high
value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string
  • Default: DefaultReferenceSubjectNameStrategy
  • Importance: high

How should we handle data types?

decimal.handling.mode

Specifies how the connector should handle values for DECIMAL and NUMERIC columns. Possible settings are: precise, double, and string.

precise represents values using java.math.BigDecimal, which encodes the values in binary form in change events. double represents values as double values, which might result in a loss of precision but is easier to use. string encodes values as formatted strings, which are easy to consume but lose the semantic information about the real type.

  • Type: string
  • Default: precise
  • Valid Values: double, precise, string
  • Importance: medium
time.precision.mode

Time, date, and timestamps can be represented with different kinds of precisions:

adaptive_time_microseconds captures the date, datetime and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column’s type. An exception is TIME type fields, which are always captured as microseconds.

connect always represents time and timestamp values by using Kafka Connect’s built-in representations for Time, Date, and Timestamp, which use millisecond precision regardless of the database columns’ precision.

  • Type: string
  • Default: adaptive_time_microseconds
  • Valid Values: adaptive_time_microseconds, connect
  • Importance: medium

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int
  • Valid Values: [1,…,1]
  • Importance: high

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
