MySQL CDC Source (Debezium) Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see Debezium MySQL Source connector for Confluent Platform.

The Kafka Connect MySQL Change Data Capture (CDC) Source (Debezium) connector for Confluent Cloud can obtain a snapshot of the existing data in a MySQL database and then monitor and record all subsequent row-level changes to that data. The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data formats. All of the events for each table are recorded in a separate Apache Kafka® topic. The events can then be easily consumed by applications and services.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Features

The MySQL CDC Source (Debezium) connector provides the following features:

  • Topics created automatically: The connector automatically creates Kafka topics using the naming convention: <database.server.name>.<schemaName>.<tableName> (see the example after this list). Topics are created with the properties: topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3.
  • Databases included and Databases excluded: Sets whether a database is or is not monitored for changes. By default, the connector monitors every database on the server.
  • Tables included and Tables excluded: Sets whether a table is or is not monitored for changes. By default, the connector monitors every non-system table.
  • Tasks per connector: Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
  • Snapshot mode: Specifies the criteria for running a snapshot.
  • Tombstones on delete: Sets whether a tombstone event is generated after a delete event. Default is true.
  • Database authentication: Uses password authentication.
  • SSL support: Supports one-way SSL.
  • Data formats: Supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
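
For example, if database.server.name is set to mysql and the connector captures the departments table in the employees schema (the names used in the CLI example later on this page), change events are written to the topic mysql.employees.departments.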

Note

database.server.id is set to a random number between 5400 and 6400.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

See the MySQL CDC Source connector configuration properties for properties and definitions. See the Confluent Cloud connector limitations for additional information.

Quick Start

Use this quick start to get up and running with the MySQL CDC Source (Debezium) connector. The quick start provides the basics of selecting the connector and configuring it to obtain a snapshot of the existing data in a MySQL database and then monitoring and recording all subsequent row-level changes.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).

  • The Confluent Cloud CLI installed and configured for the cluster. See Install the Confluent Cloud CLI.

  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  • Public access may be required for your database. See Internet Access to Resources for details. The example below shows the AWS Management Console when setting up a MySQL database.

    [Image: AWS console showing public access enabled for the MySQL database]

  • For networking considerations, see Internet Access to Resources. To use static egress IPs, see Static Egress IP Addresses. The example below shows the AWS Management Console when setting up security group rules for the VPC.

    [Image: AWS security group rules with inbound traffic open]

    Note

    See your specific cloud platform documentation for how to configure security rules for your VPC.

  • Kafka cluster credentials. You can use one of the following ways to get credentials:

    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use the Confluent Cloud CLI or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
  • Update the following settings for the MySQL database.

    1. Turn on backup for the database.

    2. Create a new parameter group and set the following parameters:

      binlog_format=ROW
      binlog_row_image=full
      
    3. Apply the new parameter group to the database.

    4. Reboot the database.

    The following example screens are from Amazon RDS:

    [Images: database backup, binlog format, and binlog row image settings]
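
    After the database reboots, you can confirm that the settings are active. For example, running SHOW VARIABLES LIKE 'binlog_format'; and SHOW VARIABLES LIKE 'binlog_row_image'; from a MySQL client should return ROW and FULL, respectively.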

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the MySQL CDC Source connector icon.


Step 4: Set up the connection.

Complete the following and click Continue.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Enter a connector name.

  2. Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.

  3. Add the connection details for the database.

    Important

    Do not include jdbc:xxxx:// in the Connection host field. Enter only the host address, for example, database-2.<host-ID>.us-west-2.rds.amazonaws.com.

    The default SSL mode is prefer. When prefer is enabled, the connector attempts to use an encrypted connection to the database server, falling back to an unencrypted connection if an encrypted connection cannot be established. When require is set, the connector uses a secure (encrypted) connection and fails if one cannot be established. Neither mode performs Certificate Authority (CA) validation. (The equivalent configuration properties appear in the JSON fragment after these steps.)

  4. Add the Database details for your database. Review the following notes for more information about field selections.

    • Databases included: Enter a comma-separated list of fully-qualified database identifiers for the connector to monitor. By default, the connector monitors all databases on the server. A fully-qualified database name is in the form <database-name>. This can’t be used with Databases excluded.
    • Databases excluded: Enter a comma-separated list of fully-qualified database identifiers for the connector to ignore. By default, the connector monitors all databases on the server. A fully-qualified database name is in the form <database-name>. This can’t be used with Databases included.
    • Tables included: Enter a comma-separated list of fully-qualified table identifiers for the connector to monitor. By default, the connector monitors all non-system tables. A fully-qualified table name is in the form schemaName.tableName. This can’t be used with Tables excluded.
    • Tables excluded: Enter a comma-separated list of fully-qualified table identifiers for the connector to ignore. A fully-qualified table name is in the form schemaName.tableName. This can’t be used with Tables included.
    • Columns Excluded: An optional, comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. Fully-qualified names for columns are in the form databaseName.tableName.columnName.
    • Snapshot mode: Specifies the criteria for performing a database snapshot when the connector starts.
      • The default option is initial. When selected, the connector takes a snapshot of the structure and data from captured tables. This is useful if you want the topics populated with a complete representation of captured table data when the connector starts.
      • never: Specifies that the connector should never perform snapshots. When starting for the first time, the connector reads from the beginning of the binlog.
      • schema_only: The connector completes a snapshot of the schemas and not the data. This option is useful when you do not need the topics to contain a consistent snapshot of the data, but need them to have only the changes since the connector was started.
      • schema_only_recovery: A recovery option for a connector that has already been capturing changes. When you restart the connector, it recovers a corrupted or lost database history topic. You might use this option periodically to clean up a database history topic that has been growing unexpectedly. Database history topics require infinite retention.
      • when_needed: Specifies that the connector performs a snapshot whenever it determines one is needed.
    • Snapshot locking mode: Controls how long the connector holds a global read lock as it performs a snapshot. The default is minimal, which means the connector holds the global read lock (preventing any updates) for just the initial portion of the snapshot, while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table. This is accomplished using a REPEATABLE READ transaction, even when the read lock is off and other operations are updating the database. In some cases it may be desirable to block all writes for the entire duration of the snapshot. In a case like this, set this property to extended. The option none prevents the connector from acquiring any table locks during the snapshot. While this setting is allowed with all snapshot modes, it is safe to use only if no schema changes are happening while the snapshot is running. Note that for tables defined with the MyISAM engine, the tables are locked regardless of this property’s setting, since MyISAM acquires a table lock. This behavior is unlike the InnoDB engine, which acquires row level locks.
    • Tombstones on delete: Configure whether a tombstone event should be generated after a delete event. The default is true.
  5. Enter values for the following properties:

    • Poll interval (ms): The time in milliseconds that the connector waits before polling for new CDC events. Defaults to 1000 ms (1 second).
    • Max batch size: Integer that defines the maximum batch size to process each iteration. Defaults to 1000 events.
  6. Select the values for the following properties:

    • Output message format (data coming from the connector): AVRO, JSON (schemaless), JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

    • After-state only: (Optional) Defaults to true, which results in the Kafka record having only the record state from change events applied. Select false to maintain the prior record states after applying the change events.

    • JSON output decimal format: (Optional) Defaults to BASE64.

  7. Enter the number of tasks in use by the connector. Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

  8. Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.
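
The selections above correspond to connector configuration properties described later on this page. The following is a minimal sketch of the equivalent JSON fragment, shown with the documented default values; output.data.format is set to AVRO here only as an example, and your actual values depend on the selections you made:

{
  "database.ssl.mode": "prefer",
  "snapshot.mode": "initial",
  "snapshot.locking.mode": "minimal",
  "tombstones.on.delete": "true",
  "poll.interval.ms": "1000",
  "max.batch.size": "1000",
  "output.data.format": "AVRO",
  "after.state.only": "true",
  "json.output.decimal.format": "BASE64",
  "tasks.max": "1"
}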

See the MySQL CDC Source connector configuration properties for properties and definitions.

Step 5: Launch the connector.

Verify the connection details and click Launch.

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running. It may take a few minutes.

Step 7: Check the Kafka topic.

After the connector is running, verify that messages are populating your Kafka topic.

Note

A topic named dbhistory.<database.server.name>.<connect-id> is automatically created. The topic name is based on the database.history.kafka.topic property (which may be configured). This topic has one partition.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.


Using the Confluent Cloud CLI

Complete the following steps to set up and run the connector using the Confluent Cloud CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors.

Enter the following command to list available connectors:

ccloud connector-catalog list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

ccloud connector-catalog describe <connector-catalog-name>

For example:

ccloud connector-catalog describe MySqlCdcSource

Example output:

Following are the required configs:
connector.class: MySqlCdcSource
name
kafka.api.key
kafka.api.secret
database.hostname
database.user
database.server.name
output.data.format
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties, along with several common optional properties.

{
  "connector.class": "MySqlCdcSource",
  "name": "MySqlCdcSourceConnector_0",
  "kafka.api.key": "****************",
  "kafka.api.secret": "****************************************************************",
  "database.hostname": "database-2.<host-ID>.us-west-2.rds.amazonaws.com",
  "database.port": "3306",
  "database.user": "admin",
  "database.password": "**********",
  "database.server.name": "mysql",
  "database.whitelist": "employee",
  "table.includelist":"employees.departments,
  "snapshot.mode": "initial",
  "output.data.format": "AVRO",
  "tasks.max": "1"
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "database.ssl.mode": The default option prefer is enabled if database.ssl.mode is not added to the connector configuration. When prefer is enabled, the connector attempts to use an encrypted connection to the database server. The options prefer and require use a secure (encrypted) connection. The connector fails if a secure connection cannot be established. These modes do not do Certification Authority (CA) validation.
  • "table.includelist": (Optional) Enter a comma-separated list of fully-qualified table identifiers for the connector to monitor. By default, the connector monitors all non-system tables. A fully-qualified table name is in the form schemaName.tableName.
  • "column.exclude.list": (Optional) A comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. Fully-qualified names for columns are in the form databaseName.tableName.columnName.
  • "snapshot.mode": (Optional) Specifies the criteria for performing a database snapshot when the connector starts.
    • The default option is initial. When selected, the connector takes a snapshot of the structure and data from captured tables. This is useful if you want the topics populated with a complete representation of captured table data when the connector starts.
    • never: Specifies that the connector should never perform snapshots. When starting for the first time, the connector reads from the beginning of the binlog.
    • when_needed: Specifies that the connector performs a snapshot whenever it determines one is needed.
    • schema_only: Specifies that the connector completes a snapshot of the schemas and not the data. This option is useful when you do not need the topics to contain a consistent snapshot of the data, but need them to have only the changes since the connector was started.
    • schema_only_recovery: A recovery option for a connector that has already been capturing changes. When you restart the connector, it recovers a corrupted or lost database history topic. You might use this option periodically to clean up a database history topic that has been growing unexpectedly. Database history topics require infinite retention.
  • "snapshot.locking.mode": (Optional) Controls how long the connector holds a global read lock as it performs a snapshot. The default is minimal, which means the connector holds the global read lock (preventing any updates) for just the initial portion of the snapshot, while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table. This is accomplished using a REPEATABLE READ transaction, even when the read lock is off and other operations are updating the database. In some cases it may be desirable to block all writes for the entire duration of the snapshot. In a case like this, set this property to extended. The option none prevents the connector from acquiring any table locks during the snapshot. While this setting is allowed with all snapshot modes, it is safe to use only if no schema changes are happening while the snapshot is running. Note that for tables defined with the MyISAM engine, the tables are locked regardless of this property’s setting, since MyISAM acquires a table lock. This behavior is unlike the InnoDB engine, which acquires row level locks.
  • "output.data.format": Sets the output message format (data coming from the connector). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • "after.state.only": (Optional) Defaults to true, which results in the Kafka record having only the record state from change events applied. Enter false to maintain the prior record states after applying the change events.
  • "json.output.decimal.format": (Optional) Defaults to BASE64. Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:
    • BASE64 to serialize DECIMAL logical types as base64 encoded binary data.
    • NUMERIC to serialize Connect DECIMAL logical type values in JSON or JSON_SR as a number representing the decimal value.
  • "tasks.max": Enter the number of tasks in use by the connector. Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

See the MySQL CDC Source connector configuration properties for properties and definitions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

ccloud connector create --config <file-name>.json

For example:

ccloud connector create --config mysql-cdc-source.json

Example output:

Created connector MySqlCdcSourceConnector_0 lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

ccloud connector list

Example output:

ID          |            Name             | Status  |  Type
+-----------+-----------------------------+---------+-------+
lcc-ix4dl   | MySqlCdcSourceConnector_0   | RUNNING | source

Step 6: Check the Kafka topic.

After the connector is running, verify that messages are populating your Kafka topic.

Note

A topic named dbhistory.<database.server.name>.<connect-id> is automatically created. The topic name is based on the database.history.kafka.topic property (which may be configured). This topic has one partition.
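
For example, with "database.server.name": "mysql" and the connector ID lcc-ix4dl shown in the output above, the history topic would be named dbhistory.mysql.lcc-ix4dl (this assumes <connect-id> is the connector's lcc ID).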

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Configuration Properties

The following connector configuration properties can be used with the MySQL CDC Source connector for Confluent Cloud.

database.hostname

IP address or hostname of the MySQL database server.

  • Type: String
  • Importance: High
database.port

Integer port number of the MySQL database server.

  • Type: Integer
  • Importance: Low
  • Default: 3306
database.user

Username to use when connecting to the MySQL database server.

  • Type: String
  • Importance: High
database.password

Password to use when connecting to the MySQL database server.

  • Type: Password
  • Importance: High
database.dbname

The name of the MySQL database from which to stream changes.

  • Type: String
  • Importance: High
database.server.name

The logical name of the MySQL database server cluster. This logical name forms the namespace and is used in all Kafka topic names and Connect schema names. If the Avro data format is used, the logical name is also used for the namespaces of the corresponding Avro schema. Kafka topics are created with the prefix database.server.name. Only alphanumeric characters, underscores, hyphens, and periods (dots) are allowed.

  • Type: String
  • Importance: High
database.ssl.mode

The default option prefer is used if database.ssl.mode is not added to the connector configuration. When prefer is enabled, the connector attempts to use an encrypted connection to the database server, falling back to an unencrypted connection if an encrypted connection cannot be established. When require is set, the connector uses a secure (encrypted) connection and fails if one cannot be established. Neither mode performs Certificate Authority (CA) validation.

  • Type: String
  • Default: prefer
  • Importance: High
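
For example, to require an encrypted connection (and fail rather than fall back to an unencrypted one), set the property explicitly in the connector configuration:

"database.ssl.mode": "require"
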
database.include.list

An optional comma-separated list of strings that match database names to be monitored. Any database name not included in the list is excluded from monitoring. By default all databases are monitored. May not be used with database.exclude.list.

  • Type: List of strings
  • Importance: Medium
database.exclude.list

An optional comma-separated list of strings that match database names to be excluded from monitoring. Any database name not included in the list is monitored. May not be used with database.include.list.

  • Type: List of strings
  • Importance: Medium
table.include.list

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored. Any table not included is excluded from monitoring. Each identifier is in the form schemaName.tableName. By default the connector monitors every non-system table in each monitored schema. May not be used with table.exclude.list.

  • Type: List of strings
  • Importance: Low
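
Because each entry in the list is a regular expression, the dot separator can be escaped for an exact match; in JSON, the backslash itself must be doubled. A minimal sketch, assuming a hypothetical employees schema containing a departments table plus tables prefixed with dept_:

"table.include.list": "employees\\.departments,employees\\.dept_.*"
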
table.exclude.list

An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring. An excluded table is not monitored. Each identifier is in the form schemaName.tableName. May not be used with table.include.list.

  • Type: List of Strings
  • Importance: Low
snapshot.mode

Specifies the criteria for running a snapshot when the connector starts. The default setting is initial. The initial option specifies that the connector runs a snapshot only when no offsets have been recorded for the logical server name. The when_needed option specifies that the connector runs a snapshot upon startup whenever necessary. This is typically when no offsets are available, or when a previously recorded offset specifies a binlog location or GTID that is not available in the server. The never option specifies that the connector should never use snapshots and that when the connector starts with a logical server name, the connector should read from the beginning of the binlog. Use the never option with care, as it is only valid when the binlog is guaranteed to contain the entire history of the database.

The schema_only option performs a snapshot of the schemas and not the data. This setting is useful when you do not need the topics to contain a consistent snapshot of the data but need them to have only the changes since the connector was started. The schema_only_recovery option is a recovery setting for a connector that has already been capturing changes. When you restart the connector, this setting enables recovery of a corrupted or lost database history topic. You might set it periodically to “clean up” a database history topic that has been growing unexpectedly. Database history topics require infinite retention.

  • Type: String
  • Importance: Medium
  • Default: initial
  • Valid values: [initial, when_needed, never, schema_only, schema_only_recovery]
snapshot.locking.mode

Controls how long the connector holds onto the global read lock while it is performing a snapshot. The default is minimal, which means the connector holds the global read lock (preventing any updates) for just the initial portion of the snapshot, while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table. This is accomplished using a REPEATABLE READ transaction, even when the lock is no longer held, and other operations are updating the database. However, in some cases it may be desirable to block all writes for the entire duration of the snapshot. For this situation, set this property to extended. The option none prevents the connector from acquiring any table locks during the snapshot. While this setting is allowed with all snapshot modes, it is safe to use only if no schema changes are happening while the snapshot is running. Note that for tables defined with the MyISAM engine, the tables are locked regardless of this property’s setting, since MyISAM acquires a table lock. This behavior is unlike the InnoDB engine, which acquires row level locks.

  • Type: String
  • Default: minimal
  • Valid values: [minimal, extended, none]
tombstones.on.delete

Controls whether a tombstone event should be generated after a delete event. When set to true (default), the delete operations are represented by a delete event and a subsequent tombstone event. When set to false, only a delete event is sent. Emitting a tombstone event allows Kafka to completely delete all events pertaining to the given key when the source record is deleted.

  • Type: String
  • Importance: High
  • Default: true
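
For example, when a row with (hypothetical) primary key 1001 is deleted, the connector emits a delete event with key 1001 followed by a tombstone record with the same key and a null value. Log compaction can then remove all records for that key.
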
column.exclude.list

An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values.

  • Type: List of strings
  • Importance: Low
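
A minimal sketch, assuming a hypothetical employees database: the first expression excludes the salary column of the staff table, and the second excludes any column whose name ends in _ssn (note the doubled backslashes that JSON requires):

"column.exclude.list": "employees\\.staff\\.salary,employees\\..*\\..*_ssn"
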
poll.interval.ms

Positive integer value that specifies the number of milliseconds (ms) the connector should wait before polling for new change events. Defaults to 1000 ms.

  • Type: Integer
  • Importance: Low
  • Default: 1000
max.batch.size

Positive integer value that specifies the maximum size of each batch of change events that may be processed.

  • Type: Integer
  • Importance: Low
  • Default: 1000
event.processing.failure.handling.mode

Specifies how the connector should react to exceptions during deserialization of binlog events.

  • Type: String
  • Importance: Low
  • Default: fail
  • Valid values: [fail, skip, warn]
heartbeat.interval.ms

Controls how frequently the connector sends heartbeat messages to a Kafka topic. The default value 0 specifies that the connector does not send heartbeat messages.

  • Type: Integer
  • Importance: Low
  • Default: 0
heartbeat.action.query

Adds a query that the connector runs on the source database when the connector sends a heartbeat message.

  • Type: String
  • Importance: Low
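
A minimal sketch, assuming a hypothetical heartbeat table that exists in the source database and is writable by the connector's database user:

"heartbeat.interval.ms": "60000",
"heartbeat.action.query": "UPDATE heartbeat SET last_beat = NOW()"

A write like this produces a binlog event on each heartbeat, which can help the connector's offsets advance when the captured tables change infrequently.
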
database.history.skip.unparseable.ddl

A Boolean value that specifies whether the connector should ignore malformed or unknown database statements (true), or stop processing so a human can fix the issue (false). Defaults to false. Consider setting this to true to ignore unparseable statements.

  • Type: String
  • Importance: Low
  • Default: false
  • Valid values: [true, false]
after.state.only

Defaults to true, which results in the Kafka record having only the record state from change events applied. Enter false to maintain the prior record states after applying the change events.

  • Type: String
  • Importance: Low
  • Default: true
  • Valid values: [true, false]
json.output.decimal.format

Specify the JSON or JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals. The BASE64 option designates that the connector serializes DECIMAL logical types as base64-encoded binary data. The NUMERIC option designates that the connector serializes Connect DECIMAL logical type values in JSON or JSON_SR as a number representing the decimal value.

  • Type: String
  • Importance: Low
  • Default: BASE64
  • Valid values: [BASE64, NUMERIC]
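
For example, assuming a DECIMAL value of 12.34 with scale 2: the unscaled value 1234 is encoded as the bytes 0x04 0xD2, so BASE64 serializes the field as the string "BNI=", while NUMERIC serializes it as the number 12.34.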

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.
