PostgreSQL CDC Source Connector (Debezium) for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see Debezium PostgreSQL Source Connector for Confluent Platform.

The Kafka Connect PostgreSQL Change Data Capture (CDC) Source connector (Debezium) for Confluent Cloud can obtain a snapshot of the existing data in a PostgreSQL database and then monitor and record all subsequent row-level changes to that data. The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data formats. All of the events for each table are recorded in a separate Apache Kafka® topic. The events can then be easily consumed by applications and services.

Note

See the Debezium docs for more information.

Features

The PostgreSQL CDC Source connector (Debezium) provides the following features:

  • Topics created automatically: The connector automatically creates Kafka topics using the naming convention: <database.server.name>.<schemaName>.<tableName>. For example, with database.server.name set to cdc, changes to the table public.passengers are written to the topic cdc.public.passengers. The topics are created with the properties: topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3. For more information, see Maximum message size.
  • Logical decoding plugins supported: wal2json, wal2json_rds, wal2json_streaming, wal2json_rds_streaming, pgoutput, decoderbufs. The default used is pgoutput.
  • Database authentication: Uses password authentication.
  • SSL support: Supports one-way SSL.
  • Output data formats: The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output Kafka record value format. It supports Avro, JSON Schema, Protobuf, JSON (schemaless), and String output record key format. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
  • Tasks per connector: Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").
  • Select configuration properties:
    • Tables included and Tables excluded: Allows you to set whether a table is or is not monitored for changes. By default, the connector monitors every non-system table.
    • Snapshot mode: Allows you to specify the criteria for running a snapshot.
    • Tombstones on delete: Allows you to configure whether a tombstone event should be generated after a delete event. Default is true.
    • Other configuration properties:
      • poll.interval.ms
      • max.batch.size
      • max.queue.size

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Limitations

Be sure to review the following information.

Maximum message size

This connector creates topics automatically. When it creates topics, the internal connector configuration property max.message.size is set to the following:

  • Basic cluster: 2 MB
  • Standard cluster: 2 MB
  • Dedicated cluster: 20 MB

For more information about Confluent Cloud clusters, see Confluent Cloud Features and Limits by Cluster Type.

Quick Start

Use this quick start to get up and running with the Confluent Cloud PostgreSQL CDC Source (Debezium) connector. The quick start provides the basics of selecting the connector and configuring it to obtain a snapshot of the existing data in a PostgreSQL database and then monitoring and recording all subsequent row-level changes.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).

  • The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.

  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.

  • You cannot use a Basic tier database with Azure. You must use a general purpose or memory-optimized PostgreSQL database.

  • The PostgreSQL database must be configured for CDC. For details, see PostgreSQL in the Cloud.

  • Clients from Azure Virtual Networks are not allowed to access the server by default. Please ensure your Azure Virtual Network is correctly configured and that Allow access to Azure Services is enabled.

  • Public access may be required for your database. See Network Access for details. In the AWS Management Console, for example, this is the Public access setting you enable when setting up a PostgreSQL database.

  • A parameter group with the property rds.logical_replication=1 is required. Once the parameter group is created, you must reboot the database. For example commands that verify and enable this setting, see the sketch after this list.

  • For networking considerations, see Networking and DNS Considerations. To use static egress IPs, see Static Egress IP Addresses. In the AWS Management Console, for example, this means creating security group rules for the VPC that open inbound traffic to the database.

    Note

    See your specific cloud platform documentation for how to configure security rules for your VPC.

  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
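
The following commands are a minimal sketch of how to verify and enable logical decoding for the database prerequisites above. They assume an Amazon RDS instance, psql access using the connection details shown later in this quick start, and a placeholder parameter group name; adjust them for your environment.

# Verify that logical decoding is enabled (the result should be "logical"):
psql -h <host> -U postgres -d postgres -c "SHOW wal_level;"

# On Amazon RDS, rds.logical_replication=1 in the instance's parameter group
# enables logical decoding. Setting it with the AWS CLI (reboot required):
aws rds modify-db-parameter-group \
  --db-parameter-group-name <your-parameter-group> \
  --parameters "ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot"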

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the PostgreSQL CDC Source connector icon.

Step 4: Set up the connection.

Complete the following and click Continue.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Enter a connector name.

  2. Select the way you want to provide Kafka Cluster credentials. You can either select a service account resource ID or you can enter an API key and secret (or generate these in the Cloud Console).

  3. Add the database connection details.

    Important

    Do not include jdbc:xxxx:// in the connection hostname property. A valid host entry looks like debezium-1.<host-id>.us-east-2.rds.amazonaws.com, without a protocol prefix.

    If you do not choose an SSL mode, disable is the default option used. When require is selected, the connector uses a secure (encrypted) connection. The connector fails if a secure connection cannot be established. This mode does not do Certification Authority (CA) validation.

  4. Add optional Database details.

    • Tables included: Enter a comma-separated list of fully-qualified table identifiers for the connector to monitor. By default, the connector monitors all non-system tables. A fully-qualified table name is in the form schemaName.tableName. This property cannot be used with the property Tables excluded.
    • Tables excluded: Enter a comma-separated list of fully-qualified table identifiers for the connector to ignore. A fully-qualified table name is in the form schemaName.tableName. This property cannot be used with the property Tables included.
    • Snapshot mode: Specifies the criteria for performing a database snapshot when the connector starts.
      • The default setting is initial. When selected, the connector takes a snapshot of the structure and data from captured tables. This is useful if you want the topics populated with a complete representation of captured table data when the connector starts.
      • never specifies that the connector should never perform snapshots, and that when starting for the first time, the connector starts reading from where it last left off.
      • exported specifies that the database snapshot is based on the point in time when a replication slot was created. Note that this is a good way to perform a lock-free snapshot (see Snapshot isolation).
    • Propagate Source Types by Data Type: Enter a comma-separated list of regular expressions matching database-specific data types. The property adds the data type’s original type and original length (as parameters) to the corresponding field schemas in the emitted change records.
    • Tombstones on delete: Set whether a tombstone event should be generated after a delete event. The default is true.
    • Columns Excluded: Enter a comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. Fully-qualified names for columns are in the form databaseName.tableName.columnName.
    • Plugin name: Select the plugin to use. Options are wal2json, wal2json_rds, wal2json_streaming, wal2json_rds_streaming, pgoutput, and decoderbufs. The default is pgoutput. For more information, see PostgreSQL logical decoding plugin.
    • Slot name: Enter the name of the PostgreSQL logical decoding slot created for streaming changes from a plugin and for a database. The slot name can contain only lower-case letters, numbers, and the underscore character. The default value is debezium.
  5. Add optional Connection details.

    • Poll interval (ms): Enter the number of milliseconds (ms) the connector should wait during each iteration for new change events to appear. Defaults to 1000 ms (1 second).
    • Max batch size: Enter the maximum number of events the connector batches at the end of each iteration. Defaults to 1000 events.
    • Event processing failure handling mode: Specify how the connector reacts to exceptions when processing change events. Defaults to fail. Select skip or warn to skip the event or issue a warning, respectively.
    • Heartbeat interval (ms): Set the interval time in milliseconds (ms) between heartbeat messages that the connector sends to a Kafka topic. Defaults to 0, which means the connector does not send heartbeat messages.
    • Heartbeat action query: Add a query that the connector executes on the source database when the connector sends a heartbeat message. For more information, see the Debezium docs.
  6. Add optional Connector details:

    • Provide transaction metadata: Select whether transaction metadata is enabled. Transaction metadata is stored in a dedicated Kafka topic. Defaults to false.
    • Decimal handling mode: Specify how DECIMAL and NUMERIC columns are represented in change events. precise (the default) uses java.math.BigDecimal to represent values. The values are encoded in change events using a binary representation and the Connect org.apache.kafka.connect.data.Decimal data type. Select string to use string type to represent values. double represents values using Java’s double data type, which may lose precision but is much easier to use in consumers.
    • Binary handling mode: Specify how binary (blob, binary) columns are represented in change events. Select bytes (the default) to represent binary data in byte array format. Select base64 to represent binary data in base64-encoded string format. Select hex to represent binary data in hex-encoded (base16) string format.
    • Time precision mode: Time, date, and timestamps can be represented with different kinds of precision. Select adaptive (the default) to base the precision for time, date, and timestamp values on the database column’s precision. adaptive_time_microseconds is essentially the same as adaptive mode, with the exception that TIME fields always use microseconds precision. connect always represents time, date, and timestamp values using Connect’s built-in representations for Time, Date, and Timestamp. connect uses millisecond precision regardless of what precision is used for the database columns. For more information, see Temporal types.
    • Topic cleanup policy: Set the topic retention cleanup policy. Select delete (the default) to discard old topics. Select compact to enable log compaction on the topic.
    • HStore handling mode: Specify how HSTORE columns are represented in change events. Select json (the default) to represent these columns as JSON. Select map to represent these columns as map data.
    • Interval handling mode: Specify how INTERVAL columns are represented in change events. Select numeric (the default) to represent these columns as numeric data. Select string to represent these columns as string data.
  7. Select values for the following Output properties:

    • Output Kafka record value format: (coming from the connector): AVRO, JSON (schemaless), JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based record format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

    • After-state only: (Optional) Defaults to true, which results in the Kafka record having only the record state from change events applied. Select false to maintain the prior record states after applying the change events. For additional details, see After-state only output limitation.

    • Output Kafka record key format (coming from the connector): AVRO, JSON (schemaless), JSON_SR (JSON Schema), PROTOBUF, or String. A valid schema must be available in Schema Registry to use a schema-based record format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

    • JSON output decimal format: (Optional) Defaults to BASE64. Specifies the JSON/JSON_SR serialization format for Connect DECIMAL logical type values: BASE64 or NUMERIC.
  8. Enter the number of tasks in use by the connector. Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

  9. Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.

See Configuration Properties for all property values and definitions.

Step 5: Launch the connector.

Verify the connection details and click Launch.

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running. It may take a few minutes.

Step 7: Check the Kafka topic.

After the connector is running, verify that messages are populating your Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

  • Make sure you have all your prerequisites completed.
  • The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.

Step 1: List the available connectors.

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

confluent connect plugin describe <connector-catalog-name>

For example:

confluent connect plugin describe PostgresCdcSource

Example output:

Following are the required configs:
connector.class: PostgresCdcSource
name
kafka.auth.mode
kafka.api.key
kafka.api.secret
database.hostname
database.user
database.dbname
database.server.name
output.data.format
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "PostgresCdcSource",
  "name": "PostgresCdcSourceConnector_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "****************",
  "kafka.api.secret": "****************************************************************",
  "database.hostname": "debezium-1.<host-id>.us-east-2.rds.amazonaws.com",
  "database.port": "5432",
  "database.user": "postgres",
  "database.password": "**************",
  "database.dbname": "postgres",
  "database.server.name": "cdc",
  "table.include.list":"public.passengers",
  "plugin.name": "pgoutput",
  "output.data.format": "JSON",
  "tasks.max": "1"
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "table.includelist": (Optional) Enter a comma-separated list of fully-qualified table identifiers for the connector to monitor. By default, the connector monitors all non-system tables. A fully-qualified table name is in the form schemaName.tableName.

  • "database.sslmode": If not entered, disable is the default option used. If you enter "database.sslmode" : "require", the connector uses a secure (encrypted) connection. The connector fails if a secure connection cannot be established. This mode does not do Certification Authority (CA) validation.

  • "output.data.format": Sets the output record format (data coming from the connector). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based record format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  • "after.state.only": (Optional) Defaults to true, which results in the Kafka record having only the record state from change events applied. Enter false to maintain the prior record states after applying the change events. For additional details, see After-state only output limitation.

  • "json.output.decimal.format": (Optional) Defaults to BASE64. Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

    • BASE64 to serialize DECIMAL logical types as base64 encoded binary data.
    • NUMERIC to serialize Connect DECIMAL logical type values in JSON or JSON_SR as a number representing the decimal value.
  • "column.exclude.list": (Optional) A comma-separated list of regular expressions that match the fully-qualified names of columns to exclude from change event record values. Fully-qualified names for columns are in the form databaseName.tableName.columnName.

  • "plugin.name": (Optional) Sets the plugin to use. Options are wal2json, wal2json_rds, wal2json_streaming, wal2json_rds_streaming, pgoutput, and decoderbufs. The default is pgoutput.

  • "slot.name": (Optional) The name of the PostgreSQL logical decoding slot created for streaming changes from a plugin and for a database. The slot name can contain only lower-case letters, numbers, and the underscore character. The default value is debezium.

  • "snapshot.mode": (Optional) Specifies the criteria for performing a database snapshot when the connector starts.

    • The default setting is initial. When selected, the connector takes a snapshot of the structure and data from captured tables. This is useful if you want the topics populated with a complete representation of captured table data when the connector starts.
    • never specifies that the connector should never perform snapshots, and that when starting for the first time, the connector starts reading from where it last left off.
    • exported specifies that the database snapshot is based on the point in time when a replication slot was created. Note that this is a good way to perform a lock-free snapshot (see Snapshot isolation).
  • "tasks.max": Enter the number of tasks in use by the connector. Organizations can run multiple connectors with a limit of one task per connector (that is, "tasks.max": "1").

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

See Configuration Properties for all property values and definitions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

confluent connect create --config <file-name>.json

For example:

confluent connect create --config postgresql-cdc-source.json

Example output:

Created connector PostgresCdcSourceConnector_0 lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

confluent connect list

Example output:

ID          |            Name              | Status  |  Type
+-----------+------------------------------+---------+-------+
lcc-ix4dl   | PostgresCdcSourceConnector_0 | RUNNING | source

Step 6: Check the Kafka topic.

After the connector is running, verify that messages are populating your Kafka topic.
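
For example, you can consume from one of the connector's topics with a command like the following. This sketch assumes the database.server.name value cdc and the table public.passengers used in the sample configuration above; topic names follow the <database.server.name>.<schemaName>.<tableName> convention.

confluent kafka topic consume cdc.public.passengers --from-beginning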

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Configuration Properties

Use the following configuration properties with this connector.

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key
  • Type: password
  • Importance: high
kafka.service.account.id

The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret
  • Type: password
  • Importance: high

How should we connect to your database?

database.hostname

The address of the PostgreSQL server.

  • Type: string
  • Importance: high
database.port

Port number of the PostgreSQL server.

  • Type: int
  • Valid Values: [0,…,65535]
  • Importance: high
database.user

The name of the PostgreSQL user that has the required authorization.

  • Type: string
  • Importance: high
database.password

The password for the PostgreSQL user that has the required authorization.

  • Type: password
  • Importance: high
database.dbname

The name of the PostgreSQL database to connect to.

  • Type: string
  • Importance: high
database.server.name

The logical name of the PostgreSQL server/cluster. This logical name forms a namespace and is used in all the names of the Kafka topics and the Kafka Connect schema names. The logical name is also used for the namespaces of the corresponding Avro schema, if the Avro data format is used. Kafka topics are created with the prefix database.server.name. Only alphanumeric characters, underscores, hyphens and dots are allowed.

  • Type: string
  • Importance: high
database.sslmode

The SSL mode to use when connecting to your database. require establishes an encrypted connection, or fails if one cannot be made for any reason. If your database server enforces SSL, use require. disable uses an unencrypted connection.

  • Type: string
  • Default: disable
  • Importance: low

Database details

table.include.list

An optional comma-separated list of strings that match fully-qualified table identifiers for tables to be monitored. Any table not included in this property is excluded from monitoring. Each identifier is in the form schemaName.tableName. By default, the connector monitors every non-system table in each monitored schema. Cannot be used together with table.exclude.list (Tables excluded).

  • Type: list
  • Importance: medium
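For example, a value such as the following (using the public.passengers table from the quick start plus a hypothetical second table) restricts monitoring to exactly two tables:

"table.include.list": "public.passengers,public.crew"
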
table.exclude.list

An optional comma-separated list of strings that match fully-qualified table identifiers for tables to be excluded from monitoring. Any table not included in this property is monitored. Each identifier is in the form schemaName.tableName. Cannot be used together with table.include.list (Tables included).

  • Type: list
  • Importance: medium
snapshot.mode

Specifies the criteria for running a snapshot upon startup of the connector. The default setting is initial, which specifies that the connector runs a snapshot only when no offsets have been recorded for the logical server name. The never option specifies that the connector should never use snapshots and that, upon first startup with a logical server name, the connector should read from where it last left off (the last LSN position) or start from the beginning, from the point of view of the logical replication slot. The exported option specifies that the database snapshot is based on the point in time when the replication slot was created and is an excellent way to perform the snapshot in a lock-free way.

  • Type: string
  • Default: initial
  • Importance: low
datatype.propagate.source.type

A comma-separated list of regular expressions matching the database-specific data type names that adds the data type’s original type and original length as parameters to the corresponding field schemas in the emitted change records.

  • Type: list
  • Importance: low
tombstones.on.delete

Controls whether a tombstone event should be generated after a delete event. When set to true, delete operations are represented by a delete event and a subsequent tombstone event. When set to false, only a delete event is sent. Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key after the source record is deleted.

  • Type: boolean
  • Default: true
  • Importance: high
column.exclude.list

Regular expressions matching the fully-qualified names of columns to exclude from change event record values.

  • Type: list
  • Importance: medium
plugin.name

The name of the Postgres logical decoding plugin installed on the server. Note: pgoutput is only available in PostgreSQL 10+.

  • Type: string
  • Default: pgoutput
  • Importance: high
slot.name

The name of the PostgreSQL logical decoding slot that was created for streaming changes from a particular plug-in and for a particular database/schema. The server uses this slot to stream events to the connector.

  • Type: string
  • Default: debezium
  • Valid Values: Must match the regex ^[a-z0-9_]+$
  • Importance: low

Connection details

poll.interval.ms

Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second.

  • Type: int
  • Default: 1000 (1 second)
  • Valid Values: [1,…]
  • Importance: low
max.batch.size

Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector.

  • Type: int
  • Default: 1000
  • Valid Values: [1,…,5000]
  • Importance: low
event.processing.failure.handling.mode

Specifies how the connector should react to exceptions during processing of change events.

  • Type: string
  • Default: fail
  • Valid Values: fail, skip, warn
  • Importance: low
heartbeat.interval.ms

Controls how frequently the connector sends heartbeat messages to a Kafka topic. The default value of 0 means that the connector does not send heartbeat messages.

  • Type: int
  • Default: 0
  • Valid Values: [0,…]
  • Importance: low
heartbeat.action.query

Specifies a query that the connector executes on the source database when the connector sends a heartbeat message.

  • Type: string
  • Importance: low
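
For example, a value like the following sketch can be used. It assumes a dedicated heartbeat table (here named heartbeat) that you have created in the source database, and that heartbeat.interval.ms is set to a value greater than 0:

"heartbeat.action.query": "INSERT INTO heartbeat (ts) VALUES (now())"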

Connector details

provide.transaction.metadata

Stores transaction metadata information in a dedicated topic and enables the transaction metadata extraction together with event counting.

  • Type: boolean
  • Default: false
  • Importance: low
decimal.handling.mode

Specifies how DECIMAL and NUMERIC columns should be represented in change events, including: ‘precise’ (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect’s ‘org.apache.kafka.connect.data.Decimal’ type; ‘string’ uses string to represent values; ‘double’ represents values using Java’s ‘double’, which may not offer the precision but will be far easier to use in consumers.

  • Type: string
  • Default: precise
  • Valid Values: double, precise, string
  • Importance: medium
binary.handling.mode

Specifies how binary (blob, binary, etc.) columns should be represented in change events, including: ‘bytes’ (the default) represents binary data as byte array; ‘base64’ represents binary data as base64-encoded string; ‘hex’ represents binary data as hex-encoded (base16) string.

  • Type: string
  • Default: bytes
  • Valid Values: base64, bytes, hex
  • Importance: low
time.precision.mode

Time, date, and timestamps can be represented with different kinds of precisions, including: ‘adaptive’ (the default) bases the precision of time, date, and timestamp values on the database column’s precision; ‘adaptive_time_microseconds’ like ‘adaptive’ mode, but TIME fields always use microseconds precision; ‘connect’ always represents time, date, and timestamp values using Kafka Connect’s built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns’ precision.

  • Type: string
  • Default: adaptive
  • Valid Values: adaptive, adaptive_time_microseconds, connect
  • Importance: medium
cleanup.policy

Sets the topic retention cleanup policy.

  • Type: string
  • Default: delete
  • Valid Values: compact, delete
  • Importance: medium
hstore.handling.mode

Specifies how HSTORE columns should be represented in change events.

  • Type: string
  • Default: json
  • Valid Values: json, map
  • Importance: medium
interval.handling.mode

Specifies how INTERVAL columns should be represented in change events.

  • Type: string
  • Default: numeric
  • Valid Values: numeric, string
  • Importance: medium
schema.refresh.mode

Specify the conditions that trigger a refresh of the in-memory schema for a table. columns_diff (the default) is the safest mode, ensuring the in-memory schema stays in-sync with the database table’s schema at all times. columns_diff_exclude_unchanged_toast instructs the connector to refresh the in-memory schema cache if there is a discrepancy between it and the schema derived from the incoming message, unless unchanged TOASTable data fully accounts for the discrepancy.

  • Type: string
  • Default: columns_diff
  • Valid Values: columns_diff, columns_diff_exclude_unchanged_toast
  • Importance: medium

Output messages

output.data.format

Sets the output Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string
  • Importance: high
output.key.format

Sets the output Kafka record key format. Valid entries are AVRO, JSON_SR, PROTOBUF, STRING, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string
  • Default: JSON
  • Valid Values: AVRO, JSON, JSON_SR, PROTOBUF, STRING
  • Importance: high
after.state.only

Controls whether the generated Kafka record should contain only the state after applying change events.

  • Type: boolean
  • Default: true
  • Importance: low
json.output.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

  • BASE64 to serialize DECIMAL logical types as base64 encoded binary data.
  • NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Type: string
  • Default: BASE64
  • Importance: low

Number of tasks for this connector

tasks.max
  • Type: int
  • Valid Values: [1,…,1]
  • Importance: high

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.

After-state only output limitation

When a connector is configured with the property After-state only=false, you expect to see the previous values of all columns under before in the record. However, under certain conditions, before contains null or only some of the columns. If Protobuf is used, the record may not contain the before field at all. The following example shows this issue and provides a corrective action to take.

For example, the connector is configured with JSON and After-state only is set to false. When a record is updated in the PostgreSQL database, you may see a record similar to the following sample, where "before" is null.

{
  "before": null,
  "after": {
    "id": 5,
    "name": "Allen William Henry",
    "sex": "male",
    "age": 25,
    "sibsp": 0,
    "parch": 0,
    "created_at": "2018-01-02T15:22:14.831461Z"
   },
   "source": {
     "version": "1.3.1.Final",
     "connector": "postgresql",
     "name": "test",
     "ts_ms": 1621389097781,
     "snapshot": "false",
     "db": "postgres",
     "schema": "public",
     "table": "passengers",
     "txId": 572,
     "lsn": 872429856,
     "xmin": null
   },
   "op": "u",
   "ts_ms": 1621389098688,
   "transaction": null
}

For an updated record to contain the previous (before) values of all columns in the row, you need to modify the passengers table by running ALTER TABLE passengers REPLICA IDENTITY FULL. After you make this change in the PostgreSQL database, and records are updated, you should see records similar to the following sample. An example psql command is shown after the sample.

{
  "before": {
    "id": 8,
    "name": "Gosta Leonard",
    "sex": "male",
    "age": 2,
    "sibsp": 3,
    "parch": 1,
    "created_at": "2018-01-03T20:53:55.955056Z"
  },
  "after": {
    "id": 8,
    "name": "Gosta Leonard",
    "sex": "male",
    "age": 25,
    "sibsp": 3,
    "parch": 1,
    "created_at": "2018-01-03T20:53:55.955056Z"
  },
  "source": {
    "version": "1.3.1.Final",
    "connector": "postgresql",
    "name": "test",
    "ts_ms": 1621390542864,
    "snapshot": "false",
    "db": "postgres",
    "schema": "public",
    "table": "passengers",
    "txId": 581,
    "lsn": 1207967968,
    "xmin": null
  },
  "op": "u",
  "ts_ms": 1621390544032,
  "transaction": null
}
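
The following commands are a sketch of how to make this change and confirm it with psql, using the connection values and the public.passengers table from the examples above:

# Set full replica identity so change events include all column values in "before":
psql -h <host> -U postgres -d postgres -c "ALTER TABLE public.passengers REPLICA IDENTITY FULL;"

# Optionally confirm the setting ('f' means FULL):
psql -h <host> -U postgres -d postgres -c "SELECT relreplident FROM pg_class WHERE relname = 'passengers';"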