PostgreSQL Sink (JDBC) Connector for Confluent Cloud

The fully-managed PostgreSQL Sink connector for Confluent Cloud moves data from an Apache Kafka® topic to a PostgreSQL database. It writes data from a topic in Kafka to a table in the specified PostgreSQL database. Table auto-creation and limited auto-evolution are supported.

Features

The PostgreSQL Sink connector provides the following features:

  • Idempotent writes: The default insert.mode is INSERT. If it is configured as UPSERT, the connector will use upsert semantics rather than plain insert statements. Upsert semantics refer to atomically adding a new row or updating the existing row if there is a primary key constraint violation, which provides idempotence.

  • SSL support: Supports one-way SSL.

  • Schemas: The connector supports Avro, JSON Schema, and Protobuf input value formats. The connector supports Avro, JSON Schema, Protobuf, and String input key formats. Schema Registry must be enabled to use a Schema Registry-based format.

  • Primary key support: Supported PK modes are kafka, none, record_key, and record_value. Used in conjunction with the PK Fields property.

  • Table and column auto-creation: auto.create and auto.evolve are supported. If tables or columns are missing, they can be created automatically. Table names are created based on Kafka topic names. For more information, see Table names and Kafka topic names.

  • At least once delivery: This connector guarantees that records from the Kafka topic are delivered at least once.

  • Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.

  • Provider integration support: The connector supports Google Cloud’s native identity authorization and Microsoft Entra ID-based authentication using Confluent Provider Integration. For more information about provider integration setup, see the connector authentication documentation.

  • PostgreSQL JSON and JSONB: The connector supports sinking to PostgreSQL tables containing data stored as JSON or JSONB (JSON binary format). JSON or JSONB data should be stored as the STRING type in Kafka, and the matching columns should be defined as JSON or JSONB in PostgreSQL.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Limitations

Be sure to review the following information.

Table names and Kafka topic names

You can configure the connector to combine the value for table.name.format and the Kafka topic name. If the resulting combined value (table name) exceeds the maximum-permitted identifier length for the database version in use, the connector truncates the value to the permitted identifier length.

For example, PostgreSQL 14 uses 63 bytes as its default identifier length setting. If the value used for table.name.format and the Kafka topic name exceeds 63 characters, only the first 63 characters from the combined name are used.

For this reason, avoid running the connector with very long Kafka topic names and table names. If table names are truncated and the connector receives records from different upstream topics, those records can map to the same table name after truncation, resulting in a table name collision.
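
For example, with the default table.name.format of ${topic}, the following two hypothetical topic names share their first 63 characters:

orders.us-east-1.prod.customer-events.v2.partitioned-by-region-AAAA
orders.us-east-1.prod.customer-events.v2.partitioned-by-region-BBBB

Both names truncate to the same 63-character identifier, so the connector writes records from both topics to a single table.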

Note

You can expect this connector behavior for any interactions with the database, both DDL (table creation and evolution) and DML (insert, upsert, and delete).

Quick Start

Use this quick start to get up and running with the Confluent Cloud PostgreSQL sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to a PostgreSQL database.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.

    • Enter an existing service account resource ID.

    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.

    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
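
      For example, the following command creates an API key scoped to a specific cluster (the resource ID shown is a placeholder):

      confluent api-key create --resource lkc-123456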

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the PostgreSQL Sink connector card.

Step 4: Enter the connector details

Note

  • Ensure you have all your prerequisites completed.

  • An asterisk ( * ) designates a required entry.

At the Add PostgreSQL Sink Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.

To create a new topic, click +Add new topic.

  1. Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:

    • My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.

    • Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.

    • Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.

    Note

    Freight clusters support only service accounts for Kafka authentication.

  2. Click Continue.

  1. Configure the authentication properties:

    • Authentication method: How Confluent Cloud authenticates with Azure or GCP. Allowed values: Password, Microsoft Entra ID application, and Google service account impersonation.

    • Connection host: The JDBC connection host.

    • Connection port: The JDBC connection port.

    • Connection user: The JDBC connection user.

    • Connection password: The JDBC connection password.

    • Database name: The JDBC database name.

    • SSL mode: The SSL mode to use to connect to your database. Possible options are prefer, require, verify-ca, and verify-full.

      • prefer (default): Attempts to use a secure (encrypted) connection first and, failing that, an unencrypted connection.

      • require: Uses a secure (encrypted) connection, and fails if one cannot be established, but does not perform certificate validation on the server.

      • verify-ca: Uses SSL/TLS for encryption and performs certificate verification, but does not perform hostname verification.

      • verify-full: Uses SSL/TLS for encryption, certificate verification, and hostname verification.

    • SSL root cert: The server root certificate file used for certificate validation. Required only when the SSL mode is verify-ca or verify-full.

    Authentication method

    • Provider Integration: Select an existing integration that has access to your resource.

  2. Click Continue.

Note

Configuration properties that are not shown in the Cloud Console use the default values. See Configuration Properties for all property values and definitions.

  • Input Kafka record value format: Select an Input Kafka record value format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format.

  • Insert mode: Select the insertion mode to use:

    • INSERT: Use the standard INSERT row function. An error occurs if the row already exists in the table.

    • UPSERT: This mode is similar to INSERT. However, if the row already exists, the UPSERT function overwrites column values with the new values provided.

Show advanced configurations
  • Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.

  • Auto create table: Whether to automatically create the destination table if it is missing.

  • Auto add columns: Whether to automatically add columns in the table if they are missing.

  • Database timezone: Name of the JDBC timezone used in the connector when querying with time-based criteria. Defaults to UTC.

  • Table name format: A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name.

  • Timezone used for Date: Name of the JDBC timezone used in the connector when inserting DATE type values. Defaults to DB_TIMEZONE, which uses the timezone configured via db.timezone for backward compatibility. To avoid conversion for DATE type values, set this to UTC.

  • Table types: The comma-separated types of database tables to which the sink connector can write.

  • Timestamp Precision Mode: Convert the timestamp with the specified precision. If set to microseconds, the timestamp is converted to microsecond precision. If set to nanoseconds, the timestamp is converted to nanosecond precision.

  • Timestamp Fields: List of comma-separated record value timestamp field names that should be converted to timestamps. These fields are converted based on the precision mode specified in Timestamp Precision Mode. The timestamp fields included here must be of Long or String type; nested fields are not supported.

  • Fields included: List of comma-separated record value field names. If empty, all fields from the record value are used.

  • PK mode: The primary key mode.

  • PK Fields: List of comma-separated primary key field names.

  • When to quote SQL identifiers: When to quote table names, column names, and other identifiers in SQL statements.

  • Max rows per batch: The maximum number of rows to include in a single batch when writing to the database. Each batch executes its insert statements as individual updates. Use this setting to limit the amount of data buffered internally in the connector.

  • Input Kafka record key format: Sets the input Kafka record key format. This needs to be set to a proper format if using pk.mode=record_key. Valid entries are AVRO, JSON_SR, PROTOBUF, STRING. Note that you must have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Delete on null: Whether to treat null record values as deletes. Requires pk.mode to be record_key.

  • Date Calendar System: Controls the calendar used to interpret the time-since-epoch value in the Kafka topic record during conversion to DATE or TIMESTAMP.

    The ideal setting depends on whether the values in the source topic were populated using the old or new Java date and time APIs.

    • If you use LEGACY (the default), the connector uses the hybrid Gregorian/Julian calendar. This matches the default behavior of older Java date and time APIs.

    • If you use PROLEPTIC_GREGORIAN, the connector uses the proleptic Gregorian calendar (which extends Gregorian rules backward indefinitely). This matches the behavior of modern Java date/time APIs (java.time).

    Warning

    Changing this configuration on an existing connector might lead to a drift in the DATE/TIMESTAMP column’s values populated in the sink database.
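
As an illustration of the timestamp and date settings above, the following sketch shows the corresponding configuration properties (see Configuration Properties); the field names created_at and updated_at are hypothetical:

{
  "timestamp.precision.mode": "microseconds",
  "timestamp.fields.list": "created_at,updated_at",
  "date.timezone": "UTC",
  "date.calendar.system": "PROLEPTIC_GREGORIAN"
}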

Additional Configs

  • Value Converter Schema ID Deserializer: The class name of the schema ID deserializer for values. This is used to deserialize schema IDs from the message headers.

  • Value Converter Reference Subject Name Strategy: Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Schema ID For Value Converter: The schema ID to use for deserialization when using ConfigSchemaIdDeserializer. This is used to specify a fixed schema ID to be used for deserializing message values. Only applicable when value.converter.value.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Errors Tolerance: Use this property to configure the connector’s error handling behavior. WARNING: Use this property with caution for source connectors, as it may lead to data loss. If you set this property to ‘all’, the connector does not fail on errant records, but instead logs them (and sends them to the DLQ for sink connectors) and continues processing. If you set this property to ‘none’, the connector task fails on errant records.

  • Value Converter Ignore Default For Nullables: When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR converters.

  • Key Converter Schema ID Deserializer: The class name of the schema ID deserializer for keys. This is used to deserialize schema IDs from the message headers.

  • Value Converter Decimal Format: Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals: BASE64 to serialize DECIMAL logical types as base64 encoded binary data and NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Schema GUID For Key Converter: The schema GUID to use for deserialization when using ConfigSchemaIdDeserializer. This is used to specify a fixed schema GUID to be used for deserializing message keys. Only applicable when key.converter.key.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Schema GUID For Value Converter: The schema GUID to use for deserialization when using ConfigSchemaIdDeserializer. This is used to specify a fixed schema GUID to be used for deserializing message values. Only applicable when value.converter.value.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Value Converter Connect Meta Data: Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Value Converter Value Subject Name Strategy: Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Key Converter Key Subject Name Strategy: How to construct the subject name for key schema registration.

  • Schema ID For Key Converter: The schema ID to use for deserialization when using ConfigSchemaIdDeserializer. This is used to specify a fixed schema ID to be used for deserializing message keys. Only applicable when key.converter.key.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

Auto-restart policy

  • Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its task in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.

Consumer configuration

  • Max poll interval (ms): Set the maximum delay between subsequent consume requests to Kafka. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 300,000 milliseconds (5 minutes).

  • Max poll records: Set the maximum number of records to consume from Kafka in a single request. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 500 records.
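
Expressed as configuration properties, these two settings look like the following; the values shown are the defaults:

{
  "max.poll.interval.ms": "300000",
  "max.poll.records": "500"
}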

Transforms

Processing position

  • Set offsets: Click Set offsets to define a specific offset for this connector to begin processing data from. For more information on managing offsets, see Manage offsets.

See Configuration Properties for all property values and definitions.

  • Click Continue.

Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.

  1. To change the recommended number of tasks, enter the number of tasks for the connector to use in the Tasks field.

  2. Click Continue.

  1. Verify the connection details.

  2. Click Launch.

    The status for the connector should go from Provisioning to Running.

Step 5: Check the results in PostgreSQL

Verify that new records are being added to the PostgreSQL database.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows required and optional connector properties:

{
  "connector.class": "PostgresSink",
  "name": "PostgresSinkConnector_0",
  "input.data.format": "AVRO",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "****************",
  "kafka.api.secret": "****************************************************************",
  "connection.host": "database-4.<host-id>.us-east-2.rds.amazonaws.com",
  "connection.port": "5432",
  "connection.user": "postgres",
  "connection.password": "**************",
  "db.name": "postgres",
  "topics": "postgresql_ratings",
  "insert.mode": "UPSERT",
  "db.timezone": "UTC",
  "auto.create": "true",
  "auto.evolve": "true",
  "pk.mode": "record_value",
  "pk.fields": "user_id",
  "tasks.max": "1"
}

Note the following property definitions. See the PostgreSQL Sink configuration properties for additional property values and definitions.

  • "connector.class": Identifies the connector plugin name.

  • "name": Sets a name for your new connector.

  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "ssl.rootcertfile": The default ssl.mode is verify-full. When using this mode, you must provide the PEM-formatted root certificate for the database. For example, "ssl.rootcertfile": "-----BEGIN CERTIFICATE-----\nABCDfJP...Bbc\n4\n-----END CERTIFICATE-----\n". For additional ssl.mode options, see the Configuration Properties.

  • "input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR (JSON Schema), or PROTOBUF. You must have Confluent Cloud Schema Registry configured if using a schema-based message format.

  • "input.key.format": Sets the input record key format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR (JSON Schema), PROTOBUF, or STRING. You must have Confluent Cloud Schema Registry configured if using a schema-based message format.

  • "delete.on.null": Whether to treat null record values as deletes. Defaults to false. Requires pk.mode to be record_key. Defaults to false.

  • "topics": Identifies the topic name or a comma-separated list of topic names.

  • "insert.mode": Enter one of the following modes:

    • INSERT: Use the standard INSERT row function. An error occurs if the row already exists in the table.

    • UPSERT: This mode is similar to INSERT. However, if the row already exists, the UPSERT function overwrites column values with the new values provided.

  • db.timezone: Name of the time zone the connector uses when inserting time-based values. Defaults to UTC.

  • "auto.create" (tables) and "auto-evolve" (columns): (Optional) Sets whether to automatically create tables or columns if they are missing relative to the input record schema. If not entered in the configuration, both default to false. When``auto.create`` is set to true, the connector creates a table name using ${topic} (that is, the Kafka topic name). For more information, see Table names and Kafka topic names and the PostgreSQL Sink configuration properties.

  • "pk.mode": Supported modes are listed below:

    • kafka: Kafka coordinates are used as the primary key. Must be used with the pk.fields property.

    • none: No primary keys used.

    • record_key: Fields from the record key are used. May be a primitive or a struct.

    • record_value: Fields from the Kafka record value are used. Must be a struct type.

  • "pk.fields": A list of comma-separated primary key field names. The runtime interpretation of this property depends on the pk.mode selected. Options are listed below:

    • kafka: Must be three values representing the Kafka coordinates. If left empty, the coordinates default to __connect_topic,__connect_partition,__connect_offset.

    • none: PK Fields not used.

    • record_key: If left empty, all fields from the key struct are used. Otherwise, this is used to extract the fields in the property. A single field name must be configured for a primitive key.

    • record_value: Used to extract fields from the record value. If left empty, all fields from the value struct are used.

  • "tasks.max": Maximum number of tasks the connector can run. See Confluent Cloud connector limitations for additional task information.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

See Configuration Properties for all property values and definitions.

Step 4: Load the configuration file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file postgresql-sink-config.json

Example output:

Created connector PostgresSinkConnector_0 lcc-ix4dl

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID          |       Name               | Status  | Type
+-----------+--------------------------+---------+------+
lcc-ix4dl   | PostgresSinkConnector_0  | RUNNING | sink

Step 6: Check the results in PostgreSQL

Verify that new records are being added to the PostgreSQL database.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

Which topics do you want to get data from?

topics.regex

A regular expression that matches the names of the topics to consume from. This is useful when you want to consume from multiple topics that match a certain pattern without having to list them all individually.

  • Type: string

  • Importance: low

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list

  • Importance: high

errors.deadletterqueue.topic.name

The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. Defaults to ‘dlq-${connector}’ if not set. The DLQ topic will be created automatically if it does not exist. You can provide ${connector} in the value to use it as a placeholder for the logical cluster ID.

  • Type: string

  • Default: dlq-${connector}

  • Importance: low

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string

  • Default: default

  • Importance: medium

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, or PROTOBUF. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string

  • Importance: high

input.key.format

Sets the input Kafka record key format. This needs to be set to a proper format if using pk.mode=record_key. Valid entries are AVRO, JSON_SR, PROTOBUF, or STRING. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string

  • Importance: high

delete.enabled

Whether to treat null record values as deletes. Requires pk.mode to be record_key.

  • Type: boolean

  • Default: false

  • Importance: low

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string

  • Valid Values: A string at most 64 characters long

  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string

  • Valid Values: SERVICE_ACCOUNT, KAFKA_API_KEY

  • Importance: high

kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

kafka.service.account.id

The service account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string

  • Importance: high

kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

Authentication method

authentication.method

How Confluent Cloud authenticates with Azure or GCP. Allowed values: Password, Microsoft Entra ID application, and Google service account impersonation.

  • Type: string

  • Default: Password

  • Valid Values: Google service account impersonation, Microsoft Entra ID application, Password

  • Importance: high

provider.integration.id

Select an existing integration that has access to your resource.

  • Type: string

  • Importance: high

How should we connect to your database?

connection.host

Depending on the service environment, certain network access limitations may exist. Make sure the connector can reach your service. Do not include jdbc:xxxx:// in the connection hostname property (e.g. database-1.abc234ec2.us-west.rds.amazonaws.com).

  • Type: string

  • Importance: high

connection.port

JDBC connection port.

  • Type: int

  • Valid Values: [0,…,65535]

  • Importance: high

connection.user

JDBC connection user.

  • Type: string

  • Importance: high

connection.password

JDBC connection password.

  • Type: password

  • Importance: high

db.name

JDBC database name.

  • Type: string

  • Importance: high

ssl.mode

The SSL mode to use to connect to your database. prefer allows the connection to be unencrypted if an encrypted connection cannot be established. require encrypts the connection but does not perform certificate validation on the server. verify-ca and verify-full require a file containing the SSL CA certificate; the server’s certificate is verified to be signed by one of these authorities. verify-ca verifies that the server certificate is issued by a trusted CA. verify-full additionally verifies that the server hostname matches the name in the certificate. Client authentication is not performed.

  • Type: string

  • Default: prefer

  • Importance: high

ssl.rootcertfile

The server root certificate file used for certificate validation. Only required if using verify-ca or verify-full SSL mode.

  • Type: password

  • Default: [hidden]

  • Importance: low

Database details

insert.mode

The insertion mode to use. INSERT uses the standard INSERT row function; an error occurs if the row already exists in the table. UPSERT mode is similar to INSERT, but if the row already exists, the UPSERT function overwrites column values with the new values provided.

  • Type: string

  • Default: INSERT

  • Importance: high

table.name.format

A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name.

For example, kafka_${topic} for the topic ‘orders’ will map to the table name ‘kafka_orders’.

  • Type: string

  • Default: ${topic}

  • Importance: medium

table.types

The comma-separated types of database tables to which the sink connector can write. By default this is TABLE, but any combination of TABLE, PARTITIONED TABLE, and VIEW is allowed. Not all databases support writing to views, and when they do, the sink connector fails if the view definition does not match the records’ schemas (regardless of auto.evolve).

  • Type: list

  • Default: TABLE

  • Importance: low

fields.whitelist

List of comma-separated record value field names. If empty, all fields from the record value are used; otherwise, the list is used to filter to the desired fields.

  • Type: list

  • Importance: medium

timestamp.fields.list

List of comma-separated record value timestamp field names that should be converted to timestamps. These fields are converted based on the precision mode specified in timestamp.precision.mode. The timestamp fields included here must be of Long or String type; nested fields are not supported.

  • Type: list

  • Importance: medium

db.timezone

Name of the JDBC timezone used in the connector when querying with time-based criteria. Defaults to UTC.

  • Type: string

  • Default: UTC

  • Importance: medium

date.timezone

Name of the JDBC timezone that should be used in the connector when inserting DATE type values. Defaults to DB_TIMEZONE, which uses the timezone set in the db.timezone configuration (to maintain backward compatibility). It is recommended to set this to UTC to avoid conversion for DATE type values.

  • Type: string

  • Default: DB_TIMEZONE

  • Valid Values: DB_TIMEZONE, UTC

  • Importance: medium

timestamp.precision.mode

Convert the timestamp with the specified precision. If set to microseconds, the timestamp is converted to microsecond precision. If set to nanoseconds, the timestamp is converted to nanosecond precision.

  • Type: string

  • Default: microseconds

  • Importance: medium

date.calendar.system

Conversion of a time-since-epoch value in a Kafka topic record to DATE or TIMESTAMP depends on the calendar used to interpret it. If LEGACY is used, the connector uses the hybrid Gregorian/Julian calendar, which was the default in the older Java date and time APIs. If PROLEPTIC_GREGORIAN is used, the connector uses the proleptic Gregorian calendar, which extends the Gregorian rules backward indefinitely and does not apply the 1582 cutover; this matches the behavior of modern Java date/time APIs (java.time). The default is LEGACY for backward compatibility. The ideal setting depends on whether the values in the source topic were populated using the old or new Java date and time APIs. Changing this configuration on an existing connector might lead to a drift in the DATE/TIMESTAMP column’s values populated in the sink database.

  • Type: string

  • Default: LEGACY

  • Importance: medium

Primary Key

pk.mode

The primary key mode. Also see the pk.fields documentation for how the two interact. Supported modes are:

none: No keys utilized.

kafka: Apache Kafka® coordinates are used as the PK.

record_value: Field(s) from the record value are used, which must be a struct.

record_key: Field(s) from the record key are used, which must be a struct.

  • Type: string

  • Valid Values: kafka, none, record_key, record_value

  • Importance: high

pk.fields

List of comma-separated primary key field names. The runtime interpretation of this config depends on the pk.mode:

none: Ignored as no fields are used as primary key in this mode.

kafka: Must be a trio representing the Kafka coordinates, defaults to __connect_topic,__connect_partition,__connect_offset if empty.

record_key: If empty, all fields from the key struct are used; otherwise, used to extract the desired fields. For a primitive key, exactly one field name must be configured.

record_value: If empty, all fields from the value struct are used; otherwise, used to extract the desired fields.

  • Type: list

  • Importance: high

SQL/DDL Support

auto.create

Whether to automatically create the destination table if it is missing.

  • Type: boolean

  • Default: false

  • Importance: medium

auto.evolve

Whether to automatically add columns in the table if they are missing.

  • Type: boolean

  • Default: false

  • Importance: medium

quote.sql.identifiers

When to quote table names, column names, and other identifiers in SQL statements. For backward compatibility, the default is ‘always’.

  • Type: string

  • Default: ALWAYS

  • Valid Values: ALWAYS, NEVER

  • Importance: medium

Connection details

batch.sizes

Maximum number of rows to include in a single batch when polling for new data. This setting can be used to limit the amount of data buffered internally in the connector.

  • Type: int

  • Default: 3000

  • Valid Values: [1,…,5000]

  • Importance: low

Consumer configuration

max.poll.interval.ms

The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).

  • Type: long

  • Default: 300000 (5 minutes)

  • Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters

  • Importance: low

max.poll.records

The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.

  • Type: long

  • Default: 500

  • Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters

  • Importance: low

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int

  • Valid Values: [1,…]

  • Importance: high

Additional Configs

consumer.override.auto.offset.reset

Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the “earliest” offset (the default) or the “latest” offset. You can also select “none” if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset

  • Type: string

  • Importance: low

consumer.override.isolation.level

Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level

  • Type: string

  • Importance: low

header.converter

The converter class for the headers. This is used to serialize and deserialize the headers of the messages.

  • Type: string

  • Importance: low

key.converter.use.schema.guid

The schema GUID to use for deserialization when using ConfigSchemaIdDeserializer. This allows you to specify a fixed schema GUID to be used for deserializing message keys. Only applicable when key.converter.key.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Type: string

  • Importance: low

key.converter.use.schema.id

The schema ID to use for deserialization when using ConfigSchemaIdDeserializer. This allows you to specify a fixed schema ID to be used for deserializing message keys. Only applicable when key.converter.key.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Type: int

  • Importance: low

value.converter.allow.optional.map.keys

Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.auto.register.schemas

Specify if the Serializer should attempt to register the Schema.

  • Type: boolean

  • Importance: low

value.converter.connect.meta.data

Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.avro.schema.support

Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.protobuf.schema.support

Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.flatten.unions

Whether to flatten unions (oneofs). Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.index.for.unions

Whether to generate an index suffix for unions. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.struct.for.nulls

Whether to generate a struct variable for null values. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.int.for.enums

Whether to represent enums as integers. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.latest.compatibility.strict

Verify latest subject version is backward compatible when use.latest.version is true.

  • Type: boolean

  • Importance: low

value.converter.object.additional.properties

Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.nullables

Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.proto2

Whether proto2 optionals are supported. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.scrub.invalid.names

Whether to scrub invalid names by replacing invalid characters with valid characters. Applicable for Avro and Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.use.latest.version

Use latest version of schema in subject for serialization when auto.register.schemas is false.

  • Type: boolean

  • Importance: low

value.converter.use.optional.for.nonrequired

Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.use.schema.guid

The schema GUID to use for deserialization when using ConfigSchemaIdDeserializer. This allows you to specify a fixed schema GUID to be used for deserializing message values. Only applicable when value.converter.value.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Type: string

  • Importance: low

value.converter.use.schema.id

The schema ID to use for deserialization when using ConfigSchemaIdDeserializer. This allows you to specify a fixed schema ID to be used for deserializing message values. Only applicable when value.converter.value.schema.id.deserializer is set to ConfigSchemaIdDeserializer.

  • Type: int

  • Importance: low

value.converter.wrapper.for.nullables

Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.wrapper.for.raw.primitives

Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

errors.tolerance

Use this property to configure the connector’s error handling behavior. WARNING: Use this property with caution for source connectors, as it may lead to data loss. If you set this property to ‘all’, the connector does not fail on errant records, but instead logs them (and sends them to the DLQ for sink connectors) and continues processing. If you set this property to ‘none’, the connector task fails on errant records.

  • Type: string

  • Default: all

  • Importance: low

key.converter.key.schema.id.deserializer

The class name of the schema ID deserializer for keys. This is used to deserialize schema IDs from the message headers.

  • Type: string

  • Default: io.confluent.kafka.serializers.schema.id.DualSchemaIdDeserializer

  • Importance: low

key.converter.key.subject.name.strategy

How to construct the subject name for key schema registration.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

value.converter.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

BASE64 to serialize DECIMAL logical types as base64 encoded binary data and

NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Type: string

  • Default: BASE64

  • Importance: low

value.converter.flatten.singleton.unions

Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.

  • Type: boolean

  • Default: false

  • Importance: low

value.converter.ignore.default.for.nullables

When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR converters.

  • Type: boolean

  • Default: false

  • Importance: low

value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string

  • Default: DefaultReferenceSubjectNameStrategy

  • Importance: low

value.converter.value.schema.id.deserializer

The class name of the schema ID deserializer for values. This is used to deserialize schema IDs from the message headers.

  • Type: string

  • Default: io.confluent.kafka.serializers.schema.id.DualSchemaIdDeserializer

  • Importance: low

value.converter.value.subject.name.strategy

Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

Auto-restart policy

auto.restart.on.user.error

Enable connector to automatically restart on user-actionable errors.

  • Type: boolean

  • Default: true

  • Importance: medium

Frequently asked questions

Find answers to frequently asked questions about the PostgreSQL Sink connector for Confluent Cloud.

How do I resolve duplicate key errors?

The connector fails with duplicate key constraint violations when attempting to insert records that already exist in the PostgreSQL table.

You might encounter errors similar to:

java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "<table>" ("<columns>") VALUES (<values>) was aborted:
ERROR: duplicate key value violates unique constraint "<constraint>"
Detail: Key (<column>)=(<value>) already exists.

Root cause:

When using the default insert.mode of INSERT, the connector attempts to insert new rows into the PostgreSQL table. If a record with the same primary key already exists, PostgreSQL rejects the operation with a duplicate key error. This commonly occurs when:

  • The connector is restarted and reprocesses records.

  • The same data is sent multiple times due to retries or rebalancing.

  • Multiple connectors or processes are writing to the same table.

Solution:

Change the insert.mode configuration from INSERT to UPSERT:

{
  "insert.mode": "UPSERT"
}

The UPSERT mode uses PostgreSQL’s ON CONFLICT clause to update existing rows instead of inserting duplicates. This allows the connector to handle records with existing primary keys by updating the values rather than failing.
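
As a simplified illustration, an upsert against a hypothetical ratings table keyed on user_id has the following shape (the exact statement the connector generates may differ):

INSERT INTO ratings (user_id, stars)
VALUES (42, 5)
ON CONFLICT (user_id)
DO UPDATE SET stars = EXCLUDED.stars;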

Alternatively, use UPDATE mode when you only want to update existing rows. The connector issues SQL UPDATE statements. Records with keys that do not exist in the table fail.

For more information, see the insert.mode property in Configuration Properties.

How do I configure SSL/TLS connections to PostgreSQL?

The PostgreSQL Sink connector for Confluent Cloud supports SSL/TLS connections to PostgreSQL databases. The default ssl.mode is prefer; certificate validation requires the verify-ca or verify-full modes.

Configuration:

When using an ssl.mode of verify-ca or verify-full, you must provide the PEM-formatted root certificate:

{
  "ssl.mode": "verify-full",
  "ssl.rootcertfile": "-----BEGIN CERTIFICATE-----\nABCDfJP...Bbc\n4\n-----END CERTIFICATE-----\n"
}

Available SSL modes:

  • verify-full: Verify both the certificate and the hostname. Requires ssl.rootcertfile.

  • verify-ca: Verify the certificate but not the hostname. Requires ssl.rootcertfile.

  • require: Use SSL but do not verify the certificate. No certificate file needed.

  • prefer: Try SSL first, fall back to non-SSL if unavailable.

Troubleshooting SSL/TLS issues:

  • Missing certificate error: If you see certificate validation errors, ensure you have provided the complete PEM certificate chain in ssl.rootcertfile.

  • Certificate format: The certificate must be in PEM format with proper newline characters (\n). Include the full certificate chain if your database uses intermediate certificates.

  • Hostname mismatch: When using verify-full, ensure the connection.host matches the certificate’s Common Name (CN) or Subject Alternative Name (SAN).

  • Firewall rules: Verify that the PostgreSQL port (default 5432) allows encrypted connections from Confluent Cloud.

For more information about SSL configuration, see Configuration Properties.

Why is auto.create or auto.evolve not creating tables or columns?

The connector fails to automatically create tables or add new columns even though auto.create or auto.evolve is set to true.

Root cause:

Table creation or schema evolution failures typically occur due to:

  • Insufficient database permissions: The PostgreSQL user account lacks CREATE or ALTER privileges.

  • Schema compatibility issues: The Kafka record schema cannot be mapped to a valid PostgreSQL table structure.

  • Column count limits: PostgreSQL has limits on the number of columns per table. If your schema exceeds this limit (typically 1,600 columns), table creation fails.

  • Data type incompatibilities: Some Kafka schema types might not have a direct PostgreSQL equivalent.

  • Reserved keywords: Table or column names that are PostgreSQL reserved keywords can cause creation to fail.

Solution:

  • Grant required permissions: Ensure the PostgreSQL user has the necessary DDL privileges. See Prerequisites for the complete list of required privileges (CREATE, ALTER, and DROP). A permissions sketch follows this list.

  • Pre-create tables: For complex schemas or production environments, consider creating tables manually with the exact schema you need, then set both auto.create and auto.evolve to false.

  • Monitor column counts: If your schema has many fields, check that you’re within PostgreSQL’s column limit. Consider restructuring your data model if you exceed this limit.

  • Verify schema format: Ensure you’re using a schema-based format (Avro, JSON Schema, or Protobuf) with Schema Registry enabled. The connector cannot auto-create tables from schemaless data.

  • Check naming conventions: Avoid PostgreSQL reserved keywords for table and column names. If necessary, the connector quotes identifiers, but pre-creating tables with proper naming is recommended.
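
For reference, the following is a minimal permissions sketch; the role, database, and schema names are hypothetical, so adjust them and the scope of the grants to your environment:

-- hypothetical role, database, and schema names
GRANT CONNECT ON DATABASE mydb TO connect_user;
GRANT USAGE, CREATE ON SCHEMA public TO connect_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO connect_user;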

How do I configure primary keys correctly?

You need to understand how to configure pk.mode and pk.fields for your PostgreSQL Sink connector.

Configuration options:

The pk.mode property determines how primary keys are assigned. Supported modes are:

  • kafka: Uses Kafka coordinates (topic, partition, offset) as the primary key. This guarantees uniqueness but doesn’t reflect your business logic.

    {
      "pk.mode": "kafka",
      "pk.fields": "__connect_topic,__connect_partition,__connect_offset"
    }
    

    If pk.fields is not specified, these default field names are used automatically.

  • none: No primary key is used. Rows are inserted without unique constraints.

    {
      "pk.mode": "none"
    }
    

    Use this mode with caution as it can lead to duplicate records.

  • record_key: Uses fields from the Kafka record key as the primary key.

    {
      "pk.mode": "record_key",
      "pk.fields": "user_id"
    }
    

    For primitive keys, specify a single field name. For struct keys, specify the field names to use, or leave pk.fields empty to use all fields from the key.

  • record_value: Uses fields from the Kafka record value as the primary key.

    {
      "pk.mode": "record_value",
      "pk.fields": "order_id,line_item_id"
    }
    

    The record value must be a struct type. Specify comma-separated field names, or leave pk.fields empty to use all fields.

Common errors:

  • Mismatched field names: If pk.fields references fields that don’t exist in the record key or value (depending on pk.mode), the connector fails.

  • Missing pk.fields with primitive keys: When using pk.mode=record_key with a primitive key type (like String or Integer), you must specify exactly one field name in pk.fields.

  • Incompatible with delete.enabled: The delete.enabled property (which treats null record values as deletes) requires pk.mode=record_key. Other modes are not compatible.

For more information, see the pk.mode and pk.fields properties in Configuration Properties.

How do I handle database connection failures?

The connector fails to connect to the PostgreSQL database or experiences intermittent connection issues.

Common connection errors:

  • Network connectivity: The connector cannot reach the PostgreSQL host due to firewall rules, security groups, or network configuration.

  • Authentication failures: Incorrect username or password, or the PostgreSQL user is not permitted to connect from the connector’s IP address.

  • Database does not exist: The database specified in db.name does not exist on the PostgreSQL server.

  • Connection timeout: The PostgreSQL server is not responding within the expected timeout period.

  • SSL/TLS errors: Certificate validation failures when using ssl.mode of verify-full or verify-ca.

Troubleshooting steps:

  • Review connector logs: Check the connector logs in the Confluent Cloud Console for specific error messages that can help identify the root cause.

  • Verify network access: Ensure the PostgreSQL database is accessible from Confluent Cloud. For networking considerations, see Networking and DNS. If using private networking, ensure it is properly configured. The database and Kafka cluster should be in the same region to avoid additional data transfer charges.

  • Test credentials manually: Use a PostgreSQL client to verify the connection string, username, and password:

    psql -h <connection.host> -p <connection.port> -U <connection.user> -d <db.name>
    
  • Check PostgreSQL user permissions: Ensure the user has the necessary permissions. See Prerequisites for required privileges.

  • Verify database exists: Confirm that the database specified in db.name exists:

    \l
    
  • Check SSL configuration: If using SSL, verify that your ssl.mode and ssl.rootcertfile (if required) are correctly configured. See the “How do I configure SSL/TLS connections to PostgreSQL?” question in this section.

Why are my table names being truncated?

The connector creates tables with truncated names, or multiple topics are writing to the same table.

Root cause:

PostgreSQL has a default identifier length limit of 63 bytes. When the combined value of table.name.format and the Kafka topic name exceeds this limit, the connector truncates the name to 63 characters. This can cause issues when:

  • Multiple topics with similar long names truncate to the same table name, causing a collision.

  • The truncated table name doesn’t match your expected naming convention.

Solution:

  • Use shorter topic names: Keep Kafka topic names short to avoid truncation, especially when using table.name.format that adds prefixes or suffixes.

  • Customize table.name.format: Modify the table.name.format configuration to use shorter prefixes or different naming patterns:

    {
      "table.name.format": "${topic}"
    }
    

    The default format is ${topic}, which uses the topic name directly. Adding prefixes or suffixes increases the likelihood of truncation.

  • Pre-create tables: For better control over table names, create tables manually before running the connector and set auto.create to false.

  • Plan for unique names: Ensure that even after truncation to 63 characters, your table names remain unique across all topics.

For more information about table name handling and truncation behavior, see Table names and Kafka topic names.

Why is my data not appearing in PostgreSQL tables?

The connector is running, but data from Kafka topics is not being written to PostgreSQL tables.

Root cause:

This issue typically occurs due to:

  • Schema mismatches: The schema of the Kafka records doesn’t match the PostgreSQL table schema, causing the connector to skip or fail processing records.

  • Serialization errors: The connector cannot deserialize the Kafka records due to missing or incompatible schemas in Confluent Cloud Schema Registry.

  • Connector is paused or failed: The connector status shows as paused or failed, preventing data flow.

  • Consumer lag: The connector is not consuming messages from the topic due to consumer group issues or offset problems.

  • Filtering or transformations: Single Message Transforms (SMTs) or filtering logic is dropping records before they reach PostgreSQL.

  • Permission issues: The PostgreSQL user lacks necessary DML permissions (SELECT, INSERT, UPDATE, DELETE).

Solution:

  • Verify connector status: Check that the connector status is Running in the Confluent Cloud Console. If it shows Failed or Paused, review the error messages and restart the connector.

  • Check Schema Registry: Ensure that the Kafka topic has a valid schema registered in Schema Registry that matches your input.data.format (Avro, JSON_SR, or Protobuf).

  • Validate table schema: Confirm that the PostgreSQL table schema is compatible with the Kafka record schema. Check column names, data types, and nullable constraints.

  • Verify permissions: Ensure the PostgreSQL user has the required DML privileges. See Prerequisites for the complete list of required permissions.

  • Review connector logs: Look for serialization errors, schema compatibility issues, or other error messages in the connector logs.

  • Monitor consumer lag: Check if the connector is consuming messages from the topic. High consumer lag can indicate processing issues.

  • Verify SMT configuration: If you’re using Single Message Transforms, ensure they’re not inadvertently dropping records.

How do I handle schema evolution?

You need to add new fields to your Kafka topic schema and want them to be reflected in the PostgreSQL table.

Using auto.evolve:

When auto.evolve is set to true, the connector automatically adds new columns to the PostgreSQL table when it encounters new fields in the Kafka record schema:

{
  "auto.evolve": "true"
}
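
Conceptually, when the connector encounters a new optional field in the record schema, it issues a DDL statement along the following lines; the table and column names are hypothetical, and the actual column type depends on how the field’s schema type maps to PostgreSQL:

ALTER TABLE orders ADD COLUMN note TEXT NULL;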

Important considerations:

  • Backward compatibility: The connector can add new columns but cannot remove existing columns or change column types. Schema changes must be backward compatible.

  • Default values: New columns are added with NULL as the default value unless your schema specifies a default.

  • Database permissions: The PostgreSQL user must have ALTER privileges to evolve the schema. See Prerequisites for required permissions.

  • Schema Registry: Use Schema Registry with schema compatibility rules to ensure controlled schema evolution. The recommended compatibility modes are BACKWARD or FULL.

  • Column limit: Be aware of PostgreSQL’s column count limit (typically 1,600 columns). Adding many fields over time can eventually exceed this limit.

Manual schema management:

For production environments, consider managing schema changes manually:

  1. Set auto.evolve to false.

  2. Coordinate schema changes between your Kafka topics and PostgreSQL tables.

  3. Use Confluent Cloud Schema Registry with strict compatibility rules to prevent incompatible schema changes.

  4. Apply schema changes to PostgreSQL tables first, then update the Kafka topic schema.

This approach gives you more control and prevents unexpected schema changes in your database.

How do I handle JSON and JSONB data types?

You want to sink data to PostgreSQL tables containing JSON or JSONB columns.

Configuration:

The connector supports writing to PostgreSQL tables with JSON or JSONB columns. To use this feature:

  1. Define your PostgreSQL table with JSON or JSONB column types:

    CREATE TABLE events (
      id INTEGER PRIMARY KEY,
      event_data JSONB
    );
    
  2. In your Kafka topic, store the JSON data as a STRING type.

  3. The connector automatically writes the string value to the JSON or JSONB column in PostgreSQL.
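
For example, a minimal Avro value schema matching the events table above might carry the JSON payload as a plain string (the field names are illustrative):

{
  "type": "record",
  "name": "Event",
  "fields": [
    {"name": "id", "type": "int"},
    {"name": "event_data", "type": "string"}
  ]
}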

Important notes:

  • Data format: The JSON data must be stored as STRING type in Kafka, not as a complex schema type.

  • JSON vs JSONB: PostgreSQL stores JSON as text and JSONB in a binary format. JSONB is generally recommended for better performance and indexing capabilities.

  • Schema validation: Ensure the string value contains valid JSON. Invalid JSON causes insertion errors.

  • Quoting: The connector handles the necessary quoting and escaping when writing to JSON/JSONB columns.

Example configuration:

{
  "connector.class": "PostgresSink",
  "topics": "events_topic",
  "input.data.format": "AVRO",
  "auto.create": "false",
  "db.name": "mydb"
}

In this example, if your Avro schema has a field of type STRING containing JSON data, it is written to the corresponding JSONB column in PostgreSQL.

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
