Debezium PostgreSQL Source Connector for Confluent Platform

The Debezium PostgreSQL Connector is a source connector that can obtain a snapshot of the existing data in a PostgreSQL database and then monitor and record all subsequent row-level changes to that data. All of the events for each table are recorded in a separate Apache Kafka® topic, where they can be easily consumed by applications and services.

  • Confluent supports Debezium PostgreSQL connector version 0.9.3 and later.
  • Confluent supports using this connector with PostgreSQL 9.6, 10, 11, 12, 13, and 14.
  • Databases hosted by a service such as Heroku Postgres can’t be monitored with Debezium, since you may be unable to install the logical decoding plugin.


The Debezium PostgreSQL Source connector includes the following features:

At least once delivery

The connector guarantees that records are delivered at least once to the Kafka topic. If a fault occurs (for example, if there are network connectivity issues), or the connector restarts, you may see some duplicate records in the Kafka topic.
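
Because duplicates can arrive after a restart, consumers that need exactly-once semantics must deduplicate. The sketch below is illustrative only: it keys on the source metadata (schema, table, and PostgreSQL LSN) that Debezium change events carry in their envelope, but the surrounding function and sample data are hypothetical.

```python
# Sketch: consumer-side deduplication for at-least-once delivery.
# Debezium events include source metadata (schema, table, LSN) that can
# serve as an idempotency key; the helper below is illustrative only.

def dedup_events(events):
    """Yield each change event once, keyed by (schema, table, LSN)."""
    seen = set()
    for event in events:
        src = event["payload"]["source"]
        key = (src["schema"], src["table"], src["lsn"])
        if key in seen:
            continue  # duplicate redelivered after a fault or restart
        seen.add(key)
        yield event

# Two copies of the same event (a duplicate delivery) collapse to one:
events = [
    {"payload": {"op": "u", "source": {"schema": "inventory", "table": "customers", "lsn": 24023128}}},
    {"payload": {"op": "u", "source": {"schema": "inventory", "table": "customers", "lsn": 24023128}}},
]
unique = list(dedup_events(events))
print(len(unique))  # 1
```

In a real consumer, the `seen` set would need to be bounded or persisted; this sketch only shows the keying idea.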

Supports one task

The Debezium PostgreSQL Source connector supports running only one task.

Automatic topic creation

The connector automatically creates a Kafka topic for each captured table if the topic doesn't already exist.

Install the Postgres Connector

You can install this connector by using the Confluent Hub client, or you can manually download the ZIP file.

confluent-hub install debezium/debezium-connector-postgresql:latest

You can install a specific version by replacing latest with a version number. For example:

confluent-hub install debezium/debezium-connector-postgresql:<version-number>

The Debezium PostgreSQL Source connector has specific ACL requirements. See the ACL requirements for Debezium Source connectors to ensure you meet the specified requirements.

Install the connector manually

Download and extract the ZIP file for your connector and then follow the manual connector installation instructions.


The Debezium PostgreSQL connector is an open source connector and does not require a Confluent Enterprise License.

Configuration Properties

For a complete list of configuration properties for this connector, see PostgreSQL Source Connector (Debezium) Configuration Properties.

Setting up PostgreSQL

Before using the Debezium PostgreSQL connector to monitor the changes committed on a PostgreSQL server, first install the logical decoding plugin into the PostgreSQL server. Enable a replication slot and configure a user with sufficient privileges to perform the replication.

To monitor a PostgreSQL database running in Amazon RDS, refer to the Debezium documentation for PostgreSQL on Amazon RDS.

Enable logical decoding and replication on the PostgreSQL server

The Postgres relational database management system has a feature called logical decoding that allows clients to extract all persistent changes to database tables into a coherent format. This formatted data can be interpreted without detailed knowledge of the internal state of the database. An output plugin transforms the data from the write-ahead log’s internal representation into a format the consumer of a replication slot needs.
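
The logical decoding flow can be seen directly from a psql session. The sketch below assumes the wal2json plugin is installed, a superuser connection, and a hypothetical slot name `test_slot`; it is a demonstration of the mechanism, not part of the connector setup.

```sql
-- Create a logical replication slot that emits changes through wal2json
SELECT * FROM pg_create_logical_replication_slot('test_slot', 'wal2json');

-- Make a change, then read it back from the slot as JSON
UPDATE inventory.customers SET first_name = 'Anne' WHERE id = 1001;
SELECT data FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL);

-- Drop the slot when finished so retained WAL segments can be reclaimed
SELECT pg_drop_replication_slot('test_slot');
```

The Debezium connector manages its own replication slot the same way; leaving unused slots in place causes the server to retain WAL indefinitely.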

The Debezium PostgreSQL connector works with one of the following supported logical decoding plugins from Debezium:

  • protobuf : To encode changes in Protobuf format
  • wal2json : To encode changes in JSON format

Install the wal2json plugin

Before executing the commands, make sure the user has write privileges to the wal2json library in the PostgreSQL lib directory. Note that for the test environment, this directory is /usr/pgsql-9.6/lib/. In the test environment, set the export path as shown below:

export PATH="$PATH:/usr/pgsql-9.6/bin"

Enter the wal2json installation commands.

git clone https://github.com/eulerto/wal2json.git -b master --single-branch \
&& cd wal2json \
&& git checkout 92b33c7d7c2fccbeb9f79455dafbc92e87e00ddd \
&& make && make install \
&& cd .. \
&& rm -rf wal2json

Enable replication on the PostgreSQL server

Add the following lines to the end of the /usr/share/postgresql/postgresql.conf PostgreSQL configuration file. These lines add the plugin to the shared libraries and adjust several Write-Ahead Log (WAL) and streaming replication settings.

log_min_error_statement = fatal
listen_addresses = '*'
shared_preload_libraries = 'decoderbufs,wal2json'
wal_level = logical             # minimal, archive, hot_standby, or logical (change requires restart)
max_wal_senders = 1             # max number of walsender processes (change requires restart)
#wal_keep_segments = 4          # in logfile segments, 16MB each; 0 disables
#wal_sender_timeout = 60s       # in milliseconds; 0 disables
max_replication_slots = 1       # max number of replication slots (change requires restart)

Initialize replication permissions

Add the following lines to the end of the pg_hba.conf PostgreSQL configuration file. These lines configure the client authentication for the database replication.

############ REPLICATION ##############
local   replication     postgres                          trust
host    replication     postgres  127.0.0.1/32            trust
host    replication     postgres  ::1/128                 trust

Quick Start

The Debezium PostgreSQL Connector is a source connector that can record events for each table in a separate Kafka topic, where they can be easily consumed by applications and services.


For an example of how to get Kafka Connect connected to Confluent Cloud, see Distributed Cluster.

Install the connector

Refer to the Debezium tutorial if you want to use Docker images to set up Kafka, ZooKeeper and Connect. For the following tutorial, you need to have a local Confluent Platform installation. Note that as of Confluent Platform 7.5, ZooKeeper is deprecated for new deployments. Confluent recommends KRaft mode for new deployments.

Navigate to your Confluent Platform installation directory and run the following command to install the connector:

confluent-hub install debezium/debezium-connector-postgresql:0.9.4

Adding a new connector plugin requires restarting Connect. Use the Confluent CLI to restart Connect.

confluent local services connect stop && confluent local services connect start
Using CONFLUENT_CURRENT: /Users/username/Sandbox/confluent-snapshots/var/confluent.NuZHxXfq
Starting Zookeeper
Zookeeper is [UP]
Starting Kafka
Kafka is [UP]
Starting Schema Registry
Schema Registry is [UP]
Starting Kafka REST
Kafka REST is [UP]
Starting Connect
Connect is [UP]

Check if the PostgreSQL plugin has been installed correctly and picked up by the plugin loader.

curl -sS localhost:8083/connector-plugins | jq '.[].class' | grep postgres

Set up PostgreSQL using Docker (Optional)

If you do not have a native installation of PostgreSQL, you may use the following commands to start a new container that runs a PostgreSQL database server preconfigured with the logical decoding plugin, a replication slot, and an inventory test database.

# Pull docker image
docker pull debezium/example-postgres

# Run docker container
docker run -it --rm --name postgres -p 5432:5432 \
-e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres \
debezium/example-postgres
# In a separate terminal, launch psql to run SQL queries:
docker run -it --rm --name psql_client \
-e PGOPTIONS="--search_path=inventory" \
-e PGPASSWORD=postgres --link postgres:postgres debezium/example-postgres \
psql -h postgres -U postgres

# To see the list of relations in the inventory database, type \d at the postgres prompt. To exit, type \q

Enable logical decoding on the PostgreSQL server

Logical decoding is already enabled if you set up PostgreSQL using the Docker image (in the previous section). On a native installation, follow the steps in Enable logical decoding and replication on the PostgreSQL server.

Start the Debezium PostgreSQL connector

Create the file register-postgres.json to store the following connector configuration:

{
  "name": "inventory-connector",
  "config": {
      "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
      "tasks.max": "1",
      "database.hostname": "localhost",
      "database.port": "5432",
      "database.user": "postgres",
      "database.password": "postgres",
      "database.dbname" : "postgres",
      "database.server.name": "dbserver1",
      "schema.include.list": "inventory"
  }
}
Start the connector.

curl -i -X POST -H "Accept:application/json" -H  "Content-Type:application/json" http://localhost:8083/connectors/ -d @register-postgres.json

Start your Kafka consumer

Start the consumer in a new terminal session.

confluent local services kafka consume dbserver1.inventory.customers --from-beginning

When you run SQL statements in the psql session (to add or modify records in the database), messages reflecting those changes appear in your consumer terminal.

Following is an example psql query to update a record in the customers table.

update customers set first_name = 'Sarah' where id = 1001;
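
The update above produces a change event on the dbserver1.inventory.customers topic. The sketch below models the payload section of a Debezium update event and shows how a consumer might pick out the fields that changed; the envelope fields (before, after, source, op, ts_ms) follow the Debezium event format, but the concrete values are made up for the example.

```python
import json

# Illustrative payload of a Debezium update ("op": "u") event; the field
# names follow the Debezium event envelope, the values are examples only.
event_json = """
{
  "payload": {
    "before": {"id": 1001, "first_name": "Sally", "last_name": "Thomas"},
    "after":  {"id": 1001, "first_name": "Sarah", "last_name": "Thomas"},
    "source": {"db": "postgres", "schema": "inventory", "table": "customers"},
    "op": "u",
    "ts_ms": 1558965515240
  }
}
"""

payload = json.loads(event_json)["payload"]
if payload["op"] == "u":  # 'c' = create, 'u' = update, 'd' = delete, 'r' = snapshot read
    # Diff the old and new row images to find the changed columns
    changed = {k: (payload["before"][k], v)
               for k, v in payload["after"].items()
               if payload["before"].get(k) != v}
    print(changed)  # {'first_name': ('Sally', 'Sarah')}
```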

Clean up resources

Delete the connector and stop Confluent services.

curl -X DELETE localhost:8083/connectors/inventory-connector
confluent local services stop

Stop PostgreSQL containers.

docker stop psql_client # Alternatively type \q at the psql prompt
docker stop postgres


Portions of the information provided here derive from documentation originally produced by the Debezium Community. Work produced by Debezium is licensed under Creative Commons 3.0.