Quick Start for the Oracle CDC Source Connector for Confluent Platform

Oracle 11g, 12c and 18c Deprecation

Oracle discontinued support for the following Oracle Database versions:

  • Version 11g on December 31, 2020
  • Version 12c on March 31, 2022
  • Version 18c on June 30, 2021

Oracle CDC connector support for each of these versions will reach end-of-life on June 30, 2025. Confluent currently supports Oracle Database versions 19c and later.

Use the Kafka Connect Oracle CDC Source connector to capture changes to rows in a database, and then represent those changes as change event records in Apache Kafka® topics.

In this quick start, you will launch the connector, verify that it processes records, produce and consume change events, delete the connector and its topics, and troubleshoot the connector.

Prerequisites

Before you proceed with the Oracle CDC connector quick start, ensure that you have met all of the connector prerequisites.

Step 1: Launch the connector

Use the steps in this section to launch the Oracle CDC Source connector on Confluent Platform with the Confluent CLI and produce data to Kafka topics:

  1. Load the connector by passing a .json configuration file.

    confluent local services connect connector load <name-of-connector> --config <path-to-config-file>
    

    Example of a configuration file:

    {
      "name": "cdc-source-oracle-pdb",
      "config": {
        "connector.class": "io.confluent.connect.oracle.cdc.OracleCdcSourceConnector",
        "tasks.max": "2",
        "key.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "oracle.server": "localhost",
        "oracle.port": "1521",
        "oracle.sid": "ORCLCDB",
        "oracle.pdb.name": "ORCLPDB1",
        "oracle.username": "C##MYUSER",
        "oracle.password": "mypassword",
        "redo.log.consumer.bootstrap.servers": "localhost:9092",
        "table.inclusion.regex": "ORCLPDB1[.]C##MYUSER[.].*",
        "value.converter.schema.registry.url": "http://localhost:8081",
        "key.converter.schema.registry.url": "http://localhost:8081"
      }
    }
    

    Note

    These are the minimum configuration properties required to run the connector. For the full set of options, see configuration properties.

  2. Verify the connector configurations.

    confluent local services connect connector config <name-of-connector>
    

    The output should be similar to:

    Current configuration of cdc-source-oracle-pdb:
    {
       "connector.class": "io.confluent.connect.oracle.cdc.OracleCdcSourceConnector",
       "oracle.password": "mypassword",
       "oracle.server": "localhost",
       "oracle.sid": "ORCLCDB",
       "oracle.pdb.name": "ORCLPDB1",
       "redo.log.consumer.bootstrap.servers": "localhost:9092",
       "tasks.max": "2",
       "oracle.port": "1521",
       "oracle.username": "C##MYUSER",
       "value.converter.schema.registry.url": "http://localhost:8081",
       "table.inclusion.regex": "ORCLPDB1[.]C##MYUSER[.].*",
       "name": "cdc-source-oracle-pdb",
       "value.converter": "io.confluent.connect.avro.AvroConverter",
       "key.converter": "io.confluent.connect.avro.AvroConverter",
       "key.converter.schema.registry.url": "http://localhost:8081"
    }
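
    As an alternative to the Confluent CLI, the same connector can be created through the Kafka Connect REST API. The following is a minimal sketch, assuming the default Connect REST port 8083; the `curl` call is left commented out so the snippet can be checked without a running worker.

    ```shell
    # Sketch: creating the connector via the Kafka Connect REST API instead of
    # the Confluent CLI. Port 8083 is the Connect default; adjust as needed.
    cat > /tmp/cdc-source-oracle-pdb.json <<'EOF'
    {
      "name": "cdc-source-oracle-pdb",
      "config": {
        "connector.class": "io.confluent.connect.oracle.cdc.OracleCdcSourceConnector",
        "tasks.max": "2",
        "oracle.server": "localhost",
        "oracle.port": "1521",
        "oracle.sid": "ORCLCDB",
        "oracle.pdb.name": "ORCLPDB1",
        "oracle.username": "C##MYUSER",
        "oracle.password": "mypassword",
        "redo.log.consumer.bootstrap.servers": "localhost:9092",
        "table.inclusion.regex": "ORCLPDB1[.]C##MYUSER[.].*",
        "key.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "key.converter.schema.registry.url": "http://localhost:8081",
        "value.converter.schema.registry.url": "http://localhost:8081"
      }
    }
    EOF
    # Confirm the file is valid JSON before posting it.
    python3 -m json.tool /tmp/cdc-source-oracle-pdb.json > /dev/null && echo "config OK"
    # Create the connector (requires a running Connect worker):
    # curl -s -X POST -H "Content-Type: application/json" \
    #      --data @/tmp/cdc-source-oracle-pdb.json http://localhost:8083/connectors
    ```

    The POST body uses the same `name`/`config` shape that `GET /connectors/<name>/config` returns, so the two representations stay interchangeable.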
    

Step 2: Verify the connector processes records

After you launch the connector, the redo log topic and table topics are created automatically. Use the following steps to verify that the connector is processing records.

The connector can run in snapshot mode, change data capture (CDC) mode, or both, depending on the value of the start.from configuration property. This quick start runs a snapshot first and then switches to CDC mode (the default).

The connector first reads the accessible tables (tables that match table.inclusion.regex and do not match table.exclusion.regex) and finds the tables for which a snapshot is not yet complete. It then initiates snapshots for all of those tables.
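
The fully qualified name the connector matches has the form `<PDB>.<SCHEMA>.<TABLE>`. As a rough sketch, `grep -E` approximates the connector's Java regex engine for this simple pattern; the CUSTOMERS table is from this quick start, while the ORDERS name under a different schema is hypothetical and used only to show a non-match.

```shell
# Approximate the table.inclusion.regex match locally with grep -E.
# ORCLPDB1.OTHERUSER.ORDERS is a made-up name used only to show a non-match.
REGEX='ORCLPDB1[.]C##MYUSER[.].*'
for t in 'ORCLPDB1.C##MYUSER.CUSTOMERS' 'ORCLPDB1.OTHERUSER.ORDERS'; do
  if printf '%s\n' "$t" | grep -Eq "^${REGEX}$"; then
    echo "included: $t"
  else
    echo "excluded: $t"
  fi
done
```

Note that `[.]` matches a literal dot, so a table named, say, `ORCLPDB1xC##MYUSERxFOO` would not slip through.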

Note

If validation fails for any reason, such as invalid configurations or unmet database prerequisites, the connector goes into a failed state and the following steps do not occur.
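
One way to check for a failed state is the Kafka Connect REST API status endpoint (`GET /connectors/<name>/status`). The following is a sketch, assuming the default Connect port 8083; the `curl` call is commented out and a sample healthy response is printed instead, so the snippet stands alone.

```shell
# Query connector status (requires a running Connect worker):
# curl -s http://localhost:8083/connectors/cdc-source-oracle-pdb/status
# A healthy connector reports "RUNNING" for the connector and each task.
# Sample response (abbreviated):
cat <<'EOF'
{"name": "cdc-source-oracle-pdb",
 "connector": {"state": "RUNNING", "worker_id": "localhost:8083"},
 "tasks": [{"id": 0, "state": "RUNNING", "worker_id": "localhost:8083"}]}
EOF
```

A failed connector instead shows `"state": "FAILED"` along with a `trace` field containing the underlying exception.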

  1. Verify that the connector creates a topic for each table, based on the table.topic.name.template configuration property. The Connect logs should resemble the following:

    Determining status of snapshots for tables ORCLPDB1.C##MYUSER.CUSTOMERS
    Found 0 of 1 snapshots are complete: [table 'ORCLPDB1.C##MYUSER.CUSTOMERS' at SCN=2640682]
    Beginning 1 snapshots using 1 of 4 snapshot threads
    Reading 2301 rows from table 'ORCLPDB1.C##MYUSER.CUSTOMERS' at SCN=2640682
    Completed snapshot of 2301 rows from table 'ORCLPDB1.C##MYUSER.CUSTOMERS' at SCN=2642470 in 0:00:02.130
    Completed snapshots for all assigned tables after 0:00:02.707.
    
  2. Once the snapshot is complete, verify that the connector proceeds to capture change events. You should see Connect logs that resemble the following:

    Proceeding to capture change events for ORCLPDB1.C##MYUSER.CUSTOMERS since FULL Supplemental Logging mode is set
    

Step 3: Produce change events

Use the steps in this section to produce change events.

  1. To produce a change event, execute a DML statement on the database connected to the connector and commit the transaction, as shown in the following example:

    SQL> insert into CUSTOMERS (id, first_name, last_name, email, gender, club_status, comments) values (10001, 'Rica', 'Blaisdell', 'rblaisdell0@rambler.ru', 'Female', 'bronze', 'Universal optimal hierarchy');
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    

    Important

    Ensure you commit the data so that the change event is processed and produced to the topic. Rolled-back transactions are not produced to the topic.

    If no new records appear in the table-specific topics, see Why do my table-specific topics show no new records?.

  2. Verify that the redo log topic, named according to the redo.log.topic.name property, is created after the first DML statement following connector creation and the snapshot.
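
To sketch this check, you can list the topics on the broker and filter for the redo log topic. The live command below is commented out because it requires a running broker; the same filter is then applied to sample output so the snippet runs standalone. The name `redo-log-topic` is an assumed example: check your redo.log.topic.name setting for the actual name.

```shell
# List topics and filter for the redo log topic (requires a running broker):
# kafka-topics --bootstrap-server localhost:9092 --list | grep redo
# The same filter applied to sample output; 'redo-log-topic' is an assumed
# name - check your redo.log.topic.name configuration value.
printf '%s\n' 'ORCLPDB1.C##MYUSER.CUSTOMERS' 'redo-log-topic' | grep redo
```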

Step 4: Consume from topics

Use this section to consume data from Kafka topics. You can use the Confluent Platform GUI or the Confluent CLI.

  1. Start consuming from the topic. To consume from the beginning of the topic, append the --from-beginning flag:

    confluent local services kafka consume <topic-name> --from-beginning
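
    Because the example configuration uses AvroConverter, a plain console consumer prints raw Avro bytes. As a sketch, kafka-avro-console-consumer (which ships with Confluent Platform) decodes records via Schema Registry; the topic name below is the example CUSTOMERS table topic, and the snippet falls back to a message when the tool is not on the PATH.

    ```shell
    # Decode Avro records with Schema Registry (requires a running broker and
    # Schema Registry). --max-messages keeps the consumer from running forever.
    if command -v kafka-avro-console-consumer > /dev/null 2>&1; then
      kafka-avro-console-consumer \
        --bootstrap-server localhost:9092 \
        --property schema.registry.url=http://localhost:8081 \
        --topic 'ORCLPDB1.C##MYUSER.CUSTOMERS' \
        --from-beginning \
        --max-messages 5
    else
      echo "kafka-avro-console-consumer not found on PATH (requires Confluent Platform)"
    fi
    ```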
    

Step 5: Delete the Oracle CDC connector and topics

Use this section to delete an Oracle CDC Source connector and its associated Kafka topics. You can use the Confluent Platform GUI or the Confluent CLI.

  1. Unload the connector to delete it.

    confluent local services connect connector unload <connector-name>
    

Step 6: Troubleshoot the Oracle CDC connector

For Connect-related logging and debugging, run the following command. The -f (--follow) flag logs additional output until the command is interrupted:

confluent local services connect log -f

For more troubleshooting tips, see Troubleshooting.