AlloyDB Sink Connector for Confluent Cloud¶
The Kafka Connect AlloyDB Sink connector for Confluent Cloud moves data from an Apache Kafka® topic to an AlloyDB database. It writes data from a topic in Kafka to a table in the specified AlloyDB database. Table auto-creation and limited auto-evolution are supported.
Note
If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
Features¶
The AlloyDB Sink connector provides the following features:
- Idempotent writes: The default insert.mode is INSERT. If it is configured as UPSERT, the connector uses upsert semantics rather than plain INSERT statements. Upsert semantics refer to atomically adding a new row or updating the existing row if there is a primary key constraint violation, which provides idempotence (see the SQL sketch following this list).
- Schemas: The connector supports Avro, JSON Schema, and Protobuf input value formats. The connector supports Avro, JSON Schema, Protobuf, and String input key formats. Schema Registry must be enabled to use a Schema Registry-based format.
- Primary key support: Supported PK modes are kafka, none, record_key, and record_value. These are used in conjunction with the PK Fields property.
- Table and column auto-creation: auto.create and auto.evolve are supported. If tables or columns are missing, they can be created automatically. Table names are created based on Kafka topic names. For more information, see Table names and Kafka topic names.
- At least once delivery: This connector guarantees that records from the Kafka topic are delivered at least once.
- Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
- PostgreSQL JSON and JSONB: The connector supports sinking to AlloyDB tables containing data stored as JSON or JSONB (JSON binary format). JSON or JSONB data should be stored as the STRING type in Kafka, and the matching columns should be defined as JSON or JSONB in AlloyDB.
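To make the upsert and JSONB behavior concrete, the following is a rough sketch of the kind of DDL and DML involved on the AlloyDB (PostgreSQL-compatible) side. The table name (pages), its columns, and the values are hypothetical, and the exact statements the connector generates may differ:
-- A destination table with a JSONB column matched by a STRING field in Kafka
CREATE TABLE "pages" (
  "url" TEXT NOT NULL,
  "visits" INT,
  "metadata" JSONB,
  PRIMARY KEY ("url")
);

-- Upsert semantics: insert the row, or atomically update it on a primary key conflict
INSERT INTO "pages" ("url", "visits", "metadata")
VALUES ('https://example.com', 10, '{"lang": "en"}')
ON CONFLICT ("url") DO UPDATE SET
  "visits" = EXCLUDED."visits",
  "metadata" = EXCLUDED."metadata";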
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see AlloyDB Sink Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
Table names and Kafka topic names¶
You can configure the connector to combine the value for table.name.format
and the Kafka topic name. If the resulting combined value (table name) exceeds
the maximum-permitted identifier length for the database version in use, the
connector truncates the value to the permitted identifier length.
For example, PostgreSQL 14 (fully compatible with AlloyDB) uses 63 bytes as
its default identifier length setting. If the value used for table.name.format
and the Kafka topic name exceeds 63 characters, only the first 63 characters from
the combined name are used.
For this reason, you should not run the connector with very long Kafka topic names and table names. If the table name is truncated and the connector receives records from different upstream topics, those records can map to the same table name after truncation, resulting in a duplicate table name collision.
Note
You can expect this connector behavior for any interactions with the database, both DDL (table creation and evolution) and DML (insert, upsert, and delete).
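For example, with the default table.name.format of ${topic}, the following two hypothetical topic names share their first 63 characters, so on PostgreSQL 14 both would be truncated to the same table name:
Topic 1: orders.europe.partitioned.by.region.and.customer.segment.v1.daily
Topic 2: orders.europe.partitioned.by.region.and.customer.segment.v1.daily.audit

Resulting table name for both: orders.europe.partitioned.by.region.and.customer.segment.v1.dai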
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud AlloyDB sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to an AlloyDB database.
- Prerequisites
- Authorized access to a Confluent Cloud cluster on Google Cloud.
- Authorized access to an AlloyDB database via the AlloyDB Auth Proxy running on an intermediary VM accessible over a public IP.
- The database and Kafka cluster should be in the same region.
- For networking considerations, see Networking and DNS. To use a set of public egress IP addresses, see Public Egress IP Addresses for Confluent Cloud Connectors.
- The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
- Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
- Kafka cluster credentials. The following lists the different ways you can provide credentials.
- Enter an existing service account resource ID.
- Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
- Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
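For example, the following is a minimal sketch of creating an API key with the Confluent CLI, where the cluster ID (lkc-abc123) is a hypothetical placeholder:
confluent api-key create --resource lkc-abc123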
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster¶
To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.
Step 2: Add a connector¶
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector¶
Click the AlloyDB Sink connector card.
Step 4: Enter the connector details¶
Note
- Ensure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
At the Add AlloyDB Sink Connector screen, complete the following:
If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.
To create a new topic, click +Add new topic.
Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:
- My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
- Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
- Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
Note
Freight clusters support only service accounts for Kafka authentication.
Click Continue.
- Enter your AlloyDB database connection details:
- Connection host: The hostname or the IP address of the VM running the AlloyDB Auth Proxy.
- Connection port: The AlloyDB database connection port. Defaults to 5432.
- Connection user: The AlloyDB database user name.
- Connection password: The AlloyDB database password.
- Database name: The AlloyDB database name.
- Click Continue.
Note
Configuration properties that are not shown in the Cloud Console use the default values. See Configuration Properties for all property values and definitions.
Select an Input Kafka record value format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format.
Select an insert mode (insertion mode) to use:
- INSERT: Uses the standard INSERT row function. An error occurs if the row already exists in the table.
- UPSERT: This mode is similar to INSERT. However, if the row already exists, the UPSERT function overwrites column values with the new values provided.
Show advanced configurations
Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.
Auto create table: Whether to automatically create the destination table if it is missing.
Auto add columns: Whether to automatically add columns in the table if they are missing.
Note
Auto create tables and Auto add columns are optional. These properties set whether to automatically create tables or columns if they are missing relative to the input record schema. If not used, both default to false. When Auto create tables is set to true, the connector creates a table name using ${topic} (that is, the Kafka topic name). For more information, see Table names and Kafka topic names and the AlloyDB Sink configuration properties.
Database timezone: Name of the timezone used in the connector when querying with time-based criteria. Defaults to UTC.
Table name format: A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name.
Table types: The comma-separated types of database tables to which the sink connector can write.
Fields included: List of comma-separated record value field names. If empty, all fields from the record value are used.
PK mode: The primary key mode. Options are:
- kafka: Kafka coordinates are used as the primary key. Must be used with the PK Fields property.
- none: No primary keys used.
- record_key: Fields from the record key are used. May be a primitive or a struct.
- record_value: Fields from the Kafka record value are used. Must be a struct type.
PK Fields: List of comma-separated primary key field names. Options are:
- kafka: Must be three values representing the Kafka coordinates. If left empty, the coordinates default to __connect_topic, __connect_partition, __connect_offset.
- none: PK Fields not used.
- record_key: If left empty, all fields from the key struct are used. Otherwise, this is used to extract the fields in the property. A single field name must be configured for a primitive key.
- record_value: Used to extract fields from the record value. If left empty, all fields from the value struct are used.
When to quote SQL identifiers: When to quote table names, column names, and other identifiers in SQL statements.
Max rows per batch: Maximum number of rows to include in a single batch when polling for new data. This setting can be used to limit the amount of data buffered internally in the connector.
Input Kafka record key format: Sets the input Kafka record key format. This needs to be set to a proper format if using pk.mode=record_key. Valid entries are AVRO, JSON_SR, PROTOBUF, or STRING. Note that you must have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.
Delete on null: Whether to treat null record values as deletes. Requires pk.mode to be record_key.
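As an illustrative sketch, the key-related settings above map to the following partial JSON configuration. The field name (user_id) is hypothetical, and the remaining required properties are omitted:
"input.key.format": "AVRO",
"pk.mode": "record_key",
"pk.fields": "user_id",
"delete.on.null": "true"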
Auto-restart policy
Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its task in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.
Consumer configuration
Max poll interval (ms): Set the maximum delay between subsequent consume requests to Kafka. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 300,000 milliseconds (5 minutes).
Max poll records: Set the maximum number of records to consume from Kafka in a single request. Use this property to improve connector performance in cases when the connector cannot send records to the sink system. The default is 500 records.
Transforms
Single Message Transforms: To add a new SMT, see Add transforms. For more information about unsupported SMTs, see Unsupported transformations.
See Configuration Properties for all property values and definitions.
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.
- To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.
- Click Continue.
Verify the connection details.
Click Launch.
The status for the connector should go from Provisioning to Running.
Step 5: Check the results in AlloyDB¶
Verify that new records are being added to the AlloyDB database.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
Using the Confluent CLI¶
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties¶
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
Step 3: Create the connector configuration file¶
Create a JSON file that contains the connector configuration properties. The following example shows required and optional connector properties:
{
"connector.class": "AlloyDbSink",
"name": "AlloyDbSinkConnector_0",
"input.data.format": "AVRO",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "****************",
"kafka.api.secret": "****************************************************************",
"connection.host": "34.27.121.137",
"connection.port": "5432",
"connection.user": "postgres",
"connection.password": "**************",
"db.name": "postgres",
"topics": "postgresql_ratings",
"insert.mode": "UPSERT",
"db.timezone": "UTC",
"auto.create": "true",
"auto.evolve": "true",
"pk.mode": "record_value",
"pk.fields": "user_id",
"tasks.max": "1"
}
Note the following property definitions. See the AlloyDB Sink configuration properties for additional property values and definitions.
"connector.class": Identifies the connector plugin name."name": Sets a name for your new connector.
"kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNTorKAFKA_API_KEY(the default). To use an API key and secret, specify the configuration propertieskafka.api.keyandkafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list

     Id   | Resource ID | Name |    Description
+---------+-------------+------+--------------------+
   123456 | sa-l1r23m   | sa-1 | Service account 1
   789101 | sa-l4d56p   | sa-2 | Service account 2
"connection.host": The hostname or the IP address of the VM running the AlloyDB Auth Proxy."connection.port": The AlloyDB database connection port. Defaults to5432."connection.user": The AlloyDB database user name."connection.password": The AlloyDB database password."db.name": The AlloyDB database name."input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR (JSON Schema), or PROTOBUF. You must have Confluent Cloud Schema Registry configured if using a schema-based message format."input.key.format": Sets the input record key format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR (JSON Schema), PROTOBUF, or STRING. You must have Confluent Cloud Schema Registry configured if using a schema-based message format."delete.on.null": Whether to treat null record values as deletes. Defaults tofalse. Requirespk.modeto berecord_key. Defaults tofalse."topics": Identifies the topic name or a comma-separated list of topic names."insert.mode": Enter one of the following modes:INSERT: Use the standardINSERTrow function. An error occurs if the row already exists in the table.UPSERT: This mode is similar toINSERT. However, if the row already exists, theUPSERTfunction overwrites column values with the new values provided.
db.timezone: Name of the time zone the connector uses when inserting time-based values. Defaults to UTC."auto.create"(tables) and"auto-evolve"(columns): (Optional) Sets whether to automatically create tables or columns if they are missing relative to the input record schema. If not entered in the configuration, both default tofalse. When``auto.create`` is set totrue, the connector creates a table name using${topic}(that is, the Kafka topic name). For more information, see Table names and Kafka topic names and the AlloyDB Sink configuration properties."pk.mode": Supported modes are listed below:kafka: Kafka coordinates are used as the primary key. Must be used with the"pk.fields"property.none: No primary keys used.record_key: Fields from the record key are used. May be a primitive or a struct.record_value: Fields from the Kafka record value are used. Must be a struct type.
"pk.fields": A list of comma-separated primary key field names. The runtime interpretation of this property depends on thepk.modeselected. Options are listed below:kafka: Must be three values representing the Kafka coordinates. If left empty, the coordinates default to__connect_topic,__connect_partition,__connect_offset.none: PK Fields not used.record_key: If left empty, all fields from the key struct are used. Otherwise, this is used to extract the fields in the property. A single field name must be configured for a primitive key.record_value: Used to extract fields from the record value. If left empty, all fields from the value struct are used.
"tasks.max": Maximum number of tasks the connector can run. See Confluent Cloud connector limitations for additional task information.
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
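As a sketch, SMTs are added to the connector configuration file using the standard Kafka Connect transforms properties. The transform alias (maskPII) and field name (ssn) below are hypothetical; check the SMT documentation for the transformations supported in Confluent Cloud:
"transforms": "maskPII",
"transforms.maskPII.type": "org.apache.kafka.connect.transforms.MaskField$Value",
"transforms.maskPII.fields": "ssn"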
See Configuration Properties for all property values and definitions.
Step 4: Load the configuration file and create the connector¶
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file alloydb-sink-config.json
Example output:
Created connector AlloyDbSinkConnector_0 lcc-ix4dl
Step 5: Check the connector status¶
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
ID | Name | Status | Type
+-----------+--------------------------+---------+------+
lcc-ix4dl | AlloyDbSinkConnector_0 | RUNNING | sink
Step 6: Check the results in AlloyDB¶
Verify that new records are being added to the AlloyDB database.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.
Configuration Properties¶
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
Which topics do you want to get data from?¶
topics.regex
A regular expression that matches the names of the topics to consume from. This is useful when you want to consume from multiple topics that match a certain pattern without having to list them all individually.
- Type: string
- Importance: low
topics
Identifies the topic name or a comma-separated list of topic names.
- Type: list
- Importance: high
errors.deadletterqueue.topic.name
The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. Defaults to ‘dlq-${connector}’ if not set. The DLQ topic will be created automatically if it does not exist. You can provide ${connector} in the value to use it as a placeholder for the logical cluster ID.
- Type: string
- Default: dlq-${connector}
- Importance: low
Schema Config¶
schema.context.name
Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.
- Type: string
- Default: default
- Importance: medium
Input messages¶
input.data.format
Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, or PROTOBUF. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.
- Type: string
- Importance: high
input.key.format
Sets the input Kafka record key format. This needs to be set to a proper format if using pk.mode=record_key. Valid entries are AVRO, JSON_SR, PROTOBUF, or STRING. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.
- Type: string
- Importance: high
delete.enabled
Whether to treat null record values as deletes. Requires pk.mode to be record_key.
- Type: boolean
- Default: false
- Importance: low
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
kafka.service.account.id
The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.
- Type: string
- Importance: high
kafka.api.secret
Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
How should we connect to your database?¶
connection.host
Hostname or IP address of the virtual machine running the AlloyDB Auth Proxy. Make sure the connector can reach your service. Do not include jdbc:xxxx:// in the connection hostname property.
- Type: string
- Importance: high
connection.port
Connection port for the AlloyDB database.
- Type: int
- Default: 5432
- Valid Values: [0,…,65535]
- Importance: high
connection.user
User of the AlloyDB database.
- Type: string
- Importance: high
connection.password
Password of the AlloyDB database.
- Type: password
- Importance: high
db.name
AlloyDB database name.
- Type: string
- Importance: high
Database details¶
insert.mode
The insertion mode to use. INSERT uses the standard INSERT row function; an error occurs if the row already exists in the table. UPSERT mode is similar to INSERT. However, if the row already exists, the UPSERT function overwrites column values with the new values provided.
- Type: string
- Default: INSERT
- Importance: high
table.name.format
A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name.
For example, kafka_${topic} for the topic ‘orders’ will map to the table name ‘kafka_orders’.
- Type: string
- Default: ${topic}
- Importance: medium
table.types
The comma-separated types of database tables to which the sink connector can write. By default this is TABLE, but any combination of TABLE, PARTITIONED TABLE, and VIEW is allowed. Not all databases support writing to views, and when they do, the sink connector will fail if the view definition does not match the records’ schemas (regardless of auto.evolve).
- Type: list
- Default: TABLE
- Importance: low
fields.whitelist
List of comma-separated record value field names. If empty, all fields from the record value are utilized; otherwise, used to filter to the desired fields.
- Type: list
- Importance: medium
timestamp.fields.list
List of comma-separated record value timestamp field names that should be converted to timestamps. These fields are converted based on the precision mode specified in Timestamp Precision Mode. The timestamp fields included here should be of Long or String type; nested fields are not supported.
- Type: list
- Importance: medium
db.timezone
Name of the JDBC timezone used in the connector when querying with time-based criteria. Defaults to UTC.
- Type: string
- Default: UTC
- Importance: medium
date.timezone
Name of the JDBC timezone that should be used in the connector when inserting DATE type values. Defaults to DB_TIMEZONE, which uses the timezone set in the db.timezone configuration (to maintain backward compatibility). It is recommended to set this to UTC to avoid conversion for DATE type values.
- Type: string
- Default: DB_TIMEZONE
- Valid Values: DB_TIMEZONE, UTC
- Importance: medium
timestamp.precision.mode
Convert the timestamp with the specified precision. If set to microseconds, the timestamp is converted to microsecond precision. If set to nanoseconds, the timestamp is converted to nanosecond precision.
- Type: string
- Default: microseconds
- Importance: medium
date.calendar.system
Conversion of a time-since-epoch value in a Kafka topic record to DATE or TIMESTAMP depends on the calendar used to interpret it. If LEGACY is used, the connector uses the hybrid Gregorian/Julian calendar, which was the default in the older Java date-time APIs. If ‘PROLEPTIC_GREGORIAN’ is used, the connector uses the proleptic Gregorian calendar, which extends the Gregorian rules backward indefinitely and does not apply the 1582 cutover. This matches the behavior of modern Java date/time APIs (java.time). This defaults to LEGACY for backward compatibility. The ideal setting depends on whether the values in the source topic were populated using the old or new Java date-time APIs. Changing this configuration on an existing connector might lead to a drift in the DATE/TIMESTAMP column values populated in the sink database.
- Type: string
- Default: LEGACY
- Importance: medium
Primary Key¶
pk.mode
The primary key mode. Also see the pk.fields documentation for how the two interact. Supported modes are:
none: No keys utilized.
kafka: Apache Kafka® coordinates are used as the PK.
record_value: Field(s) from the record value are used, which must be a struct.
record_key: Field(s) from the record key are used, which must be a struct.
- Type: string
- Valid Values: kafka, none, record_key, record_value
- Importance: high
pk.fields
List of comma-separated primary key field names. The runtime interpretation of this config depends on the pk.mode:
none: Ignored as no fields are used as primary key in this mode.
kafka: Must be a trio representing the Kafka coordinates, defaults to __connect_topic,__connect_partition,__connect_offset if empty.
record_key: If empty, all fields from the key struct will be used, otherwise used to extract the desired fields. A single field name must be configured for a primitive key.
record_value: If empty, all fields from the value struct will be used, otherwise used to extract the desired fields.
- Type: list
- Importance: high
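For example, the following illustrative fragment uses Kafka coordinates as the primary key. The pk.fields value restates the documented defaults and could be omitted:
"pk.mode": "kafka",
"pk.fields": "__connect_topic,__connect_partition,__connect_offset"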
SQL/DDL Support¶
auto.create
Whether to automatically create the destination table if it is missing.
- Type: boolean
- Default: false
- Importance: medium
auto.evolve
Whether to automatically add columns in the table if they are missing.
- Type: boolean
- Default: false
- Importance: medium
quote.sql.identifiers
When to quote table names, column names, and other identifiers in SQL statements. For backward compatibility, the default is ‘always’.
- Type: string
- Default: ALWAYS
- Valid Values: ALWAYS, NEVER
- Importance: medium
Connection details¶
batch.sizes
Maximum number of rows to include in a single batch when polling for new data. This setting can be used to limit the amount of data buffered internally in the connector.
- Type: int
- Default: 3000
- Valid Values: [1,…,5000]
- Importance: low
Consumer configuration¶
max.poll.interval.ms
The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).
- Type: long
- Default: 300000 (5 minutes)
- Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
- Importance: low
max.poll.records
The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.
- Type: long
- Default: 500
- Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
- Importance: low
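As a sketch, both consumer settings can be supplied in the connector JSON configuration alongside the other properties; the values below restate the defaults:
"max.poll.interval.ms": "300000",
"max.poll.records": "500"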
Number of tasks for this connector¶
tasks.max
Maximum number of tasks for the connector.
- Type: int
- Valid Values: [1,…]
- Importance: high
Additional Configs¶
consumer.override.auto.offset.reset
Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the “earliest” offset (the default) or the “latest” offset. You can also select “none” if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset
- Type: string
- Importance: low
consumer.override.isolation.level
Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level
- Type: string
- Importance: low
header.converter
The converter class for the headers. This is used to serialize and deserialize the headers of the messages.
- Type: string
- Importance: low
value.converter.allow.optional.map.keys
Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.auto.register.schemas
Specify if the Serializer should attempt to register the Schema.
- Type: boolean
- Importance: low
value.converter.connect.meta.data
Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.avro.schema.support
Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.
- Type: boolean
- Importance: low
value.converter.enhanced.protobuf.schema.support
Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.flatten.unions
Whether to flatten unions (oneofs). Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.index.for.unions
Whether to generate an index suffix for unions. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.generate.struct.for.nulls
Whether to generate a struct variable for null values. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.int.for.enums
Whether to represent enums as integers. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.latest.compatibility.strict
Verify latest subject version is backward compatible when use.latest.version is true.
- Type: boolean
- Importance: low
value.converter.object.additional.properties
Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.nullables
Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.optional.for.proto2
Whether proto2 optionals are supported. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.scrub.invalid.names
Whether to scrub invalid names by replacing invalid characters with valid characters. Applicable for Avro and Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.use.latest.version
Use latest version of schema in subject for serialization when auto.register.schemas is false.
- Type: boolean
- Importance: low
value.converter.use.optional.for.nonrequired
Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.nullables
Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
value.converter.wrapper.for.raw.primitives
Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.
- Type: boolean
- Importance: low
errors.tolerance
Use this property if you would like to configure the connector’s error handling behavior. WARNING: This property should be used with CAUTION for SOURCE CONNECTORS as it may lead to data loss. If you set this property to ‘all’, the connector will not fail on errant records, but will instead log them (and send to DLQ for Sink Connectors) and continue processing. If you set this property to ‘none’, the connector task will fail on errant records.
- Type: string
- Default: all
- Importance: low
key.converter.key.subject.name.strategy
How to construct the subject name for key schema registration.
- Type: string
- Default: TopicNameStrategy
- Importance: low
value.converter.decimal.format
Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:
BASE64 to serialize DECIMAL logical types as base64 encoded binary data and
NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.
- Type: string
- Default: BASE64
- Importance: low
value.converter.flatten.singleton.unions
Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.
- Type: boolean
- Default: false
- Importance: low
value.converter.ignore.default.for.nullables
When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.
- Type: boolean
- Default: false
- Importance: low
value.converter.reference.subject.name.strategy
Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.
- Type: string
- Default: DefaultReferenceSubjectNameStrategy
- Importance: low
value.converter.value.subject.name.strategy
Determines how to construct the subject name under which the value schema is registered with Schema Registry.
- Type: string
- Default: TopicNameStrategy
- Importance: low
Auto-restart policy¶
auto.restart.on.user.error
Enable connector to automatically restart on user-actionable errors.
- Type: boolean
- Default: true
- Importance: medium
Next Steps¶
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
