Configuration Reference for Vertica Sink Connector for Confluent Platform
To use this connector, specify the name of the connector class in the
connector.class configuration property.
connector.class=io.confluent.vertica.VerticaSinkConnector
Connector-specific configuration properties are described below.
Connection
vertica.database
The database on the Vertica system. This is used to build the JDBC URL.
Type: string
Importance: high
vertica.host
The Vertica host to connect to. This is used to build the JDBC URL.
Type: string
Importance: high
vertica.password
The password to authenticate to Vertica with.
Type: password
Importance: high
vertica.username
The username to authenticate to Vertica with.
Type: string
Importance: high
vertica.port
The Vertica port to connect to. This is used to build the JDBC URL.
Type: int
Default: 5433
Valid Values: [1025,…,65535]
Importance: medium
max.hikari.connection.pool.size
The maximum number of connections the HikariCP pool will contain.
Type: int
Default: 10
Valid Values: [1,…]
Importance: medium
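For reference, a minimal connection configuration might look like the following. The host, database, and credentials are placeholders for your environment:
vertica.host=vertica.example.com
vertica.port=5433
vertica.database=warehouse
vertica.username=dbadmin
vertica.password=<password>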
Writes
stream.builder.cache.ms
The amount of time in milliseconds to cache the stream builder objects that are used to define the table structure.
Type: int
Default: 300000
Valid Values: [1000,…,2147483647]
Importance: high
vertica.buffer.size.bytes
The buffer size in bytes for the input stream used by the Vertica Copy Stream.
Type: int
Default: 1048576
Importance: high
vertica.timeout.ms
The timeout in milliseconds for completing the write to Vertica.
Type: int
Default: 60000
Valid Values: [10000,…,2147483647]
Importance: high
vertica.compression
The type of compression for the data load.
Type: string
Default: UNCOMPRESSED
Valid Values: UNCOMPRESSED, BZIP, GZIP, LZO
Importance: medium
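As an illustration, the following write-tuning settings raise the copy-stream buffer and timeout and enable GZIP compression. The values are arbitrary examples, not recommendations:
vertica.buffer.size.bytes=4194304
vertica.timeout.ms=120000
vertica.compression=GZIP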
rejected.record.logging.mode
Logging mode when Vertica rejects a record. Must be configured to one of the following:
log: Logs the rejected records; available in Connect logs.
file: Writes the rejected records and exceptions to the configured files. The rejected.record.path and rejected.record.exception.path properties should be set to the designated file paths.
table: Writes the rejected records and exceptions for each Vertica table to a corresponding rejected table, whose name is the original table name with the value of rejected.record.table.suffix appended.
Type: string
Default: log
Valid Values: [log, file, table]
Importance: medium
rejected.record.path
Local directory path where the rejected records from Vertica are stored. This config is only required when rejected.record.logging.mode is set to file.
Type: string
Default: ""
Importance: medium
rejected.record.exception.path
Local directory path where the exceptions of rejected records from Vertica are stored. This config is only required when rejected.record.logging.mode is set to file.
Type: string
Default: ""
Importance: medium
rejected.record.table.suffix
Suffix value for the Vertica rejected records table name. You can provide any suffix along with a date format placeholder, for example _rejected_${yyyyddMM}. The placeholder is replaced by the current timestamp in the provided date format and appended to the table name. By default, the value is _rejected, which is added as a suffix to the Vertica table name. This config is only required when rejected.record.logging.mode is set to table.
Type: string
Default: _rejected
Importance: medium
rejected.record.table.schema
Schema name for the Vertica rejected records table. This config is only required when rejected.record.logging.mode is set to table.
Type: string
Default: ""
Importance: medium
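For example, to write rejected records and their exceptions to local files, you might configure the following (the directory paths are hypothetical):
rejected.record.logging.mode=file
rejected.record.path=/var/log/vertica-sink/rejected
rejected.record.exception.path=/var/log/vertica-sink/exceptions
Alternatively, to route rejected records to per-table rejected tables in a dedicated schema (the schema name is hypothetical):
rejected.record.logging.mode=table
rejected.record.table.schema=rejected_schema
rejected.record.table.suffix=_rejected_${yyyyddMM}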
vertica.load.method
The method for loading data.
Type: string
Default: AUTO
Valid Values: AUTO, DIRECT, TRICKLE
Importance: medium
expected.records
The expected number of records the connector will process in each poll.
Type: int
Default: 10000
Importance: low
expected.topics
The expected number of topics the connector will process in a poll.
Type: int
Default: 500
Importance: low
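For instance, a configuration that forces direct loads and sizes the connector for larger batches might look like this (values are illustrative only):
vertica.load.method=DIRECT
expected.records=50000
expected.topics=100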
delete.enabled
Set this value to true to process delete requests for tombstone records, that is, when the Kafka record's value is null. The pk.mode configuration should also be set to record_key so that delete requests can be processed based on the fields in the Kafka record's key.
Type: boolean
Default: false
Importance: medium
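For example, to delete the corresponding Vertica row when a tombstone record arrives, you might combine:
delete.enabled=true
pk.mode=record_key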
table.name.format
A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name. For example, kafka_${topic} for the topic orders maps to the table name kafka_orders with the default schema name public. You can also use this property to configure the schema name. For example, kafka_schema.kafka_${topic} for the topic orders maps to the table name kafka_orders in the schema kafka_schema.
Type: string
Default: ${topic}
Importance: medium
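For example, using the hypothetical topic orders from the description above:
table.name.format=kafka_schema.kafka_${topic}
This maps records from orders to the table kafka_orders in the schema kafka_schema.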
DDL Support
auto.create
Whether to issue CREATE and automatically create a missing destination table (based on the record schema).
Type: boolean
Default: false
Importance: medium
auto.evolve
Whether to issue ALTER and automatically add missing columns to the table schema (relative to the record schema).
Type: boolean
Default: false
Importance: medium
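For example, to let the connector create missing destination tables and add missing columns as record schemas evolve:
auto.create=true
auto.evolve=true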
Data Mapping
pk.mode
The primary key mode; also refer to the pk.fields documentation for how the two interact. Supported modes are:
none: No keys utilized.
kafka: Kafka coordinates are used as the PK.
record_key: Field(s) from the record key are used, which may be a primitive or a struct.
record_value: Field(s) from the record value are used, which must be a struct.
Type: string
Default: none
Valid Values: [none, kafka, record_key, record_value]
Importance: high
pk.fields
List of comma-separated primary key field names. The runtime interpretation of this config depends on pk.mode:
none: Ignored, as no fields are used as the primary key in this mode.
kafka: Must be a trio representing the Kafka coordinates; defaults to __connect_topic,__connect_partition,__connect_offset if empty.
record_key: If empty, all fields from the key struct are used; otherwise used to extract the desired fields. For a primitive key, exactly one field name must be configured.
record_value: If empty, all fields from the value struct are used; otherwise used to extract the desired fields.
Type: list
Default: ""
Importance: medium
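For example, to build the primary key from two fields of the record value (the field names are hypothetical):
pk.mode=record_value
pk.fields=order_id,region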
CSFLE configuration
csfle.enabled
Whether client-side field level encryption (CSFLE) is enabled for the connector. CSFLE is enabled when csfle.enabled is set to true.
Type: boolean
Default: false
auto.register.schemas
Specifies whether the serializer should attempt to register the schema with Schema Registry.
Type: boolean
Default: true
Importance: medium
use.latest.version
Only applies when auto.register.schemas is set to false. If auto.register.schemas is set to false and use.latest.version is set to true, then instead of deriving a schema for the object passed to the client for serialization, Schema Registry uses the latest version of the schema in the subject for serialization.
Type: boolean
Default: true
Importance: medium
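For example, a sketch that enables CSFLE and pins serialization to the latest registered schema version, assuming the schemas are already registered in Schema Registry:
csfle.enabled=true
auto.register.schemas=false
use.latest.version=true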
Confluent Platform license
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form host1:port1,host2:port2,…. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
Type: list
Importance: high
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
Type: string
Default: _confluent-command
Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than 3 brokers, you must set this to the number of brokers (often 1).
Type: int
Default: 3
Importance: low
Confluent license properties
You can put license-related properties in the connector configuration or, starting with Confluent Platform version 6.0, in the Connect worker configuration instead of in each connector configuration.
This connector is proprietary and requires a license. The license information is stored in the _confluent-command
topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.* properties
as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments. If you are a subscriber, contact Confluent Support for more information.
Type: string
Default: ""
Valid Values: Confluent Platform license
Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
Type: string
Default: null
Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
Type: password
Default: null
Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for clients and can be used for two-way client authentication.
Type: string
Default: null
Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured.
Type: password
Default: null
Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for clients.
Type: password
Default: null
Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
Type: string
Default: PLAINTEXT
Importance: medium
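For example, a licensing client configured for SSL might look like the following. The truststore path and password are placeholders:
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/etc/security/kafka.client.truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>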
License topic configuration
A Confluent enterprise license is stored in the _confluent-command topic.
This topic is created by default and contains the license that corresponds to
the license key supplied through the confluent.license property. No public
keys are stored in Kafka topics.
The following describes how the default _confluent-command topic is
generated under different scenarios:
A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or if you leave the property empty (for example, confluent.license=).
Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
You can change the name of the _confluent-command topic using the
confluent.topic property (for instance, if your environment has strict
naming conventions). The example below shows this change and the configured
Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that
you can use for development and testing. For a production environment, you add
the normal producer, consumer, and topic configuration properties to the
connector properties, prefixed with confluent.topic..
License topic ACLs
The _confluent-command topic contains the license that corresponds to the
license key supplied through the confluent.license property. It is created
by default. Connectors that access this topic require the following ACLs
configured:
CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
DESCRIBE, READ, and WRITE on the _confluent-command topic.
Important
You can also use DESCRIBE and READ without WRITE to restrict access to read-only for license topic ACLs. If a topic exists, the LicenseManager will not try to create the topic.
You can provide access either individually for each principal that will
use the license or use a wildcard entry to
allow all clients. The following examples show commands that you can use to
configure ACLs for the resource cluster and _confluent-command topic.
Set a CREATE and DESCRIBE ACL on the resource cluster:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation CREATE --operation DESCRIBE --cluster
Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
Override Default Configuration Properties
You can override the replication factor using
confluent.topic.replication.factor. For example, when using a Kafka cluster
as a destination with fewer than three brokers (for development and testing), you
should set the confluent.topic.replication.factor property to 1.
You can override producer-specific properties by using the
producer.override.* prefix (for source connectors) and consumer-specific
properties by using the consumer.override.* prefix (for sink connectors).
You can use the defaults or customize the other properties as well. For example,
the confluent.topic.client.id property defaults to the name of the connector
with -licensing suffix. You can specify the configuration settings for
brokers that require SSL or SASL for client connections using this prefix.
You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.
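For example, for this sink connector running against a single-broker development cluster, you might lower the replication factor and override a consumer property (values are illustrative, not production guidance):
confluent.topic.replication.factor=1
consumer.override.max.poll.records=500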