Configuration Reference for Vertica Sink Connector for Confluent Platform¶
To use this connector, specify the name of the connector class in the connector.class configuration property:
connector.class=io.confluent.vertica.VerticaSinkConnector
Connector-specific configuration properties are described below.
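For example, a minimal sink configuration might look like the following sketch; the connector name, topic, and connection values are placeholders for illustration, not defaults:
name=vertica-sink
connector.class=io.confluent.vertica.VerticaSinkConnector
tasks.max=1
topics=orders
vertica.host=localhost
vertica.port=5433
vertica.database=mydb
vertica.username=dbadmin
vertica.password=<password>
confluent.topic.bootstrap.servers=localhost:9092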
Connection¶
vertica.database
The database on the Vertica system. This is used to build the JDBC URL.
- Type: string
- Importance: high
vertica.host
The Vertica host to connect to. This is used to build the JDBC URL.
- Type: string
- Importance: high
vertica.password
The password to authenticate to Vertica with.
- Type: password
- Importance: high
vertica.username
The username to authenticate to Vertica with.
- Type: string
- Importance: high
vertica.port
The Vertica port to connect to. This is used to build the JDBC URL.
- Type: int
- Default: 5433
- Valid Values: [1025,…,65535]
- Importance: medium
max.hikari.connection.pool.size
The maximum number of connections the HikariCP pool will contain.
- Type: int
- Default: 10
- Valid Values: [1,…]
- Importance: medium
Writes¶
stream.builder.cache.ms
The amount of time in milliseconds to cache the stream builder objects that are used to define the table structure.
- Type: int
- Default: 300000
- Valid Values: [1000,…,2147483647]
- Importance: high
vertica.buffer.size.bytes
The size, in bytes, of the buffer for the input stream used by the Vertica Copy Stream.
- Type: int
- Default: 1048576
- Importance: high
vertica.timeout.ms
The timeout, in milliseconds, for completing the write to Vertica.
- Type: int
- Default: 60000
- Valid Values: [10000,…,2147483647]
- Importance: high
vertica.compression
The type of compression for the data load.
- Type: string
- Default: UNCOMPRESSED
- Valid Values: UNCOMPRESSED, BZIP, GZIP, LZO
- Importance: medium
rejected.record.logging.mode
Logging mode used when Vertica rejects a record. Must be configured to one of the following (an example configuration for the file and table modes follows the related properties below):
log
- Logs the rejected records; available in the Connect logs.
file
- Writes the rejected records and exceptions to the configured files. The rejected.record.path and rejected.record.exception.path properties should be set to the designated file paths.
table
- Writes the rejected records and exceptions to rejected-record tables in Vertica, named after the original table with the value of rejected.record.table.suffix appended.
- Type: string
- Default: log
- Valid Values: [log, file, table]
- Importance: medium
rejected.record.path
Local directory path where the rejected records from Vertica are stored. This config is only required when rejected.record.logging.mode is set to file.
- Type: string
- Default: “”
- Importance: medium
rejected.record.exception.path
Local directory path where the exceptions for rejected records from Vertica are stored. This config is only required when rejected.record.logging.mode is set to file.
- Type: string
- Default: “”
- Importance: medium
rejected.record.table.suffix
Suffix for the Vertica rejected-records table name. You can provide any suffix along with a date format placeholder, for example _rejected_${yyyyddMM}. The placeholder is replaced by the current timestamp in the provided date format and is appended to the table name. By default, the value of the config is _rejected, which is added as a suffix to the Vertica table name. This config is only required when rejected.record.logging.mode is set to table.
- Type: string
- Default: _rejected
- Importance: medium
rejected.record.table.schema
Schema name for the Vertica rejected-records table. This config is only required when rejected.record.logging.mode is set to table.
- Type: string
- Default: “”
- Importance: medium
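For example, a sketch of a configuration that writes rejected records and their exceptions to local files; the paths are placeholders:
rejected.record.logging.mode=file
rejected.record.path=/tmp/vertica/rejected-records
rejected.record.exception.path=/tmp/vertica/rejected-exceptions
Alternatively, to log rejections to Vertica tables with a date-stamped suffix:
rejected.record.logging.mode=table
rejected.record.table.suffix=_rejected_${yyyyddMM}
rejected.record.table.schema=public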
vertica.load.method
The method for loading data.
- Type: string
- Default: AUTO
- Valid Values: AUTO, DIRECT, TRICKLE
- Importance: medium
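For example, a sketch of write-tuning settings that compresses the load stream and uses direct loads; the buffer and timeout values are illustrative, not recommendations:
vertica.compression=GZIP
vertica.load.method=DIRECT
vertica.buffer.size.bytes=2097152
vertica.timeout.ms=120000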
expected.records
The expected number of records the connector will process each time.
- Type: int
- Default: 10000
- Importance: low
expected.topics
The expected number of topics the connector will process in a poll.
- Type: int
- Default: 500
- Importance: low
delete.enabled
Set this value to true to process delete requests for tombstone records, that is, when a Kafka record’s value is null. The pk.mode configuration should also be set to record_key so that delete requests can be processed based on the fields present in the Kafka record’s key.
- Type: boolean
- Default: false
- Importance: medium
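For example, to process tombstone records as deletes keyed on a field from the record key; the field name id is a placeholder:
delete.enabled=true
pk.mode=record_key
pk.fields=id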
table.name.format
A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name. For example, kafka_${topic} for the topic orders maps to the table name kafka_orders and the default schema name public. You can also use this property to configure the schema name. For example, kafka_schema.kafka_${topic} for the topic orders maps to the table name kafka_orders and the schema name kafka_schema.
- Type: string
- Default: ${topic}
- Importance: medium
DDL Support¶
auto.create
Whether to issue CREATE and automatically create a missing destination table, based on the record schema.
- Type: boolean
- Default: false
- Importance: medium
auto.evolve
Whether to issue ALTER and automatically add missing columns to the table schema, relative to the record schema.
- Type: boolean
- Default: false
- Importance: medium
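For example, to let the connector create missing destination tables and add missing columns as record schemas evolve:
auto.create=true
auto.evolve=true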
Data Mapping¶
pk.mode
The primary key mode. Also see the pk.fields documentation for how the two interact. Supported modes are:
none
- No keys are used.
kafka
- Kafka coordinates (topic, partition, and offset) are used as the primary key.
record_key
- Field(s) from the record key are used; the key may be a primitive or a struct.
record_value
- Field(s) from the record value are used; the value must be a struct.
- Type: string
- Default: none
- Valid Values: [none, kafka, record_key, record_value]
- Importance: high
pk.fields
A comma-separated list of primary key field names. The runtime interpretation of this config depends on pk.mode:
none
- Ignored, as no fields are used as the primary key in this mode.
kafka
- Must be a trio representing the Kafka coordinates; defaults to __connect_topic,__connect_partition,__connect_offset if empty.
record_key
- If empty, all fields from the key struct are used; otherwise, used to extract the desired fields. For a primitive key, exactly one field name must be configured.
record_value
- If empty, all fields from the value struct are used; otherwise, used to extract the desired fields.
- Type: list
- Default: “”
- Importance: medium
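For example, to use the Kafka coordinates as the primary key with the default coordinate field names:
pk.mode=kafka
pk.fields=__connect_topic,__connect_partition,__connect_offset
Leaving pk.fields empty in this mode yields the same result, since these are the defaults.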
Confluent Platform license¶
confluent.topic.bootstrap.servers
A list of host/port pairs used to establish the initial connection to the Kafka cluster that stores licensing information. All servers in the cluster are discovered from the initial connection. This list should be in the form host1:port1,host2:port2,…. Because these servers are only used for the initial connection to discover the full cluster membership (which may change dynamically), the list need not contain the full set of servers (though you may want more than one, in case a server is down).
- Type: list
- Importance: high
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
- Type: string
- Default: _confluent-command
- Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with less than 3 brokers, you must set this to the number of brokers (often 1).
- Type: int
- Default: 3
- Importance: low
Confluent license properties¶
You can put license-related properties in the connector configuration or, starting with Confluent Platform version 6.0, in the Connect worker configuration instead of in each connector configuration.
This connector is proprietary and requires a license. The license information is stored in the _confluent-command
topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.*
properties
as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments. If you are a subscriber, contact Confluent Support for more information.
- Type: string
- Default: “”
- Valid Values: Confluent Platform license
- Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for the client and can be used for two-way client authentication.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for the client and only needed if confluent.topic.ssl.keystore.location is configured.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for the client.
- Type: password
- Default: null
- Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: PLAINTEXT
- Importance: medium
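For example, if the brokers hosting the license topic require SSL, the security-related properties might look like the following sketch; the truststore path and password are placeholders:
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/etc/security/kafka.client.truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>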
License topic configuration¶
A Confluent enterprise license is stored in the _confluent-command
topic.
This topic is created by default and contains the license that corresponds to
the license key supplied through the confluent.license
property. No public
keys are stored in Kafka topics.
The following describes how the default _confluent-command
topic is
generated under different scenarios:
- A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
- Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
You can change the name of the _confluent-command
topic using the
confluent.topic
property (for instance, if your environment has strict
naming conventions). The example below shows this change and the configured
Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that
you can use for development and testing. For a production environment, you add
the normal producer, consumer, and topic configuration properties to the
connector properties, using the confluent.topic. prefix.
License topic ACLs¶
The _confluent-command
topic contains the license that corresponds to the
license key supplied through the confluent.license
property. It is created
by default. Connectors that access this topic require the following ACLs
configured:
- CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
- DESCRIBE, READ, and WRITE on the _confluent-command topic.
Important
You can also use DESCRIBE and READ without WRITE to restrict access to read-only for license topic ACLs. If a topic exists, the LicenseManager will not try to create the topic.
You can provide access either individually for each principal that will
use the license or use a wildcard entry to
allow all clients. The following examples show commands that you can use to
configure ACLs for the resource cluster and _confluent-command
topic.
Set a CREATE and DESCRIBE ACL on the resource cluster:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation CREATE --operation DESCRIBE --cluster
Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
Override Default Configuration Properties¶
You can override the replication factor using confluent.topic.replication.factor. For example, when using a Kafka cluster as a destination with fewer than three brokers (for development and testing), you should set the confluent.topic.replication.factor property to 1.
You can override producer-specific properties by using the
producer.override.*
prefix (for source connectors) and consumer-specific
properties by using the consumer.override.*
prefix (for sink connectors).
You can use the defaults or customize the other properties as well. For example,
the confluent.topic.client.id
property defaults to the name of the connector
with -licensing
suffix. You can specify the configuration settings for
brokers that require SSL or SASL for client connections using this prefix.
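For example, a sketch of the prefixed client settings for brokers that require SASL over SSL; the mechanism and JAAS values are placeholders for your environment:
confluent.topic.security.protocol=SASL_SSL
confluent.topic.sasl.mechanism=PLAIN
confluent.topic.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<user>" password="<password>";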
You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.