KUDU SOURCE AND SINK
To use this connector, specify the name of the connector class in the connector.class configuration property.
connector.class=io.confluent.connect.kudu.KuduSinkConnector
Connector-specific configuration properties are described below.
In the connector configuration you will notice there are no security parameters. This is because SSL is not part of the JDBC standard; it depends on the JDBC driver in use. In general, you configure SSL through the connection.url parameter. For the Kudu database specifically, the connector uses the Impala JDBC driver, and you need to set up an LDAP server for username and password authentication. Pass the Impala server hostname, the Impala server port, and the Kudu database name to the Kudu connector; the connection.url is then generated automatically and resembles the following:
connection.url="jdbc:impala://<Impala server name>:<Impala server port>/<Kudu database name>"
Refer to your LDAP server's documentation for setup instructions.
Once you set up the LDAP server and have Kudu and Impala deployed properly, complete the following steps:
Start the Impala daemon with the following flags:
--enable_ldap_auth=true --ldap_uri=<LDAP URI> --ldap_bind_pattern="uid=#UID,dc=<Your DC in LDAP server>..." --ldap_passwords_in_clear_ok
Fill in your LDAP URI and the DCs from your LDAP server.
LDAP URI should resemble: ldap://<LDAP server IP>:10389.
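For example, with a hypothetical LDAP server at 10.0.0.5 and a base DN of dc=example,dc=com, the daemon invocation might resemble the following sketch (the values are placeholders):
impalad --enable_ldap_auth=true \
  --ldap_uri=ldap://10.0.0.5:10389 \
  --ldap_bind_pattern="uid=#UID,dc=example,dc=com" \
  --ldap_passwords_in_clear_ok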
impala.server
The Impala server address used to format the JDBC URL for the connection.
impala.port
The port used to format the JDBC URL for the connection. By default, Impala uses port 21050.
kudu.database
The Kudu database to connect to.
impala.ldap.user
The username for LDAP authentication with Impala.
impala.ldap.password
The password for LDAP authentication with Impala.
kudu.tablet.replicas
The number of replicas for Kudu tablets.
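Putting the connection-related properties together, a minimal sketch might look like the following (the server name, credentials, and database are hypothetical):
connector.class=io.confluent.connect.kudu.KuduSinkConnector
impala.server=impala-host
impala.port=21050
kudu.database=default
impala.ldap.user=kudu-user
impala.ldap.password=secret
kudu.tablet.replicas=3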
insert.mode
The insertion mode to use. Supported modes are:
insert: Use standard SQL INSERT statements.
update: Use the appropriate update semantics for the target database, if supported by the connector (for example, UPDATE).
batch.size
Specifies how many records to attempt to batch together for insertion into the destination table, when possible.
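For example, a sketch combining the two settings above (the values are illustrative, not defaults):
insert.mode=insert
batch.size=3000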
table.name.format
A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name.
For example, kafka_${topic} for the topic orders maps to the table name kafka_orders.
pk.mode
The primary key mode. Also refer to the pk.fields documentation for how the two interact. Supported modes are:
none: No keys utilized.
kafka: Kafka coordinates are used as the PK.
record_key: Field(s) from the record key are used, which may be a primitive or a struct.
record_value: Field(s) from the record value are used, which must be a struct.
pk.fields
List of comma-separated primary key field names. The runtime interpretation of this configuration depends on the pk.mode:
none: Ignored, as no fields are used as primary key in this mode.
kafka: Must be a trio representing the Kafka coordinates; defaults to __connect_topic,__connect_partition,__connect_offset if empty.
record_key: If empty, all fields from the key struct are used; otherwise, used to extract the desired fields. For a primitive key, exactly one field name must be configured.
record_value: If empty, all fields from the value struct are used; otherwise, used to extract the desired fields.
fields.whitelist
List of comma-separated record value field names. If empty, all fields from the record value are utilized; otherwise, used to filter to the desired fields.
Note
pk.fields applies independently to the field(s) that form the primary key columns in the destination database, while this configuration applies to the remaining columns; see the sketch below.
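As a sketch of the interplay (the field names are hypothetical), the following uses one value field as the primary key column and whitelists two other value fields as regular columns:
pk.mode=record_value
pk.fields=id
fields.whitelist=name,amount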
auto.create
Whether to automatically create the destination table based on the record schema, if it is found to be missing, by issuing CREATE.
auto.evolve
Whether to automatically add columns to the table schema when they are found to be missing relative to the record schema, by issuing ALTER.
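For example, to let the connector both create missing tables and add missing columns:
auto.create=true
auto.evolve=true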
quote.sql.identifiers
When to quote table names, column names, and other identifiers in SQL statements. For backward compatibility, the default is 'always'.
max.retries
The maximum number of times to retry on errors before failing the task.
retry.backoff.ms
The time in milliseconds to wait following an error before a retry attempt is made.
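For example, a sketch of a retry policy that retries up to 10 times, waiting 3 seconds between attempts (the values are illustrative):
max.retries=10
retry.backoff.ms=3000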
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the following form:
host1:port1,host2:port2,...
Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with less than 3 brokers, you must set this to the number of brokers (often 1).
Tip
Starting with Confluent Platform version 6.0, you can put license-related properties in the Connect worker configuration instead of in each connector configuration.
This connector is proprietary and requires a license. The license information is stored in the _confluent-command topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.* properties as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments.
If you are a subscriber, please contact Confluent Support for more information.
confluent.topic.ssl.truststore.location
The location of the trust store file.
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for the client and can be used for two-way authentication of the client.
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for the client.
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
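For example, a sketch of the licensing-client security properties for a broker that requires SSL (the paths and password are hypothetical):
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/etc/security/kafka.client.truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>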
A Confluent enterprise license is stored in the _confluent-command topic. This topic is created by default and contains the license that corresponds to the license key supplied through the confluent.license property.
No public keys are stored in Kafka topics.
The following describes how the default _confluent-command topic is generated under different scenarios:
A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
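confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1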
You can change the name of the _confluent-command topic using the confluent.topic property (for instance, if your environment has strict naming conventions). The example below shows this change and the configured Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that you can use for development and testing. For a production environment, you add the normal producer, consumer, and topic configuration properties to the connector properties, prefixed with confluent.topic..
The _confluent-command topic contains the license that corresponds to the license key supplied through the confluent.license property. It is created by default. Connectors that access this topic require the following ACLs configured:
CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
DESCRIBE, READ, and WRITE on the _confluent-command topic.
You can provide access either individually for each principal that will use the license or use a wildcard entry to allow all clients. The following examples show commands that you can use to configure ACLs for the resource cluster and _confluent-command topic.
Set a CREATE and DESCRIBE ACL on the resource cluster:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation CREATE --operation DESCRIBE --cluster
Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
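Alternatively, a wildcard entry grants access to all clients; for example, on the _confluent-command topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal "User:*" \
  --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command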
You can override the replication factor using confluent.topic.replication.factor. For example, when using a Kafka cluster as a destination with fewer than three brokers (for development and testing), you should set the confluent.topic.replication.factor property to 1.
You can override producer-specific properties by using the confluent.topic.producer. prefix and consumer-specific properties by using the confluent.topic.consumer. prefix.
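For example, a sketch of client-specific overrides (the property values are illustrative):
confluent.topic.producer.compression.type=lz4
confluent.topic.consumer.max.poll.records=100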
You can use the defaults or customize the other properties as well. For example, the confluent.topic.client.id property defaults to the name of the connector with a -licensing suffix. You can specify the configuration settings for brokers that require SSL or SASL for client connections using this prefix.
You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.