AWS DynamoDB Sink Connector Configuration Properties¶
To use this connector, specify the name of the connector class in the connector.class
configuration property.
connector.class=io.confluent.connect.aws.dynamodb.DynamoDbSinkConnector
Connector-specific configuration properties are described below.
DynamoDB Parameters¶
aws.dynamodb.pk.hash
Defines how the table’s hash key is extracted from the records. By default, partition is used as the hash key. The maximum size of a partition with this configuration is 10 GB as per DynamoDB limits.
This hash key reference is created from a record reference and optional alias name. If the alias name is absent, then the last field of the reference is used as the column name.
Valid record references:
- partition - refers to the Kafka partition number where the record originated
- offset - refers to the Kafka offset of the record
- key.[fieldNameOrDotDelimitedPath] - uses the record’s key itself or one of the fields from the record key
- value.[fieldNameOrDotDelimitedPath] - uses the record’s value itself or one of the fields from the record’s value
Valid examples:
- partition - uses the Kafka partition number as the hash key, in a field named partition
- key - uses the record key as the hash key
- key.customerId - when the message key is a Struct, uses its customerId field as the hash key
- value.userId:user - uses the userId field from the record value as the hash key, in a field named user
- Type: string
- Default: partition
- Valid Values: Matches regex
^(?<reference>partition|offset|key(\.[a-zA-Z1-9\_\-\.]{3,255})*|value(\.[a-zA-Z1-9\_\-\.]{3,255})*){1}(:(?<alias>[a-zA-Z1-9\_\-\.]{3,255}))?$
- Importance: high
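For example, a minimal sketch that uses a field from the record value as the hash key, with user as the column name (the userId field is illustrative, not a connector default):
aws.dynamodb.pk.hash=value.userId:user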
aws.dynamodb.pk.sort
Defines how the table’s sort key is extracted from the records. By default, the record offset is used as the sort key. This sort key reference is created from a record reference and optional alias name. If the alias name is absent, then the last field of the reference is used as the column name.
If no sort key is required, configure this property to be an empty string.
Valid record references:
- partition - refers to the Kafka partition number where the record originated
- offset - refers to the Kafka offset of the record
- key.[fieldNameOrDotDelimitedPath] - uses the record’s key itself or one of the fields from the record key
- value.[fieldNameOrDotDelimitedPath] - uses the record’s value itself or one of the fields from the record’s value
Valid examples:
- partition - uses the Kafka partition number as the sort key, in a field named partition
- key - uses the record key as the sort key
- key.customerId - when the message key is a Struct, uses its customerId field as the sort key
- value.userId:user - uses the userId field from the record value as the sort key, in a field named user
- Type: string
- Default: offset
- Valid Values: Matches the regex
^(?<reference>partition|offset|key(\.[a-zA-Z1-9\_\-\.]{3,255})*|value(\.[a-zA-Z1-9\_\-\.]{3,255})*){1}(:(?<alias>[a-zA-Z1-9\_\-\.]{3,255}))?$
or is an empty string
- Importance: high
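For example, a sketch that disables the sort key by setting the property to an empty string (by default the Kafka offset is used):
aws.dynamodb.pk.sort=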
aws.dynamodb.proxy.user
DynamoDB Proxy User.
- Type: string
- Default: null
- Importance: low
aws.dynamodb.credentials.provider.class
Credentials provider or provider chain to use for authentication to AWS. By default, the connector uses DefaultAWSCredentialsProviderChain.
- Type: class
- Default: com.amazonaws.auth.DefaultAWSCredentialsProviderChain
- Valid Values: Any class implementing: interface com.amazonaws.auth.AWSCredentialsProvider
- Importance: low
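For example, a sketch that states the default provider chain explicitly; any class implementing com.amazonaws.auth.AWSCredentialsProvider can be substituted:
aws.dynamodb.credentials.provider.class=com.amazonaws.auth.DefaultAWSCredentialsProviderChain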
aws.dynamodb.endpoint
Overrides the endpoint configuration and AWS service discovery for DynamoDB.
- Type: string
- Default: null
- Importance: low
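For example, a sketch that points the connector at a locally running DynamoDB endpoint (the URL is illustrative):
aws.dynamodb.endpoint=http://localhost:8000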
aws.dynamodb.region
The AWS region to be used by the connector.
- Type: string
- Default: us-west-2
- Valid Values: one of [ap-south-1, eu-north-1, eu-west-3, eu-west-2, eu-west-1, ap-northeast-2, us-gov-east-1, ap-northeast-1, ca-central-1, sa-east-1, ap-east-1, cn-north-1, us-gov-west-1, ap-southeast-1, ap-southeast-2, eu-central-1, us-east-1, us-east-2, us-west-1, cn-northwest-1, us-west-2]
- Importance: medium
table.name.format
A format string for the destination table name, which may contain ‘${topic}’ as a placeholder for the originating topic name.
For example, kafka_${topic} for the topic ‘orders’ will map to the table name ‘kafka_orders’.
- Type: string
- Default: ${topic}
- Importance: medium
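For example, a sketch that prefixes every destination table name with kafka_:
table.name.format=kafka_${topic}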
Proxy Connection Parameters¶
aws.dynamodb.proxy.url
DynamoDB Proxy URL. For example, http://proxy.example.com:8080 or https://user:pass@proxy.example.com:8443.
- Type: string
- Default: “”
- Importance: low
aws.dynamodb.proxy.password
DynamoDB Proxy Password.
- Type: password
- Default: null
- Importance: low
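For example, a sketch that combines the proxy properties above (the host, port, and credentials are illustrative):
aws.dynamodb.proxy.url=http://proxy.example.com:8080
aws.dynamodb.proxy.user=proxyuser
aws.dynamodb.proxy.password=proxypassword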
Confluent Platform license¶
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form host1:port1,host2:port2,…. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
- Type: list
- Importance: high
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
- Type: string
- Default: _confluent-command
- Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with less than 3 brokers, you must set this to the number of brokers (often 1).
- Type: int
- Default: 3
- Importance: low
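For example, a minimal sketch for a single-broker development environment:
confluent.topic.replication.factor=1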
Confluent license properties¶
Note
This connector is proprietary and requires a license. The license information
is stored in the _confluent-command
topic. If the broker requires SSL for
connections, you must include the security-related confluent.topic.*
properties as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments. If you are a subscriber, please contact Confluent Support for more information.
- Type: string
- Default: “”
- Valid Values: Confluent Platform license
- Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for client and can be used for two-way authentication for client.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for client.
- Type: password
- Default: null
- Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: “PLAINTEXT”
- Importance: medium
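For example, a sketch of the security-related license topic properties for brokers that require SSL (the path and placeholder are illustrative):
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/path/to/kafka.client.truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>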
License topic configuration¶
A Confluent enterprise license is stored in the _confluent-command
topic.
This topic is created by default and contains the license that corresponds to
the license key supplied through the confluent.license
property.
Note
No public keys are stored in Kafka topics.
The following describes how the default _confluent-command
topic is
generated under different scenarios:
- A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
- Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
You can change the name of the _confluent-command
topic using the
confluent.topic
property (for instance, if your environment has strict
naming conventions). The example below shows this change and the configured
Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that
you can use for development and testing. For a production environment, you add
the normal producer, consumer, and topic configuration properties to the
connector properties, prefixed with confluent.topic.
.
License topic ACLs¶
The _confluent-command
topic contains the license that corresponds to the
license key supplied through the confluent.license
property. It is created
by default. Connectors that access this topic require the following ACLs
configured:
- CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
- DESCRIBE, READ, and WRITE on the
_confluent-command
topic.
You can provide access either individually for each principal that will
use the license or use a wildcard entry to
allow all clients. The following examples show commands that you can use to
configure ACLs for the resource cluster and _confluent-command
topic.
Set a CREATE and DESCRIBE ACL on the resource cluster:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation CREATE --operation DESCRIBE --cluster
Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
  --add --allow-principal User:<principal> \
  --operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
Overriding Default Configuration Properties¶
You can override the replication factor using confluent.topic.replication.factor. For example, when using a Kafka cluster as a destination with fewer than three brokers (for development and testing), you should set the confluent.topic.replication.factor property to 1.
You can override producer-specific properties by using the
confluent.topic.producer.
prefix and consumer-specific properties by using
the confluent.topic.consumer.
prefix.
You can use the defaults or customize the other properties as well. For example, the confluent.topic.client.id property defaults to the name of the connector with a -licensing suffix. You can specify the configuration settings for brokers that require SSL or SASL for client connections using this prefix.
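For example, a sketch that overrides producer and consumer settings for the license topic clients through these prefixes (the values are illustrative):
confluent.topic.producer.acks=all
confluent.topic.consumer.session.timeout.ms=30000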
You cannot override the cleanup policy of a topic because the topic always has a single partition and is compacted. Also, do not specify serializers and deserializers using this prefix; they are ignored if added.