Amazon DynamoDB Sink Connector for Confluent Cloud

The fully-managed Amazon DynamoDB Sink connector for Confluent Cloud is used to export messages from Apache Kafka® to Amazon DynamoDB, allowing you to export your Kafka data into your DynamoDB key-value and document database.

The connector periodically polls data from Kafka and writes it to Amazon DynamoDB. The data from each Kafka topic is batched and sent to DynamoDB. Because of constraints from DynamoDB, each batch can only contain one change per key, and each failure in a batch must be handled before the next batch is processed. These constraints ensure exactly once delivery. When a table doesn’t exist, the connector creates the table dynamically (depending on the connector configuration and permissions).

Features

  • Auto-created tables: Tables can be auto-created based on topic names and auto-evolved based on the record schema.
  • Select configuration properties:
    • aws.dynamodb.pk.hash: Defines how the DynamoDB table hash key is extracted from the records. By default, the Kafka partition number where the record is generated is used as the hash key. Other record references can be used to create the hash key. See DynamoDB hash keys and sort keys for examples.
    • aws.dynamodb.pk.sort: Defines how the DynamoDB table sort key is extracted from the records. By default, the record offset is used as the sort key. The sort key can be created from other references. See DynamoDB hash keys and sort keys for examples.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Limitations

Be sure to review the following information.

DynamoDB IAM policy

Create an IAM user for the connector and assign an IAM policy to that user. The policy must include the following minimum permissions:

  • CreateTable
  • BatchWriteItem
  • Scan
  • DescribeTable

You can copy the following JSON policy. For more information, see Creating policies on the JSON tab.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "<optional-identifier>",
            "Effect": "Allow",
            "Action": [
                "dynamodb:CreateTable",
                "dynamodb:BatchWriteItem",
                "dynamodb:Scan",
                "dynamodb:DescribeTable"
            ],
            "Resource": "*"
        }
    ]
}
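
If you manage AWS resources with the AWS CLI, you can create this policy and attach it to the connector's IAM user. This is a sketch only; the policy file name, policy name, and the user and account ID placeholders are illustrative:

aws iam create-policy \
  --policy-name confluent-dynamodb-sink \
  --policy-document file://dynamodb-sink-policy.json

aws iam attach-user-policy \
  --user-name <connector-iam-user> \
  --policy-arn arn:aws:iam::<aws-account-id>:policy/confluent-dynamodb-sink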

DynamoDB hash keys and sort keys

The following examples show how the aws.dynamodb.pk.hash and aws.dynamodb.pk.sort properties are used. Each example uses this Avro record:

{
    "ordertime": 1511538140542,
    "orderid": 3243,
    "itemid": "Item_117",
    "orderunits": 1.135368875862714,
    "address": {
      "city": "City_43",
      "state": "State_53"
    }
}

Example 1

The table hash key is set to the "partition" number where the record was generated. The table sort key is the record "offset". The following example uses these default configuration properties:

  • "aws.dynamodb.pk.hash":"partition"
  • "aws.dynamodb.pk.sort":"offset"

Using these properties, the table in DynamoDB would be similar to the following example:

partition | offset | address                                              | itemid   | orderid | ordertime     | orderunits
+---------+--------+------------------------------------------------------+----------+---------+---------------+--------------------+
0         | 6075   | {"city":{"S":"City_66"},"state":{"S":"State_42"},…}  | Item_246 | 6075    | 1503153618445 | 3.0818679447783652
0         | 6076   | {"city":{"S":"City_38"},"state":{"S":"State_49"},…}  | Item_536 | 6076    | 1515872966736 | 1.6264301342871472
0         | 6077   | {"city":{"S":"City_32"},"state":{"S":"State_62"},…}  | Item_997 | 6077    | 1515872966736 | 4.189731783402986

Example 2

The table hash key is set to "value.orderid" and no sort key is used. Because no sort key is required in this example, "aws.dynamodb.pk.sort" is set to an empty string ("").

  • "aws.dynamodb.pk.hash":"value.orderid"
  • "aws.dynamodb.pk.sort":""

Using these properties, the table in DynamoDB would be similar to the following example:

orderid | address                                              | itemid   | ordertime     | orderunits
+-------+------------------------------------------------------+----------+---------------+--------------------+
2007    | {"city":{"S":"City_69"},"state":{"S":"State_19"},…}  | Item_809 | 1502071602628 | 8.9866703527786968
2011    | {"city":{"S":"City_32"},"state":{"S":"State_11"},…}  | Item_524 | 1494848995282 | 2.581428966318308
2012    | {"city":{"S":"City_88"},"state":{"S":"State_94"},…}  | Item_169 | 1491811930181 | 1.5716303109073455

Example 3

The table hash key is set to "value.orderid". The table sort key is set to "value.ordertime". Note that in this example, one of the record fields ("ordertime") is used as the sort key.

  • "aws.dynamodb.pk.hash":"value.orderid"
  • "aws.dynamodb.pk.sort":"value.ordertime"

Using these properties, the table in DynamoDB would be similar to the following example:

orderid | ordertime     | address                                              | itemid   | orderunits
+-------+---------------+------------------------------------------------------+----------+--------------------+
4520    | 1519049522647 | {"city":{"S":"City_99"},"state":{"S":"State_38"},…}  | Item_650 | 7.658775648983428
4522    | 1519049522647 | {"city":{"S":"City_72"},"state":{"S":"State_89"},…}  | Item_503 | 2.1383312466612261
4523    | 1507101063792 | {"city":{"S":"City_74"},"state":{"S":"State_99"},…}  | Item_369 | 2.1383312466612261
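
With a composite key like this, individual orders can be looked up directly by hash and sort key. As an illustration only (assuming the connector wrote to a table named kafka-orders and that orderid and ordertime are stored as DynamoDB numbers), an AWS CLI query against that table might look like the following:

aws dynamodb query \
  --table-name kafka-orders \
  --key-condition-expression "orderid = :oid AND ordertime >= :ts" \
  --expression-attribute-values '{":oid": {"N": "4520"}, ":ts": {"N": "1519049522647"}}'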

Managing Throughput

When the connector creates a table automatically, 10 write capacity units are provisioned. If the connector needs to send records faster than the provisioned capacity, you may see the following error message:

Hit provisioning capacity, will retry indefinitely.. Increase your throughput capacity

You can increase the write capacity or use Amazon DynamoDB Auto Scaling.
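
For example, assuming the connector auto-created a table named kafka-pageviews, you could raise its provisioned write capacity with the AWS CLI (the table name and capacity values are illustrative):

aws dynamodb update-table \
  --table-name kafka-pageviews \
  --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=50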

Quick Start

Use this quick start to get up and running with the Confluent Cloud Amazon DynamoDB Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to Amazon DynamoDB.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
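
For example, the following command creates an API key scoped to a Kafka cluster (the cluster ID shown is illustrative):

confluent api-key create --resource lkc-123456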

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Amazon DynamoDB Sink connector card.

Amazon DynamoDB Sink Connector Card

Step 4: Enter the connector details

Note

  • Ensure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

At the Add Amazon DynamoDB Sink Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check the results in DynamoDB

Check to verify that the database is being populated.
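
For example, you can confirm that the connector created the expected table with the AWS CLI. The table name below assumes a topic named pageviews and a table.name.format of kafka-${topic}:

aws dynamodb describe-table --table-name kafka-pageviews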

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows required and optional connector properties.

{
  "name": "DynamoDbSinkConnector_0",
  "config": {
    "topics": "pageviews",
    "input.data.format": "AVRO",
    "connector.class": "DynamoDbSink",
    "name": "DynamoDbSinkConnector_0",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "<my-kafka-api-key>",
    "kafka.api.secret": "<my-kafka-api-secret>",
    "aws.access.key.id": "********************",
    "aws.secret.access.key": "****************************************",
    "aws.dynamodb.pk.hash": "value.userid",
    "aws.dynamodb.pk.sort": "value.pageid",
    "table.name.format": "kafka-${topic}",
    "tasks.max": "1"
  }
}

Note the following property definitions:

  • "name": Sets a name for your new connector.
  • "connector.class": Identifies the connector plugin name.
  • "topics": Identifies the topic name or a comma-separated list of topic names.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

  • "aws.dynamodb.pk.hash": Defines how the DynamoDB table hash key is extracted from the records. By default, the Kafka partition number where the record is generated is used as the hash key. The hash key can be created from other record references. See DynamoDB hash keys and sort keys for examples. Note that the maximum size of a partition using the default configuration is limited to 10 GB (defined by Amazon DynamoDB).

  • "aws.dynamodb.pk.sort": Defines how the DynamoDB table sort key is extracted from the records. By default, the record offset is used as the sort key. If no sort key is required, use an empty string for this property "". The sort key can be created from other record references. See DynamoDB hash keys and sort keys for examples.

  • "table.name.format": The property is optional and defaults to the name of the Kafka topic. To create a table name format use the syntax ${topic}. For example, kafka_${topic} for the topic orders maps to the table name kafka_orders.

  • "tasks.max": Maximum number of tasks the connector can run. See Confluent Cloud connector limitations for additional task information.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. See Unsupported transformations for a list of SMTs that are not supported with this connector.

See Configuration Properties for all property values and definitions.

Step 4: Load the configuration file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file dynamodb-sink-config.json

Example output:

Created connector DynamoDbSinkConnector_0 lcc-ix4dl

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID          |       Name              | Status  | Type
+-----------+-------------------------+---------+------+
lcc-ix4dl   | DynamoDbSinkConnector_0 | RUNNING | sink
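
To see more detail for a single connector, you can also describe it by ID (the ID shown is from the example output above):

confluent connect cluster describe lcc-ix4dl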

Step 6: Check the results in DynamoDB

Check to verify that the database is being populated.
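
For example, assuming the configuration used earlier (a pageviews topic with table.name.format set to kafka-${topic}), you could scan a few items from the target table with the AWS CLI:

aws dynamodb scan --table-name kafka-pageviews --max-items 5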

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string
  • Default: default
  • Importance: medium

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string
  • Default: JSON
  • Importance: high
input.key.format

Sets the input Kafka record key format. Valid entries are AVRO, BYTES, JSON, JSON_SR, PROTOBUF, or STRING. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string
  • Default: BYTES
  • Valid Values: AVRO, BYTES, JSON, JSON_SR, PROTOBUF, STRING
  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high
kafka.service.account.id

The service account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high

AWS credentials

aws.access.key.id
  • Type: password
  • Importance: high
aws.secret.access.key
  • Type: password
  • Importance: high

DynamoDB Parameters

aws.dynamodb.pk.hash

Defines how the DynamoDB table hash key is extracted from the records. By default, the Kafka partition number where the record is generated is used as the hash key.

  • Type: string
  • Default: partition
  • Importance: high
aws.dynamodb.pk.sort

Defines how the DynamoDB table sort key is extracted from the records. By default, the record offset is used as the sort key. Use an empty string if no sort key is required.

  • Type: string
  • Default: offset
  • Importance: high
table.name.format

A format string for the destination table name, which may contain ${topic} as a placeholder for the originating topic name.

For example, kafka_${topic} for the topic orders will map to the table name kafka_orders.

  • Type: string
  • Default: ${topic}
  • Importance: medium

Consumer configuration

max.poll.interval.ms

The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).

  • Type: long
  • Default: 300000 (5 minutes)
  • Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
  • Importance: low
max.poll.records

The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.

  • Type: long
  • Default: 500
  • Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
  • Importance: low

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int
  • Valid Values: [1,…]
  • Importance: high

Auto-restart policy

auto.restart.on.user.error

Enable connector to automatically restart on user-actionable errors.

  • Type: boolean
  • Default: true
  • Importance: medium

Additional Configs

consumer.override.auto.offset.reset

Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the “earliest” offset or the “latest” offset (the default). You can also select “none” if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset

  • Type: string
  • Importance: low
consumer.override.isolation.level

Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level

  • Type: string
  • Importance: low
header.converter

The converter class for the headers. This is used to serialize and deserialize the headers of the messages.

  • Type: string
  • Importance: low
value.converter.allow.optional.map.keys

Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.

  • Type: boolean
  • Importance: low
value.converter.auto.register.schemas

Specify if the Serializer should attempt to register the Schema.

  • Type: boolean
  • Importance: low
value.converter.connect.meta.data

Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Type: boolean
  • Importance: low
value.converter.enhanced.avro.schema.support

Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.

  • Type: boolean
  • Importance: low
value.converter.enhanced.protobuf.schema.support

Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.flatten.unions

Whether to flatten unions (oneofs). Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.generate.index.for.unions

Whether to generate an index suffix for unions. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.generate.struct.for.nulls

Whether to generate a struct variable for null values. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.int.for.enums

Whether to represent enums as integers. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.latest.compatibility.strict

Verify latest subject version is backward compatible when use.latest.version is true.

  • Type: boolean
  • Importance: low
value.converter.object.additional.properties

Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.

  • Type: boolean
  • Importance: low
value.converter.optional.for.nullables

Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.optional.for.proto2

Whether proto2 optionals are supported. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.use.latest.version

Use latest version of schema in subject for serialization when auto.register.schemas is false.

  • Type: boolean
  • Importance: low
value.converter.use.optional.for.nonrequired

Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.

  • Type: boolean
  • Importance: low
value.converter.wrapper.for.nullables

Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.wrapper.for.raw.primitives

Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
key.converter.key.subject.name.strategy

How to construct the subject name for key schema registration.

  • Type: string
  • Default: TopicNameStrategy
  • Importance: low
value.converter.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

BASE64 to serialize DECIMAL logical types as base64 encoded binary data and

NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Type: string
  • Default: BASE64
  • Importance: low
value.converter.flatten.singleton.unions

Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.

  • Type: boolean
  • Default: false
  • Importance: low
value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string
  • Default: DefaultReferenceSubjectNameStrategy
  • Importance: low
value.converter.value.subject.name.strategy

Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Type: string
  • Default: TopicNameStrategy
  • Importance: low

Next Steps

  • For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.

  • Try Confluent Cloud on AWS Marketplace with $1000 of free usage for 30 days, and pay as you go. No credit card is required.