Amazon S3 Sink Connector for Confluent Cloud¶
You can use the fully-managed Amazon S3 Sink connector for Confluent Cloud to export Avro, JSON Schema, Protobuf, JSON (schemaless), or Bytes data from Apache Kafka® topics to S3 objects in Avro, Parquet, JSON, or Bytes format. Depending on your environment, the S3 connector can export data by guaranteeing exactly-once delivery semantics to consumers of the S3 objects it produces.
Confluent Cloud is available through AWS Marketplace or directly from Confluent.
The fully-managed Amazon S3 Sink connector periodically polls data from Kafka and in turn uploads it to S3. A time-based partitioner is used to split the data of every Kafka partition into chunks. Each chunk of data is represented as an S3 object. The key name encodes the topic, the Kafka partition, and the start offset of this data chunk. The size of each data chunk is determined by the number of records written to S3 and by schema compatibility.
Note
- This Quick Start is for the fully-managed Confluent Cloud connector. If you are installing the connector locally for Confluent Platform, see Amazon S3 Sink Connector for Confluent Platform.
- If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.
Features¶
The Amazon S3 Sink connector provides the following features:
Exactly Once Delivery: Records that are exported using a deterministic partitioner are delivered with exactly-once semantics regardless of the eventual consistency of Amazon S3.
Note that if versioning is enabled for the S3 bucket, you might see multiple versions of the same file in S3; but, if you view the most recent version among those files, you will see that the persistence of data exactly once remains valid.
Provider integration support: The connector supports IAM role-based authorization using Confluent Provider Integration. For more information about provider integration setup, see the IAM roles authentication.
Client-side field level encryption (CSFLE) support: The connector supports CSFLE for sensitive data. For more information about CSFLE setup, see the connector configuration.
Data Format with or without a Schema: The connector supports input data from Kafka topics in Avro, JSON Schema, Protobuf, JSON (schemaless), or Bytes format and exports data to Amazon S3 in Avro, Parquet, JSON, or Bytes format. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro). See Schema Registry Enabled Environments for additional information.
Partitioner: The connector supports the TimeBasedPartitioner class based on the Kafka class TimeStamp. Time-based partitioning options are daily or hourly.
Scheduled Rotation and Rotation Interval: The connector supports a regularly scheduled interval for closing and uploading files to storage. See Scheduled Rotation for details.
Flush size: Defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters.
The following scenarios describe a couple of ways records may be flushed to storage:
You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.
For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
Writing Record Keys and Headers: In addition to writing the value files to storage, you can enable the connector to write the associated Kafka record keys and headers to storage as files. To enable writing keys, set the configuration property store.kafka.keys to true. To enable writing headers, set store.kafka.headers to true. After enabling these configuration properties, the connector writes keys and headers as additional files. These files use the same name as the associated file that stores the record values, with an extension identifying the part of the record (for example, <filename>.keys.avro and <filename>.headers.avro). Key and header files have a one-to-one mapping to the associated value files.
Consider the following when enabling this feature:
- If you configure the connector to store keys or headers as files and the Kafka record has no key or headers present, the connector writes the record to the DLQ. The record will not be in the stored output in Amazon S3.
- If both store.kafka.keys and store.kafka.headers are set to true, schema evolution will only work for record values, and not keys and headers. If the record headers and keys have schemas, and records are sent with a different schema from the initial one, the connector stops and the task fails.
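As a rough illustration only (this fragment is not part of the original example configuration; the property names are the ones listed under Configuration Properties below, and choosing Avro for the key and header output is an assumption), enabling keys and headers in a CLI- or API-managed connector could look like the following fragment merged into the connector configuration JSON:
"store.kafka.keys": "true",
"output.keys.format": "AVRO",
"store.kafka.headers": "true",
"output.headers.format": "AVRO"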
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see Amazon S3 Sink Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Schema Registry Enabled Environments.
User Account IAM Policy¶
The AWS user account accessing the S3 bucket must have the following effective permissions:
- ListAllMyBuckets
- ListBucket
- GetBucketLocation
- ListBucketMultipartUploads
- PutObject
- GetObject
- AbortMultipartUpload
- ListMultipartUploadParts
Copy the following JSON to create the IAM policy for the user account. Change <bucket-name> to a real bucket name. For more information, see Create and attach a policy to an IAM user.
This is the IAM policy for the user account and not a bucket policy.
Note
- If you use object tagging in the S3 bucket, set the connector configuration property s3.object.tagging to true. When you enable object tagging, you must also include s3:PutObjectTagging in the IAM policy for the user account. This optional entry is highlighted in the following JSON example.
- If you use AWS Key Management Service (KMS), you must modify the key policy to grant the IAM user account permission for the kms:GenerateDataKey and kms:Decrypt actions. This allows the connector to access the S3 bucket. For more information, see this AWS knowledge center article.
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"s3:ListAllMyBuckets"
],
"Resource":"arn:aws:s3:::*"
},
{
"Effect":"Allow",
"Action":[
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:ListBucketMultipartUploads"
],
"Resource":"arn:aws:s3:::<bucket-name>"
},
{
"Effect":"Allow",
"Action":[
"s3:PutObject",
"s3:PutObjectTagging",
"s3:GetObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts"
],
"Resource":"arn:aws:s3:::<bucket-name>/*"
}
]
}
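If you use AWS KMS for the bucket (see the Note above), the key policy also needs a statement granting the connector's IAM user the kms:GenerateDataKey and kms:Decrypt actions. The following is only a sketch: the Sid, account ID, and user name are placeholders, and the linked AWS knowledge center article is the authoritative reference.
{
"Sid":"AllowS3SinkConnectorKmsAccess",
"Effect":"Allow",
"Principal":{
"AWS":"arn:aws:iam::<account-id>:user/<connector-user>"
},
"Action":[
"kms:GenerateDataKey",
"kms:Decrypt"
],
"Resource":"*"
}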
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud S3 Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to an S3 bucket.
Prerequisites¶
Ensure you meet all the following prerequisites:
- Authorized access to a Confluent Cloud cluster on AWS.
- The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
- Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
- The data system the sink connector is connecting to should be in the same region as your Confluent Cloud cluster. If you use a different region or cloud platform, be aware that you may incur additional data transfer charges. Contact your Confluent account team or Confluent Support if you need to use Confluent Cloud and connect to a data system that is in a different region or on a different cloud platform.
- For networking considerations, see Networking and DNS. To use a set of public egress IP addresses, see Public Egress IP Addresses for Confluent Cloud Connectors. If connecting from a privately networked cluster, see Private Network Connectivity.
- An AWS User Account IAM Policy configured for bucket access.
- An AWS account configured with Access Keys. You use these access keys when setting up the connector.
- Kafka cluster credentials. The following lists the different ways you can provide credentials.
- Enter an existing service account resource ID.
- Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
- Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
- (Optional) Confluent Cloud Schema Registry enabled for your cluster, if you are using a messaging schema (like Apache Avro). See Work with schemas.
Caution
You can’t mix schema and schemaless records in storage using kafka-connect-storage-common. Attempting this causes a runtime exception.
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster¶
See the Quick Start for Confluent Cloud for installation instructions.
Step 2: Add a connector¶
In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 4: Enter the connector details¶
Note
- Ensure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
At the Add Amazon S3 Sink connector screen, complete the following:
If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.
To create a new topic, click +Add new topic.
- Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:
- My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.
- Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.
- Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.
- Click Continue.
- Under Amazon credentials, select how you want to authenticate with AWS:
- If you select Access Keys, enter your AWS credentials in the Amazon Access Key ID and Amazon Secret Access Key fields. For information about how to set these up, see Access Keys.
- If you select IAM Roles, choose an existing integration name under Provider integration name dropdown that has access to your resource. For more information, see Quick Start for Confluent Cloud Provider Integration.
- Under the Amazon S3 bucket name section, enter the S3 bucket name.
- Click Continue.
Note
See the Configuration Properties for configuration property values and definitions.
Select the Input Kafka record value format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, JSON (schemaless), or BYTES. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). The following input-to-output formats are not supported for this connector:
- Input format JSON to output format AVRO
- Input format JSON to output format PARQUET
Select the Output Kafka record value format (data coming from the connector): AVRO, PARQUET, JSON, or BYTES. A valid schema must be available in Schema Registry to use a schema- based message format (for example, Avro).
Tip
The following Topic directory, Path format, and Time interval properties can be used to build a directory structure for data stored in S3. For example: You set Time interval to Hourly, Topics directory to json_logs/hourly, and Path format to 'dt'=YYYY-MM-dd/'hr'=HH. The result is the directory structure: s3://<s3-bucket-name>/json_logs/hourly/<Topic-Name>/dt=2020-02-06/hr=09/<files>.
- Note that the S3 Sink connector does not allow recursive schema types. Writing to PARQUET output format with a recursive schema type results in a StackOverflowError.
- Performing a compatible schema change may cause the connector to flush data prior to whatever is configured for flush.size.
Select the Time interval that sets how you want your messages grouped in the S3 bucket. For example, if you select Hourly, messages are grouped into folders for each hour data is streamed to the bucket.
Enter the Flush size. Defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters. Note that performing a compatible schema change may cause the connector to flush data prior to whatever is configured for flush.size.
(Optional) Enable Client-Side Field Level Encryption for data decryption. Specify a Service Account to access the Schema Registry and associated encryption rules or keys with that schema. Select the connector behavior (ERROR or NONE) on data decryption failure. If set to ERROR, the connector fails and writes the encrypted data in the DLQ. If set to NONE, the connector writes the encrypted data in the target system without decryption. For more information on CSFLE setup, see Manage CSFLE for connectors.
Advanced users may define how the connector flushes records to S3 by clicking the following:
Show advanced configurations
Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.
Topic directory: This is a top-level directory path to use for data stored in S3. Defaults to topics if not used.
Path format: This configures the time-based partitioning path created in S3. The property converts the UNIX timestamp to a date format string. If not used, this property defaults to 'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH if an Hourly Time interval was selected or 'year'=YYYY/'month'=MM/'day'=dd if a Daily Time interval was selected.
Rotation interval: See Scheduled Rotation for details about the two Scheduled rotation properties.
The following scenarios describe a couple of ways records may be flushed to storage:
You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.
For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
Timestamp field name: The record field used for the timestamp, which is then used with the time-based partitioner. If not used, this defaults to the timestamp when the Kafka record was produced or stored by the Kafka broker.
Timezone: Use a valid timezone. Defaults to UTC if not used.
Locale: This is used to format dates and times. For example, you can use en-US for English (USA), en-GB for English (UK), en-IN for English (India), or fr-FR for French (France). Defaults to en. For a list of locale IDs, see Java locales.
How to handle records with null values: How to handle records with null values (for example, Kafka tombstone records). Defaults to ignore.
Compression Type: For Gzip, you must select a Gzip Compression Level: 1 to 9. Selecting 1 results in high-speed compression and a low compression ratio. Selecting 9 provides the highest compression ratio and a much slower compression speed.
Note
When using Parquet, only compression types PARQUET - none, PARQUET - gzip, and PARQUET - snappy are currently supported.
Preserves Avro schema information: True by default. When set to true, this property preserves Avro schema package information and Enums when going from the Avro schema to the Connect schema. This information is added back in when going from the Connect schema to the Avro schema.
Schema compatibility: The schema compatibility rule to use when the connector is observing schema changes in files. For usage details, see Schema Evolution. Note that this schema compatibility property is specific to S3 file schemas and not related to Schema Registry operation.
An S3 canned ACL header value: A canned Amazon S3 ACL header value to use when writing objects.
Enable or disable writing record keys to storage: Enable or disable writing record keys to storage. Defaults to false. If set to true, select the Output keys format.
Output keys format: If writing record keys to storage, select the output format. Options are AVRO, BYTES, JSON, and PARQUET. A valid schema must be available in Schema Registry to use a schema-based message format. Note that if you selected JSON - gzip or BYTES - gzip for compression then the output keys format must be JSON or BYTES, respectively.
Value Converter Connect Metadata: Enable or disable the Connect converter adding its metadata to the output schema. Defaults to true.
Enable or disable writing record headers to storage: Enable or disable writing record headers to storage. Defaults to false. If set to true, select the Output headers format.
Output headers format: If writing record headers to storage, select the output format. Options are AVRO, BYTES, JSON, and PARQUET. A valid schema must be available in Schema Registry to use a schema-based message format. Note that if you selected JSON - gzip or BYTES - gzip for compression then the output headers format must be JSON or BYTES, respectively.
Tag S3 objects offsets and record count: Tag S3 objects with start and end offsets, as well as record count.
The S3 Server Side Encryption Algorithm: The type of S3 server-side encryption algorithm to use.
S3 Server Side (SSE-C) Key: A customer-provided server-side encryption key (SSE-C).
Part Size in multipart uploads: The part size (bytes) for S3 object multipart uploads. Defaults to 5242880.
Use S3 accelerated endpoint: Enable or disable S3 transfer acceleration. Defaults to false.
Role ARN when starting a session: The Role ARN to use when starting a session.
Role external ID under assumed role: A role external ID to use when retrieving session credentials under an assumed role.
Role session name: The role session name to use when starting a session.
Click Continue.
Based on the number of topic partitions you select, you will be provided with a recommended number of tasks. One task can handle up to 100 partitions.
To change the number of recommended tasks, enter the number of tasks for the connector to use in the Tasks field.
For help with sizing your connector, click How many tasks do I need?.
Click Continue.
Note
- See Configuration Properties for all property values and definitions.
Step 5: Check the S3 bucket¶
Check the S3 bucket by going to the AWS Management Console and selecting Storage > S3.
Open your S3 bucket.
Open your topic folder and each subsequent folder until you see your messages displayed.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.
Using the Confluent CLI¶
Complete the following steps to set up and run the connector using the Confluent CLI.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: List the connector configuration properties¶
Enter the following command to show the connector configuration properties:
confluent connect plugin describe <connector-plugin-name>
The command output shows the required and optional configuration properties.
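For example, assuming the plugin name matches the connector.class value used in the configuration file below (S3_SINK):
confluent connect plugin describe S3_SINK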
Step 3: Create the connector configuration file¶
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
"name" : "confluent-s3-sink",
"connector.class": "S3_SINK",
"kafka.auth.mode": "KAFKA_API_KEY",
"kafka.api.key": "<my-kafka-api-key>",
"kafka.api.secret": "<my-kafka-api-secret>",
"aws.access.key.id" : "<my-aws-access-key>",
"aws.secret.access.key": "<my-aws-access-key-secret>",
"input.data.format": "JSON",
"output.data.format": "JSON",
"compression.codec": "JSON - gzip",
"s3.compression.level": "6",
"s3.bucket.name": "<my-bucket-name>",
"time.interval" : "HOURLY",
"flush.size": "1000",
"tasks.max" : "1",
"topics": "<topic-1>, <topic-2>"
}
Note the following required property definitions:
"name"
: Sets a name for your new connector."connector.class"
: Identifies the connector plugin name.
"kafka.auth.mode"
: Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNT
orKAFKA_API_KEY
(the default). To use an API key and secret, specify the configuration propertieskafka.api.key
andkafka.api.secret
, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>
. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list

     Id   | Resource ID |       Name        |    Description
+---------+-------------+-------------------+-------------------
   123456 | sa-l1r23m   | sa-1              | Service account 1
   789101 | sa-l4d56p   | sa-2              | Service account 2
"input.data.format"
: Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).Note
The following input-to-output formats are not supported for this connector:
- Input format JSON to output format AVRO
- Input format JSON to output format PARQUET
"output.data.format"
: Sets the output Kafka record value format (data coming from the connector). Valid entries are AVRO, PARQUET, JSON, or BYTES. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro)."compression.codec"
: Sets the compression type. Valid entries areAVRO - bzip2
,AVRO - deflate
,AVRO - snappy
,BYTES - gzip
, orJSON - gzip
. For PARQUET only compression types"PARQUET - none"
,"PARQUET - gzip"
, and"PARQUET - snappy"
are currently supported."s3.compression.level"
: Sets a gzip level. Valid entries are from1
to9.
Selecting1
results in high-speed compression and a low compression ratio. Selecting9
provides the highest compression ratio and a much slower compression speed. The default gzip compression level is6
."time.interval"
: Sets how your messages are grouped in the S3 bucket. Valid entries are DAILY or HOURLY.(Optional)
flush.size
: Defaults to 1000. The value can be increased if needed. The value can be lowered (1 minimum) if you are running a Dedicated Confluent Cloud cluster. The minimum value is 1000 for non-dedicated clusters. Note that performing a compatible schema change may cause the connector to flush data prior to whatever is configured forflush.size
.The following scenarios describe a couple of ways records may be flushed to storage:
You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.
You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm.
Note
The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. These parameters kick in and files are stored based on which condition is met first.
For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.
- "tasks.max": Enter the maximum number of tasks for the connector to use.
- "topics": Enter the topic name or a comma-separated list of topic names.
Tip
The time.interval property above and the following optional properties topics.dir and path.format can be used to build a directory structure for data stored in S3. For example: You set "time.interval" : "HOURLY", "topics.dir" : "json_logs/hourly", and "path.format" : "'dt'=YYYY-MM-dd/'hr'=HH". The result in S3 is the directory structure: s3://<s3-bucket-name>/json_logs/hourly/<Topic-Name>/dt=2020-02-06/hr=09/<files>.
The following are optional properties that can be used to organize your data in storage:
"topics.dir"
: A top-level directory path to use for data stored in S3. Defaults totopics
if not used.""path.format"
: Configures the time-based partitioning path created in S3. The property converts the UNIX timestamp to a date format string. If not used, this property defaults to'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
if an Hourlytime.interval
was selected or'year'=YYYY/'month'=MM/'day'=dd
if a Daily Time interval was selected.rotate.schedule.interval.ms
androtate.interval.ms
: See Scheduled Rotation for details about using these properties.timestamp.field
: The record field used for the timestamp, which is used with the time-base partitioner. If not used, this defaults to the timestamp when the Kafka record was produced or stored by the Kafka broker."timezone"
: A valid timezone. For example, you can useEST
,PST
,WET
, orUTC
. Defaults toUTC
if not used."locale"
. The locale to use with the time-based partitioner. Used to format dates and times. For example, you can useen-US
for English (USA),en-GB
for English (UK),en-IN
for English (India), orfr-FR
for French (France). Defaults toen
. For a list of locale IDs, see Java locales.
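Putting the Tip and these optional properties together, the following sketch (values taken from the Tip's example; adjust them for your own bucket layout) shows the directory-related fragment you would merge into the connector configuration file shown above:
"time.interval": "HOURLY",
"topics.dir": "json_logs/hourly",
"path.format": "'dt'=YYYY-MM-dd/'hr'=HH"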
Note
(Optional) To enable CSFLE for data encryption, specify the following properties:
- csfle.enabled: Flag to indicate whether the connector honors CSFLE rules.
- sr.service.account.id: A Service Account to access the Schema Registry and associated encryption rules or keys with that schema.
- csfle.onFailure: Configures the connector behavior (ERROR or NONE) on data decryption failure. If set to ERROR, the connector fails and writes the encrypted data in the DLQ. If set to NONE, the connector writes the encrypted data in the target system without decryption.
For more information on CSFLE setup, see Manage CSFLE for connectors.
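As a minimal sketch, assuming the service account resource ID placeholder is replaced with a real ID, these properties would appear in the connector configuration JSON as:
"csfle.enabled": "true",
"sr.service.account.id": "<service-account-resource-ID>",
"csfle.onFailure": "ERROR"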
See Configuration Properties for property values and definitions.
Step 4: Load the properties file and create the connector¶
Enter the following command to load the configuration and start the connector:
confluent connect cluster create --config-file <file-name>.json
For example:
confluent connect cluster create --config-file s3-sink-config.json
Example output:
Created connector confluent-s3-sink lcc-ix4dl
Step 5: Check the connector status¶
Enter the following command to check the connector status:
confluent connect cluster list
Example output:
ID | Name | Status | Type
+-----------+-------------------+---------+------+
lcc-ix4dl | confluent-s3-sink | RUNNING | sink
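To inspect a single connector, you can also describe it by its ID (the ID shown in the example output above); this assumes your Confluent CLI version provides the describe subcommand:
confluent connect cluster describe lcc-ix4dl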
Step 6: Check the S3 bucket¶
Go to the AWS Management Console and select Storage > S3.
Open your S3 bucket.
Open your topic folder and each subsequent folder until you see your messages displayed.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.
Scheduled Rotation¶
Two optional properties are available that allow you to set up a rotation schedule. These properties are provided in the Cloud Console (shown below) and in the Confluent CLI.
rotate.schedule.interval.ms (Scheduled rotation): This property allows you to configure a regular schedule for when files are closed and uploaded to storage. The default value is -1 (disabled). For example, when this is set to 600000 ms, you will see files available in the storage bucket at least every 10 minutes. rotate.schedule.interval.ms does not require a continuous stream of data.
Note
Using the rotate.schedule.interval.ms property results in a non-deterministic environment and invalidates exactly-once guarantees.
rotate.interval.ms (Rotation interval): This property allows you to specify the maximum time span (in milliseconds) that a file can remain open for additional records. When using this property, the time span interval for the file starts with the timestamp of the first record added to the file. The connector closes and uploads the file to storage when the timestamp of a subsequent record falls outside the time span set by the first file's timestamp. The minimum value is 600000 ms (10 minutes). This property defaults to the interval set by the time.interval property. rotate.interval.ms requires a continuous stream of data.
Important
The start and end of the time span interval is determined using file timestamps. For this reason, a file could potentially remain open for a long time if a record does not arrive with a timestamp falling outside the time span set by the first file’s timestamp.
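As a sketch of how the rotation properties combine with flush.size (the values are illustrative only), the following fragment in a connector configuration closes and uploads a file whenever 1000 records accumulate or 10 minutes of wall-clock time pass, whichever condition is met first. Keep in mind that, as noted above, setting rotate.schedule.interval.ms invalidates exactly-once guarantees.
"flush.size": "1000",
"rotate.schedule.interval.ms": "600000"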
Schema Evolution¶
The Amazon S3 connector supports schema evolution and reacts to schema changes of record data according to the schema.compatibility configuration. You can set schema.compatibility to NONE, BACKWARD, FORWARD, and FULL. Review the following information for details.
Note
- If you see a large number of small files in the S3 bucket, it may be that consecutive records in a partition have incompatible schemas, leading to the connector closing and creating many files for each record.
- The following schema compatibility information is specific to S3 file schemas and not associated with Schema Registry operation.
NONE (NO compatibility): NONE should only be used when records use the same schema. By default, schema.compatibility is set to NONE.
The connector ensures that each file written to S3 has the proper schema. When the connector observes a schema change in data, it commits the current set of files for the affected topic partitions and writes the data with the new schema in a new file.
- For example:
When two consecutive records arrive (R1 and R2), the connector checks both record schemas (R1/S1 and R2/S2) for compatibility. If the schemas are not identical, the connector commits R1/S1 and creates a new file for R2/S2.
FORWARD Compatibility: If you set schema.compatibility to FORWARD, the connector compares schemas and uses the earliest version to query all the data uniformly. Removing a field that had a default value is forward compatible, since the earlier schema will use the default value when the field is missing.
- For example:
When two consecutive records arrive (R1 and R2), the connector checks both record schemas (R1/S1 and R2/S2) for compatibility. If the schema types are not identical, the schema names are compared. If the schema names are not identical, the schema parameters are compared. If these are not identical, the S2 version must be later than S1 or the schemas are not compatible, and the connector commits R1/S1 and creates a new file for R2/S2.
BACKWARD Compatibility: If you set schema.compatibility to BACKWARD, the connector keeps track of the latest schema used in writing data to S3. If a record with a later schema version than the current schema arrives, the connector commits the current set of files and writes the record using the new schema to a new file. For records that arrive later and use an earlier schema, the connector projects the record to the latest schema before writing to the same set of files in S3. This supports rolling back a schema to an earlier version.
- For example:
When two consecutive records arrive (R1 and R2), the connector checks both record schemas (R1/S1 and R2/S2) for compatibility. If the schema types are not identical, the schema names are compared. If the schema names are not identical, the schema parameters are compared. If these are not identical, the S2 version must be earlier than S1 or the schemas are not compatible, and the connector commits R1/S1 and creates a new file for R2/S2.
FULL Compatibility: Full compatibility means that old data can be read with the new schema and new data can also be read with the old schema. FULL performs the same action as BACKWARD.
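For a CLI- or API-managed connector, the compatibility rule is set through the schema.compatibility property. A one-line sketch to merge into the connector configuration JSON (BACKWARD chosen only as an example):
"schema.compatibility": "BACKWARD"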
Private Network Connectivity¶
The Amazon Web Services S3 Sink connector can attach to an Amazon S3 bucket from your privately networked Confluent Cloud cluster (AWS PrivateLink or peered VPCs). To set this up, you use an S3 bucket policy with an allow statement and a deny statement.
To configure an S3 Sink connector to access an S3 bucket from a privately networked Confluent Cloud cluster:
- To use a PrivateLink, configure the S3 Sink connector with an Egress PrivateLink Endpoint.
- For a Confluent Cloud cluster with a VPC peering, create a bucket policy.
The following shows an example of how to create the bucket policy for a privately networked Confluent Cloud cluster.
- The first statement (PolicyForAllowUploadWithACL) in the JSON example allows the specified principal (arn:aws:iam::<Confluent-Cloud-AWS-Account-ID>:root) to perform s3:PutObject actions on objects in a specific S3 bucket (<S3-bucket-name>/*), but only if the ACL is set to "bucket-owner-full-control".
- The second statement (Access-to-specific-VPC-only) in the JSON example denies all S3 actions (s3:*) to all principals (*) for the resources specified (arn:aws:s3:::<S3-bucket-name> and its objects) unless the request originates from your private source VPC (vpc-1234abcde56).
Be sure to replace all placeholder values with the actual account ID, bucket name, and VPC name. Otherwise, you won’t be able to access your bucket.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PolicyForAllowUploadWithACL",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<Confluent-Cloud-AWS-Account-ID>:root"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::<S3-bucket-name>/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
},
{
"Sid": "Access-to-specific-VPC-only",
"Principal": "*",
"Action": "s3:*",
"Effect": "Deny",
"Resource": ["arn:aws:s3:::<S3-bucket-name>",
"arn:aws:s3:::<S3-bucket-name>/*"],
"Condition": {
"StringNotEquals": {
"aws:SourceVpc": "your-source-vpc"
}
}
}
]
}
Important
As noted in Controlling access from VPC endpoints with bucket policies, the above Access-to-specific-VPC-only policy disables AWS console access to the specified bucket because console requests don't originate from your source VPC.
If you need to access the bucket from the AWS console, add the VPC from which the console sends requests to the "Condition" section of the Access-to-specific-VPC-only policy, and edit the PolicyForAllowUploadWithACL policy as shown below:
{
"Sid": "PolicyForAllowUploadWithACL",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<Confluent-Cloud-AWS-Account-ID>:root",
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::<S3-bucket-name>",
"arn:aws:s3:::<S3-bucket-name>/*"
]
},
{
"Sid": "Access-to-specific-VPC-only",
"Principal": "*",
"Action": "s3:*",
"Effect": "Deny",
"Resource": ["arn:aws:s3:::<S3-bucket-name>",
"arn:aws:s3:::<S3-bucket-name>/*"],
"Condition": {
"StringNotEquals": {
"aws:SourceVpc": ["<your-source-vpc>", "<vpc-of-the-console>"]
}
}
}
Configuration Properties¶
Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.
Which topics do you want to get data from?¶
topics
Identifies the topic name or a comma-separated list of topic names.
- Type: list
- Importance: high
Schema Config¶
schema.context.name
Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.
- Type: string
- Default: default
- Importance: medium
Input messages¶
input.data.format
Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.
- Type: string
- Default: JSON
- Importance: high
input.key.format
Sets the input Kafka record key format. Valid entries are AVRO, BYTES, JSON, JSON_SR, PROTOBUF, or STRING. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF
- Type: string
- Default: BYTES
- Valid Values: AVRO, BYTES, JSON, JSON_SR, PROTOBUF, STRING
- Importance: high
value.converter.reference.subject.name.strategy
Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.
- Type: string
- Default: DefaultReferenceSubjectNameStrategy
- Importance: high
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
kafka.service.account.id
The Service Account that will be used to generate the API keys to communicate with Kafka Cluster.
- Type: string
- Importance: high
kafka.api.secret
Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.
- Type: password
- Importance: high
AWS credentials¶
authentication.method
Select how you want to authenticate with AWS.
- Type: string
- Default: Access Keys
- Importance: high
aws.access.key.id
- Type: password
- Importance: high
provider.integration.id
Select an existing integration that has access to your resource. In case you need to integrate a new IAM role, use provider integration.
- Type: string
- Importance: high
aws.secret.access.key
- Type: password
- Importance: high
Amazon S3 details¶
s3.region
The AWS region where the S3 bucket is defined. If no value for this property is provided, the value specified for the ‘kafka.region’ property is used.
- Type: string
- Importance: low
s3.bucket.name
An Amazon S3 bucket must be in the same region as your Confluent Cloud cluster.
- Type: string
- Importance: high
s3.ssea.name
The S3 Server Side Encryption Algorithm.
- Type: string
- Default: “”
- Importance: low
s3.sse.customer.key
The S3 Server Side Encryption Customer-Provided Key (SSE-C)
- Type: password
- Importance: low
store.url
The object storage connection URL, if applicable. For example: ‘https://bucket.s3-aws-region.amazonaws.com’
- Type: string
- Default: “”
- Importance: medium
s3.sse.kms.key.id
The name of the AWS Key Management Service (AWS-KMS) key to be used for server-side encryption of the S3 objects. No encryption is used when no key is provided, but it is enabled when KMS is specified as the encryption algorithm with a valid key name.
- Type: string
- Importance: low
s3.part.size
The Part Size(bytes) in S3 Multi-part Uploads.
- Type: int
- Default: 5242880
- Valid Values: [5242880,…,2147483647]
- Importance: high
s3.wan.mode
Use S3 accelerated endpoint.
- Type: boolean
- Default: false
- Importance: medium
s3.path.style.access.enabled
Specifies whether or not to enable path-style access to the bucket used by the connector.
- Type: boolean
- Default: true
- Importance: medium
Output messages¶
output.data.format
Set the output message format for values. Valid entries are AVRO, JSON, PARQUET or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO. Note that the output message format defaults to the value in the Input Message Format field. If either PROTOBUF or JSON_SR is selected as the input message format, you should select one explicitly. If no value for this property is provided, the value specified for the ‘input.data.format’ property is used.
- Type: string
- Importance: high
output.keys.format
Set the output format for keys. Valid entries are AVRO, JSON, PARQUET or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO.
- Type: string
- Default: AVRO
- Valid Values: AVRO, BYTES, JSON, PARQUET
- Importance: high
output.headers.format
Set the output format for headers. Valid entries are AVRO, JSON, PARQUET or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO.
- Type: string
- Default: AVRO
- Valid Values: AVRO, BYTES, JSON, PARQUET
- Importance: high
json.decimal.format
Controls which format json converter will serialize decimals in. This value can be either ‘BASE64’ (default) or ‘NUMERIC’ and is applicable only when the output format is JSON.
- Type: string
- Default: BASE64
- Importance: low
Organize my data by…¶
topics.dir
Configures the directory to store the data ingested from Kafka. For a file like s3://<s3-bucket-name>/json_logs/daily/<Topic-Name>/dt=2020-02-06/hr=09/<files>, set topics.dir=json_logs/daily, path.format='dt'=YYYY-MM-dd/'hr'=HH, and time.interval=HOURLY. For another file like s3://<s3-bucket-name>/<Topic-Name>/dt=2020-02-06/hr=09/<files>, set topics.dir=" ", but keep path.format and time.interval the same as in the previous example. This configures the topics.dir to a space. In the UI, enter a blank space, and use " " for CLI and API configurations.
- Type: string
- Default: topics
- Importance: high
path.format
This configuration is used to set the format of the data directories when partitioning with TimeBasedPartitioner. The format set in this configuration converts the Unix timestamp to a valid directory string. To organize files like this example, s3://<s3-bucket-name>/json_logs/daily/<Topic-Name>/dt=2020-02-06/hr=09/<files>, use the properties: topics.dir=json_logs/daily, path.format=’dt’=YYYY-MM-dd/’hr’=HH, and time.interval=HOURLY.
- Type: string
- Default: ‘year’=YYYY/’month’=MM/’day’=dd/’hour’=HH
- Importance: high
time.interval
Partitioning interval of data, according to the time ingested to storage.
- Type: string
- Importance: high
rotate.schedule.interval.ms
Scheduled rotation uses rotate.schedule.interval.ms to close the file and upload to storage on a regular basis using the current time, rather than the record time. Setting rotate.schedule.interval.ms is nondeterministic and will invalidate exactly-once guarantees. Minimum value is 600000ms (10 minutes).
- Type: int
- Default: -1
- Importance: medium
rotate.interval.ms
The connector’s rotation interval specifies the maximum timespan (in milliseconds) a file can remain open and ready for additional records. In other words, when using rotate.interval.ms, the timestamp for each file starts with the timestamp of the first record inserted in the file. The connector closes and uploads a file to the blob store when the next record’s timestamp does not fit into the file’s rotate.interval time span from the first record’s timestamp. If the connector has no more records to process, the connector may keep the file open until the connector can process another record (which can be a long time). Minimum value is 600000ms (10 minutes). If no value for this property is provided, the value specified for the ‘time.interval’ property is used.
- Type: int
- Importance: high
flush.size
Number of records written to storage before invoking file commits.
- Type: int
- Default: 1000
- Valid Values: [1000,…] for non-dedicated clusters and [1,…] for dedicated clusters
- Importance: high
timestamp.field
Sets the field that contains the timestamp used for the TimeBasedPartitioner
- Type: string
- Default: “”
- Importance: high
compression.codec
Compression type for files written to S3.
- Type: string
- Valid Values: AVRO - bzip2, AVRO - deflate, AVRO - snappy, BYTES - gzip, JSON - gzip, PARQUET - gzip, PARQUET - none, PARQUET - snappy
- Importance: high
timezone
Sets the timezone used by the TimeBasedPartitioner.
- Type: string
- Default: UTC
- Importance: high
behavior.on.null.values
How to handle records with null values, e.g., Kafka tombstone records. Valid options are ‘ignore’, ‘fail’, and ‘write’. Default is ‘ignore’.
- Type: string
- Default: ignore
- Importance: low
subject.name.strategy
Strategy used for deriving subject name from topic and record schema name.
- Type: string
- Default: TopicNameStrategy
- Valid Values: TopicNameStrategy, TopicRecordNameStrategy
- Importance: low
tombstone.encoded.partition
Output s3 folder to write the tombstone records to. The configured partitioner would map tombstone records to this output folder.
- Type: string
- Default: tombstone
- Importance: low
s3.compression.level
Gzip compression level for files written to S3. Applied when using JSON or BYTES input.
- Type: int
- Valid Values: [-1,…,9]
- Importance: high
locale
Sets the locale to use with TimeBasedPartitioner.
- Type: string
- Default: en
- Importance: high
enhanced.avro.schema.support
When set to true, this property preserves Avro schema package information and Enums when going from Avro schema to Connect schema. This information is added back in when going from Connect schema to Avro schema.
- Type: boolean
- Default: true
- Importance: low
s3.schema.partition.affix.type
Append the record schema name to prefix or suffix in the s3 path after the topic name. None will not append the schema name in the s3 path.
- Type: string
- Default: NONE
- Valid Values: NONE, PREFIX, SUFFIX
- Importance: low
s3.acl.canned
An S3 canned ACL header value to apply when writing objects.
- Type: string
- Valid Values: authenticated-read, aws-exec-read, bucket-owner-full-control, bucket-owner-read, log-delivery-write, private, public-read, public-read-write
- Importance: low
schema.compatibility
The schema compatibility rule to use when the connector is observing schema changes.
- Type: string
- Default: NONE
- Importance: high
value.converter.connect.meta.data
Toggle for enabling or disabling the Connect converter adding its metadata to the output schema.
- Type: boolean
- Default: true
- Importance: medium
store.kafka.keys
Enable or disable writing record keys to storage
- Type: boolean
- Default: false
- Importance: low
store.kafka.headers
Enable or disable writing record headers to storage.
- Type: boolean
- Default: false
- Importance: low
s3.object.tagging
Tag S3 objects with start and end offsets, as well as record count.
- Type: boolean
- Default: false
- Importance: low
Consumer configuration¶
max.poll.interval.ms
The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).
- Type: long
- Default: 300000 (5 minutes)
- Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
- Importance: low
max.poll.records
The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.
- Type: long
- Default: 500
- Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
- Importance: low
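These consumer overrides are exposed as connector configuration properties, so as a sketch (with illustrative values) they can be supplied alongside the other properties in the connector configuration JSON:
"max.poll.interval.ms": "300000",
"max.poll.records": "500"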
Number of tasks for this connector¶
tasks.max
Maximum number of tasks for the connector.
- Type: int
- Valid Values: [1,…]
- Importance: high
Next Steps¶
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
Try Confluent Cloud on AWS Marketplace with $1000 of free usage for 30 days, and pay as you go. No credit card is required.