This feature is available as a preview feature. A preview feature is a component of Confluent Platform that is being introduced to gain early feedback from developers. This feature can be used for evaluation and non-production testing purposes or to provide feedback to Confluent.
With the Tiered Storage Preview in Confluent Platform 5.4, brokers can utilize a limited version of Tiered Storage. The Tiered Storage Preview requires a bucket in Amazon S3 that can be written to and read from, and it requires the credentials to access the bucket.
Enabling Tiered Storage on a Broker¶
Enable the Tiered Storage Preview on a cluster running Confluent Platform 5.4.
To enable Tiered Storage, add the following properties to your server.properties file:

confluent.tier.feature=true
confluent.tier.enable=true
confluent.tier.backend=S3
confluent.tier.s3.bucket=<BUCKET_NAME>
confluent.tier.s3.region=<REGION>
Adding the above properties enables the Tiered Storage components with default values for all other Tiered Storage configurations.
confluent.tier.feature enables Tiered Storage for a broker. Setting this to true allows a broker to utilize Tiered Storage.

confluent.tier.enable sets the default value for created topics. Setting this to true will cause all non-compacted topics to be tiered. See Known Limitations with compacted topics.

confluent.tier.backend refers to the cloud storage service to which a broker will connect. For the preview, only Amazon S3 is supported.

confluent.tier.s3.bucket and confluent.tier.s3.region are the S3 bucket name and its region, respectively. A broker will interact with this bucket for writing and reading tiered data.
For example, a bucket named tiered-storage-test located in the us-west-2 region would have these properties:

confluent.tier.s3.bucket=tiered-storage-test
confluent.tier.s3.region=us-west-2
The brokers need AWS credentials to connect to the S3 bucket. You can set these through server.properties or through environment variables. Either method is sufficient. The brokers prioritize the credentials supplied through server.properties. If they are unable to use the credentials from the properties file, they will use the environment variables instead.
Server Properties - Add the credential properties to your server.properties file. The values <YOUR_ACCESS_KEY_ID> and <YOUR_SECRET_ACCESS_KEY> are used by the broker to authenticate its access to the bucket. These fields are hidden from the server log files.
Environment Variables - Specify AWS credentials with these environment variables:

$ export AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY_ID>
$ export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY>
If server.properties does not contain the credential properties, the broker will use the above environment variables to connect to the S3 bucket.
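The precedence described above (properties file first, environment variables as fallback) can be sketched as follows. This is illustrative logic only, not the broker's implementation, and the credential property keys shown are placeholders, not real Confluent configuration names:

```python
import os

def resolve_s3_credentials(server_props):
    """Sketch of the credential precedence described above: values in
    server.properties win; otherwise fall back to the environment.
    The property keys below are placeholders, not real Confluent keys."""
    key_id = server_props.get("confluent.tier.s3.<access-key-property>")
    secret = server_props.get("confluent.tier.s3.<secret-key-property>")
    if key_id and secret:
        return key_id, secret
    # Fall back to the standard AWS environment variables.
    return (os.environ.get("AWS_ACCESS_KEY_ID"),
            os.environ.get("AWS_SECRET_ACCESS_KEY"))
```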
The S3 bucket should allow the broker to perform the following actions. These operations are required by the broker to properly enable and use the Tiered Storage Preview.

s3:DeleteObject
s3:GetObject
s3:PutObject
s3:AbortMultipartUpload
s3:GetBucketLocation
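The actions above could be granted with an IAM policy along these lines. This is a sketch only: it uses the tiered-storage-test bucket name from the earlier example, and your policy may scope resources differently:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::tiered-storage-test/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetBucketLocation",
      "Resource": "arn:aws:s3:::tiered-storage-test"
    }
  ]
}
```

Note that s3:GetBucketLocation applies to the bucket itself, while the object-level actions apply to the objects (the /* resource) within it.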
GCS, Azure, JBOD¶
These storage solutions are currently unsupported.
Creating a Topic with Tiered Storage¶
You can create a topic using the Kafka CLI command bin/kafka-topics. The command is used in the same way as in previous versions, with added support for topic configurations related to Tiered Storage.
<path-to-confluent>/bin/kafka-topics --bootstrap-server localhost:9092 \
  --create --topic trades \
  --partitions 6 --replication-factor 3 \
  --config confluent.tier.enable=true \
  --config confluent.tier.local.hotset.ms=3600000 \
  --config retention.ms=604800000
confluent.tier.local.hotset.ms controls the maximum time non-active segments are retained on broker-local storage before being discarded to free up space. Segments deleted from local disks will still exist in object storage and remain available according to the retention policy. If set to -1, no time limit is applied.

retention.ms works the same way for tiered topics as for non-tiered topics, but expires segments from both object storage and local storage according to the retention policy.
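The interplay between the two settings can be sketched as follows. This is illustrative decision logic under the configs from the example command (1-hour hotset, 7-day retention), not the broker's actual implementation:

```python
def segment_locations(segment_age_ms, hotset_ms=3_600_000, retention_ms=604_800_000):
    """Illustrate where a closed, tiered segment's data lives:
    confluent.tier.local.hotset.ms bounds the local-disk copy, while
    retention.ms expires the segment everywhere. -1 disables the bound."""
    if retention_ms != -1 and segment_age_ms > retention_ms:
        return []                       # expired from both tiers
    locations = ["object-storage"]      # tiered copy kept until retention
    if hotset_ms == -1 or segment_age_ms <= hotset_ms:
        locations.append("local-disk")  # still within the local hotset
    return locations
```

For example, a two-hour-old segment under these settings has already left the local hotset but is still readable from object storage via the fetcher.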
Notable Broker Configs¶
Log Segment Size¶
We recommend decreasing the segment size configuration, log.segment.bytes, for topics with tiering enabled from the default size of 1 GB. The archiver waits for a log segment to close before attempting to upload it to object storage. A smaller segment size, such as 100 MB, allows segments to close at a more frequent rate. Smaller segment sizes also improve page cache behavior, improving the performance of the archiver.
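As a rough illustration of why smaller segments upload sooner, assume the workload from the performance test later in this page (120 MB/s spread across 24 partitions, i.e. about 5 MB/s per partition; these rates are assumptions, not guarantees):

```python
def seconds_to_close_segment(segment_bytes, partition_rate_bytes_per_s):
    """Approximate time for one partition to fill (and close) a log
    segment, after which the archiver can begin uploading it."""
    return segment_bytes / partition_rate_bytes_per_s

per_partition_rate = 120e6 / 24  # 5 MB/s per partition (assumed workload)
print(seconds_to_close_segment(100e6, per_partition_rate))  # 100 MB segment -> 20.0 s
print(seconds_to_close_segment(1e9, per_partition_rate))    # 1 GB segment -> 200.0 s
```

With 1 GB segments, newly produced data can sit on local disk for minutes before it is even eligible for archiving; 100 MB segments shrink that window to seconds.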
Topic Deletion Interval¶
When a topic is deleted, the log segment files that were uploaded to S3 are not removed immediately. Instead, deletion occurs on a periodic check interval, with a default value of 3 hours. You can modify the configuration confluent.tier.topic.delete.check.interval.ms to change this interval. If left unchanged, deleting a topic may not immediately remove the topic's files from object storage.
Tiered Storage Broker Metrics¶
Tiered Storage has additional broker metrics that can be monitored.
Tier Archiver Metrics¶
The archiver is a component of Tiered Storage that is responsible for uploading non-active segments to cloud storage.
- Rate, in bytes per second, at which the archiver uploads data to cloud storage.
- Number of bytes in non-active segments not yet uploaded by the archiver to cloud storage. As the archiver steadily uploads to cloud storage, the total lag will decrease towards 0.
- Number of times the archiver has retried uploading a non-active segment to cloud storage.
- Number of partitions in an error state that the archiver was unable to upload. Partitions in this state are skipped by the archiver and are not uploaded to cloud storage.
Tier Fetcher Metrics¶
The fetcher is a component of Tiered Storage that is responsible for retrieving data from cloud storage.
- Rate, in bytes per second, at which the fetcher retrieves data from cloud storage.
Example Performance Test¶
We have recorded the throughput performance of a Kafka cluster with the Tiered Storage feature activated as a reference for expected metrics. The high-level details and configurations of the cluster and environment are listed below.
- 6 brokers
- r5.xlarge AWS instances
- GP2 disks, 2 TB
- 104857600 (100 MB) segment.bytes
- a single topic with 24 partitions and replication factor 3
Producer and Consumer Details:
- 6 producers each with a target rate of 20 MB/s (120 MB/s total)
- 12 total consumers
- 6 consumers read records from the end of the topic with a target rate of 20 MB/s
- 6 consumers read records from the beginning of the topic from S3 through the fetcher with an uncapped rate
Values may differ based on the configurations of producers, consumers, brokers, and topics. For example, using a log segment size of 1 GB instead of 100 MB may result in lower archiver throughput.
| Throughput Metric                 | Value (MB/s) |
|-----------------------------------|--------------|
| Producer (cluster total)          | 115          |
| Producer (average across brokers) | 19           |
| Consumer (cluster total)          | 623          |
| Archiver (average across brokers) | 19           |
| Fetcher (average across brokers)  | 104          |
Known Limitations¶
Use of the Tiered Storage preview is not recommended for production use cases. Some Kafka features are not supported in this preview.
- The Tiered Storage preview must run on a new standalone cluster. No upgrade path should be assumed. Do not use with data that you need in the future.
- Compacted topics are not yet supported by Tiered Storage.
- Topics with unclean leader election are not supported by Tiered Storage. There is a risk of losing data for topics that have both unclean leader election and Tiered Storage enabled.
- Inter-broker SSL cannot be used with Tiered Storage in this preview.
- Currently, only AWS with Amazon S3 is supported with Tiered Storage. GCS and Azure Blob Storage are not supported in this preview.
- The Confluent Platform 5.4 Kubernetes Operator does not support the Tiered Storage preview.