Tiered Storage¶
Important
This feature is available as a preview feature. A preview feature is a component of Confluent Platform that is being introduced to gain early feedback from developers. This feature can be used for evaluation and non-production testing purposes or to provide feedback to Confluent.
With the Tiered Storage Preview in Confluent Platform 5.4, brokers can utilize a limited version of Tiered Storage. The Tiered Storage Preview requires a bucket in Amazon S3 that can be written to and read from, and it requires the credentials to access the bucket.
Enabling Tiered Storage on a Broker¶
Enable the Tiered Storage Preview on a cluster running Confluent Platform 5.4.
AWS¶
To enable Tiered Storage, add the following properties to your server.properties file:

confluent.tier.feature=true
confluent.tier.enable=true
confluent.tier.backend=S3
confluent.tier.s3.bucket=<BUCKET_NAME>
confluent.tier.s3.region=<REGION>
Adding the above properties enables the Tiered Storage components with default values for all other configuration parameters.
- confluent.tier.feature enables Tiered Storage for a broker. Setting this to true allows a broker to utilize Tiered Storage.
- confluent.tier.enable sets the default value for created topics. Setting this to true causes all non-compacted topics to be tiered. See Known Limitations regarding compacted topics.
- confluent.tier.backend refers to the cloud storage service to which a broker will connect. For the preview, only Amazon S3 is supported.
- <BUCKET_NAME> and <REGION> are the S3 bucket name and its region, respectively. A broker will interact with this bucket for writing and reading tiered data.
For example, a bucket named tiered-storage-test located in the us-west-2 region would have these properties:

confluent.tier.s3.bucket=tiered-storage-test
confluent.tier.s3.region=us-west-2
The brokers need AWS credentials to connect to the S3 bucket. You can set these through server.properties or through environment variables. Either method is sufficient. The brokers prioritize the credentials supplied through server.properties; if those are not set, they use the environment variables instead.

Server Properties - Add the following properties to your server.properties file:

confluent.tier.s3.aws.access.key.id=<YOUR_ACCESS_KEY_ID>
confluent.tier.s3.aws.secret.access.key=<YOUR_SECRET_ACCESS_KEY>
<YOUR_ACCESS_KEY_ID> and <YOUR_SECRET_ACCESS_KEY> are used by the broker to authenticate its access to the bucket. These fields are hidden from the server log files.

Environment Variables - Specify AWS credentials with these environment variables:

export AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY>
If server.properties does not contain the two credential properties, the broker will use the above environment variables to connect to the S3 bucket.
The S3 bucket should allow the broker to perform the following actions. These operations are required by the broker to properly enable and use the Tiered Storage Preview.

s3:DeleteObject
s3:GetObject
s3:PutObject
s3:AbortMultipartUpload
s3:GetBucketLocation
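As a concrete illustration, an IAM policy granting these actions might look like the following sketch. The bucket name tiered-storage-test is a placeholder, and the exact policy shape depends on how access is managed in your AWS account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::tiered-storage-test/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetBucketLocation",
      "Resource": "arn:aws:s3:::tiered-storage-test"
    }
  ]
}

Note that the object-level actions apply to the objects inside the bucket (the /* resource), while s3:GetBucketLocation applies to the bucket itself.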
GCS, Azure, JBOD¶
These storage solutions are currently unsupported.
Creating a Topic with Tiered Storage¶
You can create a topic using the Kafka CLI command bin/kafka-topics. The command is used the same way as in previous versions, with added support for topic configurations related to Tiered Storage.
<path-to-confluent>/bin/kafka-topics \
--bootstrap-server localhost:9092 \
--create \
--topic trades \
--partitions 6 \
--replication-factor 3 \
--config confluent.tier.enable=true \
--config confluent.tier.local.hotset.ms=3600000 \
--config retention.ms=604800000
- confluent.tier.local.hotset.ms controls the maximum time non-active segments are retained on broker-local storage before being discarded to free up space. Segments deleted from local disks still exist in object storage and remain available according to the retention policy. If set to -1, no time limit is applied.
- retention.ms works the same way for tiered topics as for non-tiered topics, but expires segments from both object storage and local storage according to the retention policy.
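These configs can also be changed on an existing topic. The sketch below assumes the tiering configs can be altered through the standard kafka-configs tool like any other topic-level config (on older tooling, the --zookeeper form of the command may be required instead of --bootstrap-server):

<path-to-confluent>/bin/kafka-configs \
--bootstrap-server localhost:9092 \
--entity-type topics \
--entity-name trades \
--alter \
--add-config confluent.tier.local.hotset.ms=3600000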
Notable Broker Configs¶
Log Segment Size¶
We recommend decreasing the segment size configuration, log.segment.bytes, for topics with tiering enabled from the default size of 1 GB. The archiver waits for a log segment to close before attempting to upload the segment to object storage. Using a smaller segment size, such as 100 MB, allows segments to close at a more frequent rate. Smaller segment sizes also help with page cache behavior, improving the performance of the archiver. An example appears after this paragraph.
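The broker-wide default can be lowered in server.properties, or the equivalent topic-level config, segment.bytes, can be set per topic; both values below use the 100 MB suggestion from above:

# In server.properties (broker-wide default):
log.segment.bytes=104857600

# Or per topic, as an extra flag on the kafka-topics command shown earlier:
--config segment.bytes=104857600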
Topic Deletion Interval¶
When a topic is deleted, the log segment files that have been uploaded to S3 are not removed immediately. Instead, deletion of those files happens at a periodic interval, which defaults to 3 hours. You can modify the configuration confluent.tier.topic.delete.check.interval.ms to change this interval. If left unchanged, a deleted topic's files may not be removed from object storage until the next check runs.
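For example, to have the deletion check run every 5 minutes (a value chosen here purely for illustration), set the following in server.properties:

confluent.tier.topic.delete.check.interval.ms=300000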
Tiered Storage Broker Metrics¶
Tiered Storage has additional broker metrics that can be monitored.
Tier Archiver Metrics¶
The archiver is a component of Tiered Storage that is responsible for uploading non-active segments to cloud storage.
- kafka.tier.tasks.archive:type=TierArchiver,name=BytesPerSec - Rate, in bytes per second, at which the archiver uploads data to cloud storage.
- kafka.tier.tasks.archive:type=TierArchiver,name=TotalLag - Number of bytes in non-active segments not yet uploaded by the archiver to cloud storage. As the archiver steadily uploads to cloud storage, the total lag decreases toward 0.
- kafka.tier.tasks.archive:type=TierArchiver,name=RetriesPerSec - Number of times the archiver has reattempted to upload a non-active segment to cloud storage.
- kafka.tier.tasks:type=TierTasks,name=NumPartitionsInError - Number of partitions in an error state that the archiver was unable to upload. Partitions in this state are skipped by the archiver and are not uploaded to cloud storage.
Tier Fetcher Metrics¶
The fetcher is a component of Tiered Storage that is responsible for retrieving data from cloud storage.
- kafka.server:type=TierFetcher - Rate, in bytes per second, at which the fetcher retrieves data from cloud storage.
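One way to spot-check these metrics from the command line is Kafka's bundled JmxTool. The sketch below assumes JMX has been enabled on the broker (for example, by setting the JMX_PORT environment variable to 9999 before starting it); that port is an assumption, not a default:

<path-to-confluent>/bin/kafka-run-class kafka.tools.JmxTool \
--jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
--object-name kafka.tier.tasks.archive:type=TierArchiver,name=TotalLag \
--reporting-interval 10000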
Example Performance Test¶
We have recorded the throughput performance of a Kafka cluster with Tiered Storage enabled, as a reference for the metrics you can expect. The high-level details and configurations of the cluster and environment are listed below.
Cluster Configurations:
- 6 brokers
- r5.xlarge AWS instances
- GP2 disks, 2 TB
- segment.bytes set to 104857600 (100 MB)
- a single topic with 24 partitions and replication factor 3
Producer and Consumer Details:
- 6 producers each with a target rate of 20 MB/s (120 MB/s total)
- 12 total consumers
- 6 consumers read records from the end of the topic with a target rate of 20 MB/s
- 6 consumers read records from the beginning of the topic from S3 through the fetcher with an uncapped rate
Values may differ based on the configurations of producers, consumers, brokers, and topics. For example, using a log segment size of 1 GB instead of 100 MB may result in lower archiver throughput.
| Throughput Metric | Value (MB/s) |
|---|---|
| Producer (cluster total) | 115 |
| Producer (average across brokers) | 19 |
| Consumer (cluster total) | 623 |
| Archiver (average across brokers) | 19 |
| Fetcher (average across brokers) | 104 |
Known Limitations¶
Use of the Tiered Storage preview is not recommended for production use cases. Some Kafka features are not supported in this preview.
- The Tiered Storage preview must run on a new standalone cluster. No upgrade path should be assumed. Do not use with data that you need in the future.
- Compacted topics are not yet supported by Tiered Storage.
- Topics with unclean leader election are not supported by Tiered Storage. There is a risk of losing data for topics that have both unclean leader election and Tiered Storage enabled.
- Inter-broker security is not supported out-of-the-box with Tiered Storage for this preview. However, inter-broker traffic for topic replication can use SSL or other secure mechanisms. As an alternative to using a non-secured inter-broker protocol listener, you can configure confluent.tier.metadata.bootstrap.servers to point to a plaintext listener, as sketched after this list.
- Currently, only AWS with Amazon S3 is supported with Tiered Storage. GCS and Azure Blob Storage are not supported in this preview.
- The Confluent Platform 5.4 Kubernetes Operator does not support the Tiered Storage preview.
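A minimal sketch of the listener workaround mentioned above, assuming a PLAINTEXT listener is already defined on port 9094 (the hostnames and port here are placeholders, not defaults):

# In server.properties: route tier metadata traffic through a PLAINTEXT listener.
# broker-1:9094 and so on are placeholders; use hosts and ports from your own listeners config.
confluent.tier.metadata.bootstrap.servers=broker-1:9094,broker-2:9094,broker-3:9094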