Confluent Platform 6.0.0 Release Notes¶
6.0.0 is a major release of Confluent Platform that provides you with Apache Kafka® 2.6.0, the latest stable version of Kafka.
The technical details of this release are summarized below.
For more information about the 6.0.0 release, check out the release blog and the Streaming Audio podcast.
Confluent Control Center¶
- Self-Balancing UI
- Introduces the ability to balance workloads and dynamically expand and shrink clusters. From Control Center you can remove brokers, control self-balancing, and track balancing progress. Note: if you are managing Confluent Platform using Confluent for Kubernetes, you should also manage Self-Balancing using CFK. The Self-Balancing UI is not exposed in CFK-based environments. For more information, see Self-Balancing Tutorial.
- Tiered Storage UI
- Gives you the flexibility of adding and scaling storage without having to invest in additional brokers (compute). This is achieved by allowing you to use less expensive object storage as an efficient way to retain data.
- Auto-updating user interface enabled by default
- Real-time updates to the Control Center UI to bring you the latest UI bug fixes and improvements. For details, see Check Control Center Version and Enable Auto-Update.
- Tiered Storage
- Tiered Storage is now generally available and ready for production use. Tiered Storage makes Confluent Platform more elastic, while giving platform operators the ability to cost-effectively increase retention periods, all on a per-topic basis. For more information, see Tiered Storage.
- Tiered Storage now supports GCP’s Google Cloud Storage (GCS)
- In addition to AWS S3, Tiered Storage now allows users to tier data directly to Google Cloud Storage.
- Tiered Storage now supports AWS Credential Files
- Tiered Storage users now have the ability to use an AWS credentials file to authenticate to AWS S3.
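As a sketch of what the broker-side setup can look like, the following server.properties fragment enables Tiered Storage against an S3 bucket. The bucket name, region, and credentials path are placeholders, and the property names should be verified against the Tiered Storage documentation for your release:

```properties
# Enable Tiered Storage on this Confluent Server broker
confluent.tier.feature=true
confluent.tier.enable=true
confluent.tier.backend=S3
# Placeholder bucket and region
confluent.tier.s3.bucket=my-tiered-storage-bucket
confluent.tier.s3.region=us-west-2
# Optional: authenticate with an AWS credentials file instead of instance roles
confluent.tier.s3.cred.file.path=/etc/kafka/aws-credentials
```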
- Self-Balancing Clusters
- Confluent Server is now self-balancing. By turning on Self-Balancing in your Confluent Server cluster, partitions are automatically rebalanced when brokers are added or when the cluster detects skew, making your cluster elastic and ensuring it runs efficiently. For more information, see Self-Balancing Tutorial.
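As a minimal sketch, Self-Balancing is controlled through broker properties such as the following (verify the property names and values against the Self-Balancing documentation for your release):

```properties
# Turn on Self-Balancing for this Confluent Server broker
confluent.balancer.enable=true
# Rebalance on any detected skew, not only when new (empty) brokers are added
confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
```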
- Embedded REST Admin APIs
- Confluent Server now includes RESTful admin APIs that make it easier to perform basic Kafka operations. For more information, see Confluent REST Proxy API Reference.
- Cluster Linking (preview)
- Confluent Platform users can now directly connect two Kafka clusters together to mirror topics, consumer offsets, and ACLs without the need for Connect, Replicator, or MirrorMaker 2.0. This feature is currently in preview and not certified for production use because the APIs are not stable. For more information, see Cluster Linking.
- Confluent Server is now default broker
- As of Confluent Platform 6.0.0, Confluent Server is the default broker found in the RPMs, debs, and Confluent Platform download. If you want to use the Kafka broker, download the Confluent Community package. For more information, see Migrate to Confluent Server. When using Confluent Server, be sure to add your Confluent license key to your server.properties file via the confluent.license property.
Confluent Community / Apache Kafka¶
Confluent Platform 6.0 features Apache Kafka® 2.6. For a full list of the 15 KIPs, features, and bug fixes, take a look at the official Apache Kafka release notes, the release overview podcast, and the release blog post. Or watch Confluent’s very own Tim Berglund give an overview of Apache Kafka 2.6.
- Configuring Audit Logs using the CLI
- Added support for CLI and API configuration to simplify setup. These new features enable you to dynamically update an audit log configuration. The changes you make through the Confluent CLI are pushed from the Metadata Service (MDS) out to all registered clusters. This allows you to centrally manage the audit log configuration and ensure that all registered clusters publish their audit log events to the same destination Kafka cluster. The audit log UI will become available in the near future. For more information, see Configuring Audit Logs using the CLI.
- Cluster Registry
- Provides Kafka cluster administrators the ability to centrally register Kafka clusters in the Metadata Service (MDS). This provides a more user-friendly RBAC role binding experience, and enables centralized audit logging. It also makes it easier to get the cluster information and provides simplified Kafka cluster names instead of difficult-to-remember cluster IDs. For more information, see Cluster Registry.
- Utilizes the cluster registry to make it easier to get the cluster information and provides simplified Kafka cluster names instead of difficult-to-remember cluster IDs, which simplifies the process of viewing and managing role bindings. For more information, see Confluent Platform Metadata Service.
- TLS 1.0 and 1.1 are disabled for Kafka, and support for TLS 1.3 has been added. This change will be extended to the entire Confluent Platform in the future.
Cluster Registry and user-friendly names are not integrated with Control Center. Support for this integration is planned in the near future.
The 1.5.0 release brings usability improvements, enhancements and fixes to librdkafka. For more details, see the librdkafka 1.5 release notes.
Admin REST APIs¶
- A majority of the Java AdminClient operations are now available via HTTP. The new REST APIs allow you to manage your brokers, topics, ACLs, consumer groups, and topic and broker configs.
- The Admin REST APIs are available on existing REST Proxy deployments under a new v3 namespace. For the first time, they’re also available directly on Confluent Server without any external dependencies.
For more information, see Confluent REST APIs.
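As an illustrative sketch of the v3 namespace, the requests below list and create topics through REST Proxy. The host, port, and cluster ID are placeholders (8082 is the common REST Proxy listener port); consult the API reference for the exact request schemas:

```bash
# Discover the cluster ID
curl -s http://localhost:8082/v3/clusters

# List topics in a cluster via the v3 Admin API
curl -s http://localhost:8082/v3/clusters/<cluster-id>/topics

# Create a topic
curl -s -X POST http://localhost:8082/v3/clusters/<cluster-id>/topics \
  -H "Content-Type: application/json" \
  -d '{"topic_name": "orders", "partitions_count": 6, "replication_factor": 3}'
```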
- Centralized Enterprise License for connectors
- Previously, connectors that require an Enterprise License needed a license topic configuration at each connector level. This enhancement enables you to configure a license topic at the Connect worker level, so you don’t have to add a license topic configuration to each connector. For more information, see Licensing Connectors.
- KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics
- Enhances usability of source connectors by enabling developers and operators to set topic-specific settings for new topics easily. Kafka Connect will optionally create any topics used by source connectors using topic setting rules declared in the source connector’s configuration. For more information, see Configuring Auto Topic Creation for Source Connectors and the configuration properties documentation for individual source connectors.
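A hedged example of what such a source connector configuration can look like follows; the connector name, class, file path, and topic names are illustrative, while the topic.creation.* properties come from KIP-158:

```json
{
  "name": "file-source-example",
  "config": {
    "connector.class": "FileStreamSource",
    "file": "/tmp/input.txt",
    "topic": "events",
    "topic.creation.groups": "compacted",
    "topic.creation.default.replication.factor": 3,
    "topic.creation.default.partitions": 6,
    "topic.creation.compacted.include": "configs\\..*",
    "topic.creation.compacted.cleanup.policy": "compact"
  }
}
```

Topics matched by a group's include rule use that group's settings; all other new topics fall back to the default group.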
- KIP-585: Filter and Conditional SMTs
- Allows you to apply SMTs conditionally. Previously, SMTs were configured as part of a connector configuration and applied to all records in the topics that the connector reads from or writes to. This made it difficult to apply an SMT only to records that meet a certain condition. For more information, see Filter (Apache Kafka).
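As a sketch of the predicate mechanism, the following connector configuration fragment drops records from topics matching a pattern. The transform and predicate aliases and the topic pattern are illustrative; the Filter and TopicNameMatches classes are part of Apache Kafka 2.6:

```json
{
  "transforms": "dropLegacy",
  "transforms.dropLegacy.type": "org.apache.kafka.connect.transforms.Filter",
  "transforms.dropLegacy.predicate": "isLegacyTopic",
  "predicates": "isLegacyTopic",
  "predicates.isLegacyTopic.type": "org.apache.kafka.connect.transforms.predicates.TopicNameMatches",
  "predicates.isLegacyTopic.pattern": "legacy\\..*"
}
```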
- KIP-610: Error Reporting in Sink Connectors
- Extends error reporting capabilities for sink connectors. Previously, error handling in Kafka Connect included functionality such as retrying, logging, and sending errant records to a dead letter queue. However, the dead letter queue functionality from KIP-298 only supports error reporting within the contexts of the transform operation and the key, value, and header converter operations. After records are sent to the connector for processing, there was no support for dead letter queue or error reporting functionality. KIP-610 allows sink connectors to report individual records as problematic, and those records are sent to the DLQ. For more information, see Dead Letter Queue.
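A sink connector can opt into this behavior with the existing error-handling properties; records the connector reports as errant are then routed to the DLQ. The topic name below is a placeholder:

```json
{
  "errors.tolerance": "all",
  "errors.deadletterqueue.topic.name": "dlq-orders-sink",
  "errors.deadletterqueue.topic.replication.factor": 3,
  "errors.deadletterqueue.context.headers.enable": true,
  "errors.log.enable": true,
  "errors.log.include.messages": true
}
```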
- Beginning with Confluent Platform 6.0, connectors are no longer packaged natively with Confluent Platform. All formerly packaged connectors must be downloaded directly from Confluent Hub. To clearly delineate this change in distribution, we have bumped versions to 10.0.0. For more information, see Supported Connectors.
- The Kafka client API for SSL changed in Kafka 2.6. If you use SSL configurations with Confluent Platform 6.0.x, the following version limitations apply for these connectors:
- Elasticsearch Sink version 10 or above
- RabbitMQ Source and Sink version 1.3.1
- Connectors that bundle the Avro converter or Schema Registry and are using dependency versions 5.4.0, 5.4.1, or 5.4.2 do not work on Confluent Platform 6.0. These dependencies need to be updated to a later version, such as 5.4.3, 5.5.2 or 6.0.0.
Confluent for Kubernetes¶
- TLS Encryption for Internal Communication
- You can now provide TLS certificates and keys to secure inter-broker communication and inter-component communication. This means that all communication within the platform can be securely encrypted. This is in addition to the existing ability to configure all communication from external clients of the platform with TLS encryption.
- LoadBalancer-less Environments
- There are now other options for enabling external access to Operator-deployed Confluent Platform components, such as NodePort Services and Ingress controllers. This is to support users who cannot use Kubernetes Services of type LoadBalancer (which provides dynamic management of load balancers) in their environment.
- 1-Command Parallel Disk Scale-Up
- With a single configuration change in your Helm configuration file and a single helm upgrade command, you can expand the disk across every broker in an entire Kafka cluster in a matter of seconds. You can also apply this to non-Kafka clusters.
- 1-Command Parallel Cluster Scale-Out
- With a single configuration change in your Helm configuration file ($VALUES_FILE) and a single helm upgrade command, you can scale up the size of any Confluent Platform component cluster. This is especially powerful when applied to Kafka clusters and combined with Self-Balancing.
- Confluent for Kubernetes can enable self-balancing for Kafka clusters, ensuring topic partitions are automatically rebalanced to new brokers when added to the cluster. For more information, see Scale Kafka clusters and balance data.
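As a rough sketch of the scale-out flow described above, a single value in the Helm configuration file is changed and the release is upgraded. The key names below are illustrative and should be checked against your chart's values schema:

```yaml
# Fragment of $VALUES_FILE: scale the Kafka cluster from 3 to 5 brokers
kafka:
  replicas: 5
```

Applying the change is then a single `helm upgrade --values $VALUES_FILE <release> <chart-path>`, after which Self-Balancing (if enabled) moves partitions onto the new brokers.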
- Tiered Storage
- With Confluent for Kubernetes, you can enable Tiered Storage to reap the benefits of Infinite Kafka. When deploying Confluent Platform to AWS EKS and using AWS S3 for Tiered Storage, or when deploying to Google Kubernetes Engine (GKE) and using GCS for Tiered Storage, you can configure Tiered Storage without passing any cloud provider credentials.
- Client applications can now authenticate to RBAC-enabled clusters using both the SASL/PLAIN mechanism and mTLS.
While each of the previous features provides significant value on its own, together they create a truly cloud-native, elastic experience for managing Kafka. See these features in action together with this Demo: Elastically Scaling Kafka on Kubernetes with Confluent.
- Kubernetes-native Scheduling via Affinity
- You now have much greater flexibility and a more Kubernetes-native configuration UX for defining affinity and anti-affinity scheduling rules for Confluent Platform component pods.
- Plaintext Credentials Eliminated
- All sensitive configuration data such as credentials and Confluent Platform license keys are managed using Kubernetes Secret objects and are made available to Confluent Platform components at runtime only in memory. This addresses a previous issue where some credentials were present in plaintext ConfigMaps or Status subresources of some objects.
- Connectors Removed from Connect image
- Confluent Platform Docker images for Connect no longer include any built-in connectors. If you have been using CFK to deploy Connect clusters and have relied on the built-in connectors rather than building your own Docker images atop Confluent’s base Connect images for CFK, you must build images with your desired connectors before deploying the latest Connect images using CFK.
When RBAC is enabled, Confluent REST API is automatically enabled. When RBAC is not enabled, Confluent REST API is disabled. As a workaround, with RBAC disabled, you can use configuration overrides to enable Confluent REST API without authentication. In this case, we strongly recommend that you configure network rules to restrict access to the Confluent REST API endpoints.
Ansible Playbooks for Confluent Platform¶
- Tiered Storage
- Ansible Playbooks enable configuration of Tiered Storage to reap the benefits of Infinite Kafka. When deploying Confluent Platform to AWS EC2 and using AWS S3 for Tiered Storage, or when deploying to GCP Compute and using GCS for Tiered Storage, you can configure Tiered Storage without having to pass in any cloud provider credentials. For more information, see Advanced Confluent Platform Configurations with Ansible Playbooks.
- Ansible Playbooks enable self-balancing for Kafka clusters. This ensures that topic partitions are automatically rebalanced to new brokers when added to the cluster.
- Copy Files
- Ansible Playbooks now support a generic ability to copy files from the Ansible bastion host to Confluent Platform hosts. This enables a multitude of new abilities, such as providing Basic Authentication Realm configuration for Control Center, providing a GCP credentials JSON file for Tiered Storage when using Google Cloud Storage, usage of the Secret Protection feature of Confluent Platform, and more.
- Centralized RBAC Management
- You can configure Kafka clusters to point to a Metadata Service (MDS) hosted in another Kafka cluster. By using multiple Inventories, you can deploy multiple Kafka clusters that are all pointed at a single central MDS-hosting Kafka cluster, thereby enabling central management of RBAC across multiple Kafka clusters.
- Secure ZooKeeper Communication
- New Ansible installations can configure secure network communications for ZooKeeper with TLS encryption (both for peer-to-peer communication and client-server communication to ZooKeeper) and new authentication mechanisms, including SASL MD5 and Kerberos.
- Confluent REST API
- Ansible Playbooks can enable embedded REST Proxy (v3 API only) within the Kafka brokers, in addition to continued support for deploying REST Proxy as a separate cluster.
- Smarter Rolling Restarts
- Rolling restarts of Confluent Platform components now run serially to minimize downtime. Rolling restarts of ZooKeeper and Kafka that are orchestrated by the Ansible Playbooks follow best practices of rolling the leader or controller last.
- Tarball Package Installation
- You can install Confluent Platform packages from a Confluent Platform tarball present on the Ansible bastion host. This is especially useful for users in disconnected environments.
- Jolokia TLS
- Jolokia endpoints can be configured for secure communication over TLS.
- Java 11
- You can choose whether to use Java 8 or Java 11 for Confluent Platform components deployed via Ansible Playbooks.
- RHEL 8
- You can now deploy Confluent Platform components via Ansible Playbooks to hosts with RHEL 8 operating systems.
- JMX Prometheus Exporter
- You can enable the JMX Prometheus Exporter now on all Confluent Platform components to enable monitoring of the platform via Prometheus. In previous versions this was only supported for ZooKeeper and Kafka.
- Improved Hybrid Support
- You can configure and deploy Confluent Platform components to connect to services in Confluent Cloud, such as Kafka and Schema Registry. This can be achieved using the Ansible Playbooks’ support for Comprehensive Configuration Overrides for all Confluent Platform components, which has been fixed in this release.
- Enhanced debugging logs
- If a component health check fails, the playbook fetches log and property files from that component back to the Ansible Control Node, placing them in the error_files/ directory at the root of cp-ansible.
- Comprehensive Configuration Overrides
All Confluent Platform component properties can be configured and overridden via Ansible Playbook configuration. In previous versions, Ansible Playbooks would respect some but not all user-provided configuration overrides.
- Sensitive Information in Log Output
When debug is enabled with the -vvv Ansible option, sensitive information, such as passwords, certificates, and keys, is printed in the output. Ansible does not provide a way to suppress sensitive information with -vvv, so it is not recommended to use debug mode in production environments. As an alternative, use the playbook with the --diff option when troubleshooting issues. With this release, Ansible Playbooks for Confluent Platform no longer print sensitive information, such as passwords, certificates, and keys, in the output of the --diff option.
For details, see Troubleshoot Ansible Playbooks for Confluent Platform.
- KIP-441: Smooth Scaling Out for Kafka Streams
- Improves Kafka Streams availability during scaling operations. When new instances are added, state will not be reassigned until new tasks are fully caught up.
- KIP-447: Producer scalability for exactly once semantics
- Improves the scalability of Kafka Streams exactly-once semantics.
- Pull queries are now GA.
- ksqlDB-Connect integration is now GA.
- Multi-way joins
- You can now express multiple joins within a single query.
- Improved LIKE expressions
- The underscore (_) wildcard character is now supported for matching single characters.
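The distinction between the two LIKE wildcards can be made concrete outside ksqlDB: `%` matches any sequence of characters, while `_` matches exactly one. The following Python sketch (not ksqlDB code) translates a LIKE pattern into a regular expression to illustrate the semantics:

```python
import re

def like_to_regex(pattern: str) -> str:
    """Translate a SQL LIKE pattern into an anchored regular expression.

    '%' matches any sequence of characters (including none);
    '_' matches exactly one character.
    """
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts) + "$"

# 'te_t' matches 'test' and 'text', but not 'teest' (exactly one character)
print(bool(re.match(like_to_regex("te_t"), "test")))   # True
print(bool(re.match(like_to_regex("te_t"), "teest")))  # False
```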
- Tunable retention and grace period
- The amount of time that window data is persisted on disk is now configurable via the RETENTION clause. You can also now have explicit control over how “late” input events can arrive while still being included in a window. This is accomplished using a window’s GRACE PERIOD clause.
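A hedged example of both clauses in a windowed aggregation follows; the stream, table, and column names are illustrative:

```sql
-- Keep windowed state for 2 days; accept events up to 10 minutes late
CREATE TABLE orders_per_hour AS
  SELECT product_id, COUNT(*) AS order_count
  FROM orders
  WINDOW TUMBLING (SIZE 1 HOUR, RETENTION 2 DAYS, GRACE PERIOD 10 MINUTES)
  GROUP BY product_id
  EMIT CHANGES;
```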
- Rate limiting for pull queries
- A per-node rate limit for pull query requests may now be set in terms of maximum queries per
second using the
- New built-in functions
- 24 new built-in functions have been added to simplify working with maps, arrays, strings, and regular expressions.
- Native Java client
- ksqlDB now has a native Java client for programmatically interacting with ksqlDB.
- Enhanced key support
- You can now cleanly specify the row key of a given stream and table by adding a [PRIMARY] KEY modifier to a column definition.
- WITH (key=…) syntax has been removed
- New queries must use [PRIMARY] KEY syntax for designating keys.
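The new syntax declares the key directly on a column definition instead of in the WITH clause. A sketch with illustrative topic and column names:

```sql
-- Old (removed): CREATE TABLE users (...) WITH (KAFKA_TOPIC='users', KEY='id', ...);
-- New: tables declare a PRIMARY KEY column
CREATE TABLE users (
  id INT PRIMARY KEY,
  name STRING
) WITH (KAFKA_TOPIC='users', VALUE_FORMAT='JSON');

-- Streams use the KEY modifier
CREATE STREAM pageviews (
  user_id INT KEY,
  page STRING
) WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');
```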
- Keys now required in projections
- Key columns must be explicitly projected in the SELECT expression list of a query.
- Keys stored in Kafka message key
- ksqlDB now stores row keys in the underlying Kafka message’s key. If the key field must be copied into the Kafka message’s value, the AS_VALUE hint may be used.
- ROWTIME no longer implicitly included in query output
- For a query’s result to include ROWTIME, it must be explicitly included in the projection list to be returned in the output.
- Changes to generated column names
- Auto-generated column names for push queries may be named differently than they were in previous releases of ksqlDB.
A new component, the Confluent Telemetry Reporter, has been added to Confluent Platform. It runs in every Confluent Platform service, providing a standard metrics collection model, and streams data locally to Kafka, directly to Confluent, or both simultaneously.
- Send Broker metrics to a topic with an industry standard format
- Confluent Telemetry Reporter pushes metrics from brokers to the _confluent-telemetry-metrics topic, where they are stored in OpenCensus format for consumption by Confluent Platform services and users of Confluent Platform.
- Send performance metrics to Confluent to facilitate troubleshooting
- Confluent Telemetry Reporter can be used with any Confluent Platform component to send performance metrics to Confluent over HTTP to facilitate troubleshooting.
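As a sketch of enabling the reporter on a component, the following properties fragment uses placeholder credentials; the property names follow the Telemetry Reporter configuration and should be verified against the documentation for your release:

```properties
# Enable the Telemetry Reporter and send metrics to Confluent over HTTP
confluent.telemetry.enabled=true
# Placeholders: credentials issued for your Confluent account
confluent.telemetry.api.key=<API_KEY>
confluent.telemetry.api.secret=<API_SECRET>
```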
Beginning with Confluent Platform 6.0, connectors will no longer be packaged natively with Confluent Platform. All formerly packaged connectors must be downloaded directly from Confluent Hub.
How to Download¶
Confluent Platform is available for download at https://www.confluent.io/download/. See the On-Premises Deployments section for detailed information.
The Confluent Platform package includes Confluent Server by default and requires a confluent.license key in your server.properties file. Starting with Confluent Platform 5.4.x, the Confluent Server broker checks for a license during start-up. You must supply a license string in each broker’s properties file using the confluent.license property as below:
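For example, in server.properties (the license string is a placeholder):

```properties
# Confluent license key supplied by Confluent (placeholder value)
confluent.license=<your-license-key>
```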
If you want to use the Kafka broker, download the Confluent Community package. The Kafka broker is the default in all Debian, RHEL, and CentOS packages.
For more information about migrating to Confluent Server, see Migrate to Confluent Server.
To upgrade Confluent Platform to a newer version, check the Upgrade Confluent Platform documentation.
Supported Versions and Interoperability¶
For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.
If you have questions regarding this release, feel free to reach out via the community mailing list or community Slack. Confluent customers are encouraged to contact our support directly.