Confluent Platform 6.2.0 Release Notes¶
6.2.0 is a major release of Confluent Platform that provides you with Apache Kafka® 2.8.0, the latest stable version of Kafka.
The technical details of this release are summarized below.
Confluent Control Center¶
- Message Browser now includes a Single Message Producer that allows you to produce a message from within the UI. You can specify both a message’s value and key.
- Message Browser also now includes an improved message selection and download experience. Individual message selection is supported, along with a select-all/deselect-all toggle. When one or more messages are selected, you can download them in CSV or JSON format.
- A new script, /bin/control-center-cleanup, walks you through identifying and deleting the residual internal topics left behind by previous versions of Control Center.
- The topics page now includes a column for the Last Produced message timestamp.
- The topics page has consolidated the Under Replicated Partitions, Out of Sync Observers, and Out of Sync Followers columns into a single Status column. The table's default sort order is now based on issue severity:
- Unavailable - when a topic has exceeded its limit of under-replicated partitions.
- Warning - when at least one partition is under replicated, at least one observer is out of sync, or at least one follower is out of sync.
- Healthy - when replicated partitions, observers, and followers are all nominal.
- Removed a number of deprecated configuration properties.
- The Self-Balancing Clusters card has been expanded to offer additional visibility into the current state of the balancer. For more information, see Work with Self-Balancing Clusters.
- After changing cluster settings in Control Center, you must explicitly refresh the Control Center interface for the settings to take effect.
- Cluster Linking (Preview) now includes cluster links where the connection is initiated by the source cluster to the destination cluster, called source initiated links. You now have the flexibility to choose if the connection should be initiated by the destination cluster (the default) or the source cluster (this new feature). Source initiated cluster links make it easier to establish links with Confluent Cloud clusters when using a stateful firewall. For more information, see Cluster Linking.
- Cluster Linking (Preview) now includes two new commands, one for migration (bin/kafka-mirrors/promote) and one for disaster recovery (bin/kafka-mirrors/failover):
- The bin/kafka-mirrors/promote command converts a mirror topic to a writable topic only if the source cluster can be reached and there is no mirroring lag between the mirror topic and its source topic. This is useful for migrations, when you want to move workloads from the source cluster to the destination cluster with no data loss.
- The bin/kafka-mirrors/failover command converts a mirror topic to a writable topic regardless of mirroring lag or connectivity to the source cluster. This is useful in a disaster, when the source cluster is not available and you want to move operations to the destination cluster.
- The pause (bin/kafka-mirrors/pause) and resume (bin/kafka-mirrors/resume) commands can be used to temporarily pause and resume mirroring.
Cluster Linking is in preview and should not be used in production.
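The mirror-topic lifecycle above can be sketched as a sequence of CLI calls. This is a sketch only: the flag-style invocation and the flag names shown here (--topics, --bootstrap-server) are assumptions, the topic name orders is illustrative, and a running destination cluster is required; verify the exact syntax against the Cluster Linking documentation for your build.

```bash
# Temporarily pause mirroring for a topic, then resume it.
bin/kafka-mirrors --pause --topics orders --bootstrap-server dest-host:9092
bin/kafka-mirrors --resume --topics orders --bootstrap-server dest-host:9092

# Planned migration: convert the mirror to a writable topic only if the
# source cluster is reachable and mirroring lag is zero.
bin/kafka-mirrors --promote --topics orders --bootstrap-server dest-host:9092

# Disaster recovery: convert the mirror regardless of lag or source
# connectivity.
bin/kafka-mirrors --failover --topics orders --bootstrap-server dest-host:9092
```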
Clients¶
Confluent Platform 6.2.0 includes the latest versions of the Go (confluent-kafka-go), Python (confluent-kafka-python), and .NET (confluent-kafka-dotnet) clients, which are all based on librdkafka v1.6 (also included in the release). For more information on what’s new in the clients, check out the librdkafka v1.6 release notes.
Admin REST APIs¶
The Admin REST APIs now include a v3 produce API that makes it easier than ever to write events to Kafka.
This API is in preview and therefore could still change pending feedback.
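A minimal sketch of producing a record through the v3 API, assuming a REST endpoint at localhost:8082; the cluster ID placeholder must be filled in from a lookup against your own deployment, and the topic name orders and record contents are illustrative. These calls require a running cluster.

```bash
# Look up the cluster ID of your deployment first.
curl -s http://localhost:8082/v3/clusters

# Produce one record; key and value are wrapped with an explicit type.
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"key": {"type": "JSON", "data": "order-1"},
       "value": {"type": "JSON", "data": {"item": "book", "qty": 2}}}' \
  "http://localhost:8082/v3/clusters/<cluster-id>/topics/orders/records"
```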
Confluent for Kubernetes (formerly Confluent Operator)¶
Confluent for Kubernetes 2.0.0 allows you to deploy and manage Confluent Platform versions 6.0.x, 6.1.x and 6.2.0 on Kubernetes.
To learn about the notable features, enhancements, or fixes for Confluent for Kubernetes 2.0.0, see the release notes.
Ansible Playbooks for Confluent Platform¶
- Connect to Confluent Cloud: You can use Ansible Playbooks for Confluent Platform to configure and deploy Connect, ksqlDB, Control Center, and REST Proxy to utilize Confluent Cloud Kafka and Confluent Cloud Schema Registry.
- Basic Authentication Support: You can use the Ansible Playbooks for Confluent Platform to configure basic authentication for Confluent component REST APIs.
- Multiple Connect workers on a single host node: You can use the Ansible Playbooks for Confluent Platform to configure and deploy more than one Connect worker instance on the same host node.
- Idempotent Upgrades: You can use the Ansible Playbooks for Confluent Platform to upgrade Confluent Platform clusters by running the reconfiguration playbooks. This enables you to edit your Ansible inventory files to match the resultant Confluent Platform configuration state.
- Improved pre-flight checks.
- SCRAM-SHA-256 support: You can use Ansible Playbooks for Confluent Platform to configure Kafka to use either SCRAM-SHA-256 or SCRAM-SHA-512 for Kafka listener authentication.
- You can now specify a non-local Linux user (managed by Active Directory, for example) to own the Confluent component process. Previously, specifying a non-local user resulted in an error.
You cannot configure Replicator to read from or write to an RBAC-enabled Kafka cluster. This will be addressed in a future update release.
Kafka Streams¶
- Extended StreamJoined to accept more configuration parameters.
- KIP-663 - Introduced an API to start and shut down stream threads without requiring that Kafka Streams applications are reconfigured and subsequently restarted.
- Improved TimeWindowedSerde to properly handle window size.
- KIP-671 - Introduced a Kafka Streams-specific uncaught exception handler.
- TopologyTestDriver no longer requires a Properties argument.
ksqlDB¶
- Confluent Platform 6.2 packages ksqlDB standalone release 0.17.0.
- Introduced query migration tooling (#6988): The ksql-migrations tool can now be used to automate schema migrations and management.
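A sketch of the migrations workflow; the subcommand names shown here (new-project, initialize-metadata, create, apply) reflect the ksqlDB 0.17 migrations tooling as documented upstream, but verify them against your install, and note that a running ksqlDB server is required. The project directory and migration name are illustrative.

```bash
# Set up a migrations project pointing at a ksqlDB server.
ksql-migrations new-project my-migrations http://localhost:8088

# Initialize the migrations metadata stream/table on the server.
ksql-migrations --config-file my-migrations/ksql-migrations.properties \
  initialize-metadata

# Create a new versioned migration file, then edit it to add your DDL.
ksql-migrations --config-file my-migrations/ksql-migrations.properties \
  create add_orders_stream

# Apply all pending migrations.
ksql-migrations --config-file my-migrations/ksql-migrations.properties \
  apply --all
```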
- Introduced support for lambda functions (#6919): Inline function expressions can now be applied to collection types in order to simplify transformations and reduce some of the need to implement user-defined functions.
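For instance, a lambda can be applied per element of a collection column, avoiding a user-defined function for simple transformations. The stream and column names below are illustrative.

```sql
-- Double every element of an ARRAY<INTEGER> column without a UDF.
SELECT id, TRANSFORM(scores, x => x * 2) AS doubled
FROM my_stream
EMIT CHANGES;
```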
- Multi-column row keys are now stored internally as separate columns (#6544): It is no longer necessary to parse concatenated strings representing multi-column keys.
- Non-aggregate tables via CREATE TABLE AS SELECT are now materialized and can serve pull queries (#7085). Note that tables of the form CREATE TABLE (x INTEGER, …) are still not materialized and therefore will not serve pull queries.
- DROP STREAM | TABLE now automatically terminates the underlying persistent query: it is no longer necessary to manually TERMINATE persistent queries before dropping them.
- Pull queries can now be configured to target any number of rows via full table scans: the ksql.query.pull.table.scan.enabled configuration parameter must be set to true, either in the server configuration or in the request-level configuration.
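For example, with the flag enabled at the request level from the ksqlDB CLI, a pull query without a key constraint performs a full table scan (the table name is illustrative):

```sql
SET 'ksql.query.pull.table.scan.enabled'='true';

-- Without a WHERE clause on the key, this now scans the whole table.
SELECT * FROM users_table;
```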
- All serialization formats may now be used for row keys (#6708, #6694, #6692).
- Introduced PARTITION BY support over multiple expressions (#6803).
- Introduced TIMESTAMP type support with associated built-ins and operators (#6806).
- Added standard deviation built-in aggregate (#6845). The stddev_samp aggregate computes the standard deviation of all input values.
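For example, the new aggregate can be used in a grouped query like any other built-in (the stream and column names are illustrative):

```sql
-- Sample standard deviation of prices per symbol.
SELECT symbol, STDDEV_SAMP(price) AS price_stddev
FROM trades
GROUP BY symbol
EMIT CHANGES;
```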
- Custom configuration parameters are now passed to user-defined aggregate functions.
For Schema Registry users: a method overload was removed between Avro versions 1.9.1 and 1.10.1. If you are using an Avro version earlier than 1.10.1, you might encounter an error similar to:
Caused by: java.lang.NoSuchMethodError: org.apache.avro.Schema.toString(Ljava/util/Collection;Z)Ljava/lang/String;
How to Download¶
The Confluent Platform package includes Confluent Server by default and requires a
confluent.license key in your
server.properties file. Starting with
Confluent Platform 5.4.x, the Confluent Server broker checks for a license during start-up. You must
supply a license string in each broker’s properties file using the
confluent.license property as below:
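For example, in each broker's server.properties file (the key string itself is a placeholder; obtain your actual license string from Confluent):

```properties
# License for the Confluent Server broker, checked at start-up.
confluent.license=<your-license-string>
```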
If you want to use the Kafka broker, download the confluent-community package.
The Kafka broker is the default in all Debian or RHEL and CentOS packages.
For more information about migrating to Confluent Server, see Migrate to Confluent Server.
To upgrade Confluent Platform to a newer version, check the Upgrade Confluent Platform documentation.
Supported Versions and Interoperability¶
For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.