Confluent Platform 6.2 Release Notes¶
6.2 is a major release of Confluent Platform that provides you with Apache Kafka® 2.8.0, the latest stable version of Kafka.
The technical details of this release are summarized below.
For more information about the 6.2 release, check out the release blog and the Streaming Audio podcast.
Confluent Control Center¶
- Message Browser now includes a Single Message Producer that allows you to produce a message from within the UI. You can specify both a message’s value and key.
- Message Browser also now includes an improved message selection and download experience. Individual message selection is supported, along with a select-all or deselect-all toggle. When one or more messages are selected, you can download them in CSV or JSON format.
- A new script, /bin/control-center-cleanup, walks you through the identification and deletion of the residual internal topics that are left behind by previous versions of Control Center (a minimal invocation sketch follows this list).
- The topics page now includes a column for the Last Produced message timestamp.
- The topics page has consolidated the Under Replicated Partitions, Out of Sync Observers, and Out of Sync Followers columns into a single Status column. The table's default sort value is now based on issue severity:
  - Unavailable - when a topic has exceeded its limit of under-replicated partitions.
  - Warning - when at least one partition is under replicated, at least one observer is out of sync, or at least one follower is out of sync.
  - Healthy - when replicated partitions, observers, and followers are all nominal.
- Removed the following deprecated configuration properties: confluent.controlcenter.ksql.url, confluent.controlcenter.ksql.advertised.url, and confluent.controlcenter.connect.cluster.
- The Self-Balancing Clusters card has been expanded to offer additional visibility into the current state of the balancer. For more information, see Work with Self-Balancing Clusters.
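A minimal sketch of running the cleanup script follows. The script is interactive per the description above; $CONFLUENT_HOME is a placeholder for your installation root, and any additional options should be confirmed against the script's own help output:

# Walks you through identifying and deleting residual
# Control Center internal topics from previous versions.
$CONFLUENT_HOME/bin/control-center-cleanup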
Known issues
- After changing cluster settings in Control Center, you must explicitly refresh the Control Center interface for the settings to take effect.
Kafka Brokers¶
Confluent Server¶
- Cluster Linking (Preview) now includes cluster links where the connection is initiated by the source cluster to the destination cluster, called source-initiated links. You now have the flexibility to choose whether the connection should be initiated by the destination cluster (the default) or the source cluster (this new feature). Source-initiated cluster links make it easier to establish links with Confluent Cloud clusters when using a stateful firewall. For more information, see Cluster Linking.
- Cluster Linking (Preview) now includes two new commands, one for migration (bin/kafka-mirrors/promote) and one for disaster recovery (bin/kafka-mirrors/failover). Illustrative invocations follow the note below.
  - The bin/kafka-mirrors/promote command will convert a mirror topic to a writable topic only if the source cluster can be reached and if there is no mirroring lag between the mirror topic and its source topic. This is useful for migrations, when you want to move workloads from the source cluster to the destination cluster with no data loss.
  - The bin/kafka-mirrors/failover command will convert a mirror topic to a writable topic regardless of the mirroring lag or connectivity to the source cluster. This is useful in a disaster, when the source cluster is not available and you want to move operations to the destination cluster.
  - The pause (bin/kafka-mirrors/pause) and resume (bin/kafka-mirrors/unpause) commands can be used to temporarily pause and resume mirroring.
Important
Cluster Linking is in preview and should not be used in production.
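The following invocations are a sketch only: the --topics and --bootstrap-server flags are assumptions for illustration and should be checked against each command's help output for your installation.

# Migration: succeeds only when the source cluster is reachable
# and there is no mirroring lag, so no data is lost.
bin/kafka-mirrors/promote --topics orders --bootstrap-server destination:9092

# Disaster recovery: converts the mirror topic to writable even if
# the source cluster is unreachable or the mirror is lagging.
bin/kafka-mirrors/failover --topics orders --bootstrap-server destination:9092

# Temporarily pause mirroring, then resume it later.
bin/kafka-mirrors/pause --topics orders --bootstrap-server destination:9092
bin/kafka-mirrors/unpause --topics orders --bootstrap-server destination:9092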
Known issues
After upgrading from 5.5.x to 6.1.x, customers with high partition counts (greater than 100,000) may experience cluster instability. For this reason, Confluent doesn't recommend using the 2.7 inter-broker protocol (IBP); a sketch for pinning an earlier IBP appears below. Contact support if you have any questions about this known issue.
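As a sketch, you can pin the IBP in each broker's server.properties until you are ready to move past this issue; the 2.6 value below is illustrative and should match the version you upgraded from:

# server.properties: keep the inter-broker protocol below 2.7.
inter.broker.protocol.version=2.6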
Confluent Community / Apache Kafka¶
Confluent Platform 6.2 features Apache Kafka® 2.8.0. For a full list of the 15 KIPs, features, and bug fixes, take a look at the official Apache Kafka release notes.
Clients¶
Confluent Platform 6.2 includes the latest versions of the Go (confluent-kafka-go), Python (confluent-kafka-python), and .NET (confluent-kafka-dotnet) clients, which are all based on librdkafka v1.6 (also included in the release). For more information on what's new in the clients, check out the librdkafka v1.6 release notes. Illustrative install commands follow.
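The following install commands are illustrative, assuming the standard public package registries; the 1.6.x version pin is an assumption, so confirm the exact client releases that correspond to this Confluent Platform version:

# Python client (confluent-kafka-python), librdkafka bundled
pip install "confluent-kafka==1.6.*"

# Go client (confluent-kafka-go); pin a v1.6.x tag as appropriate
go get github.com/confluentinc/confluent-kafka-go/kafka

# .NET client (confluent-kafka-dotnet)
dotnet add package Confluent.Kafka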
Admin REST APIs¶
The Admin REST APIs now include a v3 produce API that makes it easier than ever to write events to Kafka. An example request appears after the note below.
Important
This API is in preview and therefore could still change pending feedback.
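A minimal sketch of producing a record with the v3 API follows. The host, port, path prefix, cluster ID, and topic name are placeholders (localhost:8090/kafka assumes Confluent Server's embedded REST endpoint), and the request shape may change while the API is in preview:

# Discover your cluster ID first, if needed:
#   curl http://localhost:8090/kafka/v3/clusters
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"key": {"type": "JSON", "data": "order-1"}, "value": {"type": "JSON", "data": {"amount": 42}}}' \
  http://localhost:8090/kafka/v3/clusters/<cluster-id>/topics/orders/records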
Cluster Management¶
Confluent for Kubernetes (formerly Confluent Operator)¶
Confluent for Kubernetes 2.0.0 allows you to deploy and manage Confluent Platform versions 6.0.x, 6.1.x, and 6.2.0 on Kubernetes.
To learn about the notable features, enhancements, or fixes for Confluent for Kubernetes 2.0.0, see the release notes.
Ansible Playbooks for Confluent Platform¶
New features
- Connect to Confluent Cloud: You can use Ansible Playbooks for Confluent Platform to configure and deploy Connect, ksqlDB, Control Center, and REST Proxy to utilize Confluent Cloud Kafka and Confluent Cloud Schema Registry.
- Basic Authentication Support: You can use the Ansible Playbooks for Confluent Platform to configure basic authentication for Confluent component REST APIs.
- Multiple Connect workers on a single host node: You can use the Ansible Playbooks for Confluent Platform to configure and deploy more than one Connect worker instance on the same host node.
- Idempotent Upgrades: You can use the Ansible Playbooks for Confluent Platform to upgrade Confluent Platform clusters by running the reconfiguration playbooks. This lets you end up with Ansible inventory files that match the resultant Confluent Platform configuration state. An example run follows this list.
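For instance, an upgrade-by-reconfiguration run might look like the following; hosts.yml and all.yml are the conventional inventory and playbook names and are assumptions here:

# Update the Confluent Platform version in your inventory, then
# re-run the provisioning playbook; the run is idempotent.
ansible-playbook -i hosts.yml all.yml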
Notable enhancements
- Improved pre-flight checks.
- SCRAM-SHA-256 support: You can use Ansible Playbooks for Confluent Platform to configure Kafka to use either SCRAM-SHA-256 or SCRAM-SHA-512 for Kafka listener authentication.
Notable fixes
You can now specify a non-local Linux user (managed by Active Directory, for example) to own the Confluent component process. Previously, this would fail with an error.
When debug is enabled with the -vvv Ansible option, sensitive information, such as passwords, certificates, and keys, is printed in the output. Ansible does not provide a way to suppress sensitive information with -vvv, so using debug mode in production environments is not recommended.

As an alternative, run the playbook with the --diff option when troubleshooting issues, as shown below. With this release, Ansible Playbooks for Confluent Platform no longer print sensitive information, such as passwords, certificates, and keys, in the output of the --diff option.

For details, see Troubleshoot Ansible Playbooks for Confluent Platform.
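For example, a troubleshooting run with --diff (optionally combined with Ansible's --check dry-run flag) might look like the following; the inventory and playbook names are assumptions:

# Shows what would change without printing passwords,
# certificates, or keys the way -vvv does.
ansible-playbook -i hosts.yml all.yml --diff --check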
Known issues
You cannot configure Replicator to read from or write to an RBAC-enabled Kafka cluster. This will be addressed in a future update release.
Kafka Streams¶
- KIP-689 - Extended StreamJoined to accept more configuration parameters.
- KIP-663 - Introduced an API to start and shut down stream threads without requiring that Kafka Streams applications are reconfigured and subsequently restarted.
- KIP-659 - Improved TimeWindowedDeserializer and TimeWindowedSerde to properly handle window size.
- KIP-671 - Introduced a Kafka Streams-specific uncaught exception handler.
- KIP-622 - Added currentSystemTimeMs and currentStreamTimeMs to ProcessorContext.
- KIP-680 - TopologyTestDriver no longer requires a Properties argument.
ksqlDB¶
- Confluent Platform 6.2 packages ksqlDB standalone release 0.17.0.
- Introduced query migration tooling (#6988): The ksql-migrations tool can now be used to automate schema migrations and management; a workflow sketch follows this list.
- Introduced support for lambda functions (#6919): Inline function expressions can now be applied to collection types in order to simplify transformations and reduce some of the need to implement user-defined functions.
- Multi-column row keys are now stored internally as separate columns (#6544): It is no longer necessary to parse concatenated strings representing multi-column keys.
- Non-aggregate tables via CREATE TABLE AS SELECT are now materialized and can serve pull queries (#7085): Note that tables of the form CREATE TABLE (x INTEGER, …) are still not currently materialized and therefore will not serve pull queries.
- DROP STREAM | TABLE now automatically terminates the underlying persistent query: It is no longer necessary to manually TERMINATE persistent queries before dropping them.
- Pull queries can now be configured to target any number of rows via full table scans (#6726): The ksql.query.pull.table.scan.enabled configuration parameter must be set to true, either in the server configuration or in the request-level configuration.
- All serialization formats may now be used for row keys (#6708, #6694, #6692).
- Introduced support for ARRAY and STRUCT keys (#6722).
- Introduced PARTITION BY support over multiple expressions (#6803).
- Introduced TIMESTAMP type support with associated builtins and operators (#6806).
- Added a standard deviation built-in aggregate (#6845): The stddev_samp aggregate computes the standard deviation of all input values.
- Custom configuration parameters are now passed to user-defined aggregate functions. For example: ksql.functions.my_udaf.my_property=1000000.
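A sketch of the migrations workflow follows; the project directory, server URL, and migration name are placeholders, and the exact subcommands and options should be confirmed against the ksql-migrations help output for this release:

# Set up a migrations project pointing at a ksqlDB server.
ksql-migrations new-project my-migrations http://localhost:8088

# Create a versioned migration file, edit it with your
# CREATE STREAM/TABLE statements, then apply all pending versions.
ksql-migrations --config-file my-migrations/ksql-migrations.properties create add_users_stream
ksql-migrations --config-file my-migrations/ksql-migrations.properties apply --all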
Deprecation Warnings¶
For Schema Registry users: a method overload was deleted between Avro versions 1.9.1 and 1.10.1. If you are using an Avro version earlier than 1.10.1, you might encounter an error similar to the following:
Caused by: java.lang.NoSuchMethodError: org.apache.avro.Schema.toString(Ljava/util/Collection;Z)Ljava/lang/String;
How to Download¶
Confluent Platform is available for download at https://www.confluent.io/download/. See the On-Premises Deployments section for detailed information.
Important
The Confluent Platform package includes Confluent Server by default and requires a confluent.license key in your server.properties file. Starting with Confluent Platform 5.4.x, the Confluent Server broker checks for a license during start-up. You must supply a license string in each broker's properties file using the confluent.license property as below:

confluent.license=LICENCE_STRING_HERE_NO_QUOTES

If you want to use the Kafka broker, download the confluent-community package. The Kafka broker is the default in all Debian or RHEL and CentOS packages.
For more information about migrating to Confluent Server, see Migrate to Confluent Server.
To upgrade Confluent Platform to a newer version, check the Upgrade Confluent Platform documentation.
Supported Versions and Interoperability¶
For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.
Questions?¶
If you have questions regarding this release, feel free to reach out via the community mailing list or community Slack. Confluent customers are encouraged to contact our support directly.