Release Notes

5.3.0 is a major release of Confluent Platform that provides you with Apache Kafka® 2.3.0, the latest stable version of Kafka.

The technical details of this release are summarized below.

Commercial Features

Confluent CLI

  • The Confluent CLI was completely rewritten to support a wider set of features and platforms.
  • The Confluent CLI was removed from Confluent Platform packaging. The Confluent CLI is now installed separately from the Confluent Platform package. For more information, see Installing and Configuring the CLI.
  • The Confluent CLI development commands have been moved to confluent local. For example, the syntax for confluent start is now confluent local start. For more information, see confluent local.
  • Added support for managing role-based access control (RBAC) permissions and role bindings.
  • Added support for masking secrets and configuration files.
  • Added “self-update” capability via the confluent update command so that improvements and fixes can be released out-of-band from the Confluent Platform release schedule.
  • The Confluent CLI is a commercially licensed feature, but it can be downloaded and used against both Confluent Community and Confluent Platform.
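The development-command migration amounts to prefixing the old commands with local. A minimal sketch, assuming the separately installed CLI is on your PATH (command names per the confluent local documentation):

```shell
# Old (Confluent Platform 5.2 and earlier):
#   confluent start
#   confluent status
#   confluent stop

# New in 5.3, via the separately installed Confluent CLI:
confluent local start    # start all local Confluent Platform services
confluent local status   # check which services are running
confluent local stop     # stop all services

# The CLI can now update itself out-of-band from platform releases:
confluent update
```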

Confluent Control Center

Control Center introduces several changes and new features, including:

  • Redesigned UI: The Control Center UI has been redesigned from end-to-end to provide an easier-to-use, multi-cluster-friendly, and global view of Kafka that supports working with Confluent at scale. The new homepage provides a global view of all of your Kafka clusters and connected services, including Connect and KSQL. You can see a health roll-up for each cluster and use a switcher to view only the unhealthy clusters so you can focus on what needs attention.

  • New broker overview: Clicking one of the cards on the new cluster homepage takes you into the cluster and connected services, showing broker metrics or topics depending on your permissions if RBAC is enabled. You can drill down further into broker metrics by clicking on a card, which shows time-series charts with interactive legends and shared cursors for easier metrics correlation.

  • New topics pages: The redesigned Control Center includes a new topics index that shows you all the topics in a cluster with key health information for each in a searchable table with sortable columns. You can click on a topic name to view an overview for it, including relevant throughput and health information, and access the message browser, schemas, and topic configuration. Topic metrics, including a topic-centric view of consumer lag, can be accessed by clicking on the metrics cards. Under consumer lag, you can select which consumer group you are interested in using a drop-down so you can focus on the specific application you care about.

  • Message browser enhancements: Message browser performance has been improved to eliminate throttling and message loss. You can also now access the full topic history by seeking, either using offset or timestamp, on a specific partition. You can also now download messages in JSON format, including using the typical Ctrl/Shift-select shortcuts to select and download more than one message.

  • Connect redesign and enhancements: The new UI also features an index page for all of the Connect clusters associated with a Kafka cluster. You can search for a Connect cluster by name/ID and see how many connectors are running, degraded, or failed on each cluster. Degraded connectors have at least one failed task; whereas failed connectors have failures on all of their tasks. Clicking on one of the clusters in the Connect index takes you to the new Connect cluster overview page. Here, you can see all the connectors running on the cluster, including their status, type, category, and number of tasks. Clicking on one of the connectors in the Connect cluster overview takes you to a page showing the connector itself where you can see a breakdown of the connector’s tasks and their status. This new Connect experience makes it much easier to view all of the Connect clusters and drill down to see how each task on a connector is performing.

  • KSQL redesign and enhancements: The new UI also features an index page for all of the KSQL clusters associated with a Kafka cluster. You can search for a KSQL cluster by name/ID, see how many queries are running, and see how many streams or tables are on the cluster. Clicking on one of the clusters in the KSQL index takes you to a page where you can access the KSQL editor, streams, tables, and persistent queries. The editor itself features similar improvements as the message browser, including performance improvements to eliminate throttling and message loss and JSON downloading of results. In addition, the KSQL editor area is resizeable to accommodate larger queries and statements. It also includes a data discovery side-panel, making it much easier to find streams and tables to run KSQL against.

  • Consumers index: Control Center also includes a new index to show you all the consumer groups associated with a selected Kafka cluster, including the number of consumers per group and the number of topics being consumed. Clicking on a consumer group takes you to a page where you can view Consumer Lag across all relevant topics and a redesigned Streams Monitoring page, making it much easier to understand consumption metrics in context.

  • Alerting enhancements: Control Center features several highly requested alerting enhancements. Previously, Control Center supported only email-based alert actions. In Confluent Platform 5.3.0, you can also use webhooks for Slack and PagerDuty, making it much easier to integrate Control Center with a range of monitoring approaches. There is also a new Consumer Lead alert, which you can customize to fire when a consumer is within a certain number of offsets of the partition tail. You can also now pause and resume actions globally, silencing alerts when needed. Finally, the alert messages themselves are more informative and now include better contextual information about the alert, including the cluster, metric, condition, and trigger.

  • Feature flagging for System Health and Data Streams: While Confluent Platform 5.3.0 introduces a significantly redesigned Control Center, a feature flag allows you to view the older System Health and Data Streams pages.

    Note

    These pages cannot be viewed if RBAC is enabled.

  • RBAC enforcement: As part of the introduction of RBAC to Confluent Platform, Control Center respects the rolebindings assigned using the CLI, so you have platform-wide consistency for your security strategy. For more information, see Role-based access control (RBAC) in Confluent Control Center.

For more information, see the documentation.

Control Center Known Issues

  • Multi-cluster alert triggers: You can currently select more than one cluster when defining a Cluster or Broker alert trigger in Control Center, but doing so results in the trigger firing against all of the associated clusters rather than only the one that actually meets the alert definition. To avoid this issue in the interim, define Cluster and Broker alerts against individual clusters.
  • Direct access to each external cluster is required under RBAC: When RBAC is enabled, Control Center requires management to be enabled on each cluster. This means that metrics reporters and interceptors must have a direct connection for Control Center to operate properly in an RBAC environment at this time. See Control Center connections to external clusters for more details.
  • Connect and KSQL require at least minimal user access to the underlying Kafka clusters: See Connect and KSQL clusters user access for details.
  • The service principal for Control Center requires SystemAdmin access: The Control Center user must be set up as a privileged SystemAdmin user on each cluster. Because of the underlying architecture for consumer lag, elevated privileges are required to guarantee access to all consumer groups and continued support for consumer lag alerts. Consumer lag offsets are currently not obtained from metrics reporters.
  • Prefix search in the message browser cannot currently search column names that contain a period.
  • There is a mismatch between products in enforced default values for max message bytes: When connecting Control Center to Confluent Cloud, the confluent.metrics.topic.max.message.bytes must be set to 8388608 rather than the current Control Center default of 10485760. For more information, see Control Center cannot connect to Confluent Cloud and Connecting Kafka Streams to Confluent Cloud.
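As a sketch of that last workaround, the override can be added to the Control Center properties file before startup (the file name here stands in for your actual configuration path):

```shell
# Append the Confluent Cloud-compatible override to the Control Center config.
cat >> control-center.properties <<'EOF'
# Confluent Cloud enforces an 8 MB cap; lower Control Center's 10 MB default.
confluent.metrics.topic.max.message.bytes=8388608
EOF
```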

Security

Role-based access control (RBAC)

Important

This feature is available as a preview feature. A preview feature is a component of Confluent Platform that is being introduced to gain early feedback from developers. This feature can be used for evaluation and non-production testing purposes or to provide feedback to Confluent.

  • Provides the secure authorization of access to resources by users and groups.
  • Uses a set of predefined roles that give you depth and granularity when configuring and managing access.
  • Provides centralized authorization for all the components including Control Center, Connect, Schema Registry, REST Proxy, and KSQL.
  • Provides connector level access control.

For more information, see the RBAC documentation.

AD/LDAP enhancements

  • Centralized configuration.
  • Centralized authentication for Control Center, Connect, Schema Registry, REST Proxy, and KSQL.

For more information, see Confluent LDAP Authorizer.

Secret Protection

  • Enables masking of sensitive information in configuration files and logs.
  • Works across the whole platform, including Kafka, Connect, KSQL, Schema Registry, REST Proxy, and Control Center.
  • Uses the new CLI to create and manage secrets.

For more information, see Secrets.
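The workflow looks roughly like the following sketch. The confluent secret subcommands are from the Secrets documentation; the file paths, passphrase source, and the choice of property to encrypt are illustrative, and exact flags may differ by CLI version:

```shell
# Generate the master key used to encrypt all secrets.
confluent secret master-key generate \
  --local-secrets-file /usr/secrets/security.properties \
  --passphrase @/tmp/passphrase.txt

# Export the key the command printed so later commands can use it.
export CONFLUENT_SECURITY_MASTER_KEY="<master key printed above>"

# Encrypt a sensitive property in a component configuration file in place.
confluent secret file encrypt \
  --config-file /etc/kafka/server.properties \
  --local-secrets-file /usr/secrets/security.properties \
  --remote-secrets-file /usr/secrets/security.properties \
  --config ssl.keystore.password
```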

Community Features

Clients

librdkafka v1.1.0 is a security-focused feature release that adds support for:
  • OAUTHBEARER SASL authentication.
  • In-memory SSL certificates.
  • Windows Root Certificate store.
  • Pluggable broker SSL certificate verification callback.
  • Improved SASL GSSAPI/Kerberos ticket refresh.

Upgrade considerations:

  • Windows SSL users no longer need to specify a CA certificate file/directory (ssl.ca.location); librdkafka loads CA certificates from the Windows Root Certificate Store by default.
  • SSL peer (broker) certificate verification is now enabled by default (disable with enable.ssl.certificate.verification=false).
  • %{broker.name} is no longer supported in sasl.kerberos.kinit.cmd since kinit refresh is no longer executed per broker, but per client instance.
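The upgrade notes above translate into client configuration roughly as follows. This is a sketch only; the property names are from librdkafka's CONFIGURATION reference, and the kinit command line and CA path are illustrative:

```shell
# Write an example librdkafka client configuration reflecting the v1.1.0 changes.
cat > client.properties <<'EOF'
# Peer certificate verification is now ON by default; set this only to opt out.
enable.ssl.certificate.verification=false
# On Windows, ssl.ca.location may now be omitted (the Root Certificate Store
# is used). On other platforms it is still needed when the CA is not in the
# default OpenSSL paths:
ssl.ca.location=/etc/ssl/certs
# sasl.kerberos.kinit.cmd must no longer contain %{broker.name}; the refresh
# now runs once per client instance, for example:
sasl.kerberos.kinit.cmd=kinit -t "%{sasl.kerberos.keytab}" -k %{sasl.kerberos.principal}
EOF
```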

See the full librdkafka v1.1.0 release notes for more information.

confluent-kafka-python, confluent-kafka-dotnet, confluent-kafka-go

Clients now use librdkafka v1.1.0 and include all the improvements from the latest version.

Connect

  • PR-6363 - KAFKA-5505: Incremental cooperative rebalancing in Connect (KIP-415) (#6363) In Kafka 2.2 and earlier, a rebalance stopped all tasks in a Connect cluster and restarted them, which amounts to a hard stop for users who run multiple connectors in one Connect cluster. With KIP-415, a rebalance happens more gracefully: it stops only the tasks that need to move between workers (if any), leaving the rest running on their assigned workers.
  • PR-5743 - KAFKA-3816: Add MDC logging to Connect runtime (#5743) With KIP-449, the SLF4J API’s “Mapped Diagnostic Contexts” (MDC) allow injection of a series of parameters that are included in every log message written on that thread, regardless of how the SLF4J Logger instance was obtained.
  • PR-6584 - KAFKA-8231: Expansion of ConnectClusterState interface (#6584)
  • PR-6789 - KAFKA-8407: Fix validation of class and list configs in connector client overrides (#6789)
  • PR-6658 - KAFKA-8309: Add Consolidated Connector Endpoint to Connect REST API (#6658)
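The KIP-415 behavior above is governed by new worker settings; here is a sketch of a worker configuration (the property names come from KIP-415, while the broker address and group ID are illustrative):

```shell
# Write an example distributed Connect worker configuration.
cat > connect-worker.properties <<'EOF'
bootstrap.servers=localhost:9092
group.id=connect-cluster
# 'compatible' lets the cluster use incremental cooperative rebalancing once
# every worker supports it; 'eager' forces the old stop-the-world behavior.
connect.protocol=compatible
# How long to wait for a departed worker to return before reassigning its
# tasks (KIP-415's scheduled rebalance delay).
scheduled.rebalance.max.delay.ms=300000
EOF
```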

Connectors

  • HDFS Connector:
    The HDFS connector has been removed from the Confluent Platform package. However, it remains available for download from Confluent Hub and is still supported by Confluent.
  • JDBC Connector:
    PR-641 - CC-349: Added delete support for sink. (#641) A record with a null value is considered a tombstone event and results in deletion of the corresponding row in the destination table.
  • Elasticsearch Connector:
    PR-239 - Support for document upsert. (#239) In some cases, records in Kafka topics contain only a subset of the fields Elasticsearch needs. With this feature, the Elasticsearch Sink Connector can perform an upsert operation.
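Both connector features above are switched on through configuration. A sketch of the relevant JSON (the delete.enabled and write.method properties come from the respective connector documentation; connector names, topics, and connection URLs are placeholders):

```shell
# JDBC sink: treat null-valued records as tombstones and delete the row.
# delete.enabled requires a record-key-based primary key mode.
cat > jdbc-sink.json <<'EOF'
{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "connection.url": "jdbc:postgresql://localhost:5432/db",
    "pk.mode": "record_key",
    "pk.fields": "id",
    "delete.enabled": "true"
  }
}
EOF

# Elasticsearch sink: upsert partial documents instead of overwriting them.
cat > es-sink.json <<'EOF'
{
  "name": "es-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "orders",
    "connection.url": "http://localhost:9200",
    "write.method": "upsert"
  }
}
EOF
```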

Kafka 2.3.0

  • KIP-339: A new IncrementalAlterConfigs API in AdminClient Changes configurations incrementally, modifying only the configuration values that are specified, which prevents lost updates. The AlterConfigs API has been marked for deprecation. For more information, see KIP-339
  • KIP-341: Update Sticky Assignor’s User Data Protocol Improves stability of consumer partition assignments by preventing assigning the same partition to multiple consumers in the same consumer group. For more information, see KIP-341
  • KIP-351: Add --under-min-isr option to describe topics command Adds the --under-min-isr option to the describe topics command, which allows users to see precisely which topic partitions are below min.insync.replicas and need to be addressed. For more information, see KIP-351
  • KIP-354: Maximum Log Compaction Lag Compaction skips messages newer than min.compaction.lag.ms, which ensures that a message remains uncompacted for at least that period (the “lag”). Regulations such as GDPR also require that data is deleted in a timely manner. The new max.compaction.lag.ms sets the maximum time for which a message may remain uncompacted, after which the corresponding log partition becomes eligible for compaction. For more information, see KIP-354
  • KIP-361: Add Consumer Configuration to Disable Auto Topic Creation Both auto.create.topics.enable on the broker and allow.auto.create.topics on the consumer need to be set for auto-topic creation to happen. For more information, see KIP-361
  • KIP-402: Improve fairness in SocketServer processors The first part of KIP-402 was introduced in Kafka 2.2, which changed how connections are prioritized: existing connections are given priority over new connection requests. This KIP introduces max.connections, which limits the total number of connections that may be active on the broker at any time. This is in addition to the existing max.connections.per.ip configuration, which continues to limit the number of connections from each host IP address. For more information, see KIP-402
  • KIP-417: Allow JMXTool to connect to a secure RMI port Adds --jmx-ssl-enable and --jmx-auth-prop to allow connecting to a secure Java VM. For more information, see KIP-417
  • KIP-421: Support resolving externalized secrets in AbstractConfig Enhances the AbstractConfig base class to automatically resolve variables of the form specified in KIP-297. For more information, see KIP-421
  • KIP-425: Add Log4J Kafka Appender Properties to Producing to Secure Brokers Extends the Log4J Kafka appender to support PLAIN mechanism and configuration of jaas via a property passed to the producer. For more information, see KIP-425
  • KIP-427: Add AtMinIsr topic partition category (new metric & TopicCommand option) Introduces a new topic partition category in the metrics group between UnderReplicated and UnderMinIsr called AtMinIsr. For more information, see KIP-427
  • KIP-430 - Return Authorized Operations in Describe Responses The AdminClient now allows users to determine what operations they are authorized to perform on topics. For more information, see KIP-430
  • KIP-436: Add a metric indicating start time Useful for detecting restarts. For more information, see KIP-436
  • KIP-461: Improve Replica Fetcher Behavior at handling partition failure In previous versions, if a partition failed, the replica fetcher thread associated with that partition terminated. Because replica fetcher threads handle multiple partitions, this led to under-replicated partitions. Now, whenever a partition crashes, the affected thread stops tracking the crashed partition and continues handling the rest of its partitions. This reduces the chance of under-replicated partitions and improves cluster stability. For more information, see KIP-461
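Several of these KIPs surface directly in the command-line tools and configs. A sketch only: all commands assume a running cluster, and the hostnames, topic names, and values are illustrative:

```shell
# KIP-351: show only partitions currently below min.insync.replicas.
kafka-topics --bootstrap-server localhost:9092 --describe --under-min-isr

# KIP-427: show partitions sitting at exactly min.insync.replicas.
kafka-topics --bootstrap-server localhost:9092 --describe --at-min-isr

# KIP-354: cap how long records in one topic may stay uncompacted (7 days).
kafka-configs --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name users \
  --add-config max.compaction.lag.ms=604800000

# KIP-361: this consumer no longer triggers automatic topic creation,
# even if the broker still has auto.create.topics.enable=true.
echo 'allow.auto.create.topics=false' >> consumer.properties

# KIP-417: point JMXTool at a TLS-protected RMI port.
kafka-run-class kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi \
  --jmx-ssl-enable true
```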

The release includes 157 new features, improvements, and fixes. For a full list of changes in this release of Kafka, see the Apache Kafka 2.3.0 Release Notes.

Also, see the blog post covering What’s New in Apache Kafka 2.3 or this video.

Kafka Streams

  • Stored Record Timestamps in RocksDB: To offer better timestamp processing semantics and introduce new time-related KTable features, more information about the timeliness of records must be persisted. Kafka Streams now stores the timestamps for each record that contributes to state stores inside RocksDB. These timestamps are now accessible through Interactive Queries, too. This functionality paves the way for potential future features, like KTable TTLs and support for out-of-order KTable data. See KIP-258 for more information.
  • In-memory window and session stores: Kafka Streams supports pluggable storage for its tables. Until this release, Kafka Streams offered only durable out-of-the-box implementations of its window and session store abstractions. Kafka Streams now also ships with in-memory implementations to support high-performance, transient operations. See KIP-428 and KIP-445 for more information.
  • New flatTransformValues method: Kafka Streams now supports a new API method, flatTransformValues, the Processor API equivalent of flatMapValues. Unlike the traditional transformValues method, it ensures strong typing by specifying in its signature a list of key-value pairs (an Iterable) as the output records for each input record. See KIP-313 for more information.
  • Default implementation to close() and configure() for Serializer, Deserializer and Serde: When serializers, deserializers, and serdes are authored, the close and configure methods are typically implemented with no operation behind them. Kafka Streams now leverages Java’s new default interface inheritance feature, and provides a sensible, default implementation for these methods. See KIP-331 for more information.
  • Better defaults for increased stability: To better improve footprint and threading stability, a number of defaults have been changed to more sensible values. These include max.poll.interval.ms, segment.ms, and segment.index.bytes. See KIP-442 and KIP-443 for more information.
  • New close() method on RocksDBConfigSetter: A new close() method has been added to support cleanup of RocksDB configs. This helps avoid inadvertent memory leaks. See KIP-453 for more information.
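The new defaults apply automatically, and individual applications can still override them, as in this sketch of a Streams configuration fragment (the application ID and the value shown are illustrative overrides, not the new defaults):

```shell
# Write an example Kafka Streams application configuration.
cat > streams.properties <<'EOF'
application.id=example-streams-app
bootstrap.servers=localhost:9092
# KIP-442: Streams no longer forces an extremely large max.poll.interval.ms
# on its embedded consumer; override it explicitly only if one poll loop can
# genuinely take this long.
max.poll.interval.ms=600000
EOF
```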

KSQL

  • KEY no longer required for CREATE TABLE: Tables can now be created without manually specifying a KEY parameter. For example, CREATE TABLE T1 (ID INT, OTHER INT). KEY can still be specified as a hint, but is no longer required (#2745).
  • CREATE STREAM/TABLE now creates topics if they don’t exist: Previously a topic underlying a stream or table had to exist already to create a stream or table over it. KSQL now creates the underlying topic automatically if it doesn’t already exist (#2771).
  • Improved UDF interfaces: Two of the four components of KLIP-1 have made it into this release: UDFs can now be defined with variable-length arguments as well as custom Struct argument types (#2503).
  • Data can now be written to streams via INSERT INTO: Users can now write events directly to streams from within KSQL using standard SQL INSERT INTO syntax. While not designed for high-performance production usage, simplifying the process of getting data into KSQL should greatly improve the initial KSQL experience and make it much easier for new users to experiment with the system (KLIP-2, #2723).
  • KSQL test runner tool: The ksql-test-runner tool is a command line utility that enables testing KSQL statements without requiring any infrastructure, which means that Kafka and KSQL don’t need to be running. It’s a lightweight way to design and iterate on your KSQL-based applications and ensure that the expected results are generated (#2802).
  • CLI sessions can now be recorded to text files: SPOOL is a new CLI command that allows users to record a KSQL CLI session’s inputs and outputs, writing the session content out to a file (#2789).
  • ELT and FIELD functions: Both the ELT and FIELD functions are based on the SQL standard and are available in many other SQL-based systems (#2627). ELT (N, x, y, ...) returns the Nth element of the given list of strings. This is the complement to FIELD. FIELD (string, str1, str2, ...) returns the index of string within the list of given strings. This is the complement to ELT.
  • More date/timestamp formats now supported: KSQL’s date/timestamp parser is now more robust and understands more date/timestamp formats (#2499).
  • Quotes in string literals now escaped: This protects KSQL from malicious Java code injection attacks (#2545).
  • Bug fixes:
    • Fixed GEO_DISTANCE function (#2700).
    • Fixed bug causing tabs in multi-line input strings to cause unnecessary errors (#2734).
    • Fixed bug that caused command topic replay during KSQL startup to recreate deleted topics unintentionally (#2329).
    • Fixed bug with performing joins on ROWKEY (#2735).
    • Fixed bug causing potential crash during KSQL REST server shutdown (#2507).
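Several of the items above can be tried from the KSQL CLI. A sketch, assuming a KSQL server at localhost:8088; the stream and table names are illustrative, and the exact SPOOL quoting may differ by CLI version:

```shell
ksql http://localhost:8088 <<'EOF'
-- KEY is now optional, and the backing topic is created if it is missing
-- (PARTITIONS tells KSQL how to create it):
CREATE TABLE T1 (ID INT, OTHER INT)
  WITH (KAFKA_TOPIC='t1', VALUE_FORMAT='JSON', PARTITIONS=1);

-- Write events straight into a stream with standard INSERT INTO syntax:
CREATE STREAM S1 (ID INT, NAME STRING)
  WITH (KAFKA_TOPIC='s1', VALUE_FORMAT='JSON', PARTITIONS=1);
INSERT INTO S1 (ID, NAME) VALUES (1, 'alice');

-- ELT and FIELD are complements:
--   ELT(2, 'a', 'b', 'c')      returns 'b'
--   FIELD('b', 'a', 'b', 'c')  returns 2

-- Record this CLI session's input and output to a file:
SPOOL 'session.log';
EOF
```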

REST Proxy

  • FIX: REST Proxy now responds with a 401/403 for authentication/authorization errors.

REST Utils

The default value for ssl.endpoint.identification.algorithm was changed to https. This setting performs hostname verification to prevent man-in-the-middle attacks. You can set ssl.endpoint.identification.algorithm to an empty string to restore the previous behavior.
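For deployments that cannot yet present certificates with matching hostnames, the opt-out looks like this sketch (the file name stands in for whichever component configuration is affected):

```shell
cat > component.properties <<'EOF'
# An empty value disables hostname verification (the pre-5.3 behavior).
# Keeping the new default of https is strongly recommended.
ssl.endpoint.identification.algorithm=
EOF
```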

Schema Registry

  • PR-1124 - CC-4775: logical type preservation for hdfs connector when output format is parquet (#1124)

Other Improvements & Changes

  • KAFKA-8336: Enable dynamic update of client-side SSL factory in brokers When mutual authentication is enabled for inter-broker-communication (ssl.client.auth=required), broker restarts are no longer needed when updating client-side keystores.
  • FIX: Increased the default open file limit in the systemd unit files
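With KAFKA-8336, the keystore a broker uses for inter-broker SSL can be swapped via a dynamic config update instead of a restart. A sketch using kafka-configs (requires a running cluster; the listener name, broker ID, and keystore paths are illustrative):

```shell
# Update broker 0's keystore for the INTERNAL listener without a restart.
kafka-configs --bootstrap-server localhost:9092 --alter \
  --entity-type brokers --entity-name 0 \
  --add-config 'listener.name.internal.ssl.keystore.location=/etc/kafka/ssl/new.keystore.jks,listener.name.internal.ssl.keystore.password=changeit'
```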

Deprecation Warnings

  • OS support for RHEL 6, Ubuntu 12.04, Ubuntu 14.04, and Debian 7
  • Confluent REST Proxy v1 API

How to Download

Confluent Platform is available for download at https://www.confluent.io/download/. See the On-Premises Deployments section for detailed information.

To upgrade Confluent Platform to a newer version, check the Upgrade documentation.

Supported Versions and Interoperability

For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.

Questions?

If you have questions regarding this release, feel free to reach out via the community mailing list or community Slack. Confluent customers are encouraged to contact our support directly.