
Release Notes

Confluent Platform 5.2.0

Commercial Features


  • Schema migration: Confluent Replicator now supports migration of schemas from a self-managed, on-prem Schema Registry to a fully managed Schema Registry in Confluent Cloud. Specifically, Replicator supports the following two scenarios:
    • Continuous migration: You can use your self-managed Schema Registry as a primary and Confluent Cloud Schema Registry as a secondary. New schemas are registered directly to the self-managed Schema Registry, and Replicator will continuously copy schemas from it to the Confluent Cloud Schema Registry, which should be set to IMPORT mode.
    • One-time migration: You can migrate your existing schemas once from the self-managed Schema Registry to Confluent Cloud Schema Registry, which then becomes the primary. All new schemas are registered to Confluent Cloud Schema Registry. In this scenario, there is no migration from Confluent Cloud Schema Registry back to the self-managed Schema Registry.
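For continuous migration, the destination Schema Registry must be set to IMPORT mode, which is done through the Schema Registry REST API. A minimal sketch of the request, assuming the global (non-subject) mode endpoint; the host placeholder is illustrative:

```
PUT /mode HTTP/1.1
Host: <your-ccloud-schema-registry-host>
Content-Type: application/vnd.schemaregistry.v1+json

{"mode": "IMPORT"}
```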

Control Center

Confluent Control Center introduces several new features, including:

  • Message browser enhancements: The message browser has been improved to allow seeking to an offset on a specific partition, switching between table and card output formats, viewing topic metadata, and more easily viewing messages using feed auto-loading and pausing on hover.
  • KSQL UI enhancements: The KSQL UI has been improved to allow you to more easily understand query status, including query errors. Similar to the message browser enhancements, you can now switch between table and card output formats, view output metadata, and more easily inspect output using feed auto-loading and pausing on hover.
  • Scalability up to 40k partitions (assuming a replication factor of 3): System Health monitoring now scales up to support 40k topic partitions. With a replication factor of 3, this scaling improvement allows monitoring up to 120k total replicas. If your topics have a smaller replication factor, then Control Center could scale beyond 40k partitions. For more information, see Partition limitations.
  • Schema management: Since CP 5.0.0, Control Center has supported viewing key and value schemas, as well as comparing older schema versions with the current one. In 5.2.0, Schema Registry support has been extended to allow creating and editing schemas, validating schemas against compatibility policy, and managing the compatibility policy itself.
  • Dynamic broker configuration: Since CP 5.0.0, Control Center has supported viewing broker configurations. With 5.2.0, you can now make changes to those dynamic configurations that don’t require broker restarts.
  • License management and expiration notification: Control Center now notifies you of impending expiration of your subscription enterprise license (3 months out, 1 month out, weekly for the last month, and daily for the last week). You can also submit a new enterprise license key in the Control Center UI License tab, which makes it easier to manage your subscription.
  • Support for multiple KSQL clusters: In the 5.2 release, Control Center supports running, managing, and monitoring KSQL queries and statements on multiple KSQL clusters. Previous versions only supported a single KSQL cluster.
  • Support for multiple Connect clusters: Connect support in 5.2.0 has been extended to multiple Connect clusters. In previous releases, Control Center supported creating, managing, and monitoring connectors on only one Connect cluster. Combined with multi-cluster KSQL support, this makes Control Center much more useful for monitoring deployments at scale.

Community Features


  • SSL support: KSQL can be configured to use HTTPS rather than the default HTTP for all communication. This includes both the server and client. For more information, see Configuring KSQL for HTTPS.
  • CASE expressions: KSQL now supports CASE conditional expressions in searched form, where KSQL evaluates each condition from left to right. It returns the result for the first condition that evaluates to true. If no condition evaluates to true, the result for the ELSE clause is returned. If there is no ELSE clause, null is returned. For more information, see KSQL Syntax Reference.
  • Improved debuggability of KSQL queries: A log of record processing events is now available to help debug KSQL queries. The log can be configured to log to Kafka to be consumed directly as a KSQL stream.
  • URI UDF functions: A new family of UDFs has been added for improved handling of URIs (e.g. extracting information/decoding information). For more information, see Scalar functions.
  • Read after write consistency: New commands will not execute until previous commands have finished executing. This feature is enabled by default in the CLI and can be implemented by the user for the REST API.
  • GROUP BY enhancements: KSQL now supports GROUP BY for more than just simple columns, including:
    • fields within structs
    • arithmetic results
    • functions
    • string concatenations
    • literals
  • Improved support for multi-line requests in interactive mode deployments: Prior to this version, KSQL parsed the full request before attempting to execute any statements. Requests containing statements that depended on the execution of earlier statements in the same request may have failed. In this version and later, this is no longer an issue.
  • Improved headless mode deployments: Prior to this version, KSQL parsed the full script before attempting to execute any statements. The full parse would often fail when later statements relied on the execution of earlier statements. In this version and later, this is no longer an issue.
  • RUN SCRIPT has been deprecated: The use of the RUN SCRIPT statement via the REST API is now deprecated and will be removed in the next major release. The feature circumvents certain correctness checks and is unnecessary, given that the script content can be supplied in the main body of the request. If you use RUN SCRIPT from the KSQL CLI you are not affected, as this will continue to be supported. If you use RUN SCRIPT directly against the REST API, your requests will work with this version of the server but will be rejected after the next major version release. Instead, include the contents of the script in the main body of your request.
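Several of the KSQL features above can be shown together. The sketch below assumes a hypothetical stream named pageviews with a url column and a usr struct column; all stream, topic, and column names are illustrative, not from this release:

```sql
-- Hypothetical stream; all names here are illustrative.
CREATE STREAM pageviews (
    url VARCHAR,
    latency_ms BIGINT,
    usr STRUCT<id VARCHAR, region VARCHAR>
  ) WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

-- CASE conditional expression in searched form; without an ELSE clause,
-- rows matching no condition would return null.
SELECT url,
       CASE
         WHEN latency_ms < 100 THEN 'fast'
         WHEN latency_ms < 1000 THEN 'ok'
         ELSE 'slow'
       END AS speed
  FROM pageviews;

-- A URI UDF (URL_EXTRACT_HOST) combined with the extended GROUP BY support:
-- grouping on a function result and on a field within a struct.
SELECT URL_EXTRACT_HOST(url) AS host,
       usr->region AS region,
       COUNT(*) AS hits
  FROM pageviews
  GROUP BY URL_EXTRACT_HOST(url), usr->region;
```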

Kafka Streams

  • Improved RocksDB lookup performance: RocksDB supports using bloom filters for optimizing lookup performance of keys. Kafka Streams has enabled this behavior to increase the performance of its state stores. See KAFKA-4850 for more information.
  • New flatTransform method: Kafka Streams now supports a new method in its API, flatTransform. It is the equivalent of flatMap for the Processor API. Unlike the traditional transform method, this method ensures strong typing by specifying in its signature an Iterable of key-value pairs as the output records for each input record. This is a partial feature from KIP-313.


Connectors

  • Elasticsearch Connector: Added 2-way TLS/SSL support to Elasticsearch sink connector.

  • HDFS Connector: The HDFS Connector is based on Hadoop v2.7.3, which depends on Netty and Groovy libraries that are subject to known CVEs. Because of these CVEs, the connector will be removed from the Confluent Platform package in an upcoming major or minor release. However, it will continue to be downloadable from Confluent Hub and supported by Confluent. The affected libraries are required by the Hadoop 2.7.3 client included in the kafka-connect-hdfs directory and are not loaded unless the HDFS connector is used. If you are concerned, you can remove the kafka-connect-hdfs directory.

    • The connector now returns committed offsets to the consumer offsets topic. This feature should only be used for tracking lag, and does not affect the connector’s exactly-once behavior.
  • JDBC Connector: The ability to set the timezone when querying a database was added to both source and sink JDBC connectors.
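As an illustration of the new timezone option, here is a hedged sketch of a JDBC sink connector configuration. The connector name, connection URL, and topic are made up; db.timezone is the JDBC connector property for this setting:

```json
{
  "name": "jdbc-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:postgresql://db.example.com:5432/metrics",
    "topics": "orders",
    "db.timezone": "America/New_York"
  }
}
```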


Clients

librdkafka v1.0.0
  • Idempotent producer: Provides exactly-once producer message delivery and guaranteed message ordering.
  • Sparse connections: Clients now connect to a single bootstrap server to acquire metadata. This allows clients to communicate only with the brokers they need to, greatly reducing the number of connections between clients and brokers.
  • ZSTD compression: Adds support for the Zstandard real-time compression algorithm. To enable it, set compression.type=zstd.
  • Lower session timeouts: Users can now set the session timeout significantly lower, which supports faster detection of process crashes. See KIP-62 for more details.
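The client-side settings above map onto librdkafka configuration properties. A minimal sketch, with property names taken from librdkafka's configuration reference and values that are illustrative only:

```properties
# Enable the idempotent producer: exactly-once delivery and ordering
enable.idempotence=true

# Compress produced messages with Zstandard
compression.type=zstd

# Lower session timeout for faster crash detection; must stay within the
# broker's group.min.session.timeout.ms / group.max.session.timeout.ms bounds
session.timeout.ms=6000
```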

For a complete list of features and improvements in librdkafka v1.0.0, see the librdkafka release notes.

Confluent-Kafka-Python and Confluent-Kafka-Go

Both clients use librdkafka and include all the improvements in librdkafka v1.0.0.

Confluent-Kafka-DotNet

  • Includes benefits from all the librdkafka improvements.
  • Includes new producer and consumer serialization functionality that implements the .NET serializer interfaces. Consumers, Producers, and the AdminClient are constructed with Builder classes.
  • Avro SerDes no longer make blocking calls to ICachedSchemaRegistryClient; everything is awaited.

Kafka Connect

  • JMX metrics: The ConfluentMetricsReporter metrics reporter will pull out producer and consumer JMX metrics from each worker JVM and write them into a Kafka topic (default: _confluent-connect-metrics) once the following properties are added to the Connect worker config file.
    • metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
    • confluent.metrics.reporter.bootstrap.servers=metrics-kafka:9092
    • confluent.metrics.reporter.topic=_confluent-connect-metrics
    • confluent.metrics.reporter.topic.replicas=1
    • confluent.metrics.reporter.whitelist=.*

Apache Kafka 2.2.0

  • More use of AdminClient in commands: More command-line tools now use the AdminClient, so there is no need to pass in --zookeeper. Note that this functionality is not available in the Confluent Cloud CLI. KIP-183 KIP-337
  • Improved default group ID behavior in KafkaConsumer: The default group ID has been changed from "" (the empty string) to null, so that unnecessary or unintentional offset fetches and commits are avoided. KIP-289
  • Separate controller and data planes: Separates controller connections and requests from the data plane, which helps ensure controller stability even if the cluster is overloaded due to produce/fetch requests. This feature is disabled by default. KIP-291
  • Detection of outdated control requests and bounced brokers: Kafka now detects outdated control requests to increase cluster stability when a broker bounces. KIP-380
  • Introduction of a configurable consumer group size limit: Introduces a configurable consumer group size limit that protects cluster resources from too many consumers joining a consumer group. KIP-389
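A hedged sketch of the broker-side settings behind KIP-291 and KIP-389; the listener names, host, and ports are illustrative, and the group size cap should be chosen for your own cluster:

```properties
# KIP-291: dedicate a listener to controller traffic (the feature stays off
# unless control.plane.listener.name names one of the configured listeners)
listeners=CONTROLLER://broker1:9090,PLAINTEXT://broker1:9092
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
control.plane.listener.name=CONTROLLER

# KIP-389: cap the number of members allowed in a single consumer group
group.max.size=500
```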

18 new features and configuration changes have been added, along with 150 resolved bug fixes and improvements. For a full list of changes in this release of Apache Kafka, see the Apache Kafka 2.2.0 Release Notes.

Known Issues

  • Firefox browser issue: The Control Center System Health page may not render correctly in the Firefox browser. For more information on supported browsers, see Web Browsers.
  • Partition limitations: Environments with more than 40K topic partitions and 120K total replicas cause Control Center to lag. Using Control Center in such environments is not recommended until the issue is resolved. Control Center versions prior to 5.2.0 supported up to 10K partitions.

How to Download

Confluent Platform is available for download. See the On-Premises Deployments section for detailed information.

To upgrade Confluent Platform to a newer version, check the Upgrade documentation.

Supported Versions and Interoperability

For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.


If you have questions regarding this release, feel free to reach out via the community mailing list or community Slack. Confluent customers are encouraged to contact our support directly.