
Release Notes

Confluent Platform 5.1.2

This is a bugfix release of Confluent Platform that provides you with Apache Kafka® 2.1.1, the latest stable version of Kafka, plus additional bug fixes.

You are encouraged to upgrade to Confluent Platform 5.1.2 as it includes important bug fixes. The technical details of this release are summarized below.


Community Features

Kafka 2.1.1
  • PR-5986 - KAFKA-6388: Error while trying to roll a segment that already exists (#5986)
  • PR-5997 - KAFKA-7697: Possible deadlock in kafka.cluster.Partition (#5997)
  • PR-6049 - KAFKA-7755: Kubernetes - Kafka clients are resolving DNS entries only one time (#6049)
  • PR-6215 - KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if the hostname of the broker changes (#6215)
  • PR-6126 - KAFKA-7741: streams-scala - document dependency workaround (#6126)
  • PR-6232 - KAFKA-7897: Disable leader epoch cache when older message formats are used (#6232)
  • PR-6233 - KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh login fails (#6233)
JDBC Connector
  • PR-593 - CC-3578: Logging query for source connector at INFO level only the first time it’s used
Schema Registry
  • PR-1020 - CC-3803: Corrected support for handling defaults of logical schema types
  • PR-1009 - CC-3741: Remove use of IdentityHashMap

Confluent Platform 5.1.1

This is a bugfix release of Confluent Platform that provides you with Kafka 2.1.0, the latest stable version of Kafka, plus additional bug fixes.

You are encouraged to upgrade to Confluent Platform 5.1.1 as it includes important bug fixes. The technical details of this release are summarized below.


Commercial Features

Confluent CLI
  • Consult JAVA_HOME first and then PATH when checking the Java version
  • Properly set JMX_PORT for each service based on ${SERVICE}_JMX_PORT prefix
  • Reflect CCL change in code variables
Hub Client
  • commons-collections: 4-4.0 -> 4-4.2, airline*: 2.2.0 -> 2.6.0, jackson-annotations: 2.9.0 -> 2.9.6.
JMS Connector
  • Added NPE handling and info message.
  • Backport changes for CC-3639 to 4.1.x and update corresponding docs
  • Use client acknowledge mode to ack produced messages and maintain one inflight message
Connect Replicator
  • Improve message when offset translation fails
  • Fix off-by-one-error during commitSync
  • Fix the number of Replicator tasks to spawn
  • Ignore missing TPs when calling offsetsForTimes

Community Features

Kafka 2.1.0-cp2
  • PR-6215 - KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if the hostname of the broker changes. (#6215)
  • PR-6203 - KAFKA-7873: Always seek to beginning in KafkaBasedLog (#6203)
  • PR-6202 - KAFKA-7837: Ensure offline partitions are picked up as soon as possible when shrinking ISR (#6202)
  • PR-5989 - KAFKA-7693: Fix SequenceNumber overflow in producer (#5989)
  • PR-5990 - KAFKA-7692: Fix ProducerStateManager SequenceNumber overflow (#5990)
  • PR-6134 - KAFKA-7652: Part I; Fix SessionStore’s findSession(single-key) (#6134)
  • PR-6121 - KAFKA-7741: Streams exclude javax dependency (#6121)
  • PR-6070 - KAFKA-7773: Add end to end system test relying on verifiable consumer (#6070)
  • PR-6101 - KAFKA-7786: Ignore OffsetsForLeaderEpoch response if epoch changed while request in flight (#6101)
  • PR-6106 - KAFKA-7799: Fix flaky test RestServerTest.testCORSEnabled (#6106)
  • PR-6073 - KAFKA-6833: Producer should await metadata for unknown partitions (#6073)
  • PR-5470 - KAFKA-7253: The returned connector type is always null when creating connector (#5470)
  • PR-6094 - KAFKA-7768: Add version to java html urls (#6094)
  • PR-6049 - KAFKA-7755: Look up client host name since DNS entry may have changed (#6049)
  • PR-6085 - KAFKA-6928: Refactor StreamsPartitionAssignor retry logic (#6085)
  • PR-6032 - KAFKA-7734: Metrics tags should use LinkedHashMap to guarantee ordering (#6032)
  • PR-5881 - KAFKA-5503: Idempotent producer ignores shutdown while fetching ProducerId (#5881)
  • PR-6027 - KAFKA-3832: Connect’s JSON Converter never outputs a null value (#6027)
  • PR-6051 - KAFKA-7759: Disable WADL output in the Connect REST API (#6051)
  • PR-5929 - KAFKA-7655: Metadata spamming requests from Kafka Streams under some circumstances, potential DOS (#5929)
  • PR-6000 - KAFKA-7705: Fix and simplify producer config in javadoc example (#6000)
  • PR-5946 - KAFKA-7443: OffsetOutOfRangeException in restoring state store from changelog topic when start offset of local checkpoint is smaller than that of changelog topic (#5946)
  • PR-5962 - KAFKA-7610: Proactively timeout new group members if rebalance is delayed (#5962)
  • PR-5925 - KAFKA-7549: Old ProduceRequest with zstd compression does not return error to client (#5925)
  • PR-6005 - KAFKA-7709: Fix ConcurrentModificationException when retrieving expired inflight batches on multiple partitions. (#6005)
  • PR-5998 - KAFKA-7704: MaxLag.Replica metric is reported incorrectly (#5998)
  • PR-5986 - KAFKA-6388: Recover from rolling an empty segment that already exists (#5986)
  • PR-5993 - KAFKA-7678: Avoid NPE when closing the RecordCollector (#5993)
  • PR-5999 - KAFKA-7697: Process DelayedFetch without holding leaderIsrUpdateLock (#5999)
  • PR-5979 - KAFKA-7660: fix streams and Metrics memory leaks (#5979)
  • PR-5994 - KAFKA-7702: Fix matching of prefixed ACLs to match single char prefix (#5994)
  • PR-5943 - KAFKA-7389: Enable spotBugs with Java 11 and disable false positive warnings (#5943)
  • PR-5959 - KAFKA-7671: Stream-Global Table join should not reset repartition flag (#5959)
  • PR-5923 - KAFKA-7536: Initialize TopologyTestDriver with non-null topic (#5923)
JDBC Connector
  • PR-572 - CC-3638: Added logic to optionally quote identifiers in SQL statements
Schema Registry
  • PR-1008 - CC-3740: Add support for ignoring metadata during toConnectMetadata
  • PR-1003 - CC-3732: Prevent SR from registering duplicate IDs
  • PR-985 - CC-3562: Corrected reconstruction of default value from short and bytes

Confluent Platform 5.1.0

This is a major release of Confluent Platform that provides you with Kafka 2.1.0, the latest stable version of Kafka.

The technical details of this release are summarized below.


Commercial Features

Control Center
  • Improved browser support: In previous releases, Control Center only supported Chrome. Browser support in 5.1.0 has been extended to Chrome, Firefox, and Safari. For more information, see Web Browsers.
  • New theme: Control Center now uses a new, more readable light theme.

Community Features

KSQL
  • New functions for working with windowed data: KSQL now supports WindowStart() and WindowEnd() functions that expose the start and end times of windows, respectively, allowing developers to easily refer to this information in other functions and in subsequent queries. Start and end times are expressed in milliseconds, like ROWTIME.
  • Metric names have changed: See KSQL upgrade notes for details on the changes.
Kafka Streams
  • Sending a single, final output for windowed operations: Windowed operations can now be defined to send only a single, final output per window by suppressing intermediate outputs. For example, if an application computes five-minute averages of temperature readings, it can be told to send only one output record every 5 minutes that represents the latest 5-minute temperature average. This new feature makes it much easier to implement use cases such as alerting and metrics pipelines and to integrate Kafka Streams applications with systems that don’t support continuous streaming updates. Note that, by design, using this feature will increase the end-to-end processing latency of applications because no output records will be sent until a window is closed. See the section Window Final Results in the Kafka Streams documentation and KIP-328 for more information.
  • Better topology compatibility when updating an application: Repartition topics for join and aggregation operators can now be named explicitly by developers. This makes it easier to change and evolve an application while still allowing for rolling app re-deployments to prevent production outages when updating your apps. See KIP-372 for more information.
  • Topology optimization: Where possible, Kafka Streams will now merge repartitioning topics, which reduces the Kafka storage footprint of Kafka Streams applications. Users must explicitly enable these optimizations in their application’s configuration by setting the configuration StreamsConfig.TOPOLOGY_OPTIMIZATION to StreamsConfig.OPTIMIZE (default: StreamsConfig.NO_OPTIMIZATION, i.e., disabled). See the section Optimizing Kafka Streams Topologies for more detailed information.
  • Improved timestamp synchronization: The new configuration parameter StreamsConfig.MAX_TASK_IDLE_MS_CONFIG allows developers to tune how Kafka Streams picks the next record to process, based on the stream-time. It allows for trade-offs between lower processing latency (lower idle settings; the default is 0, which preserves pre-5.1 behavior) and a reduced probability of out-of-order data (higher idle settings). See KIP-353 for more information.
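The two configuration-driven features above can be sketched in a few lines of Java. This is a minimal, hedged illustration using the raw config-key strings behind StreamsConfig.TOPOLOGY_OPTIMIZATION ("topology.optimization") and StreamsConfig.MAX_TASK_IDLE_MS_CONFIG ("max.task.idle.ms"); the 100 ms idle value is an arbitrary example, and a real application would also set application.id and bootstrap.servers.

```java
import java.util.Properties;

public class StreamsTuningConfig {
    public static Properties build() {
        Properties props = new Properties();
        // Opt in to topology optimization; StreamsConfig.OPTIMIZE is the string "all"
        // (the default, StreamsConfig.NO_OPTIMIZATION, is "none").
        props.setProperty("topology.optimization", "all");
        // Let a task idle up to 100 ms waiting for data on all of its input partitions,
        // trading a little latency for fewer out-of-order records (default: 0).
        props.setProperty("max.task.idle.ms", "100");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```

These properties would then be passed to the KafkaStreams constructor alongside the application's topology.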
Kafka Clients
  • Added intuitive timeouts for producer: Adds a new configuration parameter for producers, delivery.timeout.ms, that represents a guarantee on the upper bound on when a record will either get sent, fail, or expire from the point when the send returns. This timeout shields users from workarounds that expose the internals of the producer. Notably, this setting effectively enables retries by default. Bear in mind that reordering is possible if the idempotent producer is not enabled or max.in.flight.requests.per.connection is not set to 1. See KIP-91 for more information.
  • Improved behavior for expiring committed offsets: Consumer group offsets no longer expire as long as there are active consumers. Offsets for a group now expire only after the group has been empty for the retention period defined by offsets.retention.minutes; if the group becomes active again before then, its offsets are retained. See KIP-211 for more information.
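As a hedged sketch of the new producer timeout described above (configuration only, using the raw key strings; 120000 ms matches the documented default for delivery.timeout.ms, and pinning max.in.flight.requests.per.connection to 1 is just one way to guard against reordering when idempotence is not enabled):

```java
import java.util.Properties;

public class ProducerTimeoutConfig {
    public static Properties build() {
        Properties props = new Properties();
        // KIP-91: upper bound, from the moment send() returns, on when a record
        // is either delivered, fails, or expires (2 minutes here).
        props.setProperty("delivery.timeout.ms", "120000");
        // With retries effectively enabled by default, keeping one request in flight
        // per connection preserves ordering when the idempotent producer is off.
        props.setProperty("max.in.flight.requests.per.connection", "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```

A real producer would add bootstrap.servers and key/value serializers before constructing KafkaProducer with these properties.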
Confluent Clients

Confluent provides high-quality clients for the most popular non-Java languages. These clients are all based on the underlying librdkafka C/C++ implementation, which provides high performance and stability. The non-Java clients follow their own versioning scheme. The current clients release is 0.11.6.

Highlights from 0.11.6:

  • librdkafka
    • Introduces stability fixes and performance improvements for Windows. #1930, #1980
    • Fixes several producer issues pertaining to queuing, flushing, and timeouts.
    • Fixes consumer inadvertently raising error about message size. #1472
  • Python client
    • Adds experimental Windows support to the Python client.
    • Includes Confluent C3 monitoring interceptors with the Python client.
Availability and Resilience
  • Improved unclean log truncation handling: KIP-320 improves consumer handling of unclean log truncation and introduces fencing of zombie replicas. This ensures consistency of replicated data under certain failure conditions.
  • ZStandard compression: Kafka now supports ZStandard compression. ZStandard is a real-time compression algorithm developed by Facebook with features that make it highly performant and effective for small data. Kafka now accepts “zstd” as a new compression type for configuring producers, topics, and brokers. See KIP-110 for more information.
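Enabling the new codec on the producer side is a one-line setting. The sketch below is illustrative only, using the raw key string; a real producer would also need bootstrap.servers and serializers, and the same "zstd" value can be applied to topic and broker configs.

```java
import java.util.Properties;

public class ZstdCompressionConfig {
    public static Properties build() {
        Properties props = new Properties();
        // "zstd" joins the existing "none", "gzip", "snappy", and "lz4" values (KIP-110).
        props.setProperty("compression.type", "zstd");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("compression.type"));
    }
}
```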
Kafka 2.1.0-cp1

28 new features and configuration changes have been added, along with 161 resolved bug fixes and improvements. For a full list of changes in this release of Kafka, see the 2.1.0 Release Notes.

How to Download

Confluent Platform is available for download. See the On-Premises Deployments section for detailed information.

To upgrade Confluent Platform to a newer version, check the Upgrade documentation.

Supported Versions and Interoperability

For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.


If you have questions regarding this release, feel free to reach out via the community mailing list or community Slack. Confluent customers are encouraged to contact our support directly.