Release Notes for Confluent Platform 8.0

8.0 is a major release of Confluent Platform that provides you with Apache Kafka® 4.0, the latest stable version of Kafka.

The technical details of this release are summarized below.

Kafka brokers

Confluent Platform 8.0 features Kafka 4.0.

There are many changes in this version of Confluent Platform. Before you upgrade to Confluent Platform 8.0, you should review Upgrade Confluent Platform and the Kafka 4.0 upgrade guide. These guides provide detailed, step-by-step instructions for performing the upgrade, considerations for rolling upgrades, and crucial information about potential breaking changes or compatibility issues that may arise during the upgrade process.

Confluent Community software / Kafka

In Confluent Platform 8.0, new features include:

KRaft mode general availability: Kafka 4.0 marks a significant architectural shift with the General Availability of KRaft. KRaft replaces Apache ZooKeeper as the default and only metadata management system for Kafka clusters, integrating metadata management directly within Kafka brokers. For more information, see KRaft Overview for Confluent Platform.

Next generation Consumer Rebalance Protocol general availability (KIP-848): The new consumer rebalance protocol is now generally available. This new protocol improves the stability and performance of consumer groups during rebalances, reducing “stop-the-world” scenarios and helping improve overall responsiveness by centralizing partition assignment logic in the broker. The protocol is enabled by default on the server-side. Clients must explicitly opt-in by setting group.protocol=consumer in their consumer configurations.
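
For example, a Java consumer opts in with a single configuration change. The following is a minimal sketch; the bootstrap address, group ID, and topic name are placeholders:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NewProtocolConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // placeholder address
        props.put("group.id", "orders-app");                  // placeholder group ID
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // Opt in to the KIP-848 rebalance protocol; the default is the classic protocol.
        props.put("group.protocol", "consumer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));             // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.printf("%s=%s%n", r.key(), r.value()));
        }
    }
}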

New consumer and share group types (KIP-1043 and KIP-1099): These changes enable users to work with the new consumer and share group types. Because the Admin Client API has limitations with the newer group types, Kafka 4.0 introduces the kafka-groups.sh command-line tool and updates kafka-consumer-groups.sh and kafka-share-groups.sh. These tools let users accurately view all groups in a cluster, along with their types and protocols.

Queues for Kafka Early Access (KIP-932): This KIP introduces share groups, which enable queue-like semantics for cooperative message consumption within a topic. Multiple consumers in a share group can collaboratively process messages from the same partition, supporting per-message acknowledgement and consumer parallelism beyond the partition count. To enable this feature for testing, you must add the following undocumented configurations to the broker properties file:

  • unstable.api.versions.enable=true
  • group.coordinator.rebalance.protocols=classic,consumer,share

If you are using a Docker image to start the broker, you must add the following configuration values to enable share groups:

  • KAFKA_GROUP_COORDINATOR_REBALANCE_PROTOCOLS: 'classic,consumer,share'
  • KAFKA_UNSTABLE_API_VERSIONS_ENABLE: true

You must also pass in the following additional environment variables (for a single-broker environment):

  • KAFKA_SHARE_COORDINATOR_STATE_TOPIC_REPLICATION_FACTOR: 1
  • KAFKA_SHARE_COORDINATOR_STATE_TOPIC_MIN_ISR: 1

A new KafkaShareConsumer client API and kafka-console-share-consumer.sh tool are also available for experimentation.
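
The following is a minimal sketch of what a share-group consumer might look like with the early-access KafkaShareConsumer API, based on KIP-932. The bootstrap address, group name, and topic are placeholders, and the exact method signatures may change while the feature is in early access:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ShareGroupWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // placeholder address
        props.put("group.id", "orders-share-group");           // share group name (placeholder)
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        // KafkaShareConsumer is early access; the API may change in later releases.
        try (KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));              // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.printf("processed %s%n", r.value()));
                // Records delivered in this batch are acknowledged when the commit completes.
                consumer.commitSync();
            }
        }
    }
}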

Important

This feature is in early access and not recommended for production use. You should only try it out in clusters specifically created for testing purposes. If you use share groups in Kafka 4.0, your broker cannot be upgraded to Kafka 4.1 due to evolving record formats.

Transactions Server-Side Defense (Phase 2) (KIP-890): Completes the second phase of strengthening the transaction protocol in Kafka, further reducing the chances of “zombie transactions” during producer failures.

Eligible Leader Replicas (Part 1) (KIP-966): Introduces the concept of Eligible Leader Replicas (ELR) as a preview feature. ELRs are a subset of in-sync replicas (ISRs) guaranteed to have complete data up to the high-watermark, ensuring safer leader elections and preventing potential data loss during failovers.

Pre-Vote (KIP-996): To enhance the stability of KRaft-based Kafka clusters, this KIP introduces a Pre-Vote mechanism. This allows controller nodes to check their eligibility for leadership before initiating an election, minimizing disruptions caused by transient network issues or other temporary problems.

Introduce Delayed Remote List Offsets Purgatory (KIP-1075): With the introduction of Tiered Storage, handling LIST_OFFSETS requests might require brokers to read indexes from remote storage. This KIP introduces a delayed purgatory to make these operations asynchronous, helping improve broker responsiveness under high load.

Java Version Requirements: Users must ensure their environments meet these requirements before upgrading.

  • Confluent Platform now supports Java 21.
  • Kafka Brokers, Kafka Connect, and Kafka tools now require a minimum of Java 17.
  • Kafka Clients and Kafka Streams now require a minimum of Java 11.
  • Kafka 4.0 has upgraded to Jakarta and JavaEE 10 (KIP-1032), which may affect custom integrations or extensions.

Config Defaults Changes (KIP-1030): Several default property settings changed in Kafka 4.0.

  • segment.bytes and log.segment.bytes: Default unchanged; the minimum value changed from 14 bytes to 1 MB.
  • num.recovery.threads.per.data.dir: Default increased from 1 to 2
  • linger.ms: Default increased from 0 to 5 milliseconds.
  • message.timestamp.after.max.ms and log.message.timestamp.after.max.ms: Default changed to 1 hour from unlimited.
  • remote.log.manager.copier.thread.pool.size: Default changed from -1 to 10, and -1 was removed from the allowed values for the configuration.
  • remote.log.manager.expiration.thread.pool.size: Default changed from -1 to 10, and -1 was removed from the allowed values for the configuration.
  • remote.log.manager.thread.pool.size: Changed default from 10 to 2.

You must validate how the new default settings affect your existing configurations and any new clusters you create before running them, including clusters that are created automatically, such as those in CI/CD pipelines. If you explicitly set any of the properties that now have new defaults, verify that your values are still valid.
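
One way to check what a broker is actually running with is to query its live configuration, and the source of each value, through the Admin API. The following sketch assumes a reachable broker at a placeholder address and a broker ID of 1:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class DefaultsAudit {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // placeholder address

        try (Admin admin = Admin.create(props)) {
            // Broker ID "1" is a placeholder; check each broker in your cluster.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            Map<ConfigResource, Config> configs =
                admin.describeConfigs(List.of(broker)).all().get();

            // Properties whose defaults changed in Kafka 4.0 (KIP-1030).
            List<String> changed = List.of(
                "num.recovery.threads.per.data.dir",
                "log.message.timestamp.after.max.ms",
                "remote.log.manager.copier.thread.pool.size",
                "remote.log.manager.expiration.thread.pool.size",
                "remote.log.manager.thread.pool.size");
            configs.get(broker).entries().stream()
                .filter(e -> changed.contains(e.name()))
                .forEach(e -> System.out.printf("%s = %s (source: %s)%n",
                    e.name(), e.value(), e.source()));
        }
    }
}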

Old Client API Support Removed (KIP-896): Support for client API versions older than 2.1 has been removed. Client applications using older Kafka client libraries (including Java and non-Java clients with equivalent API versions) must upgrade to version 2.1 or newer to maintain compatibility with Kafka 4.0 brokers.

Log4j 2 Upgrade (KIP-653): Kafka now uses Log4j 2 as its logging framework. The Apache log4j-transform-cli tool provides automatic conversion of existing Log4j configuration files.

JmxReporter changes (KIP-830): auto.include.jmx.reporter has been removed. The metric.reporters configuration now defaults to org.apache.kafka.common.metrics.JmxReporter.
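
A practical consequence: if you override metric.reporters, JmxReporter is no longer added automatically, so list it explicitly to keep JMX metrics. A minimal sketch, where com.example.MyReporter is a hypothetical custom reporter:

import java.util.Properties;

public class ReporterProps {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // placeholder address
        // auto.include.jmx.reporter is gone; when overriding metric.reporters,
        // include JmxReporter explicitly alongside any custom reporter.
        props.put("metric.reporters",
            "org.apache.kafka.common.metrics.JmxReporter,com.example.MyReporter"); // MyReporter is hypothetical
        System.out.println(props.getProperty("metric.reporters"));
    }
}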

Support changes: Effective with Confluent Platform 8.0, Confluent Community software has transitioned to follow the Kafka release cycle more closely. The Kafka community provides about one year of patch support for a Kafka version, from the minor version release date, and Confluent Community software now follows a similar support schedule. Confluent customers using Confluent Enterprise will continue to receive patch updates for three years following the minor version release. For more details, see Supported Versions and Interoperability for Confluent Platform.

For a full list of the KIPs, features, and bug fixes, see the Apache Kafka release notes and Introducing Apache Kafka 4.0 on the Confluent blog.

Clients

Non-Java Clients

  • librdkafka v2.10.0 release: KIP-848 support has transitioned from Early Access to Preview. For more information, see the librdkafka release notes.
  • Go, Python, .NET, and JavaScript clients: These clients have been updated to v2.10.0, v2.10.0, v2.10.0, and v1.3.0, respectively, to include support for KIP-848.

Improvements

  • Duration Based Offset Reset Option (KIP-1106): The auto.offset.reset consumer configuration now supports specifying a time duration (e.g., “30d”) to reset offsets to the earliest offset within that duration; see the configuration sketch after this list.
  • Client-Generated IDs for Consumer Heartbeats Required (KIP-1082): Consumer clients are now required to include a unique, client-generated ID with each heartbeat request for consumer group management. Standard Kafka client libraries will handle this automatically, but custom client implementations need to comply with this new requirement.
  • Client Rebootstrap on Timeout or Error Code (KIP-1102): Clients can now initiate a rebootstrap process not only when bootstrap servers are unreachable but also upon connection timeouts or receiving specific error codes.
  • Return Fenced Brokers in DescribeCluster Response (KIP-1073): The DescribeCluster Admin API now includes information about brokers that are temporarily “fenced” in KRaft clusters.
  • Replication of User Internal Topics Allowed (KIP-1074): MirrorMaker 2 can now be configured to replicate topics ending with .internal or -internal, which were previously excluded by default.
  • Disabling Heartbeats Replication in MirrorSourceConnector (KIP-1089): A new configuration option allows disabling the replication of heartbeat topics in the MirrorSourceConnector.
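
For the duration-based reset option, a consumer configuration might look like the following sketch. The by_duration: prefix with an ISO-8601 duration reflects our reading of KIP-1106 and is an assumption here; confirm the exact syntax against the consumer configuration reference:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DurationResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // placeholder address
        props.put("group.id", "replay-last-month");            // placeholder group ID
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // KIP-1106: when no committed offset exists, reset to the earliest offset within
        // the given look-back window (assumed syntax: by_duration:<ISO-8601 duration>).
        props.put("auto.offset.reset", "by_duration:P30D");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribe and poll as usual; the reset policy applies on first assignment.
        }
    }
}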

Deprecations

The next major release, Kafka 5.0, will officially remove all deprecated components. Until Kafka 5.0, a warning will be logged when these components are used.

  • delete-config of TopicCommand Deprecated (KIP-1079): The delete-config option in the kafka-topics.sh CLI tool is now deprecated. Use --alter --delete-config with kafka-configs.sh or the Admin API to remove topic configurations.

Removals

The following previously deprecated components have been removed in Kafka 4.0.

  • JMS Client removed: This client has been removed per its deprecation notice.
  • offsets.commit.required.acks removed (KIP-1041): This configuration option, deprecated in Kafka 3.8, has been removed. If you explicitly set it, review your configurations before upgrading.

Confluent Control Center

Starting with the Confluent Platform 8.0 release, the Confluent Control Center packages are a separate download, hosted in the confluent-control-center-next-gen repository. The new and improved Control Center starts with version 2.0 and ships independently of Confluent Platform releases.

For support plans and compatibility, see Control Center Compatibility. Customers using Control Center (Legacy) packaged with Confluent Platform 7.9 or earlier are advised to migrate to the new version of Confluent Control Center for better performance.

This change does not affect the functionality or support of Control Center versions packaged with Confluent Platform 7.9 or earlier.

For more information, see the following blog post: Introducing the Next Generation of Control Center.

Cluster Management

Confluent for Kubernetes

For Confluent for Kubernetes release notes, see Confluent for Kubernetes Release Notes.

Ansible Playbooks for Confluent Platform

Starting with Confluent Platform 8.0, Ansible Playbooks for Confluent Platform follows an independent release cadence, outside of Confluent Platform, and adds support for multiple Confluent Platform versions.

For Ansible Playbooks for Confluent Platform release notes, see the Ansible Playbooks for Confluent Platform.

Kafka Streams

Kafka Streams has the following changes in Confluent Platform 8.0:

Improvements

  • Metrics for Client Applications (KIP-1076): Kafka Streams applications can now register their own custom metrics alongside standard Kafka client metrics, providing more comprehensive application monitoring.
  • Improved Kafka Streams Operator Metrics (KIP-1091): New metrics provide better visibility into the runtime state of Kafka Streams applications, including state metrics for StreamThread and the client instance.
  • Allow Foreign Key Extraction from Both Key and Value in KTable Joins (KIP-1104): The KTable join API now allows extracting the foreign key using a BiFunction that can access both the key and the value of the primary record; see the sketch after this list.
  • Allow Custom Processor Wrapping (KIP-1112): Introduces a new ProcessorWrapper interface that simplifies applying cross-cutting logic, such as logging, monitoring, or security checks, to multiple processors within a Kafka Streams topology.
  • “retry” Return-Option to ProductionExceptionHandler (KIP-1065): The ProductionExceptionHandler now supports a RETRY option, allowing users to implement custom error handling logic for production exceptions.
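
For the foreign-key extraction change (KIP-1104), the following sketch assumes the new join overload that accepts a BiFunction; the topic names and the "customerId|amount" value format are placeholders:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;

public class FkJoinSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Orders keyed by order ID; the value encodes "customerId|amount" (placeholder format).
        KTable<String, String> orders =
            builder.table("orders", Consumed.with(Serdes.String(), Serdes.String()));
        // Customers keyed by customer ID.
        KTable<String, String> customers =
            builder.table("customers", Consumed.with(Serdes.String(), Serdes.String()));

        // KIP-1104: the foreign key can be derived from both the key and the value of the
        // primary record via a BiFunction, instead of from the value alone.
        KTable<String, String> enriched = orders.join(
            customers,
            (orderId, orderValue) -> orderValue.split("\\|")[0],            // extract customer ID
            (orderValue, customerValue) -> orderValue + " -> " + customerValue);
        enriched.toStream().to("orders-enriched");

        System.out.println(builder.build().describe());
    }
}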

Code Hygiene

  • Leaking Getter Methods in Joined Helper Class Removed (KIP-1078): Internal getter methods in the Joined class have been removed as part of internal API cleanup.
  • Leaking *_DOC Variables in StreamsConfig Fixed (KIP-1085): An internal issue causing documentation variables to leak in StreamsConfig has been resolved.

Deprecations

The next major release, Kafka 5.0, will officially remove all deprecated components. Until Kafka 5.0, a warning will be logged when these components are used.

  • MockProcessorContext Deprecated (KIP-1070): The MockProcessorContext utility for testing Kafka Streams topologies is now deprecated. Users should explore alternative testing methods.
  • ForeachProcessor Deprecated and Moved (KIP-1077): The ForeachProcessor interface has been deprecated and moved to an internal package. Use the foreach method on KStream, KTable, and GlobalKTable instead.
  • intermediateTopicsOption Removed from StreamsResetter (KIP-1087): This option has been removed from the StreamsResetter tool.
  • default. Prefix Removed for Exception Handler StreamsConfig (KIP-1056): The default. prefix has been removed from the configuration properties for custom exception handlers in Kafka Streams.
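
For the last item above (KIP-1056), the following sketch sets a handler property without the default. prefix. The un-prefixed property name reflects our reading of the KIP, so verify it against the Kafka Streams configuration reference:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

public class HandlerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");     // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        // Before KIP-1056: "default.deserialization.exception.handler"
        // After KIP-1056: the same property without the "default." prefix.
        props.put("deserialization.exception.handler",
            LogAndContinueExceptionHandler.class.getName());
        System.out.println(props);
    }
}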

ksqlDB

ksqlDB has the following changes in Confluent Platform 8.0:

  • KafkaAppender upgraded to log4j2
  • Java 21 support added
  • Antlr upgraded to 4.10.1
  • Bug and CVE fixes

Kafka Connect

Select deprecations

In line with the support policy for self-managed connectors, effective upon the release of Confluent Platform 8.0, the following changes will be made:

Schema Registry/Governance

Client-side field level encryption (CSFLE) is now generally available in Confluent Platform 8.0. CSFLE offers an additional security layer to protect sensitive data, ensuring the data remains safeguarded throughout its lifecycle, in motion and across all producers and consumers. This feature enhances data security in transit and helps meet stringent compliance and data protection requirements. To enable this feature, customers must procure an enterprise license and provide the license details through the new property in the Schema Registry properties file. For more information, see Protect Sensitive Data Using Client-Side Field Level Encryption on Confluent Platform.

REST Proxy

  • Jetty upgrade: An upgrade from Jetty 9 to Jetty 12 means that you must include Server Name Indication (SNI) headers for all REST calls. To disable this check, set confluent.http.server.sni.host.check.enabled to false in the properties file. For more information, see Admin REST APIs Configuration Options for Confluent Server.
  • New configuration to enable null request body: Starting with Confluent Platform 8.0, a null request body is allowed only if the schema allows null values or the schema itself is null. Previously, all null request bodies were treated as empty records. To enable the old behavior, add null.request.body.always.publishes.empty.record=true to the properties file. For more information, see Admin REST APIs Configuration Options for Confluent Server.

Telemetry Reporter and Confluent Metrics Reporter

  • Configuration deprecations: The confluent.telemetry.metrics.collector.whitelist and confluent.metrics.reporter.whitelist configuration properties are deprecated. Use confluent.telemetry.metrics.collector.include and confluent.metrics.reporter.include instead. A restart of the component is required after making the configuration changes.
  • Monitoring Interceptors removed: Effective with Confluent Platform 8.0, monitoring interceptors are removed.

Security

  • Brownfield deployment for using mTLS identities with RBAC authorization is now generally available. Both Confluent for Kubernetes and Ansible Playbooks for Confluent Platform support the upgrade. For more information, see Migrate from LDAP to mutual TLS (mTLS) authentication in an RBAC-enabled Confluent Platform Cluster.
  • Passwordless authentication is now generally available for Confluent Server and Schema Registry. This feature builds on the Confluent Platform OAuth client credentials grant type, using a pre-signed assertion rather than a static client credential. Confluent for Kubernetes and Ansible Playbooks for Confluent Platform can be used to deploy Confluent Server and Schema Registry with passwordless authentication.

Confluent Platform Docker Images

The Confluent Platform Docker images have the following changes:

  • The ubi9-minimal image is now used instead of ubi8-minimal as the base image for all Confluent Platform images.
  • A new Confluent base Docker image, cp-base-java, is being introduced and is used as the base image for the cp-ksqldb-server Docker image. This image uses the JRE rather than the JDK. In addition, it does not contain the following packages, which are currently installed as part of cp-base-new:
    • wget
    • nmap-ncat
    • python3
    • python3-pip
    • tar
    • procps-ng
    • krb5-workstation
    • iputils
    • xz-libs
    • glibc
    • glibc-common

For more information, see Docker Image Reference for Confluent Platform.

Other improvements and changes

Confluent CLI changes:

  • The Confluent CLI for Confluent Platform 8.0 adds mTLS support for login, MDS, and Schema Registry commands. New CONFLUENT_PLATFORM_CLIENT_CERT_PATH and CONFLUENT_PLATFORM_CLIENT_KEY_PATH mTLS environment variables were also added.
  • The Confluent CLI now reliably refreshes the OAuth authentication token for long-running commands.

Supported versions and interoperability

Support for Java 8 has been removed in Confluent Platform 8.0. Java 11 is now supported only for Kafka Streams and Kafka clients. Brokers, Kafka Connect, and Kafka tools now require a minimum of Java 17.

Effective with Confluent Platform 8.0, Confluent Community software has transitioned to follow the Kafka release cycle more closely. The Kafka community provides about one year of patch support for a Kafka version, from the minor version release date, and Confluent Community software now follows a similar support schedule. Confluent customers using Confluent Enterprise will continue to receive patch updates for three years following the minor version release. For more details, see Confluent Platform and Apache Kafka compatibility.

For the full list of supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability for Confluent Platform.

How to download

Confluent Platform is available for download at https://confluent.io/download/. See the Install Confluent Platform On-Premises section for detailed information.

Important

The Confluent Platform package includes Confluent Server by default and requires a confluent.license key in your server.properties file. The Confluent Server broker checks for a license during start up. You must supply a license string in each broker’s properties file using the confluent.license property as shown in the following code:

confluent.license=LICENCE_STRING_HERE_NO_QUOTES

If you want to use the Kafka broker, download the confluent-community package. The Kafka broker is the default in all Debian, RHEL, and CentOS packages.

For more information about migrating to Confluent Server, see Migrate Confluent Platform to Confluent Server.

To upgrade Confluent Platform to a newer version, check the Upgrade Confluent Platform documentation.

Questions?

  • If you have questions regarding this release, feel free to reach out via the community mailing list or community Slack. Confluent customers are encouraged to contact our support directly.
  • To provide feedback on the Confluent documentation, click the Give us feedback button located near the footer of each page.