Release Notes for Confluent Platform 8.1¶
8.1 is a major release of Confluent Platform that provides you with Apache Kafka® 4.1, the latest stable version of Kafka.
The technical details of this release are summarized below.
For more information about the Confluent Platform 8.1 release, check out the release blog.
Ready to build?
Sign up for Confluent Cloud with your cloud marketplace account and unlock $1000 in free credits: AWS Marketplace, Google Cloud Marketplace, or Microsoft Azure Marketplace.
Kafka brokers¶
Confluent Platform 8.1 features Kafka 4.1.
There are many changes in this version of Confluent Platform. Before you upgrade to Confluent Platform 8.1, you should review Upgrade Confluent Platform and the Kafka 4.1 upgrade guide. These guides provide detailed, step-by-step instructions for performing the upgrade, considerations for rolling upgrades, and crucial information about potential breaking changes or compatibility issues that may arise during the upgrade process.
Confluent Server¶
This release includes the following updates for Confluent Server.
- Migration from confluent-server to confluent-kafka in KRaft mode: You can now migrate your deployment from the `confluent-server` package to the `confluent-kafka` package. For detailed instructions, see the Confluent Kafka migration guide.
- Intelligent Replication for Confluent Private Cloud: Intelligent Replication, which was previously exclusive to Confluent Cloud, is now available for self-managed customers through Confluent Private Cloud. This feature delivers up to a 10x performance improvement for large-scale Kafka workloads.
It consists of two main components:
- A new replication implementation that complements Kafka’s existing replication protocol.
- An intelligent algorithm that monitors cluster activity and dynamically switches between replication implementations to maximize performance and ensure replica consistency.
Confluent Community software / Kafka¶
New features in Confluent Platform 8.1 include the following:
KIP-932 Queues for Kafka: Queues for Kafka is now available as a preview. You can test this feature, and preview clusters can be upgraded to production-ready clusters when Queues becomes generally available. Note that some features, such as the partition assignor, are still in development. For configuration and testing details, see the Apache Queues for Kafka Preview documentation.
The preview includes KIP-1103, which adds metrics for Share Groups.
To enable share groups, set `share.version=1` by using the `kafka-features.sh` tool:
bin/kafka-features.sh --bootstrap-server <bootstrap URL> upgrade --feature share.version=1
To provide feedback or to discuss the Queues for Kafka preview, contact Confluent.
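As a sketch of what a preview client configuration might look like, the following builds minimal properties for a consumer that joins a share group. The broker must already have `share.version=1` enabled, as shown above. The bootstrap address and group name are illustrative placeholders, and the `KafkaShareConsumer` client class mentioned in the comment is an assumption about the preview API, not something stated in this document:

```java
import java.util.Properties;

public class ShareGroupConfig {
    // Build a minimal client configuration for the Queues for Kafka preview.
    // A preview share consumer (for example, KafkaShareConsumer -- an assumed
    // class name) would be constructed with properties like these.
    public static Properties shareConsumerConfig(String bootstrap, String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        // With share groups, group.id names a share group rather than a classic
        // consumer group; records in one partition can be distributed across
        // multiple members of the same share group.
        props.put("group.id", groupId);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        Properties p = shareConsumerConfig("localhost:9092", "my-share-group");
        System.out.println(p.getProperty("group.id"));
    }
}
```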
KIP-853 KRaft Controller Membership Changes: You can now upgrade KRaft voters from a static to a dynamic configuration.
KIP-1166 Improve high-watermark replication: This KIP fixes issues where pending fetch requests could fail to complete, which previously impacted high-watermark progression.
KIP-890 Transactions Server-Side Defense: To prevent an infinite “out-of-order sequence” error, idempotent producers now reject non-zero sequences when no producer ID state exists on the partition for the transaction.
KIP-848 Consumer Group Protocol: This KIP includes several enhancements, such as a new rack-aware assignor (KIP-1101). The new assignor makes rack-aware partition assignment significantly more memory-efficient, which supports hundreds of members in a single consumer group.
KIP-1131 Improved controller-side metrics for monitoring broker states: This KIP adds new controller-side metrics to improve monitoring of broker health and status.
KIP-1109 Unifying Kafka consumer topic metrics: Kafka consumer metrics now preserve periods (`.`) in topic names instead of replacing them with underscores (`_`). This change aligns their behavior with producer metrics. The old metrics that use underscores in topic names will be removed in a future release.
KIP-1118 Add deadlock protection on the producer network thread: Calling `KafkaProducer.flush()` from within the `KafkaProducer.send()` callback now raises an exception to prevent a potential deadlock in the producer.
KIP-1143 Deprecate Optional<String> and return String from public Endpoint: The `Endpoint.listenerName()` method that returns `Optional<String>` is now deprecated. You should update your code to use the new method that returns a `String`.
KIP-1152 Add transactional ID pattern filter to ListTransactions API: You can now filter transactions by a transactional ID pattern when using the `kafka-transactions.sh` tool or the Admin API. This feature avoids the need to retrieve all transactions and filter them on the client side.
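The KIP-1109 metric-naming change is easy to illustrate: the legacy consumer metrics mangled topic names by substituting underscores, while the unified metrics keep the topic name intact, matching producer metrics. The helper names below are hypothetical and exist only to show the two behaviors side by side:

```java
public class TopicMetricNames {
    // Old consumer behavior (pre-KIP-1109): periods in topic names were
    // replaced with underscores when building metric tags.
    public static String legacyMetricTopic(String topic) {
        return topic.replace('.', '_');
    }

    // New behavior: the topic name is preserved as-is, aligning consumer
    // metrics with producer metrics.
    public static String unifiedMetricTopic(String topic) {
        return topic;
    }

    public static void main(String[] args) {
        System.out.println(legacyMetricTopic("orders.eu.v1"));  // orders_eu_v1
        System.out.println(unifiedMetricTopic("orders.eu.v1")); // orders.eu.v1
    }
}
```

Dashboards or alerts that match on the underscore form should be migrated, since the old metrics will be removed in a future release.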
For a full list of KIPs, features, and bug fixes, see the Apache Kafka 4.1 release notes.
Clients¶
This release updates client libraries with new authentication methods, improved error handling, and more flexible APIs.
- KIP-1139 OAuth support enhancements: Kafka clients now support the `jwt-bearer` grant type for OAuth, in addition to `client_credentials`. This grant type is supported by many identity providers and avoids the need to store secrets in client configuration files.
- KIP-877 Register metrics for plugins and connectors: With KIP-877, your client-side plugins can implement the `Monitorable` interface to register their own metrics. Tags that identify the plugin are automatically injected, and the metrics use the `kafka.CLIENT:type=plugins` naming convention, where `CLIENT` is either `producer`, `consumer`, or `admin`.
- KIP-1050 Consistent error handling for transactions: KIP-1050 groups all transactional errors into five distinct types to ensure consistent error handling across all client SDKs and Producer APIs. The five types are as follows:
  - Retriable: retry only
  - Retriable: refresh metadata and retry
  - Abortable
  - Application-Recoverable
  - Invalid-Configuration
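As an illustration only (the actual exception classes and category names exposed by the client API may differ), the five KIP-1050 categories can be modeled as an enum that a transactional application branches on, instead of handling dozens of individual error codes:

```java
public class TransactionErrorCategories {
    // Illustrative grouping of the five KIP-1050 error types; names are
    // taken from the release notes, not from the client API.
    public enum Category {
        RETRIABLE_RETRY_ONLY,
        RETRIABLE_REFRESH_METADATA_AND_RETRY,
        ABORTABLE,
        APPLICATION_RECOVERABLE,
        INVALID_CONFIGURATION
    }

    // A handler branches on the category rather than on individual errors.
    public static String handle(Category c) {
        switch (c) {
            case RETRIABLE_RETRY_ONLY:
                return "retry";
            case RETRIABLE_REFRESH_METADATA_AND_RETRY:
                return "refresh-and-retry";
            case ABORTABLE:
                return "abort-transaction";
            case APPLICATION_RECOVERABLE:
                return "application-recovery";
            default:
                return "fix-configuration";
        }
    }

    public static void main(String[] args) {
        System.out.println(Category.values().length);  // 5
        System.out.println(handle(Category.ABORTABLE)); // abort-transaction
    }
}
```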
- KIP-1092 Extend Consumer#close for Kafka Streams: KIP-1092 adds a new `Consumer.close(CloseOptions)` method. This new method lets Kafka Streams control whether a consumer explicitly leaves its group on shutdown, which gives you finer control over rebalances. The `Consumer.close(Duration)` method is now deprecated.
- KIP-1142 List non-existent groups with dynamic configurations: KIP-1142 enables the `ListConfigResources` API to retrieve configurations for non-existent consumer groups that have dynamic configurations defined.
- UAMI support: You can now configure your client to use an Azure User-Assigned Managed Identity (UAMI) to authenticate with an IdP like Microsoft Entra ID. This feature uses Azure's native identity management to fetch tokens automatically, which eliminates the need to manage static client IDs and secrets. If you are a Java client user, import the latest `kafka-client-plugin` Maven artifact. If you use Confluent-provided non-Java clients, you can use this feature with the latest version of your client.
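The KIP-877 naming convention described in the list above can be sketched as a small helper that derives the metrics group name from the client type. The helper itself is hypothetical, but the resulting names follow the `kafka.CLIENT:type=plugins` pattern stated in these release notes:

```java
public class PluginMetricNames {
    // Builds the metrics group name for a client plugin, following the
    // kafka.CLIENT:type=plugins convention from KIP-877, where CLIENT is
    // one of producer, consumer, or admin.
    public static String pluginMetricsGroup(String clientType) {
        if (!clientType.equals("producer") && !clientType.equals("consumer")
                && !clientType.equals("admin")) {
            throw new IllegalArgumentException("unknown client type: " + clientType);
        }
        return "kafka." + clientType + ":type=plugins";
    }

    public static void main(String[] args) {
        System.out.println(pluginMetricsGroup("producer")); // kafka.producer:type=plugins
    }
}
```

Plugin-identifying tags are injected automatically by the runtime, so a plugin that implements `Monitorable` only supplies its own metric names.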
Cluster Management¶
Confluent for Kubernetes (formerly Confluent Operator)¶
For Confluent for Kubernetes release notes, see Confluent for Kubernetes Release Notes.
Ansible Playbooks for Confluent Platform¶
For Ansible Playbooks for Confluent Platform release notes, see the Ansible Playbooks for Confluent Platform Release Notes.
Confluent Control Center¶
Starting with Confluent Platform 8.0, Confluent Control Center packages are hosted in a separate repository under the name `confluent-control-center-next-gen`, beginning with version 2.0. Control Center is now shipped independently of Confluent Platform releases. For more information, see the support plans and compatibility documentation and the Control Center Next-Gen Release Notes.
Kafka Streams¶
Kafka Streams includes the following changes in Confluent Platform 8.1.
KIP-1111 Enforcing explicit naming for Kafka Streams internal topics: The new `ensure.explicit.internal.resource.naming` configuration enforces explicit naming of all internal topics. This configuration makes internal topic names predictable and lets you alter a topology while preserving existing state topics.
KIP-1020 Configuration relocation: The `window.size.ms` and `windowed.inner.class.serde` configurations are now part of the `TimeWindowedDe/Serializer` and `SessionWindowedDe/Serializer` classes instead of `StreamsConfig`.
KIP-1071 Streams rebalance protocol - Early Access: Confluent Platform 8.1 includes an early access version of the new Streams Rebalance Protocol, a broker-driven rebalancing system for Kafka Streams. This version is not intended for production use.
To enable the protocol, set `group.protocol=streams` in the Kafka Streams client configuration. This setting requires Confluent Platform 8.1 or later brokers with unstable feature flags enabled (`unstable.feature.versions.enable=true`).
This protocol doesn't support migration from the classic consumer protocol. You must configure applications that use this protocol with a new `application.id` that has not been used previously.
A new command-line tool, `kafka-streams-groups.sh`, is available to manage streams groups. You now configure settings, such as `num.standby.replicas`, on the broker using the `kafka-configs.sh` tool.
To roll back an application that uses the new streams protocol, do the following:
1. Delete the streams group by using the `kafka-streams-groups.sh` tool:
kafka-streams-groups.sh --delete --group <application.id>
2. Remove the `group.protocol=streams` configuration from your application.
3. Switch your application back to its original `application.id`.
Important
The early access version doesn't yet support static membership (`instance.id`), pattern-based topic subscriptions, CLI offset resets, or the High Availability assignor. For more details, see the Apache Kafka Streams Upgrade Guide.
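As a sketch under the constraints above (a broker with `unstable.feature.versions.enable=true`, and an `application.id` that has never been used with the classic protocol), a minimal early-access client configuration might look like the following. The bootstrap address and application ID are placeholders:

```java
import java.util.Properties;

public class StreamsProtocolConfig {
    // Minimal client-side settings for the KIP-1071 early-access protocol.
    // application.id must be one that has never been used with the classic
    // protocol, because migration is not supported.
    public static Properties earlyAccessStreamsConfig(String bootstrap, String newAppId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("application.id", newAppId);
        // Opt in to the broker-driven streams rebalance protocol.
        props.put("group.protocol", "streams");
        return props;
    }

    public static void main(String[] args) {
        Properties p = earlyAccessStreamsConfig("localhost:9092", "orders-app-v2");
        System.out.println(p.getProperty("group.protocol")); // streams
    }
}
```

Broker-side settings such as `num.standby.replicas` are no longer set here; configure them on the broker with the `kafka-configs.sh` tool.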
REST Proxy¶
To receive Enterprise support for REST Proxy, you must install the `confluent-security` package on all REST Proxy nodes and configure a valid Confluent Enterprise license key. For more information, see Other improvements and changes.
Schema Registry¶
If you use a Confluent Enterprise license for the Customer-Managed Confluent Platform for Confluent Cloud subscription with a self-managed Schema Registry, Schema ID validation is not available when using Kafka brokers on Confluent Platform. Schema exporters are functional provided that both the source and destination Schema Registry clusters have the necessary license in a self-managed deployment, or the destination Schema Registry resides in Confluent Cloud.
Kafka Connect¶
In line with the Support policy for self-managed connectors, a specific minimum version of connectors is required for support in Confluent Platform 8.1. For details, see Supported Connector Versions in Confluent Platform 8.1.
If you are using Confluent Control Center, apply the Confluent Enterprise license key in Confluent Control Center (recommended). For instructions, see Manage Confluent Platform Licenses using Control Center. Alternatively, you can configure the `confluent.license` parameter in the Connect worker configuration, or at the individual connector level, for enterprise support of connectors and the Connect worker. For details, see License for Self-Managed Connectors.
Confluent recommends validating in lower environments before production upgrades.
Enterprise support is not provided for the Confluent Community licensed distributions of Kafka Connect workers, such as CCS, `cp-kafka-connect`, or `cp-kafka-connect-base`. For Enterprise support, Confluent recommends using the Confluent Enterprise distribution of Kafka Connect workers, such as CE, `cp-server-connect`, or `cp-server-connect-base`.
The Confluent Enterprise distribution of Kafka Connect workers will fail without a valid license key. If you are currently running `cp-server-connect`, `cp-server-connect-base`, or CE without a valid Enterprise license key and attempt to upgrade to version 8.1.0, the service will fail to start. This is due to new license enforcement requirements introduced in this release.
As described in Install a plugin, Confluent does not recommend exporting `CLASSPATH` environment variables to create a path to plugins. This method can result in library conflicts that may cause Kafka Connect and connectors to fail. Instead, use the `plugin.path` configuration property, which properly isolates each plugin.
Confluent Platform Connect workers now include a pre-check mechanism that runs during the initialization phase to ensure there are no incompatibilities with critical Confluent licensing libraries. If these libraries are missing or incompatible, the Confluent Platform Connect worker fails to start.
Unified Stream Manager¶
Unified Stream Manager (USM) is now generally available with Confluent Platform 8.1. USM registers your on-premises Confluent Platform cluster with Confluent Cloud to provide a single pane of glass for your data streams.
With USM, you can do the following:
- Use a global policy catalog to enforce data contracts and encryption rules.
- View unified data lineage across all your clusters and infrastructures.
- View and troubleshoot topics and connectors across Confluent Platform and Confluent Cloud from a single, centralized interface.
This release introduces the USM agent component to enable a secure, private network connection to Confluent Cloud:
- All communication occurs over private networking.
- The agent initiates connections from your private environment over a limited set of endpoints.
- Only telemetry and resource metadata are shared with Confluent Cloud.
Kafka brokers and Connect workers include updated embedded reporters that send the necessary telemetry and resource metadata.
To support the global policy catalog, Confluent Platform Schema Registry provides a read-only mode and a Schema Importer. With these features, Confluent Cloud Schema Registry can serve as the source of truth, while the on-premises Confluent Platform Schema Registry acts as a read-only cache and forwards all write requests to Confluent Cloud.
USM is designed to share only telemetry and metadata between Confluent Platform and Confluent Cloud. This limited data sharing lets you adopt USM without needing to accept the full Confluent Cloud data processing addendum.
For more information about USM, see Unified Stream Manager overview.
Other improvements and changes¶
Starting October 1, 2025, a new Confluent Enterprise license is available for the Customer-Managed Confluent Platform for Confluent Cloud subscription. This license allows you to use self-managed Confluent Platform components exclusively with Confluent Cloud services.
Before installing the new Confluent Enterprise license, you must meet the following prerequisites:
- Ensure all your self-managed components are connected with Confluent Cloud and used for Confluent Cloud broker-related use cases. Confluent does not provide support for any self-managed Confluent Platform components that are used exclusively for Confluent Platform broker use cases under the Customer-Managed Confluent Platform for Confluent Cloud subscription.
- You must upgrade Confluent Platform to the recommended minimum patch version. This is an additional requirement for the license configuration process.
Warning
If you apply the new Confluent Enterprise license before upgrading your Confluent Platform components, the license update will fail. The failure is recorded as an error in the component logs, and it is shown on the Control Center if the license key was applied using Control Center.
If you are an existing customer, your existing license will continue to work until its expiration date.
Note the following for Customer-Managed Confluent Platform for Confluent Cloud:
- This license is solely for the use of self-managed Confluent Platform components with Confluent Cloud. Confluent does not provide support for any self-managed Confluent Platform components that are used exclusively for Confluent Platform broker use cases under the Customer-Managed Confluent Platform for Confluent Cloud subscription.
- To enable enterprise support for connectors and Connect workers, you must apply your Confluent license, either in Confluent Control Center (recommended; for instructions, see Manage Confluent Platform Licenses using Control Center) or by configuring the `confluent.license` parameter in your Connect worker configuration, or at the individual connector level. For details, see License for Self-Managed Connectors.
Supported versions and interoperability¶
For the full list of supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability for Confluent Platform.
How to download¶
You can download Confluent Platform at https://confluent.io/download/. For detailed information, see the Install Confluent Platform On-Premises section.
Important
The Confluent Platform package includes Confluent Server by default and requires a `confluent.license` key in your `server.properties` file.
The Confluent Server broker checks for a license during startup. You must supply a license string in each broker's properties file using the `confluent.license` property, as shown in the following example:
confluent.license=LICENCE_STRING_HERE_NO_QUOTES
If you want to use the Kafka broker, download the `confluent-community` package. The Kafka broker is the default in all Debian, RHEL, and CentOS packages.
For more information about migrating to Confluent Server, see Migrate Confluent Platform to Confluent Server.
To upgrade Confluent Platform to a newer version, see the Upgrade Confluent Platform documentation.
Questions?¶
If you have questions about this release, you can reach out through the community mailing list or community Slack. If you are a Confluent customer, you are encouraged to contact our support team directly.
To provide feedback on the Confluent documentation, click the Give us feedback button located near the footer of each page.