Confluent Platform 7.0 Release Notes

7.0 is a major release of Confluent Platform that provides you with Apache Kafka® 3.0.0, the latest stable version of Kafka.

The technical details of this release are summarized below.

For more information about the 7.0 release, check out the release blog and the Streaming Audio podcast.

Confluent Control Center

  • Confluent Control Center has added a Reduced infrastructure mode that allows the service to run without monitoring. This complements the Health+ product launch, which allows you to move your Confluent Platform monitoring to the cloud. Reduced infrastructure mode is not enabled by default and is toggled by updating a new Control Center property (see the example after this list). For more information, see Management services and Reduced infrastructure mode.
  • A new UI to assist with the setup of Health+ was added to Control Center. This wizard helps you understand the benefits of Health+ and walks you through the setup process to enable Health+ on a given cluster. For more information, see Confluent Health+.
  • Fixed an issue with the topic message viewer that caused it to lose precision on large numbers.
  • The active controller count is now retrieved from the Kafka API. This improves the reliability of the active controller count metric.
  • Control Center has a completely new user interface for adjusting both cluster and broker configurations. The new cluster configuration interface replaces the original configuration interface in cluster settings. Now when a broker-specific configuration override is detected, Control Center shows you what’s been changed so that you can make sure the right settings are being applied. This can help avoid situations where unintended skews in cluster settings lead to unwanted behavior. For more information, see Broker settings UI.
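
To illustrate, the following is a hypothetical Control Center properties entry for enabling Reduced infrastructure mode. The property name shown is an assumption for illustration; confirm the exact name in the Reduced infrastructure mode documentation before use:

# Assumed property name; verify against the Reduced infrastructure mode docs.
confluent.controlcenter.mode.enable=management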

Kafka Brokers

Confluent Server

  • Cluster Linking is now generally available and suitable for production applications. This feature can be used to share or aggregate data between different clusters, migrate to a new cluster or to Confluent Cloud, or set up a disaster recovery cluster. Major updates to Cluster Linking include:
    • A REST API in Kafka REST v3 (see the sketch after this list).
    • Source-initiated links allow the connection to be initiated by the source cluster. This is useful when Cluster Linking from on-premises to Confluent Cloud, and provides stability and performance improvements.
    • The ACL sync sub-feature that was previously in preview has been removed from Confluent Platform 7.0.x.
  • Self-Balancing Clusters uses a new broker removal engine under the hood that improves performance and reliability and provides first-class support for Confluent for Kubernetes. There are no command or API changes, so you can immediately take advantage of the benefits.
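
As a rough sketch of the new REST API, the Java snippet below creates a cluster link through Kafka REST v3. The host, port, cluster ID, link name, and request body shown here are assumptions for illustration; consult the Kafka REST v3 API reference for the exact endpoint and schema.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateClusterLinkExample {
    public static void main(String[] args) throws Exception {
        // Assumed REST endpoint, cluster ID, and link name. The link config
        // points at the source cluster's bootstrap servers.
        String body = "{\"configs\": [{\"name\": \"bootstrap.servers\", \"value\": \"source-cluster:9092\"}]}";
        HttpRequest request = HttpRequest.newBuilder(URI.create(
                "http://localhost:8082/v3/clusters/my-cluster-id/links?link_name=my-link"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}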

Confluent Community / Apache Kafka

Confluent Platform 7.0 features Apache Kafka® 3.0.0. For a full list of the KIPs, features, and bug fixes, take a look at the official Apache Kafka release notes or watch Confluent’s very own Tim Berglund give an overview of Kafka 3.0.0.

Clients

  • librdkafka v1.8.0 and v1.8.2:

    • These releases refine and improve the stability of librdkafka and the derived libraries. The zlib version was updated to fix dependent CVEs, including CVE-2016-9840, CVE-2016-9841, CVE-2016-9842, and CVE-2016-9843. vcpkg is now used for up-to-date Windows dependencies in librdkafka.redist.
    • Additional checks were added to verify upstream dependencies (OpenSSL, zstd, zlib) for builds that are bundled with derived libraries (confluent-kafka-go, confluent-kafka-python, and confluent-kafka-dotnet).
    • Enhancements that improve the usability of librdkafka:
      • Improved producer behavior: flush() now momentarily overrides the linger.ms setting to transmit queued messages immediately.
      • The transactional producer gained an Abortable transaction error state that allows it to recover from repeated leader changes. Previously, these changes could erroneously trigger an epoch bump on the producer, causing a mismatch in transaction state between the producer and coordinator and leading to message loss.
      • Bug fixes that make the latest version of librdkafka the preferred choice for new workloads; existing workloads are encouraged to upgrade as well.
  • Producer defaults now enable the strongest delivery guarantee (acks=all and enable.idempotence=true). This means that you now get ordering and durability by default (see the sketch at the end of this section).

  • Independent releases and additional platforms for librdkafka:

    • librdkafka is an open source community project available at: https://github.com/edenhill/librdkafka. However, Confluent maintains its own official, signed set of binaries which are packaged with Confluent Platform.
    • Starting with Confluent Platform 7.0, the librdkafka releases are also packaged independently from Confluent Platform and available at https://packages.confluent.io/clients/. Additionally, with this change, the librdkafka releases are available for additional Debian and RPM architectures.
    • Details on which operating systems and architectures are supported can be found under Operating Systems in Supported Versions and Interoperability.
    • The distribution of librdkafka derived clients for Python, Go and .NET is unchanged.

    To learn more about Confluent’s official clients, check out the Build Client Applications for Confluent Platform documentation.
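
As a minimal sketch of the new producer defaults, the Java producer below spells out acks=all and enable.idempotence=true explicitly, even though Kafka 3.0 clients apply them by default, and uses flush() to push queued messages immediately. The broker address and topic name are assumptions for illustration.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DefaultDurabilityExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Defaults as of Kafka 3.0 clients; shown explicitly to make the new behavior visible.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            producer.flush(); // transmits queued messages immediately, overriding linger.ms
        }
    }
}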

Cluster Management

Confluent for Kubernetes (formerly Confluent Operator)

For the list of Confluent for Kubernetes release versions compatible with this release of Confluent Platform, see Supported Versions and Interoperability.

You can find the release notes for the latest release of Confluent for Kubernetes here.

Ansible Playbooks for Confluent Platform

New features

The Ansible Playbooks for Confluent Platform are now structured as Ansible Collections. This modernizes the structure of the Ansible Playbooks for Confluent Platform to conform with industry-standard best practices for Ansible, makes it easier to compose the Ansible Playbooks for Confluent Platform with other Ansible content, and improves your organization's ability to provision and configure software holistically and consistently with Ansible. To understand how to work with the new structure, see the documentation on downloading Ansible Playbooks for Confluent Platform and using the Playbooks to install or upgrade Confluent Platform.

Notable enhancements

  • Installs Java version 11 by default; the previous default was Java version 8. If you want to use Java 8, you can use the inventory variable appropriate for your platform: ubuntu_java_package_name, debian_java_package_name, or redhat_java_package_name.
  • Adds support for Ubuntu 20.
  • Adds support for Debian 10.

Notable fixes

When debug is enabled with the -vvv Ansible option, sensitive information, such as passwords, certificates, and keys, is printed in the output. Ansible does not provide a way to suppress sensitive information with the -vvv option. Therefore, using debug mode in production environments is not recommended.

As an alternative, run the playbook with the --diff option when troubleshooting issues. With this release, the Ansible Playbooks for Confluent Platform no longer print sensitive information, such as passwords, certificates, and keys, in the output of the --diff option.

For details, see Troubleshoot.

Known issues

If you have deployed Confluent Platform with the Ansible Playbooks where Java 8 was installed, you cannot use the Ansible Playbooks to update the deployment to use Java 11. Even if your inventory file is configured to install Java 11, running the Ansible Playbooks installs Java 11, but the Confluent Platform components continue to use Java 8.

Upgrade considerations

  • If you are deploying Confluent Platform with the Ansible Playbooks configured for FIPS operational readiness, you must use Java 8. Confluent Platform FIPS operational readiness is not compatible with Java 11. For new installations or upgrades where FIPS operational readiness is desired, it is recommended that you explicitly configure your inventory file to use Java 8 by using the inventory variable appropriate for your platform: ubuntu_java_package_name, debian_java_package_name, or redhat_java_package_name.
  • The Ansible Playbooks are now structured as Ansible Collections. To understand how to work with the new structure, see the documentation on using the Playbooks to upgrade Confluent Platform.

Kafka Raft (KRaft)

Apache Kafka 3.0 ships KRaft mode, which replaces ZooKeeper with a built-in Raft metadata quorum, as a preview feature; it is not yet supported for production workloads.

Kafka Streams

  • Removed APIs that have been deprecated since Kafka Streams 2.5 or earlier
  • KIP-732: Deprecate eos-alpha and replace eos-beta with eos-v2
  • KIP-733: Change the default replication factor for internal topics from 1 to -1 (use the broker default)
  • KIP-741: Change default serde to be null
  • KIP-466: Generic List serialization and deserialization support has been added to Kafka (see the sketch after this list)
  • KIP-623: Add internal-topics option to application reset tool to specify a subset of internal topics to delete
  • KIP-633: Remove the default 24-hour grace period for windowed operations
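
As a brief sketch of two of these changes, the snippet below opts a Kafka Streams application into eos-v2 (KIP-732) and builds a generic List serde (KIP-466). The application ID and bootstrap address are assumptions for illustration.

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsUpgradeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app"); // assumed
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed
        // KIP-732: eos-v2 replaces the deprecated eos-alpha and eos-beta guarantees.
        // These props would be passed to a KafkaStreams instance along with a topology.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);

        // KIP-466: build a Serde for List<Integer> and round-trip a value through it.
        Serde<List<Integer>> listSerde = Serdes.ListSerde(ArrayList.class, Serdes.Integer());
        byte[] bytes = listSerde.serializer().serialize("my-topic", List.of(1, 2, 3));
        List<Integer> values = listSerde.deserializer().deserialize("my-topic", bytes);
        System.out.println(values); // prints [1, 2, 3]
    }
}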

ksqlDB

  • Confluent Platform 7.0 packages ksqlDB release 0.21.0
  • Added support for foreign-key table-table joins
  • Added SHOW CONNECTOR PLUGINS syntax to display which connectors are installed and available for use
  • Added connector management methods to the Java client (see the sketch after this list):
    • createConnector - Create a connector with the given configuration
    • dropConnector - Delete the given connector
    • describeConnector - Retrieve metadata associated with the given connector
    • listConnectors - Retrieve a listing of all connectors
  • Added an idle timeout server configuration parameter (ksql.idle.connection.timeout.seconds) for push queries in order to keep connections alive for the given period when no events are being sent over the connection
  • Added a DATE type with associated helper functions:
    • DATEADD - Add an interval to the given DATE value
    • DATESUB - Subtract an interval from the given DATE value
    • PARSE_DATE - Convert a STRING value into a DATE value
    • FORMAT_DATE - Convert a DATE value into a STRING value
  • Added a TIME type with associated helper functions:
    • TIMEADD - Add an interval to the given TIME value
    • TIMESUB - Subtract an interval from the given TIME value
    • PARSE_TIME - Convert a STRING value into a TIME value
    • FORMAT_TIME - Convert a TIME value into a STRING value
  • Added a BYTES type with associated helper functions:
    • TO_BYTES - Convert a given string into a BYTES value
    • FROM_BYTES - Parse a given BYTES value into a string
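
As a minimal sketch of the new connector management methods in the Java client, the snippet below creates, lists, describes, and drops a connector. The server address, connector name, and connector configuration are assumptions for illustration.

import io.confluent.ksql.api.client.Client;
import io.confluent.ksql.api.client.ClientOptions;
import java.util.Map;

public class ConnectorAdminExample {
    public static void main(String[] args) {
        ClientOptions options = ClientOptions.create()
            .setHost("localhost") // assumed ksqlDB server address
            .setPort(8088);
        Client client = Client.create(options);

        // Create a hypothetical JDBC source connector (true marks it as a source).
        Map<String, Object> config = Map.of(
            "connector.class", "io.confluent.connect.jdbc.JdbcSourceConnector",
            "connection.url", "jdbc:postgresql://localhost:5432/db");
        client.createConnector("jdbc-source", true, config).join();

        // List all connectors, describe the new one, then drop it.
        client.listConnectors().join().forEach(info -> System.out.println(info.name()));
        System.out.println(client.describeConnector("jdbc-source").join());
        client.dropConnector("jdbc-source").join();

        client.close();
    }
}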

Schema Registry

Schema Linking preview for Confluent Platform is available in 7.0. This feature, which was first released in Confluent Cloud, keeps schemas in sync across two Schema Registry clusters and can be used in conjunction with Cluster Linking to keep both schemas and topic data in sync across two Schema Registry and Kafka clusters. For more information, see Schema Linking on Confluent Platform.

Security

For Confluent Platform versions 6.0.x or later, the AES CBC mode is replaced with the more secure AES GCM mode. Any new configurations are encrypted using the AES GCM mode.

Workflow to update existing encrypted secrets to AES GCM mode

You can update configurations previously encrypted with the AES CBC mode to the AES GCM mode by running the following secrets command:

confluent secret file rotate --data-key \
--local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties

This command re-encrypts the secrets using AES GCM mode.

Confluent recommends rotating the master key by running the following command.

confluent secret file rotate --master-key \
--local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties \
--passphrase-new @/User/bob/secretNew.properties

Workflow to upgrade only the Confluent CLI version

If you only need to upgrade the Confluent CLI version, no additional change is required; Confluent continues to support the AES CBC mode.

Workflow to upgrade the Confluent Platform version

The AES GCM mode is available only after you upgrade Confluent Platform. The Confluent CLI secrets command queries the Metadata Service (MDS) cluster to check for the supported AES mode.

You can use the new XX_SECRETS_GCM_MODE flag to enforce the AES GCM mode when upgrading the Kafka and Metadata Service (MDS) clusters. Use the flag as described in the workflows below.

Workflow to upgrade the MDS cluster

To upgrade the Metadata Service (MDS) cluster, complete the following steps:

  1. Rotate the existing secrets using the following command:
XX_SECRETS_GCM_MODE=true confluent secret file rotate --master-key \
--local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties \
--passphrase-new @/User/bob/secretNew.properties
  2. Encrypt any new secrets using the following command:
XX_SECRETS_GCM_MODE=true confluent secret file encrypt --config-file /etc/kafka/connect-distributed.properties \
--local-secrets-file /usr/secrets/security.properties \
--remote-secrets-file /usr/secrets/security.properties \
--config "config.storage.replication.factor,config.storage.topic"
  3. Boot the MDS cluster.

Workflow to upgrade the Kafka cluster after the Metadata Service (MDS) cluster is upgraded

If you upgrade the Kafka cluster after upgrading the MDS cluster, you do not need to use the XX_SECRETS_GCM_MODE flag. The MDS cluster notifies the Confluent CLI to use the AES GCM mode.

  1. Rotate the existing secrets using the following command:
confluent secret file rotate --master-key --local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties \
--passphrase-new @/User/bob/secretNew.properties
  2. Encrypt any new secrets using the following command:
confluent secret file encrypt --config-file /etc/kafka/connect-distributed.properties \
--local-secrets-file /usr/secrets/security.properties \
--remote-secrets-file /usr/secrets/security.properties \
--config "config.storage.replication.factor,config.storage.topic"
  3. Boot the Kafka cluster.

Workflow to upgrade the Kafka cluster before upgrading the Metadata Service (MDS) cluster

If you upgrade the Kafka cluster before upgrading the MDS cluster, you need to use the XX_SECRETS_GCM_MODE flag along with the secrets command to enforce AES GCM mode.

  1. Rotate the existing secrets using the following command:
XX_SECRETS_GCM_MODE=true confluent secret file rotate --master-key \
--local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties \
--passphrase-new @/User/bob/secretNew.properties
  2. Encrypt any new secrets using the following command:
XX_SECRETS_GCM_MODE=true confluent secret file encrypt --config-file /etc/kafka/connect-distributed.properties \
--local-secrets-file /usr/secrets/security.properties \
--remote-secrets-file /usr/secrets/security.properties \
--config "config.storage.replication.factor,config.storage.topic"
  3. Boot the Kafka cluster.

Note

You need to use the XX_SECRETS_GCM_MODE flag only during this upgrade. If you do not use the XX_SECRETS_GCM_MODE flag during the upgrade, the secrets are encrypted using the AES CBC mode. After the upgrade is complete, any new configurations will be encrypted using the AES GCM mode.

The AES GCM mode is available only for Confluent Platform version 6.0.x or later. For earlier versions, the weaker AES CBC cipher mode continues to be supported for legacy reasons.

Connect

  • KIP-745: Connect API to restart connector and tasks: This feature restarts a connector and its tasks with a single API call (see the sketch after this list). Previously, to recover from a connector or task failure, you had to issue a separate REST API call to manually restart each of the connector and task instances.
  • KIP-721: Enable connector log contexts by default in Connect Log4j configuration: This feature enables the addition of connector contexts in the Connect worker logs by default. KIP-449 added connector contexts to the Connect worker logs, but it was not enabled by default. You can disable the addition of connector contexts by changing the connect-log4j.properties configuration file.
  • KIP-738: Removal of Connect’s internal converter properties: This feature removes the following Connect properties: internal.key.converter, internal.value.converter, and any properties prefixed with these two properties. If the removed properties are passed, the Connect worker will ignore them.
  • KIP-722: Enable connector client overrides by default: This feature changes the default value of the connector.client.config.override.policy worker configuration property to All. KIP-458 added connector-specific client overrides, but they were not enabled by default. You can disable connector-specific client overrides by setting connector.client.config.override.policy to None.
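
As a short sketch of the KIP-745 restart API, the Java snippet below asks a Connect worker to restart a connector together with its failed tasks in one call. The worker address and connector name are assumptions for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestartConnectorExample {
    public static void main(String[] args) throws Exception {
        // KIP-745: includeTasks restarts the connector's tasks as well, and
        // onlyFailed limits the restart to instances in the FAILED state.
        HttpRequest request = HttpRequest.newBuilder(URI.create(
                "http://localhost:8083/connectors/my-connector/restart"
                    + "?includeTasks=true&onlyFailed=true"))
            .POST(HttpRequest.BodyPublishers.noBody())
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        // A 202 Accepted response means the restart was initiated.
        System.out.println(response.statusCode() + " " + response.body());
    }
}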

Telemetry

In Confluent Platform 7.0, the Telemetry Reporter begins collecting a select list of broker configuration details from Confluent Server. This additional telemetry data allows Confluent to provide accelerated support when you open tickets against clusters that have Health+ enabled. For more information, see Confluent Telemetry Reporter.

Other improvements and changes

Ubuntu 20.04 and Debian 10 are now both supported with Confluent Platform 7.0 and later. For more information, see Operating Systems.

Deprecation Warnings

  • KIP-724: Drop support for message formats v0 and v1. All new messages are written in the v2 format (available since June 2017). It is recommended that you upgrade your clients to get the best cluster performance.
  • Confluent Platform is no longer supported for deployment on the Mesosphere DC/OS Platform.

How to Download

Confluent Platform is available for download at https://www.confluent.io/download/. See the On-Premises Deployments section for detailed information.

Important

The Confluent Platform package includes Confluent Server by default and requires a confluent.license key in your server.properties file. Starting with Confluent Platform 5.4.x, the Confluent Server broker checks for a license during start-up. You must supply a license string in each broker’s properties file using the confluent.license property as shown below:

confluent.license=LICENCE_STRING_HERE_NO_QUOTES

If you want to use the Kafka broker, download the confluent-community package. The Kafka broker is the default in all Debian, RHEL, and CentOS packages.

For more information about migrating to Confluent Server, see Migrate to Confluent Server.

To upgrade Confluent Platform to a newer version, check the Upgrade Confluent Platform documentation.

Supported Versions and Interoperability

For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.

Questions?

If you have questions regarding this release, feel free to reach out via the community mailing list or community Slack. Confluent customers are encouraged to contact our support directly.