Confluent Platform 7.1.1 Release Notes

7.1.1 is a patch release of Confluent Platform that provides you with Apache Kafka® 3.1.0, the latest stable version of Kafka.

The technical details of this release are summarized below.

For more information about the 7.1.1 release, check out the release blog and the Streaming Audio podcast.

Confluent Control Center

  • When SSL/TLS is enabled on Confluent Platform components, you must also configure Control Center to successfully proxy requests to those components. With Confluent Platform 7.1, Control Center offers a brand new set of properties that allows you to configure secured connections for individual components in a more organized and fine-grained fashion. For more information, see Configure TLS/SSL for Control Center.
  • You can now restart a connector and all its tasks from the Control Center interface. If a connector or its tasks have failed, you will now be presented with additional information and the ability to restart the failed connector and its tasks. For more information, see Restart a connector.
  • Added support for bigint format for numbers coming from ksqlDB JSON.
  • Added a new property confluent.controlcenter.ui.brokersettings.kafkarest.enable to enable new broker settings UI.
  • Added a new property that controls the segment file size for the log for command topic.
  • Updated the default value of the property that controls the retention period for command topic.
  • Updated property confluent.controlcenter.embedded.kafkarest.enable to be enabled by default so that the embedded REST Proxy is used for Control Center.
  • Updated the X-Content-Type-Options response HTTP header to nosniff by default to prevent attacks based on MIME sniffing in the browser.
  • When the confluent.controlcenter.rest.access.control.allow.origin property is enabled, the CORS check is also extended to WebSocket connections.
  • Fixed broker details page to show correct values under Production latency and Consumption latency.
  • Fixed the “New action” button in the Actions list page.
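For reference, the new and changed Control Center properties named above are set in the Control Center properties file. A minimal sketch showing only the properties these notes name explicitly (the allowed-origin value is illustrative):

```properties
# Enable the new broker settings UI (new in this release)
confluent.controlcenter.ui.brokersettings.kafkarest.enable=true

# Embedded REST Proxy is now enabled by default; set explicitly if desired
confluent.controlcenter.embedded.kafkarest.enable=true

# When set, the CORS check also extends to WebSocket connections
confluent.controlcenter.rest.access.control.allow.origin=https://ui.example.com
```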

Kafka Brokers

Confluent Server

  • Cluster Linking now includes the ability to sync ACLs for disaster recovery and cluster migration. Enabling acl.sync.enable and acl.sync.filters on a cluster link with a Confluent Platform 7.1 and later destination cluster will sync ACLs (existing ACLs plus subsequent additions, modifications, and deletions) from a source cluster, which can be Confluent Platform 5.4 and later or Kafka 2.4 and later. For more information, see Migrating ACLs from Source to Destination Cluster.
  • Tiered Storage now supports Nutanix Objects, Dell EMC ECS, NetApp Storage Grid, and MinIO as S3-compatible backends.
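As a hedged sketch of the ACL sync setup described above (the link name, bootstrap address, and filter JSON are illustrative; see Migrating ACLs from Source to Destination Cluster for the authoritative filter format), a cluster link with ACL migration enabled might be created as follows:

```shell
# acl-link.properties (illustrative) would contain:
#   acl.sync.enable=true
#   acl.sync.filters={"aclFilters":[{"resourceFilter":{"resourceType":"any","patternType":"any"},"accessFilter":{"operation":"any","permissionType":"any"}}]}

# Create the link on the (7.1+) destination cluster, pointing at the source
kafka-cluster-links --bootstrap-server destination:9092 \
  --create --link-name my-link \
  --config-file acl-link.properties
```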

Confluent Community / Apache Kafka

Confluent Platform 7.1.1 features Apache Kafka® 3.1.0. For a full list of the KIPs, features, and bug fixes, take a look at the official Apache Kafka release notes or watch this overview of Kafka 3.1.0.

Clients

  • Confluent Platform includes Apache Kafka® 3.1.0, whose client improvements include KIP-768 (Extend SASL/OAUTHBEARER with Support for OIDC) to enable support for OAuth/OIDC.
  • librdkafka is released independently at https://packages.confluent.io/clients/ and is also packaged with Confluent Platform 7.1.1.
  • A new detailed clients guide (with supported features and links to package managers) is available here.
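As a hedged sketch of the KIP-768 client configuration in Kafka 3.1 (the token endpoint URL and credentials are placeholders; consult your OIDC provider's documentation for real values), an OAuth/OIDC-enabled client might set:

```properties
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER

# Token endpoint of your OIDC identity provider (placeholder URL)
sasl.oauthbearer.token.endpoint.url=https://idp.example.com/oauth2/token

# Login callback handler introduced by KIP-768 (package name as of Kafka 3.1)
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler

# Client credentials used for the client_credentials grant (placeholders)
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  clientId="my-client" clientSecret="my-secret";
```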

Cluster Management

Confluent for Kubernetes (formerly Confluent Operator)

For the list of Confluent for Kubernetes release versions compatible with this release of Confluent Platform, see Supported Versions and Interoperability.

You can find the release notes for the latest release of Confluent for Kubernetes here.

Ansible Playbooks for Confluent Platform

New features

  • Ansible Playbooks for Confluent Platform now have tag-based separation of tasks that require root permission from tasks that do not require root permission. You can take advantage of these tags to run tasks that do not require root permission. This enables users who have their own method to manage the prerequisites of Confluent Platform to use the Ansible Playbooks for Confluent Platform without root privileges.
  • You can customize the SSL principal name by extracting one of the fields from the long distinguished name.
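Under the hood, SSL principal customization maps to Kafka's ssl.principal.mapping.rules broker property (KIP-371). The rule below, which extracts only the CN field from a long distinguished name, is a hedged illustration; the Ansible variable that surfaces this property is not shown here:

```properties
# Given a DN such as "CN=kafka-broker,OU=eng,O=Example,C=US",
# this rule yields the principal "kafka-broker"; unmatched DNs fall
# through to the DEFAULT rule (the full DN).
ssl.principal.mapping.rules=RULE:^CN=(.*?),.*$/$1/,DEFAULT
```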

Notable enhancements

  • Extended the support of the Ansible Playbooks for Confluent Platform to include Ansible 2.9 and Python 2.7.
  • Extended host validation to include memory and storage checks during installation.

Upgrade considerations

  • The Confluent CLI v2 has a breaking change that impacts Confluent Platform upgrades performed using Ansible Playbooks for Confluent Platform. Specifically, if you are using secret protection without RBAC, you cannot upgrade to Confluent Platform 7.1 as RBAC is mandatory with secret protection. For additional details, see here.

Kafka Streams

  • KIP-763: Added open-ended range queries for ReadOnlyKeyValueStore
  • KIP-766: Added open-ended range queries for ReadOnlySessionStore and ReadOnlyWindowStore
  • KIP-775: Introduced support for custom partitioners in foreign key joins

ksqlDB

  • Introduced support for SOURCE tables and streams. SOURCE tables enable direct materialization of Kafka topic data to support lookups using pull queries.
  • Introduced pull query optimizations for range queries. Pull queries filtering for a range of values now have significantly enhanced performance.
  • Introduced support for pull queries over streams. Pull queries were previously only supported on tables, but you can now also run them against streams.
  • You can now access Kafka message headers in a read-only fashion using the ksqlDB syntax.
  • Added support for reusing existing Schema Registry schemas when creating ksqlDB objects (e.g. streams, tables, and persistent queries). Previously, ksqlDB implicitly created a dedicated schema for each associated object created, but now you can leverage existing schemas by referencing them when creating objects.
  • Exposed ksqlDB row partition and offset data via the ROWPARTITION and ROWOFFSET pseudo-columns.
  • [Preview] Significant scalability improvements have been made for certain forms of push queries when the ksql.query.push.v2.enabled configuration parameter is set to true. v2 push queries enable event input multiplexing across many instances of similar push queries, reducing scanning overhead for input events.
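Some of the features above can be sketched from the ksql CLI. This is a hedged illustration assuming a local ksqlDB server; the stream, table, topic, and column names are all hypothetical:

```shell
# Materialize a Kafka topic directly as a SOURCE table for pull-query lookups
ksql http://localhost:8088 -e "CREATE SOURCE TABLE users_by_id (id INT PRIMARY KEY, name STRING) \
  WITH (KAFKA_TOPIC='users', VALUE_FORMAT='JSON');"

# Pull query over a stream (previously tables only), reading the new
# ROWPARTITION and ROWOFFSET pseudo-columns
ksql http://localhost:8088 -e "SELECT ROWPARTITION, ROWOFFSET, * FROM clicks;"
```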

Known Issues

Important

If you are using Protobuf-wrapped primitive type structs in ksqlDB, for example google.protobuf.StringValue and similar types, do not upgrade to Confluent Platform 7.1.1. A fix will be provided in a future 7.1 release.

Schema Registry

Schema Linking is now generally available on both Confluent Platform and Confluent Cloud. Schema Linking provides an operationally simple way to maintain trusted, compatible data streams across cloud and hybrid environments with shared schemas that sync in real time for both active-passive and active-active setups. Schemas are shared everywhere they’re needed, providing a simple way to maintain high data integrity while deploying critical use cases including global data sharing, cluster migrations, and preparations for real-time failover in the event of disaster recovery.

Log Redaction

There is a known bug where, if the path string of the redaction rules file in the log4j.properties file contains any trailing spaces, the logredactor fails to start. To avoid this issue, make sure the path string does not contain trailing spaces.

Security

For Confluent Platform versions 6.0.x or later, the AES CBC mode is replaced with the more secure AES GCM mode. Any new configurations are encrypted using the AES GCM mode.

Workflow to update existing encrypted secrets to AES GCM mode

You can update configurations previously encrypted with the AES CBC mode to the AES GCM mode by running the following secrets command:

confluent secret file rotate --data-key \
--local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties

This command re-encrypts the secrets using AES GCM mode.

Confluent recommends rotating the master key by running the following command.

confluent secret file rotate --master-key \
--local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties \
--passphrase-new @/User/bob/secretNew.properties

Workflow to upgrade only the Confluent CLI version

If you only need to upgrade the Confluent CLI version, Confluent continues to support the AES CBC mode and no additional change is required.

Workflow to upgrade the Confluent Platform version

The AES GCM mode is available only after you upgrade Confluent Platform. The Confluent CLI secrets command queries the Metadata Service (MDS) cluster to check for the supported AES mode.

You can use the new XX_SECRETS_GCM_MODE flag to enforce the AES GCM mode when upgrading the Kafka and the Metadata Service (MDS) cluster. Use the command as described below.

Workflow to upgrade the MDS cluster

To upgrade the Metadata Service (MDS) cluster, complete the following steps:

  1. Rotate the existing secrets using the following command:
XX_SECRETS_GCM_MODE=true confluent secret file rotate --master-key \
--local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties \
--passphrase-new @/User/bob/secretNew.properties
  2. Encrypt any new secrets using the following command:
XX_SECRETS_GCM_MODE=true confluent secret file encrypt --config-file /etc/kafka/connect-distributed.properties \
--local-secrets-file /usr/secrets/security.properties \
--remote-secrets-file /usr/secrets/security.properties \
--config "config.storage.replication.factor,config.storage.topic"
  3. Boot the MDS cluster.

Workflow to upgrade the Kafka cluster after the Metadata Service (MDS) cluster is upgraded

If you upgrade the Kafka cluster after upgrading the MDS cluster, you do not need to use the XX_SECRETS_GCM_MODE flag. The MDS cluster will notify the Confluent CLI to use AES GCM mode.

  1. Rotate the existing secrets using the following command:
confluent secret file rotate --master-key --local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties \
--passphrase-new @/User/bob/secretNew.properties
  2. Encrypt any new secrets using the following command:
confluent secret file encrypt --config-file /etc/kafka/connect-distributed.properties \
--local-secrets-file /usr/secrets/security.properties \
--remote-secrets-file /usr/secrets/security.properties \
--config "config.storage.replication.factor,config.storage.topic"
  3. Boot the Kafka cluster.

Workflow to upgrade the Kafka cluster before upgrading the Metadata Service (MDS) cluster

If you upgrade the Kafka cluster before upgrading the MDS cluster, you need to use the XX_SECRETS_GCM_MODE flag along with the secrets command to enforce AES GCM mode.

  1. Rotate the existing secrets using the following command:
XX_SECRETS_GCM_MODE=true confluent secret file rotate --master-key \
--local-secrets-file /usr/secrets/security.properties \
--passphrase @/User/bob/secret.properties \
--passphrase-new @/User/bob/secretNew.properties
  2. Encrypt any new secrets using the following command:
XX_SECRETS_GCM_MODE=true confluent secret file encrypt --config-file /etc/kafka/connect-distributed.properties \
--local-secrets-file /usr/secrets/security.properties \
--remote-secrets-file /usr/secrets/security.properties \
--config "config.storage.replication.factor,config.storage.topic"
  3. Boot the Kafka cluster.

Note

You need to use XX_SECRETS_GCM_MODE only during this upgrade. If you do not use the XX_SECRETS_GCM_MODE flag during the upgrade, the secrets are encrypted using the AES CBC mode. After the upgrade is complete, any new configurations will be encrypted using AES GCM mode.

The AES GCM mode is available only for Confluent Platform version 6.0.x or later. For earlier versions, the weaker AES CBC cipher mode continues to be supported for legacy reasons.

Connect

  • KIP-745: Connect API to restart connector and tasks: this feature restarts a connector and its tasks by passing a single API call. Previously, to recover from a connector or task failure, you had to issue a separate REST API call to manually restart each of the connector and task instances.
  • KIP-721: Enable connector log contexts by default in Connect Log4j configuration: This feature enables the addition of connector contexts in the Connect worker logs by default. KIP-449 added connector contexts to the Connect worker logs, but it was not enabled by default. You can disable the addition of connector contexts by changing the connect-log4j.properties configuration file.
  • KIP-738: Removal of Connect’s internal converter properties: This feature removes the following Connect properties: internal.key.converter, internal.value.converter, and any properties prefixed with these two properties. If the removed properties are passed, the Connect worker will ignore them.
  • KIP-722: Enable connector client overrides by default: This feature changes the default value for the connector.client.config.override.policy worker configuration property to All. KIP-458, added connector-specific client overrides, but it was not enabled by default. You can disable connector-specific client overrides by setting connector.client.config.override.policy to None.

Telemetry

In Confluent Platform 7.1.1 the Telemetry Reporter will begin collecting a select list of broker configuration details from Confluent Server. The inclusion of this additional telemetry data allows Confluent to provide accelerated support when opening tickets against clusters that are enabled with Health+. For more information, see Confluent Telemetry Reporter.

Other improvements and changes

Ubuntu 20.04 and Debian 10 are now both supported with Confluent Platform 7.1.1 and later. For more information, see Operating Systems.

Deprecation Warnings

This is a warning of a future deprecation: Java 8 support will be deprecated in the next minor version of Confluent Platform. This warning is meant to provide as much time as possible for you to upgrade your applications. Java 11 is recommended.

How to Download

Confluent Platform is available for download at https://www.confluent.io/download/. See the On-Premises Deployments section for detailed information.

Important

The Confluent Platform package includes Confluent Server by default and requires a confluent.license key in your server.properties file. Starting with Confluent Platform 5.4.x, the Confluent Server broker checks for a license during start-up. You must supply a license string in each broker’s properties file using the confluent.license property as below:

confluent.license=LICENCE_STRING_HERE_NO_QUOTES

If you want to use the Kafka broker, download the confluent-community package. The Kafka broker is the default in all Debian, RHEL, and CentOS packages.

For more information about migrating to Confluent Server, see Migrate to Confluent Server.

To upgrade Confluent Platform to a newer version, check the Upgrade Confluent Platform documentation.

Supported Versions and Interoperability

For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.

Questions?

If you have questions regarding this release, feel free to reach out via the community mailing list or community Slack. Confluent customers are encouraged to contact our support directly.