Confluent Platform 5.2.2 Release Notes¶
This is a bugfix release of Confluent Platform that provides Confluent users with Apache Kafka 2.2.1, the latest stable version of Kafka, and additional bug fixes.
You are encouraged to upgrade to Confluent Platform 5.2.2, because it includes important bug fixes. The technical details of this release are summarized below.
- CC-4930: Adjust expected calls in case schedulePeriodicRefreshMetadata gets scheduled by NewReplicatorAdminClient
- CC-4928: During translation retries, save newer timestamps for retry
- CC-4536: Adapt ReplicatorApp to upstream changes in AK
- CC-4576: De-duplicate in timestamp interceptors: add configs for batching in Replicator
- CC-4474: Use committed offsets even if dest topic doesn't exist
- CC-4347: Start REST server when using Replicator app
- CC-3405: Close admin client if constructed locally
- CC-4257: Allow consumer offset to be preferred over connect offset
- PR-6840 - KAFKA-8418: Wait until REST resources are loaded when starting a Connect Worker. (#6840)
- PR-6818 - KAFKA-8187: Add wait time for other thread in the same jvm to free the locks (#6818)
- PR-6722 - KAFKA-8351: Cleaner should handle transactions spanning multiple segments (#6722)
- PR-6579 - KAFKA-8229: Reset WorkerSinkTask offset commit interval after task commit (#6579)
- PR-6636 - KAFKA-8290: Close producer for zombie task (#6636)
- PR-6719 - KAFKA-8347: Choose next record to process by timestamp (#6719)
- PR-6675 - KAFKA-8320: fix retriable exception package for source connectors (#6675)
- PR-6726 - KAFKA-8363: Fix parsing bug for config providers (#6726)
- PR-6715 - KAFKA-8335: Clean empty batches when sequence numbers are reused (#6715)
- PR-5918 - KAFKA-7633: Allow Kafka Connect to access internal topics without cluster ACLs (#5918)
- PR-6713 - KAFKA-8352: Fix Connect System test failure 404 Not Found (#6713)
- PR-6707 - KAFKA-8348: Fix KafkaStreams JavaDocs (#6707)
- PR-5578 - KAFKA-6789: Handle retriable group errors in AdminClient API (#5578)
- PR-6685 - KAFKA-8240: Fix NPE in Source.equals() (#6685)
- PR-6651 - KAFKA-8304: Fix registration of Connect REST extensions (#6651)
- PR-6652 - KAFKA-8306: Initialize log end offset accurately when start offset is non-zero (#6652)
- PR-6672 - KAFKA-8323: Close RocksDBStore's BloomFilter (#6672)
- PR-6670 - KAFKA-8289: Fix Session Expiration and Suppression (#6654) (#6670)
- PR-6568 - KAFKA-7601: Clear leader epoch cache on downgraded format in append (#6568)
- PR-6613 - KAFKA-8248: Ensure time updated before sending transactional request (#6613)
- PR-6402 - KAFKA-8066: Always close the sensors in Selector.close() (#6402)
- PR-6643 - KAFKA-8298: Fix possible concurrent modification exception (#6643)
- PR-6602 - KAFKA-8254: Pass Changelog as Topic in Suppress Serdes (#6602)
- PR-6305 - KAFKA-7974: Avoid zombie AdminClient when node host isn't resolvable (#6305)
- PR-6615 - KAFKA-7895: fix Suppress changelog restore (#6536) (#6615)
- PR-6555 - KAFKA-8204: fix Streams store flush order (#6555)
- PR-6550 - KAFKA-8277: Fix NPEs in several methods of ConnectHeaders (#6550)
- PR-6570 - KAFKA-7866: Ensure no duplicate offsets after txn index append failure (#6570)
- PR-6585 - KAFKA-8241: Handle configs without truststore for broker keystore update (#6585)
- PR-6573 - KAFKA-8210: Fix link for streams table duality (#6573)
- PR-6564 - KAFKA-8209: Wrong link for KStreams DSL in core concepts doc (#6564)
- PR-6572 - KAFKA-8208: Change paper link directly to ASM (#6572)
- PR-6581 - KAFKA-8232: Test topic delete completion rather than intermediate state (#6581)
- PR-6384 - KAFKA-8058: Fix ConnectClusterStateImpl.connectors() method (#6384)
- PR-6547 - KAFKA-8157: fix the incorrect usage of segment.index.bytes (2.2) (#6547)
- PR-6539 - KAFKA-8190: Don't update keystore modification time during validation (#6539)
- PR-6475 - KAFKA-8126: Flaky Test org.apache.kafka.connect.runtime.WorkerTest.testAddRemoveTask (#6475)
- PR-305 - SEC-168: Exclude jackson-core from packaging.
- PR-302 - CC-4054: Remove the use of null_value (default values) for text and binary based fields
- PR-425 - CC-4318: Fix off-by-one error for offset reporting to Kafka topic
- PR-642 - CC-4423: Remove semicolon from Db2 dialect timestamp query
- PR-66 - CC-4422: JMS Connector Performance Updates
- PR-243 - CC-4228: Configuration option to override expect-continue in upload protocol
Confluent Platform 5.2.1 Release Notes¶
This is a bugfix release of Confluent Platform that provides Confluent users with Apache Kafka 2.2.0, the latest stable version of Kafka, and additional bug fixes.
You are encouraged to upgrade to Confluent Platform 5.2.1, because it includes important bug fixes. The technical details of this release are summarized below.
- PR-6489 - KAFKA-8150: Fix bugs in handling null arrays in generated RPC code (#6489)
- PR-6342 - KAFKA-8014: Extend Connect integration tests to add and remove workers dynamically (#6342)
- PR-6484 - KAFKA-8142: Fix NPE for nulls in Headers (#6484)
- PR-6482 - KAFKA-7989: RequestQuotaTest should wait for quota config change before running tests (#6482)
Confluent Platform 5.2.0 Release Notes¶
- Schema migration:
Confluent Replicator now supports migration of schemas from a self-managed, on-prem Schema Registry to a fully managed Schema Registry in Confluent Cloud. Specifically, Replicator supports the following two scenarios:
- Continuous migration: You can use your self-managed Schema Registry as a primary and Confluent Cloud Schema Registry as a secondary. New schemas are registered directly to the self-managed Schema Registry, and Replicator will continuously copy schemas from it to the Confluent Cloud Schema Registry, which should be set to IMPORT mode.
- One-time migration: You can perform a one-time migration of schemas from your existing self-managed Schema Registry to Confluent Cloud Schema Registry, which then becomes the primary. All new schemas are registered to Confluent Cloud Schema Registry. In this scenario, there is no migration from Confluent Cloud Schema Registry back to the self-managed Schema Registry.
Confluent Control Center introduces several new features, including:
- Message browser enhancements: The message browser has been improved to allow seeking to an offset on a specific partition, switching between table and card output formats, viewing topic metadata, and more easily viewing messages using feed auto-loading and pausing on hover.
- KSQL UI enhancements: The KSQL UI has been improved to allow you to more easily understand query status, including query errors. Similar to the message browser enhancements, you can now switch between table and card output formats, view output metadata, and more easily inspect output using feed auto-loading and pausing on hover.
- Scalability up to 40k partitions (assuming a replication factor of 3): System Health monitoring now scales up to support 40k topic partitions. With a replication factor of 3, this scaling improvement allows monitoring up to 120k total replicas. If your topics have a smaller replication factor, then Control Center could scale beyond 40k partitions. For more information, see Partition limitations.
- Schema management: Since CP 5.0.0, Control Center has supported viewing key and value schemas, as well as comparing older schema versions with the current one. In 5.2.0, Schema Registry support has been extended to allow creating and editing schemas, validating schemas against compatibility policy, and managing the compatibility policy itself.
- Dynamic broker configuration: Since CP 5.0.0, Control Center has supported viewing broker configurations. With 5.2.0, you can now make changes to those dynamic configurations that don't require broker restarts.
- License management and expiration notification: Control Center now notifies you of impending expiration of your subscription enterprise license (3 months out, 1 month out, weekly for the last month, and daily for the last week). You can also submit a new enterprise license key in the Control Center UI License tab, which makes it easier to manage your subscription.
- Support for multiple KSQL clusters: In the 5.2 release, Control Center supports running, managing, and monitoring KSQL queries and statements on multiple KSQL clusters. Previous versions only supported a single KSQL cluster.
- Support for multiple Connect clusters: Connect support in 5.2.0 has been extended to multiple Connect clusters. In previous releases, Control Center supported creating, managing, and monitoring connectors on only one Connect cluster. Combined with multi-cluster KSQL support, this makes Control Center much more useful for monitoring deployments at scale.
- SSL support: KSQL can be configured to use HTTPS rather than the default HTTP for all communication. This includes both the server and client. For more information, see Configuring KSQL for HTTPS.
- CASE expressions: KSQL now supports CASE conditional expressions in searched form, where KSQL evaluates each condition from left to right. It returns the result for the first condition that evaluates to true. If no condition evaluates to true, the result of the ELSE clause is returned. If there is no ELSE clause, null is returned. For more information, see KSQL Syntax Reference.
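A sketch of a searched CASE expression (hypothetical stream and column names):

```sql
SELECT order_id,
       CASE
         WHEN units > 100 THEN 'large'
         WHEN units > 10  THEN 'medium'
         ELSE 'small'
       END AS order_size
FROM orders;
```

Conditions are checked left to right, so an order with 150 units matches the first branch and never reaches the second.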
- Improved debuggability of KSQL queries: A log of record-processing events is now available to help debug KSQL queries. The log can be configured to write to Kafka so that it can be consumed directly as a KSQL stream.
- URI UDF functions: A new family of UDFs has been added for improved handling of URIs (e.g., extracting or decoding information). For more information, see Scalar functions.
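For example, assuming the URL_EXTRACT_HOST and URL_DECODE_PARAM function names from the scalar functions reference, and a hypothetical clickstream stream:

```sql
SELECT URL_EXTRACT_HOST(page_url) AS host,
       URL_DECODE_PARAM(search_term) AS term
FROM clickstream;
```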
- Read after write consistency: New commands will not execute until previous commands have finished executing. This feature is enabled by default in the CLI and can be implemented by the user for the REST API.
- GROUP BY enhancements: KSQL now supports GROUP BY for more than just simple columns, including:
- fields within structs
- arithmetic results
- string concatenations
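For example, grouping by a struct field or by a string concatenation (hypothetical stream and columns):

```sql
SELECT address->city, COUNT(*)
FROM purchases
GROUP BY address->city;

SELECT first_name + ' ' + last_name AS full_name, SUM(amount)
FROM purchases
GROUP BY first_name + ' ' + last_name;
```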
- Improved support for multi-line requests in interactive mode deployments: Prior to this version, KSQL parsed the full request before attempting to execute any statements. Requests containing statements that depended on the execution of prior statements could fail. In this version and later, this is no longer an issue.
- Improved headless mode deployments: Prior to this version, KSQL parsed the full script before attempting to execute any statements. The full parse would often fail when later statements relied on the execution of earlier statements. In this version and later, this is no longer an issue.
- RUN SCRIPT has been deprecated: Use of the RUN SCRIPT statement via the REST API is now deprecated and will be removed in the next major release. The feature circumvents certain correctness checks and is unnecessary, given that the script content can be supplied in the main body of the request. If you use RUN SCRIPT from the KSQL CLI, you are not affected, as this will continue to be supported. If you use RUN SCRIPT directly against the REST API, your requests will work with this version of the server but will be rejected after the next major version release. Instead, include the contents of the script in the main body of your request.
- Improved RocksDB lookup performance: RocksDB supports using bloom filters for optimizing lookup performance of keys. Kafka Streams has enabled this behavior to increase the performance of its state stores. See KAFKA-4850 for more information.
- New flatTransform method: Kafka Streams now supports a new method in its API, flatTransform. It is the equivalent of flatMap for the Processor API. Unlike the existing transform method, flatTransform ensures strong typing by specifying in its signature a list of key-value pairs (i.e., an Iterable) as the output records for each input record. This is a partial feature from KIP-313.
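The difference from transform can be sketched in plain Python (illustrative only; these names are not the Kafka Streams API): where transform emits a single output record per input record, flatTransform emits an iterable of typed key-value pairs, which is then flattened into the output stream.

```python
def flat_transform(records, transformer):
    """Apply a transformer that returns zero or more (key, value)
    pairs per input record, and flatten the results."""
    out = []
    for key, value in records:
        # Each call returns an iterable of key-value pairs.
        out.extend(transformer(key, value))
    return out

# Example transformer: split a sentence into one record per word.
def split_words(key, sentence):
    return [(key, word) for word in sentence.split()]

print(flat_transform([("k1", "hello world")], split_words))
# → [('k1', 'hello'), ('k1', 'world')]
```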
Elasticsearch Connector: Added 2-way TLS/SSL support to the Elasticsearch sink connector.
HDFS Connector:
- The HDFS connector is based on Hadoop v2.7.3, which uses Netty and Groovy libraries that have known CVEs. Due to the Hadoop CVEs, the connector will be removed from the Confluent Platform package when a major/minor version is released. However, it will continue to be downloadable from Confluent Hub and supported by Confluent. The affected libraries are required by the Hadoop 2.7.3 client included in the kafka-connect-hdfs directory and are not loaded unless the HDFS connector is used. If you are concerned, you can remove the kafka-connect-hdfs directory.
- The connector now returns committed offsets to the consumer offsets topic. This feature should only be used for tracking lag, and does not affect the connector's exactly-once behavior.
JDBC Connector: The ability to set the timezone used when querying a database was added to both the source and sink JDBC connectors.
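The timezone option is a single connector property; a minimal sink-connector sketch (property names assumed from the Confluent JDBC connector docs; `db.timezone` takes a valid java.util.TimeZone ID):

```properties
name=jdbc-sink-example
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=orders
connection.url=jdbc:postgresql://localhost:5432/mydb
db.timezone=America/New_York
```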
- Idempotent producer: Provides exactly-once producer message delivery and guaranteed message ordering.
- Sparse connections: Clients now connect to a single bootstrap server to acquire metadata. This allows clients to communicate only with the brokers they need to, greatly reducing the number of connections between clients and brokers.
- ZSTD compression: Support for the Zstandard real-time compression algorithm. To enable it, set compression.type=zstd.
- max.poll.interval.ms: Allows users to set the session timeout significantly lower, which supports faster detection of process crashes. See `KIP-62 <https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread>`_ for more details.
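Two of the client features above are enabled purely through configuration. A minimal sketch, assuming librdkafka-style producer property names:

```properties
# Exactly-once delivery and ordering from the idempotent producer
enable.idempotence=true
# Zstandard compression for produced batches
compression.type=zstd
```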
For a complete list of features and improvements found in librdkafka v1.0.0, see the librdkafka v1.0.0 release notes.
Confluent-Kafka-Python and Confluent-Kafka-Go¶
Both clients use librdkafka and include all the improvements in librdkafka v1.0.0.
- Includes benefits from all the librdkafka improvements
Confluent-Kafka-Dotnet¶
- Includes new producer and consumer serialization functionality that implements the .NET serializer interfaces. Consumers, Producers, and the AdminClient are constructed with Builder classes.
- Avro SerDes no longer make blocking calls to ICachedSchemaRegistryClient - everything is awaited
- JMX metrics: The ConfluentMetricsReporter metrics reporter pulls producer and consumer JMX metrics from each worker JVM and writes them into a Kafka topic (default: _confluent-connect-metrics) once the required reporter properties are added to the Connect worker config file.
- More use of AdminClient in commands: kafka-topics.sh now uses the AdminClient, so there is no need to pass in --zookeeper; pass --bootstrap-server instead. Note that the kafka-preferred-replica-election.sh command is not available in the Confluent Cloud CLI. KIP-183 KIP-337
- Improved default group.id behavior in KafkaConsumer: The default group.id has been changed from "" to null, so that unnecessary or unintentional offset fetches/commits are avoided. KIP-289
- Separate controller and data planes Separates controller connections and requests from data plane, which helps ensure controller stability even if the cluster is overloaded due to produce / fetch requests. This feature is disabled by default. KIP-291
- Detection of outdated control requests and bounced brokers: Kafka now detects outdated control requests to increase cluster stability when a broker bounces. KIP-380
- Introduction of a configurable consumer group size limit: Protects cluster resources from too many consumers joining a consumer group. KIP-389
18 new features and configuration changes have been added with 150 resolved bug fixes and improvements. For a full list of changes in this release of Apache Kafka, see the Apache Kafka 2.2.0 Release Notes.
- Firefox browser issue: The Control Center System Health page may not render correctly in the Firefox browser. For more information on supported browsers, see Web Browsers.
- Partition limitations: Environments with more than 40K topic partitions and 120K total replicas cause Control Center to lag. Using Control Center in those environments is not recommended until the issue is resolved. Control Center versions prior to 5.2.0 supported up to 10K partitions.
How to Download¶
To upgrade Confluent Platform to a newer version, check the Upgrade documentation.
Supported Versions and Interoperability¶
For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.