Confluent Platform 5.5.15 Release Notes

5.5.15 is a bug fix release of Confluent Platform that provides you with Apache Kafka® 2.5.1, the latest stable version of Kafka.

Changelogs for the patch releases can be found in Confluent Platform Component Changelogs.

The technical details of this release are summarized below. For more information about the 5.5.0 release, check out the release blog and the Streaming Audio podcast.

Commercial Features

Confluent Control Center

New ksqlDB topology viewer
ksqlDB applications can now be represented as a topology graph. With this view you can quickly see and make sense of how data flows across your ksqlDB application and how existing queries interact with each other. Each node in the graph represents a stream, table, or query, while each connecting line (or edge) represents one node’s input source and another node’s output destination. Selecting a node allows you to view real-time messages and schemas.
UI Rebrand
The Control Center UI has a brand new look! There are new colors and fonts in the UI, and more emphasis on the most important and relevant pieces of information. The fonts have been updated to make the metrics more legible and improve the hierarchy of data, to make it easier to find what you need. For more information, see Control Center User Guide.
Protobuf and JSON support in message viewer
With the addition of Protocol Buffers and JSON Schema support in Schema Registry, the Control Center schema viewer has been updated, allowing you to create, inspect, edit, and version these additional schema types.
Seek to a human-readable time/date in message viewer
Human-readable timestamps have been added to the topic message viewer UI. This makes it easier to jump backwards to a date and time on a given topic and see messages from that point forward. Use the feature to easily inspect the messages flowing through a topic without having to know a particular offset or the epoch timestamp.
“Messages behind” added to Consumer Group table
Back by popular demand, the “Messages behind” metric has been added as a column in our Consumer Groups overview table. This column allows you to see the total sum of messages behind for all the consumers in a given consumer group without having to drill into each one.

Confluent Server

  • Multi-Region Clusters improvements:
    • Default replica placement constraints with confluent.log.placement.constraints can now be set on the brokers to make it easier to use multi-region with ksqlDB and Kafka Streams (see the sketch after this list). You are still required to manually set replica placement constraints on __consumer_offsets and __transaction_state when setting up a multi-region cluster.
    • The kafka-topic command now has an --invalid-replica-placement-partitions flag that allows you to quickly see all partitions that are not meeting their replication placement constraints.
    • Confluent Auto Data Balancer now includes --topics and --replica-placement-only flags that make it easier to change replica placement constraints. This procedure is sometimes required to properly sequence disaster recovery failback procedures.
  • Tiered Storage (Preview) now supports inter-broker SSL.
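
As a rough sketch of the default placement constraints described above, the broker property below embeds the same JSON replica placement document used in multi-region setups; the rack names, replica counts, and the pairing of the new --invalid-replica-placement-partitions flag with --describe are illustrative assumptions, so verify them against the Multi-Region Clusters documentation.

# Hedged sketch: broker-wide default replica placement (rack names and counts are placeholders).
confluent.log.placement.constraints={"version":1,"replicas":[{"count":2,"constraints":{"rack":"east-1"}},{"count":2,"constraints":{"rack":"west-1"}}],"observers":[{"count":1,"constraints":{"rack":"central-1"}}]}

# Hedged sketch: list partitions that do not satisfy their placement constraints
# (pairing the new flag with --describe is an assumption).
kafka-topics --bootstrap-server localhost:9092 --describe --invalid-replica-placement-partitions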

Security

  • Added support for JSON and JAAS file types for secret protection. For more information, see Encrypt JAAS configuration parameters.

    Important

    Only embedded JAAS is supported; static JAAS files are not. Static JAAS files are used only by ZooKeeper clients.

  • Enabled TLS to ZooKeeper so that all the network communication to ZooKeeper is secure. A broker configuration sketch appears after this list. For more information, see ZooKeeper authentication overview.

  • Added support for AD/LDAP based authentication for clients when SASL/PLAIN is used. For more information, see Configuring Client Authentication with LDAP.
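
To make the TLS-to-ZooKeeper item above concrete, the following broker properties sketch shows the KIP-515 style settings; the keystore and truststore paths and passwords are placeholders, and the authoritative list is in the ZooKeeper authentication overview.

# Hedged sketch: enable TLS for broker-to-ZooKeeper connections (paths and passwords are placeholders).
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.keystore.location=/path/to/kafka.zk.keystore.jks
zookeeper.ssl.keystore.password=changeme
zookeeper.ssl.truststore.location=/path/to/kafka.zk.truststore.jks
zookeeper.ssl.truststore.password=changeme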

Known issue

You cannot encrypt multiple passwords in the same JAAS configuration using the Confluent CLI secrets encryption command. If you do, some of the encrypted configuration values are replaced with an empty string.

Symptoms

You specify multiple passwords in the same JAAS configuration:

listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="admin-secret" \
user_admin="admin-secret";

When you encrypt password and then encrypt user_admin, the resulting configuration looks like this (note the empty value for password):

listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="" \
user_admin=${securepass:/etc/secrets/secrets.properties:server.properties/listener.name.sasl_ssl.plain.sasl.jaas.config/org.apache.kafka.common.security.plain.PlainLoginModule/user_admin};
Solution

If you encounter any empty password strings after encrypting the JAAS configuration, follow these steps to fix the issue:

  1. Open the secrets.properties file located at /etc/secrets/secrets.properties.

  2. Find the key for the empty password in /etc/secrets/secrets.properties. For example, listener.name.sasl_ssl.plain.sasl.jaas.config/org.apache.kafka.common.security.plain.PlainLoginModule/password.

  3. Replace the empty string in the server.properties file with the tuple ${securepass:/path/to/secrets-remote.txt:key}. For example:

    listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="admin" \
    password=${securepass:/etc/secrets/secrets.properties:server.properties/listener.name.sasl_ssl.plain.sasl.jaas.config/org.apache.kafka.common.security.plain.PlainLoginModule/password} \
    user_admin=${securepass:/etc/secrets/secrets.properties:server.properties/listener.name.sasl_ssl.plain.sasl.jaas.config/org.apache.kafka.common.security.plain.PlainLoginModule/user_admin};
    

This issue will be fixed in the next Confluent CLI release.

Confluent Operator

New features

  • Secures the platform by enabling role-based access control (RBAC) across Confluent Platform for greenfield installations via the Operator, including partial automation of the full configuration and bootstrapping. For more information, see Role-based Access Control.
  • Secures the platform by enabling configuration of TLS for encryption and mTLS for authentication on all JMX and Jolokia endpoints associated with each CP component.
  • Integrates better with Kubernetes-native workflows by enabling users to set Kubernetes Pod Annotations on all Confluent Platform components managed by Operator.
  • Simplifies platform configuration management by allowing users to specify the Kubernetes Storage Class to be used for the Persistent Volumes associated with each Confluent Platform component deployed by the Operator, and automatically uses the default Storage Class if none is specified.

Upgrade considerations

  • For this release of Operator, the minimum supported Kubernetes version is now 1.13. Ensure that your underlying Kubernetes environment is upgraded and conforms to the documented supported environments for Operator.
  • By default, upgrading Operator to this latest release automatically performs a rolling update of all Confluent Platform clusters (for example, ZooKeeper) managed by the Operator. If you need greater control over when individual clusters are updated, and don’t want all clusters to be updated when you upgrade Operator, see the documentation for upgrading to Operator for 5.5.
  • Upgrading KSQL 5.4 to ksqlDB 5.5 in-place is not supported. See Upgrade Confluent Operator, Confluent Platform, and Helm for the steps to migrate KSQL 5.4 to ksqlDB 5.5.
  • The name of the ksqlDB Docker Hub image changed to cp-ksqldb-server-operator.
  • Confluent has deprecated support for Docker images based on the Debian and Alpine operating systems, and provides images based on the Red Hat Universal Base Image (UBI) instead. After July 1, 2020, Confluent will not release new versions of the Debian- and Alpine-based images. For more information about this change, see this Knowledge Base article (requires login). For instructions on how to use the UBI-based images with your Operator deployments, see Custom Docker images.

Notable enhancements

  • Adds support for Kubernetes 1.16 and 1.17 (and will likely support newer Kubernetes versions as they’re released).
  • Enables you to deploy Confluent Platform components via the Operator without the requirement to have cluster-level user permissions.
  • Enables you to install the Operator without the requirement to grant the Operator cluster-level service account permissions.
  • Improves the management of sensitive credentials, such as allowing users to specify a reference to a secret object for the Confluent license key, rather than passing the value in directly when running Helm commands.

Confluent Auto Data Balancer

  • Confluent Auto Data Balancer (ADB) now includes a --topics flag that will limit the rebalance plan to a single topic or set of topics. This will not find a globally optimal rebalance solution, but will allow for a faster rebalance on the topics selected.
  • ADB includes a --replica-placement-only flag that you can use to limit a rebalance plan to topics whose replica placement constraints have changed and are not satisfied by their current placement.
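
For example, a topic-scoped rebalance might look like the following sketch; only --topics and --replica-placement-only are the new flags described above, while the connection and throttle flags are illustrative and should be taken from your existing confluent-rebalancer invocations.

# Hedged sketch: rebalance only the listed topics (connection and throttle values are placeholders).
confluent-rebalancer execute \
  --zookeeper localhost:2181 \
  --metrics-bootstrap-server localhost:9092 \
  --throttle 10000000 \
  --topics clickstream,orders
# To limit the plan to topics whose replica placement constraints changed,
# use --replica-placement-only instead of --topics.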

For more information, see Auto Data Balancing.

Replicator

A new Replicator command line tool is added that can identify issues with a Replicator configuration and recommend remedial actions. For more information, see Verifying a Replicator configuration.

Community Features

Apache Kafka

Confluent Platform 5.5 includes Kafka 2.5. For a full list of Kafka KIPs, features, and bug fixes, see the Kafka release notes.

Notable KIPs include:

  • KIP-515 enables TLS authentication from brokers to Apache ZooKeeper™, and Kafka now ships with ZooKeeper 3.5.7.
  • KIP-447 significantly improves producer scalability when using exactly-once semantics.
  • KIP-541 added a fetch.max.bytes configuration for the broker that makes it easier for Kafka operators to enforce client behavior centrally across the cluster.
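
For KIP-541, the broker-side cap might be set as in this sketch (the value is illustrative):

# Hedged sketch: broker-wide limit on the maximum fetch size clients can request (KIP-541).
fetch.max.bytes=52428800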

Ansible

New features

  • Supports resilient and performant multi-site architectures of Confluent Platform by allowing users to configure a logical “rack” property per broker, unlocking the ability to leverage rack awareness across the platform. This feature is also now available on Confluent Platform 5.4.x.
  • Simplifies the management of multiple ksqlDB clusters and multiple Connect clusters within a single instance of the platform, all within a single Ansible inventory file. This feature is also now available on Confluent Platform 5.4.x.
  • Secures the platform by enabling role-based access control (RBAC) and Access Control Lists (ACLs) across Confluent Platform for greenfield installations via the Ansible Playbooks, including automation of the full configuration and bootstrapping. This release adds support for ACLs based on the Metadata Service (MDS), rather than ZooKeeper-based ACLs. For more information, see Role-based Access Control.

Upgrade considerations

  • Upgrading KSQL 5.4 to ksqlDB 5.5 in-place is not supported in Ansible. Follow the instructions in Upgrading ksqlDB to migrate previous versions of KSQL to ksqlDB 5.5.

Notable enhancements

  • Allows you to install Confluent Platform components individually, rather than all in a single command.
  • Allows you to specify the Linux users that Confluent Platform component processes should run as.

Schema Registry

  • Support for Protocol Buffers and JSON Schema has been added in Schema Registry and throughout Confluent Platform. A registration sketch appears after this list.

    • Schema Registry is now pluggable with the ability to add new schema types via schema plugins.
    • Confluent provides three out-of-the-box schema plugins for the Avro, Protobuf, and JSON Schema schema types.
    • Proto3 is supported for Protocol Buffers (proto2 is partially supported).
    • Draft 7 spec is supported for JSON Schema.
    • Other Confluent components upgraded to support Protobuf and JSON, including: Control Center, Confluent CLI, ksqlDB, Streams, Connect, REST Proxy, and Kafka Clients (Java, Python, .NET).

    For more information, see Schema Formats, Serializers, and Deserializers.

  • Schema Registry Security Plugin trial licenses are not supported with TLS-enabled ZooKeeper quorums.
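
As a registration sketch for the new schema formats above, the request below posts a Protobuf schema to Schema Registry using the schemaType field; the subject name, message definition, and listener address are placeholders.

# Hedged sketch: register a Protobuf schema under a placeholder subject.
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schemaType": "PROTOBUF", "schema": "syntax = \"proto3\"; message Order { int64 id = 1; }"}' \
  http://localhost:8081/subjects/orders-value/versions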

Clients

  • librdkafka now has complete Exactly-Once-Semantics (EOS) functionality, supporting the idempotent producer (since v1.0.0), a transaction-aware consumer (since v1.2.0) and full producer transaction support (in this release). This enables developers to create Exactly-Once applications with Kafka. For a complete transactional application example, see the librdkafka transactions example.
  • librdkafka 1.4 adds support for KIP-345 (static consumer group membership) and KIP-511 (report client software name and version to broker).
  • librdkafka 1.4 includes a number of fixes and enhancements. For more details, see the librdkafka release notes.

REST Proxy

Confluent REST Proxy API v3 preview incorporates a new v3 namespace that will enable full coverage of all Kafka APIs, including AdminClient APIs. In this release, support is added for common operations such as listing topics, creating topics, listing brokers, and describing topic configs. For a full list of supported operations, see Kafka REST Proxy API reference.
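
As a quick sketch of the v3 namespace, the requests below list clusters and then the topics in one cluster; the listener address and cluster ID are placeholders, and the full set of operations is in the API reference linked above.

# Hedged sketch: discover the cluster ID, then list its topics.
curl http://localhost:8082/v3/clusters
curl http://localhost:8082/v3/clusters/<cluster-id>/topics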

Connect

KIP-558 tracks the set of topics actively used by connectors in Connect. At runtime, it is not easy to know which topics a sink connector reads records from when a regex is used for topic selection, and for a source connector, bookkeeping of active topics previously required some sort of external tracking by the connector or its user.
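
As a sketch of how this tracking surfaces in the Connect REST API, the requests below query and reset a connector’s recorded topic set; the connector name and listener address are placeholders.

# Hedged sketch: inspect the topics a connector has actually used, then reset the recorded set.
curl http://localhost:8083/connectors/my-sink/topics
curl -X PUT http://localhost:8083/connectors/my-sink/topics/reset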

Connectors

JDBC Connector

  • Enables retries when a connection fails.
  • Adds an option to start the initial query at a specific timestamp using the timestamp.initial configuration property in the JDBC source connector (see the configuration sketch after this list).
  • Supports a suffix query config for DB2.
  • The JDBC source connector logs executed queries at the trace level.
  • Fixes OOM for Postgres by limiting the fetch size.
  • Packages the latest version, 42.2.10, of Postgres JDBC driver.
  • Upgrades derby to 10.14.2.0.
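
The abridged standalone configuration below sketches how timestamp.initial fits into a JDBC source setup; the connection URL, column name, topic prefix, and timestamp value are placeholders.

# Hedged, abridged sketch of a JDBC source configuration (connection details are placeholders).
name=jdbc-source-example
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://db.example.com:5432/inventory
mode=timestamp
timestamp.column.name=updated_at
# Start the initial query from this epoch-milliseconds timestamp.
timestamp.initial=1577836800000
topic.prefix=jdbc-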

Amazon S3 Sink Connector

  • Introduces configuration properties to easily define key and secret credentials per connector instance (see the configuration sketch after this list).
  • Adds AWS IAM Assume Role credentials provider to enable cross-account authorization.
  • Allows setting a compression level for gzip in the JSON and ByteArray formats.
  • Upgrades AWS SDK to 1.11.725.
  • CVE-2019-10172: Upgrades jackson-mapper-asl to 1.11.0
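
The abridged configuration below sketches per-connector credentials for the S3 sink; the aws.access.key.id and aws.secret.access.key property names are an assumption to verify against the connector’s configuration reference, and the bucket, region, and credential values are placeholders.

# Hedged, abridged sketch of per-connector S3 credentials (property names are an assumption; values are placeholders).
name=s3-sink-example
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=orders
s3.bucket.name=example-bucket
s3.region=us-east-1
aws.access.key.id=EXAMPLE_KEY_ID
aws.secret.access.key=EXAMPLE_SECRET
flush.size=1000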

Kafka Streams

  • KIP-150: Add Cogroup to the DSL: In previous versions, aggregating multiple streams into one could be complicated and error-prone. It generally requires you to group and aggregate all of the streams into tables, then make multiple outer join calls. Using the new co-group operator will clean up the syntax of your programs and reduce the number of state store invocations, which ultimately increases performance. For more information, see Co-group.
  • KIP-523: Add toTable() to the DSL: Sometimes you want to interpret a stream of events as a changelog and materialize a table over it. The new toTable can be applied to a stream and will materialize the latest value per key. Any null values will be interpreted as deletes for that key (tombstones). For more information, see Streams API changes.
  • KIP-535: Allow state stores to serve stale reads during rebalance: Previously, interactive queries (IQ) against state stores would fail during the time period where there is a rebalance in progress. This hurts the uptime of applications that depend on the ability to query Kafka Streams’ tables of state. Applications can now query any replica of a state store and observe how far each replica is lagging behind the primary. For more information, see Querying Stale Data From Standby.

ksqlDB

  • KSQL has been renamed to ksqlDB - “KSQL” has been renamed to “ksqlDB” in Confluent Platform 5.5.x. This rename affects the release package and Docker image names as well as artifact names. Run scripts and configuration parameter names have not been changed.

  • ksqlDB in Confluent Platform 5.5 is not backward compatible with Confluent Platform 5.4 - In-place upgrades on existing Confluent Platform deployments will not be possible. To run existing workloads using ksqlDB 5.5.x, a migration will be required. Migrations can be performed using this migration guide.

  • Topology viewer - Confluent Platform 5.5.x includes a topology viewer for ksqlDB, which visualizes your end-to-end workload as a directed acyclic graph of continuous queries. Individual nodes (continuous queries) in the topology may be inspected to determine node throughput, schema, and query definition.

  • Protobuf serialization format - A new serialization format has been added to support messages serialized using protocol buffers. The protocol buffers serialization format may be used by specifying PROTOBUF as the value_format (see the sketch at the end of this list).

  • Highly available pull queries - Pull queries can now be configured such that they continue to work during cluster rebalances and node failures. If a target node is down, a standby node will be chosen to obtain the pull query result from. ksqlDB can be configured to select the standby with the least amount of lag in these cases.

  • Primitive keys - Primitive row keys (of supported types) may be worked with directly instead of parsing them out of strings. You can now serialize ROWKEYs using the following types: INT, BIGINT, DOUBLE, and STRING.

  • JOIN expression support - Arbitrary expressions may now be used in JOIN clauses:

    SELECT * FROM s1 JOIN s2 ON s1.str = SUBSTRING(s2.str, 3) EMIT CHANGES;
    
  • PARTITION BY expression support - Arbitrary expressions may now be used in PARTITION BY clauses:

    SELECT x, y, z FROM s1 PARTITION BY MY_FUNCTION(x, 2) EMIT CHANGES;
    
  • New builtins - The following new builtin functions have been added:

    • COUNT_DISTINCT(v) - Aggregate function, computes the number of distinct values of v.
    • CUBE_EXPLODE(a) - Tabular function, outputs a row for each unique combination of elements in array a.
  • Native constructors for Arrays - Arrays can now be constructed in the SELECT expression list:

    SELECT ARRAY[1, 2] FROM s1 EMIT CHANGES;
    
  • Native constructors for Maps - Maps can now be constructed in the SELECT expression list:

    SELECT MAP(k1:=v1, k2:=v1*2) FROM s1 EMIT CHANGES;
    
  • Native constructors for Structs - Structs can now be constructed in the SELECT expression list:

    SELECT STRUCT(f1 := v1, f2 := v2) FROM s1 EMIT CHANGES;
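
To round out the list above, the statements below sketch the new Protobuf serialization format together with the COUNT_DISTINCT builtin; the topic, stream, and column names are placeholders.

-- Hedged sketch: a Protobuf-backed stream plus the new COUNT_DISTINCT aggregate.
CREATE STREAM pageviews_proto (user_id VARCHAR, page_id VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='PROTOBUF');

SELECT page_id, COUNT_DISTINCT(user_id) AS unique_users
  FROM pageviews_proto
  GROUP BY page_id
  EMIT CHANGES;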
    

Deprecation Warnings

  • Beginning with the next major release of Confluent Platform, Connect connectors will no longer be packaged natively with Confluent Platform. All formerly packaged connectors must be downloaded directly from Confluent Hub.

How to Download

Confluent Platform is available for download at https://www.confluent.io/download/. See the On-Premises Deployments section for detailed information.

Important

The Confluent Platform package includes Confluent Server by default and requires a confluent.license key in your server.properties file. Starting with Confluent Platform 5.4.x, the Confluent Server broker checks for a license during start-up. You must supply a license string in each broker’s properties file using the confluent.license property, as shown below:

confluent.license=LICENCE_STRING_HERE_NO_QUOTES

If you want to use the Kafka broker, download the confluent-community package. The Kafka broker is the default in all Debian, RHEL, and CentOS packages.

For more information about migrating to Confluent Server, see Migrate to Confluent Server.

To upgrade Confluent Platform to a newer version, check the Upgrade Confluent Platform documentation.

Supported Versions and Interoperability

For the supported versions and interoperability of Confluent Platform and its components, see Supported Versions and Interoperability.

Questions?

If you have questions regarding this release, feel free to reach out via the community mailing list or community Slack. Confluent customers are encouraged to contact our support directly.