.. _streams_upgrade-guide:
Upgrade Guide
=============
**Table of Contents**
.. contents::
:local:
Upgrading from CP 3.2.x (Kafka 0.10.2.x-cp1) to CP 3.3.x (Kafka 0.11.0.x-cp1)
-----------------------------------------------------------------------------
.. _streams_upgrade-guide_3.3.x-compatibility:
Compatibility
^^^^^^^^^^^^^
Kafka Streams applications built with CP 3.3.x (Kafka 0.11.0.x-cp1) are forward and backward compatible with certain Kafka clusters.
* **Forward-compatible to CP 3.3.x clusters (Kafka 0.11.0.x-cp1):**
Existing Kafka Streams applications built with CP 3.0.x (Kafka 0.10.0.x-cp1), CP 3.1.x (Kafka 0.10.1.x-cp2), or CP 3.2.x (Kafka 0.10.2.x-cp1)
will work with upgraded Kafka clusters running CP 3.3.x (Kafka 0.11.0.x-cp1).
* **Backward-compatible to CP 3.1.x and CP 3.2.x clusters (Kafka 0.10.1.x-cp2 and 0.10.2.x-cp1):**
New Kafka Streams applications built with CP 3.3.x (Kafka 0.11.0.x-cp1) will work with older Kafka clusters running CP 3.1.x (Kafka 0.10.1.x-cp2) or CP 3.2.x (Kafka 0.10.2.x-cp1).
The only limitation is the newly added exactly-once processing guarantee, which requires upgrading your Kafka cluster to CP 3.3.x (Kafka 0.11.0.x-cp1).
Note that the exactly-once feature is disabled by default, so a rolling bounce upgrade of your Streams application is possible as long as you don't enable this new feature explicitly (see the configuration sketch below).
Kafka clusters running CP 3.0.x (Kafka 0.10.0.x-cp1), however, are *not* compatible with new CP 3.3.x Kafka Streams applications.
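For reference, enabling the new guarantee after a cluster upgrade is a single configuration change. A minimal sketch (the application id is an illustrative assumption):

.. sourcecode:: java

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");  // illustrative id
    // Exactly-once is disabled by default ("at_least_once"); enabling it
    // requires brokers running CP 3.3.x (Kafka 0.11.0.x-cp1).
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);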
Compatibility Matrix:
.. include:: compatibilityMatrix.rst
.. _streams_upgrade-guide_3.3.x-upgrade-apps:
Upgrading your Kafka Streams applications to CP 3.3.x
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To make use of CP 3.3.x (Kafka 0.11.0.x-cp1), you just need to update the Kafka Streams dependency of your application
to use the version number ``0.11.0.0-cp1``, and then recompile your application.
For example, in your ``pom.xml`` file:
.. sourcecode:: xml

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>0.11.0.0-cp1</version>
    </dependency>
API changes
^^^^^^^^^^^
Kafka Streams and its API were improved and modified since the release of CP 3.2.x.
All of these changes are backward compatible, so you do not need to update the code of your Kafka Streams applications immediately.
However, some methods and configuration parameters were deprecated, and we recommend updating your code eventually to allow for future upgrades.
In this section we focus on deprecated APIs.
Streams Configuration
"""""""""""""""""""""
The following configuration parameters were renamed and their old names were deprecated.
* ``key.serde`` renamed to ``default.key.serde``
* ``value.serde`` renamed to ``default.value.serde``
* ``timestamp.extractor`` renamed to ``default.timestamp.extractor``
Thus, the constants ``StreamsConfig#KEY_SERDE_CLASS_CONFIG``, ``StreamsConfig#VALUE_SERDE_CLASS_CONFIG``, and ``StreamsConfig#TIMESTAMP_EXTRACTOR_CLASS_CONFIG`` were deprecated, too (a configuration sketch follows the list below).
Additionally, the following method changes apply:
* method ``keySerde()`` was deprecated and replaced by ``defaultKeySerde()``
* method ``valueSerde()`` was deprecated and replaced by ``defaultValueSerde()``
* new method ``defaultTimestampExtractor()`` was added
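As a minimal sketch, migrating an existing configuration to the new names might look as follows; it assumes the renamed constants ``DEFAULT_KEY_SERDE_CLASS_CONFIG``, ``DEFAULT_VALUE_SERDE_CLASS_CONFIG``, and ``DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG`` introduced in Kafka 0.11.0.x:

.. sourcecode:: java

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.processor.FailOnInvalidTimestamp;

    Properties props = new Properties();
    // Deprecated: props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    // Deprecated: props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
    // Deprecated: props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, FailOnInvalidTimestamp.class);
    props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, FailOnInvalidTimestamp.class);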
Local timestamp extractors
""""""""""""""""""""""""""
The Streams API was extended to allow users to specify a per stream/table timestamp extractor.
This simplifies the usage of different timestamp extractor logic for different streams/tables.
Before, users needed to apply an ``if-then-else`` pattern within the default timestamp extractor to apply different logic to different input topics.
The old behavior introduced unnecessary dependencies and thus limited code modularity and code reuse.
To enable the new feature, the methods ``KStreamBuilder#stream()``, ``KStreamBuilder#table()``, ``KStreamBuilder#globalTable()``,
``TopologyBuilder#addSource()``, and ``TopologyBuilder#addGlobalStore()`` have new overloads that allow you to specify a "local" timestamp
extractor that is applied solely to the corresponding input topics.
.. literalinclude:: upgrade-guide-3_3/upgrade-guide_timestamp-extractor.java
:language: java
KTable Changes
""""""""""""""
The following methods have been deprecated on the ``KTable`` interface:
* ``void foreach(final ForeachAction<? super K, ? super V> action)``
* ``void print()``
* ``void print(final String streamName)``
* ``void print(final Serde<K> keySerde, final Serde<V> valSerde)``
* ``void print(final Serde<K> keySerde, final Serde<V> valSerde, final String streamName)``
* ``void writeAsText(final String filePath)``
* ``void writeAsText(final String filePath, final Serde<K> keySerde, final Serde<V> valSerde)``
* ``void writeAsText(final String filePath, final String streamName)``
* ``void writeAsText(final String filePath, final String streamName, final Serde<K> keySerde, final Serde<V> valSerde)``
These methods have been deprecated in favor of using the :ref:`Interactive Queries API `.
If you want to query the current content of the state store backing the ``KTable``, use the following approach:
* Make a call to ``KafkaStreams.store(String storeName, QueryableStoreType<T> queryableStoreType)`` followed by a call to ``ReadOnlyKeyValueStore.all()`` to iterate over the keys of a ``KTable``.
If you want to view the changelog stream of the ``KTable`` then you could do something along the lines of the following:
* Call ``KTable.toStream()`` then call ``KStream#print()``.
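As a minimal sketch of the state-store approach (the store name ``my-ktable-store``, the running ``streams`` instance, and the key/value types are illustrative assumptions):

.. sourcecode:: java

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    // `streams` is a running KafkaStreams instance whose topology contains
    // a KTable backed by the state store "my-ktable-store".
    ReadOnlyKeyValueStore<String, Long> store =
        streams.store("my-ktable-store", QueryableStoreTypes.<String, Long>keyValueStore());

    // Iterate over the current content of the KTable; the iterator must be closed.
    try (KeyValueIterator<String, Long> iterator = store.all()) {
        while (iterator.hasNext()) {
            KeyValue<String, Long> entry = iterator.next();
            System.out.println(entry.key + " -> " + entry.value);
        }
    }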
Full upgrade workflow
^^^^^^^^^^^^^^^^^^^^^
A typical workflow for upgrading Kafka Streams applications from CP 3.2.x to CP 3.3.x has the following steps:
#. **Upgrade your application:** See :ref:`upgrade instructions <streams_upgrade-guide_3.3.x-upgrade-apps>` above.
#. **Stop the old application:** Stop the old version of your application, i.e. stop all the application instances
that are still running the old version of the application.
#. **Optional, upgrade your Kafka cluster:** See :ref:`kafka upgrade instructions `.
*Note: if you want to use exactly-once processing semantics, upgrading your cluster to CP 3.3.x is mandatory.*
#. **Start the upgraded application:** Start the upgraded version of your application, with as many instances as needed.
By default, the upgraded application will resume processing its input data from the point when the old version was stopped (see previous step).
Upgrading older Kafka Streams applications to CP 3.3.x
------------------------------------------------------
It is also possible to upgrade CP 3.0.x or CP 3.1.x applications to CP 3.3.x.
Some of the API improvements introduced in CP 3.1 and CP 3.2 are breaking changes that require you to update the code of your Kafka Streams applications.
In the following two sections we focus on only these breaking changes.
API changes (from CP 3.1 to CP 3.2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Kafka Streams and its API were improved and modified since the release of CP 3.1.x.
Some of these changes are breaking changes that require you to update the code of your Kafka Streams applications.
In this section we focus on only these breaking changes.
Handling Negative Timestamps and Timestamp Extractor Interface
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Kafka Streams behavior with regard to invalid (i.e., negative) timestamps was improved.
By default you will still get an exception on an invalid timestamp.
However, you can reconfigure your application to react more gracefully to invalid timestamps, which was not possible before.
Even if you do not use a custom timestamp extractor, you need to recompile your application code,
because the ``TimestampExtractor`` interface was changed in an incompatible way.
The internal behavior of Kafka Streams with regard to negative timestamps was changed.
If the timestamp extractor returns a negative timestamp, the corresponding record is now dropped silently and not processed, instead of an exception being raised.
This makes it possible to process topics in which only a few records cannot provide a valid timestamp.
Furthermore, the ``TimestampExtractor`` interface was changed and now has one additional parameter.
This parameter provides a timestamp that can be used, for example, to return an estimated timestamp
if no valid timestamp can be extracted from the current record.
The old default timestamp extractor ``ConsumerRecordTimestampExtractor`` was replaced with
``FailOnInvalidTimestamp``, and two new extractors which both extract a record's built-in timestamp
were added (``LogAndSkipOnInvalidTimestamp`` and ``UsePreviousTimeOnInvalidTimestamp``).
The new default extractor (``FailOnInvalidTimestamp``) raises an exception in case of a negative built-in
record timestamp such that Kafka Streams' default behavior is kept (i.e., fail-fast on negative timestamp).
The two newly added extractors allow you to handle negative timestamps more gracefully by implementing
a log-and-skip or timestamp-estimation strategy, respectively.
.. literalinclude:: upgrade-guide-3_2/upgrade-guide_timestamp-extractor.java
:language: java
Metrics
"""""""
If you provide custom metrics by implementing the ``StreamsMetrics`` interface, you need to update your code, as the
interface has many new methods that allow you to register finer-grained metrics than before.
More details are available in KIP-114.
.. literalinclude:: upgrade-guide-3_2/upgrade-guide_metrics.java
:language: java
Scala
"""""
Starting with 0.10.2.0, if your application is written in Scala, you may need to declare types explicitly in order for
the code to compile. The
:cp-examples:`StreamToTableJoinScalaIntegrationTest|src/test/scala/io/confluent/examples/streams/StreamToTableJoinScalaIntegrationTest.scala`
has an example where the types of return variables are explicitly declared.
API changes (from CP 3.0 to CP 3.1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Stream grouping and aggregation
"""""""""""""""""""""""""""""""
Grouping (i.e., repartitioning) and aggregation of the ``KStream`` API was significantly changed to be aligned with the ``KTable`` API.
Instead of using a single method with many parameters, grouping and aggregation is now split into two steps.
First, a ``KStream`` is transformed into a ``KGroupedStream`` that is a repartitioned copy of the original ``KStream``.
Afterwards, an aggregation can be performed on the ``KGroupedStream``, resulting in a new ``KTable`` that contains the result of the aggregation.
Thus, the methods ``KStream#aggregateByKey(...)``, ``KStream#reduceByKey(...)``, and ``KStream#countByKey(...)`` were replaced by ``KStream#groupBy(...)`` and ``KStream#groupByKey(...)`` which return a ``KGroupedStream``.
While ``KStream#groupByKey(...)`` groups on the current key, ``KStream#groupBy(...)`` sets a new key and re-partitions the data to build groups on the new key.
The new class ``KGroupedStream`` provides the corresponding methods ``aggregate(...)``, ``reduce(...)``, and ``count(...)``.
.. literalinclude:: upgrade-guide-3_1/upgrade-guide_grouping.java
:language: java
Auto Repartitioning
"""""""""""""""""""
Previously, when performing ``KStream#join(...)``, ``KStream#outerJoin(...)``, or ``KStream#leftJoin(...)`` operations after a key-changing operation, i.e.,
``KStream#map(...)``, ``KStream#flatMap(...)``, or ``KStream#selectKey(...)``, the developer was required to call ``KStream#through(...)`` to repartition the mapped ``KStream``.
This is no longer required: repartitioning now happens automatically for all join operations.
.. literalinclude:: upgrade-guide-3_1/upgrade-guide_repartitioning.java
:language: java
TopologyBuilder
"""""""""""""""
Two public method signatures have been changed on ``TopologyBuilder``: ``TopologyBuilder#sourceTopics(String applicationId)`` and ``TopologyBuilder#topicGroups(String applicationId)``.
These methods no longer take ``applicationId`` as a parameter; instead, you should call ``TopologyBuilder#setApplicationId(String applicationId)`` before calling either of these methods.
.. literalinclude:: upgrade-guide-3_1/upgrade-guide_topology-builder.java
:language: java
.. _streams_upgrade-guide_dsl-store-names:
DSL: New parameters to specify state store names
""""""""""""""""""""""""""""""""""""""""""""""""
Apache Kafka ``0.10.1`` introduces :ref:`Interactive Queries `,
which allow you to directly query state stores of a Kafka Streams application.
This new feature required a few changes to the operators in the DSL.
Starting with Kafka ``0.10.1``, state stores must always be "named",
which includes both explicitly used state stores (e.g., defined by the user)
and internally used state stores (e.g., created behind the scenes by operations such as ``count()``).
This naming is a prerequisite to make state stores queryable.
As a result of this, the previous "operator name" is now the state store name.
This change affects ``KStreamBuilder#table(...)`` and *windowed* aggregates ``KGroupedStream#count(...)``, ``#reduce(...)``, and ``#aggregate(...)``.
.. literalinclude:: upgrade-guide-3_1/upgrade-guide_operator-names.java
:language: java
Windowing
"""""""""
The API for ``JoinWindows`` was improved.
It is no longer possible to define a window with a default size (of zero).
Furthermore, windows are no longer named.
Rather, any such naming is now done for state stores
(see section :ref:`DSL: New parameters to specify state store names <streams_upgrade-guide_dsl-store-names>` above).
.. literalinclude:: upgrade-guide-3_1/upgrade-guide_windows.java
:language: java