.. _streams_upgrade-guide:

Streams Upgrade Guide
=====================

Upgrading from |cp| 4.1.x (Kafka 1.1.x-cp1) to |cp| 5.0.0 (Kafka 2.0.0-cp1)
---------------------------------------------------------------------------

.. _streams_upgrade-guide_5.0.x-compatibility:

Compatibility
^^^^^^^^^^^^^

Kafka Streams applications built with |cp| 5.0.0 (Kafka 2.0.0-cp1) are forward and backward compatible with certain Kafka clusters.

* **Forward-compatible to Confluent Platform 5.0.0 clusters (Kafka 2.0.0-cp1):**
  Existing Kafka Streams applications built with |cp| 3.0.x (Kafka 0.10.0.x-cp1), |cp| 3.1.x (Kafka 0.10.1.x-cp2), |cp| 3.2.x (Kafka 0.10.2.x-cp1), |cp| 3.3.x (Kafka 0.11.0.x-cp1), |cp| 4.0.x (Kafka 1.0.x-cp1), or |cp| 4.1.x (Kafka 1.1.x-cp1) will work with upgraded Kafka clusters running |cp| 5.0.0 (Kafka 2.0.0-cp1).

* **Backward-compatible to older clusters down to Confluent Platform 3.1.x (Kafka 0.10.1.x-cp2):**
  New Kafka Streams applications built with |cp| 5.0.0 (Kafka 2.0.0-cp1) will work with older Kafka clusters running |cp| 3.1.x (Kafka 0.10.1.x-cp2), |cp| 3.2.x (Kafka 0.10.2.x-cp1), |cp| 3.3.x (Kafka 0.11.0.x-cp1), |cp| 4.0.x (Kafka 1.0.x-cp1), or |cp| 4.1.x (Kafka 1.1.x-cp1).
  However, if the exactly-once processing guarantee is required, your Kafka cluster needs to be upgraded to at least |cp| 3.3.x (Kafka 0.11.0.x-cp1).
  Note that the exactly-once feature is disabled by default, so a rolling bounce upgrade of your Streams application is possible if you don't enable this new feature explicitly.
  Kafka clusters running |cp| 3.0.x (Kafka 0.10.0.x-cp1) are *not* compatible with new |cp| 5.0.0 Kafka Streams applications.

.. note::

    As of |cp| 4.0.0 (Kafka 1.0.0-cp1), Kafka Streams requires message format 0.10 or higher.
    Thus, if you kept an older message format when upgrading your brokers to |cp| 3.1 (Kafka 0.10.1-cp1) or a later version, Kafka Streams |cp| 5.0.0 (Kafka 2.0.0-cp1) won't work.
    You will need to upgrade the message format to 0.10 before you upgrade your Kafka Streams application to |cp| 5.0.0 (Kafka 2.0.0-cp1) or newer.

Compatibility Matrix:

.. include:: compatibilityMatrix.rst

.. _streams_upgrade-guide_5.0.x-upgrade-apps:

Upgrading your Kafka Streams applications to |cp| 5.0.0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To make use of |cp| 5.0.0 (Kafka 2.0.0-cp1), you need to update the Kafka Streams dependency of your application to version ``2.0.0-cp1``, make minor code changes if necessary (details below), and then recompile your application.
For example, in your ``pom.xml`` file:

.. sourcecode:: xml

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
        <version>2.0.0-cp1</version>
    </dependency>

There are some :ref:`Streams API changes in Confluent Platform 5.0.0 <streams_upgrade-guide_5.0.x-api-changes>`; if your application uses any of them, you need to update your code accordingly.

.. note::

    As of |cp| 4.0.0 (Kafka 1.0.0-cp1) a topology regression was introduced when source ``KTable`` instances were changed to have changelog topics instead of re-using the source topic.
    As of |cp| 5.0.0 (Kafka 2.0.0-cp1), re-using the source topic as the changelog topic for source ``KTable`` instances has been reinstated, but it is optional and must be enabled by setting ``StreamsConfig.TOPOLOGY_OPTIMIZATION`` to ``StreamsConfig.OPTIMIZE`` (see the configuration sketch after the list below).

This brings up some different scenarios depending on what you are upgrading from and what you are upgrading to:

* If you are upgrading from ``KStreamBuilder`` on Kafka 1.0.x-cp1/1.1.x-cp1 to ``StreamsBuilder`` on Kafka 2.0.0-cp1, it's recommended to enable the optimization, as there are no changes to your topology.
  Please note that if you elect not to enable the optimization, there is a small window of time for possible data loss until the changelog topic contains one record per key.

* If you are upgrading from ``StreamsBuilder`` on Kafka 1.0.x-cp1/1.1.x-cp1 to Kafka 2.0.0-cp1, you can enable the optimization, but your topology will change, so you'll need to restart your application with a new application ID.
  Additionally, if you want to perform a rolling upgrade, it is recommended not to enable the optimization.
  If you elect not to enable the optimization, then no further changes are required.
  Note that when starting with a new application ID you can possibly end up reprocessing data, since the application ID has been changed.
  If you don't want to reprocess records, you'll need to create new output topics, so downstream users can cut over in a controlled fashion.
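For example, the optimization is enabled through the configuration properties before the application is started.
The following is only a minimal sketch, assuming ``builder`` is your ``StreamsBuilder``; the other required configs (such as ``application.id`` and ``bootstrap.servers``) are placeholders:

.. sourcecode:: java

    Properties props = new Properties();
    // ... application.id, bootstrap.servers, and other required configs ...

    // opt in to re-using the source topic as the changelog topic for source KTables
    props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);

    KafkaStreams streams = new KafkaStreams(builder.build(), props);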
.. _streams_upgrade-guide_5.0.x-api-changes:

API changes
^^^^^^^^^^^

New Streams configurations and public interfaces were added, and deprecated APIs were removed.

Skipped Records Metrics Refactored
""""""""""""""""""""""""""""""""""

Starting with |cp| 5.0.0, Kafka Streams does not report the ``skippedDueToDeserializationError-rate`` and ``skippedDueToDeserializationError-total`` metrics.
Deserialization errors, and all other causes of record skipping, are now accounted for in the pre-existing metrics ``skipped-records-rate`` and ``skipped-records-total``.
When a record is skipped, the event is now logged at WARN level.
Note that these metrics are mainly for monitoring unexpected events; if systematic issues cause too many unprocessable records to be skipped, and the resulting warning logs become burdensome, you should consider filtering out these unprocessable records instead of depending on record-skipping semantics.
For more details, see `KIP-274 `__.

As of right now, the potential causes of skipped records are:

* ``null`` keys in table sources.
* ``null`` keys in table-table inner/left/outer/right joins.
* ``null`` keys or values in stream-table joins.
* ``null`` keys or values in stream-stream joins.
* ``null`` keys or values in aggregations / reductions / counts on grouped streams.
* ``null`` keys in aggregations / reductions / counts on windowed streams.
* ``null`` keys in aggregations / reductions / counts on session-windowed streams.
* Errors producing results, when the configured ``default.production.exception.handler`` decides to ``CONTINUE`` (the default is to ``FAIL`` and throw an exception).
* Errors deserializing records, when the configured ``default.deserialization.exception.handler`` decides to ``CONTINUE`` (the default is to ``FAIL`` and throw an exception); see the configuration sketch after this list.
  This is the case previously captured in the ``skippedDueToDeserializationError`` metrics.
* Fetched records having a negative timestamp.
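For example, if you prefer to skip (and log) records that cannot be deserialized rather than failing the application, you could configure the built-in ``LogAndContinueExceptionHandler``; such skipped records are then counted in the metrics above.
A minimal sketch of the relevant setting:

.. sourcecode:: java

    Properties props = new Properties();

    // skip and log records that cannot be deserialized instead of failing the application;
    // these records are counted in skipped-records-rate and skipped-records-total
    props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
              LogAndContinueExceptionHandler.class);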
New Functions in Window Store Interface
"""""""""""""""""""""""""""""""""""""""

Confluent Platform now supports methods in ``ReadOnlyWindowStore`` that allow you to query the key-value pair of a single window.
If you have customized window store implementations of the above interface, you must update your code to implement the newly added method.
For more details, see `KIP-261 `__.

Simplified KafkaStreams Constructor
"""""""""""""""""""""""""""""""""""

The ``KafkaStreams`` constructor was simplified.
Instead of requiring the user to create a boilerplate ``StreamsConfig`` object, the constructor now directly accepts the ``Properties`` object that specifies the actual user configuration.

.. literalinclude:: upgrade-guide-5_0/upgrade-guide_kafka-streams.java
    :language: java

Support Dynamic Routing at Sink
"""""""""""""""""""""""""""""""

In this release you can now dynamically route records to Kafka topics.
More specifically, in both the lower-level ``Topology#addSink`` and higher-level ``KStream#to`` APIs, we have added variants that take a ``TopicNameExtractor`` instance instead of a specific ``String`` topic name.
For each record received from the upstream processor, the ``TopicNameExtractor`` dynamically determines which Kafka topic to write to, based on the record's key and value as well as the record context.
Note that all output Kafka topics are still considered user topics and hence must be pre-created.
Also, we have modified the ``StreamPartitioner`` interface to add the topic name parameter, since the topic name may not be known beforehand; users who have customized implementations of this interface need to update their code when upgrading their application.
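For example, the following sketch routes each record to a topic derived from its key.
The topic names used here are hypothetical, and all target topics must already exist:

.. sourcecode:: java

    KStream<String, String> events =
        builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()));

    // derive the output topic from the record key,
    // e.g. key "emea" is written to topic "events-emea"
    events.to((key, value, recordContext) -> "events-" + key);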
Support Message Headers
"""""""""""""""""""""""

In this release there is message header support in the Processor API.
In particular, we have added a new API, ``ProcessorContext#headers()``, which returns a ``Headers`` object that keeps track of the headers of the source topic's message that is being processed.
Through this object, users can also manipulate the headers map that is being propagated throughout the processor topology, for example via ``Headers#add(String key, byte[] value)`` and ``Headers#remove(String key)``.
When the Streams DSL is used, users can call ``process`` or ``transform``, in which they can also access the ``ProcessorContext`` to access and manipulate the message headers; if the user does not manipulate the headers, they are still preserved and forwarded while the record traverses the processor topology.
When the resulting record is sent to the sink topics, the preserved message headers are also encoded in the sent record.
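As a sketch, a custom processor could stamp every record with an additional header before forwarding it.
The processor name, header key, and header value below are made up for illustration:

.. sourcecode:: java

    import java.nio.charset.StandardCharsets;

    import org.apache.kafka.streams.processor.Processor;
    import org.apache.kafka.streams.processor.ProcessorContext;

    public class HeaderStampingProcessor implements Processor<String, String> {

        private ProcessorContext context;

        @Override
        public void init(final ProcessorContext context) {
            this.context = context;
        }

        @Override
        public void process(final String key, final String value) {
            // add a header to the record; it is propagated downstream
            // and written to the sink topic together with the record
            context.headers().add("processed-by", "my-streams-app".getBytes(StandardCharsets.UTF_8));
            context.forward(key, value);
        }

        @Override
        public void close() {}
    }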
KTable Now Supports Transform Values
""""""""""""""""""""""""""""""""""""

In this release another new API, ``KTable#transformValues``, was added.
For more information, see `KIP-292 `__.

Improved Windowed Serde Support
"""""""""""""""""""""""""""""""

We added the helper class ``WindowedSerdes`` that allows you to create time- and session-windowed serdes without needing to know the details of how windows are de/serialized.
The created window serdes wrap a user-provided serde for the inner key or value data type.
Furthermore, two new configs, ``default.windowed.key.serde.inner`` and ``default.windowed.value.serde.inner``, were added that allow you to specify the default inner key and value serdes for windowed types.
Note that these new configs only take effect if ``default.key.serde`` or ``default.value.serde`` specifies a windowed serde (either ``WindowedSerdes.TimeWindowedSerde`` or ``WindowedSerdes.SessionWindowedSerde``).
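For example, a time-windowed serde wrapping the built-in ``String`` serde can be created as follows (the topic name is hypothetical):

.. sourcecode:: java

    // serde for Windowed<String> keys, wrapping the String serde for the inner key type
    final Serde<Windowed<String>> windowedKeySerde = WindowedSerdes.timeWindowedSerdeFrom(String.class);

    // e.g. read a topic whose keys were written as time-windowed Strings
    KStream<Windowed<String>, Long> windowedCounts =
        builder.stream("windowed-word-counts", Consumed.with(windowedKeySerde, Serdes.Long()));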
Allow Timestamp Manipulation
""""""""""""""""""""""""""""

Using the Processor API, it is now possible to set the timestamp for output messages explicitly.
This change implies updates to the ``ProcessorContext#forward()`` method.
Some existing methods were deprecated and replaced by new ones.
In particular, it is no longer possible to send records to a downstream processor based on its index.

.. literalinclude:: upgrade-guide-5_0/upgrade-guide_processor-context.java
    :language: java

Public Test-Utils Artifact
""""""""""""""""""""""""""

Confluent Platform now ships with a ``kafka-streams-test-utils`` artifact that contains utility classes to unit test your Kafka Streams application.
See the :ref:`Testing Streams Code ` section for more details.

Scala API
"""""""""

Confluent Platform now ships with the Apache Kafka Scala API for Kafka Streams.
You can add the dependency for Scala 2.11 or 2.12 artifacts:

.. sourcecode:: xml

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams-scala_2.11</artifactId>
        <version>2.0.0-cp1</version>
    </dependency>

Deprecated APIs are Removed
"""""""""""""""""""""""""""

The following deprecated APIs are removed in |cp| 5.0.0:

#. **KafkaStreams#toString** no longer returns the topology and runtime metadata; to get topology metadata you can call ``Topology#describe()``, and to get thread runtime metadata you can call ``KafkaStreams#localThreadsMetadata`` (deprecated since |cp| 4.0.0).
   For detailed guidance on how to update your code please read :ref:`here`.
#. **TopologyBuilder** and **KStreamBuilder** are removed and replaced by ``Topology`` and ``StreamsBuilder``, respectively (deprecated since |cp| 4.0.0).
#. **StateStoreSupplier** is removed and replaced with ``StoreBuilder`` (deprecated since |cp| 4.0.0); the corresponding **Stores#create** and the **KStream, KTable, KGroupedStream** overloaded functions that use it have also been removed.
#. **KStream, KTable, KGroupedStream** overloaded functions that require serdes and other specifications explicitly are removed and replaced with simpler overloaded functions that use ``Consumed, Produced, Serialized, Materialized, Joined`` (deprecated since |cp| 4.0.0).
#. **Processor#punctuate**, **Transformer#punctuate**, **ValueTransformer#punctuate**, and **ProcessorContext#schedule(long)** are removed and replaced by ``ProcessorContext#schedule(long, PunctuationType, Punctuator)`` (deprecated since |cp| 4.0.0).
#. The second, ``boolean``-typed parameter **loggingEnabled** of **ProcessorContext#register** has been removed; you can now use ``StoreBuilder#withLoggingEnabled`` and ``#withLoggingDisabled`` to specify the behavior when creating the state store (deprecated since |cp| 3.3.0).
#. **KTable#writeAsText, #print, #foreach, #to, #through** are removed as their semantics are more confusing than useful; you can call ``KTable#toStream()`` followed by the corresponding ``KStream`` method instead for the same purpose (deprecated since |cp| 3.3.0).
#. **StreamsConfig#KEY_SERDE_CLASS_CONFIG, #VALUE_SERDE_CLASS_CONFIG, #TIMESTAMP_EXTRACTOR_CLASS_CONFIG** are removed and replaced with ``StreamsConfig#DEFAULT_KEY_SERDE_CLASS_CONFIG, #DEFAULT_VALUE_SERDE_CLASS_CONFIG, #DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG``, respectively (deprecated since |cp| 3.3.0).
#. **StreamsConfig#ZOOKEEPER_CONNECT_CONFIG** is removed because Streams no longer requires a |zk| dependency (deprecated since |cp| 3.2.0).

Full upgrade workflow
^^^^^^^^^^^^^^^^^^^^^

A typical workflow for upgrading Kafka Streams applications from |cp| 4.1.x to |cp| 5.0.0 has the following steps:

#. **Upgrade your application:** See the :ref:`upgrade instructions <streams_upgrade-guide_5.0.x-upgrade-apps>` above.
#. **Stop the old application:** Stop the old version of your application, i.e. stop all the application instances that are still running the old version of the application.
#. **Optionally, upgrade your Kafka cluster:** See the :ref:`kafka upgrade instructions `.
   *Note: if you want to use exactly-once processing semantics, upgrading your cluster to at least Confluent Platform 3.3.x is mandatory.*
#. **Start the upgraded application:** Start the upgraded version of your application, with as many instances as needed.
   By default, the upgraded application resumes processing its input data from the point where the old version was stopped (see previous step).

Upgrading older Kafka Streams applications to |cp| 5.0.0
---------------------------------------------------------

.. _streams_upgrade-guide_4.1.x-api-changes:

API changes (from |cp| 4.0 to |cp| 4.1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A few new Streams configurations and public interfaces were added in the |cp| 4.1.x release.

Changes in bin/kafka-streams-application-reset
""""""""""""""""""""""""""""""""""""""""""""""

Options were added to specify the offsets at which input topics should be reset, according to `KIP-171 `__.

Embedded Admin Client Configuration
"""""""""""""""""""""""""""""""""""

You can now customize the embedded admin client inside your Streams application, which is used to send all the administrative requests to Kafka brokers, such as internal topic creation.
This is done via the additional ``KafkaClientSupplier#getAdminClient(Map)`` interface; for example, users can provide their own ``AdminClient`` implementations to override the default ones in their integration testing.
In addition, users can also override the configs that are passed into ``KafkaClientSupplier#getAdminClient(Map)`` to configure the returned ``AdminClient``.
Such overridden configs can be specified via the ``StreamsConfig`` by adding the admin configs with the prefix as defined by ``StreamsConfig#adminClientPrefix(String)``.
Any configs that aren't admin client configs will be ignored.

For example:

.. sourcecode:: java

    Properties streamsProps = ...;

    // use retries=10 for the embedded admin client
    streamsProps.put(StreamsConfig.adminClientPrefix("retries"), 10);

.. _streams_upgrade-guide_4.0.x-api-changes:

API changes (from |cp| 3.3 to |cp| 4.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Kafka Streams and its API were improved and modified in the |cp| 4.0.x release.
All of these changes are backward compatible, thus it's not required to update the code of your Kafka Streams applications immediately.
However, some methods were deprecated and thus it is recommended to update your code eventually to allow for future upgrades.
In this section we focus on deprecated APIs.

Building and running a topology
"""""""""""""""""""""""""""""""

The two main classes to specify a topology, ``KStreamBuilder`` and ``TopologyBuilder``, were deprecated and replaced by ``StreamsBuilder`` and ``Topology``.
Note that both new classes are in package ``org.apache.kafka.streams`` and that ``StreamsBuilder`` does not extend ``Topology``, i.e., the class hierarchy is different now.
This change also affects ``KafkaStreams`` constructors, which now only accept a ``Topology``.
If you use ``StreamsBuilder`` you can obtain the constructed topology via ``StreamsBuilder#build()``.
The new classes have basically the same methods as the old ones to build a topology via DSL or Processor API.
However, some internal methods that were public in ``KStreamBuilder`` and ``TopologyBuilder``, but not part of the actual API, are no longer included in the new classes.

.. literalinclude:: upgrade-guide-4_0/upgrade-guide_builder.java
    :language: java

Describing topology and stream task metadata
""""""""""""""""""""""""""""""""""""""""""""

``KafkaStreams#toString()`` and ``KafkaStreams#toString(final String indent)``, which were previously used to retrieve the user-specified processor topology information as well as runtime stream task metadata, are deprecated in 4.0.0.
Instead, a new method of ``KafkaStreams``, namely ``localThreadsMetadata()``, was added; it returns an ``org.apache.kafka.streams.processor.ThreadMetadata`` object for each of the local stream threads, describing the runtime state of the thread as well as the metadata of its currently assigned tasks.
This information is helpful for debugging and monitoring your streams applications.
For retrieving the specified processor topology information, users can now call ``Topology#describe()``, which returns an ``org.apache.kafka.streams.TopologyDescription`` object containing the detailed description of the topology (DSL users need to call ``StreamsBuilder#build()`` to get the ``Topology`` object first).
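As a small sketch, assuming ``builder`` is a ``StreamsBuilder`` and ``streams`` is a running ``KafkaStreams`` instance:

.. sourcecode:: java

    // static description of the processor topology (DSL users build the Topology first)
    final Topology topology = builder.build();
    System.out.println(topology.describe());

    // runtime metadata for each local stream thread and its currently assigned tasks
    for (final ThreadMetadata threadMetadata : streams.localThreadsMetadata()) {
        System.out.println(threadMetadata);
    }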
Merging KStreams:
"""""""""""""""""

As mentioned above, ``KStreamBuilder`` was deprecated in favor of ``StreamsBuilder``.
Additionally, ``KStreamBuilder#merge(KStream...)`` was replaced by ``KStream#merge(KStream)``, and thus ``StreamsBuilder`` does not have a ``merge()`` method.
Note: instead of merging an arbitrary number of ``KStream`` instances into a single ``KStream`` as in the old API, the new ``#merge()`` method only accepts a single ``KStream`` and thus merges two ``KStream`` instances into one.
If you want to merge more than two ``KStream`` instances, you can call ``KStream#merge()`` multiple times.

.. literalinclude:: upgrade-guide-4_0/upgrade-guide_merge.java
    :language: java

Punctuation functions
"""""""""""""""""""""

The Processor API was extended to allow users to schedule ``punctuate`` functions either based on :ref:`event-time ` (i.e. ``PunctuationType.STREAM_TIME``) or *wall-clock-time* (i.e. ``PunctuationType.WALL_CLOCK_TIME``).
Before this, users could only schedule based on *event-time*, and hence the ``punctuate`` function was data-driven only.
As a result, the original ``ProcessorContext#schedule`` is deprecated in favor of a new overloaded function.
In addition, the ``punctuate`` function inside ``Processor`` is also deprecated, and is replaced by the newly added ``Punctuator#punctuate`` interface.

.. literalinclude:: upgrade-guide-4_0/upgrade-guide_schedule-punctuator.java
    :language: java

Streams Configuration
"""""""""""""""""""""

You can now override the configs that are used to create internal repartition and changelog topics.
You provide these configs via the ``StreamsConfig`` by adding the topic configs with the prefix as defined by ``StreamsConfig#topicPrefix(String)``.
Any properties in the ``StreamsConfig`` with the prefix will be applied when creating internal topics.
Any configs that aren't topic configs will be ignored.
If you are already using ``StateStoreSupplier`` or ``Materialized`` to provide configs for changelogs, then they will take precedence over those supplied in the config.

For example:

.. sourcecode:: java

    Properties streamsProps = ...;

    // use cleanup.policy=delete for internal topics
    streamsProps.put(StreamsConfig.topicPrefix("cleanup.policy"), "delete");

New classes for optional DSL parameters
"""""""""""""""""""""""""""""""""""""""

Several new classes were introduced, e.g., ``Serialized``, ``Consumed``, ``Produced``, etc., to enable us to reduce the overloads in the DSL.
These classes mostly have a static method ``with`` to create an instance, i.e., ``Serialized.with(Serdes.Long(), Serdes.String())``.

Scala users should be aware that they will need to surround ``with`` with backticks.

For example:

.. sourcecode:: scala

    // When using Scala: enclose "with" with backticks
    Serialized.`with`(Serdes.Long(), Serdes.String())

API changes (from |cp| 3.2 to |cp| 3.3)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Kafka Streams and its API were improved and modified since the release of |cp| 3.2.x.
All of these changes are backward compatible, thus it's not required to update the code of your Kafka Streams applications immediately.
However, some methods and configuration parameters were deprecated and thus it is recommended to update your code eventually to allow for future upgrades.
In this section we focus on deprecated APIs.

Streams Configuration
"""""""""""""""""""""

The following configuration parameters were renamed and their old names were deprecated.

* ``key.serde`` renamed to ``default.key.serde``
* ``value.serde`` renamed to ``default.value.serde``
* ``timestamp.extractor`` renamed to ``default.timestamp.extractor``

Thus, ``StreamsConfig#KEY_SERDE_CONFIG``, ``StreamsConfig#VALUE_SERDE_CONFIG``, and ``StreamsConfig#TIMESTAMP_EXTRACTOR_CONFIG`` were deprecated, too.

Additionally, the following method changes apply:

* method ``keySerde()`` was deprecated and replaced by ``defaultKeySerde()``
* method ``valueSerde()`` was deprecated and replaced by ``defaultValueSerde()``
* new method ``defaultTimestampExtractor()`` was added

Local timestamp extractors
""""""""""""""""""""""""""

The Streams API was extended to allow users to specify a per stream/table timestamp extractor.
This simplifies the usage of different timestamp extractor logic for different streams/tables.
Before, users needed to apply an ``if-then-else`` pattern within the default timestamp extractor to apply different logic to different input topics.
The old behavior introduced unnecessary dependencies and thus limited code modularity and code reuse.

To enable the new feature, the methods ``KStreamBuilder#stream()``, ``KStreamBuilder#table()``, ``KStreamBuilder#globalTable()``, ``TopologyBuilder#addSource()``, and ``TopologyBuilder#addGlobalStore()`` have new overloads that allow you to specify a "local" timestamp extractor that is solely applied to the corresponding input topics.

.. literalinclude:: upgrade-guide-3_3/upgrade-guide_timestamp-extractor.java
    :language: java

KTable Changes
""""""""""""""

The following methods have been deprecated on the ``KTable`` interface:

* ``void foreach(final ForeachAction action)``
* ``void print()``
* ``void print(final String streamName)``
* ``void print(final Serde keySerde, final Serde valSerde)``
* ``void print(final Serde keySerde, final Serde valSerde, final String streamName)``
* ``void writeAsText(final String filePath)``
* ``void writeAsText(final String filePath, final Serde keySerde, final Serde valSerde)``
* ``void writeAsText(final String filePath, final String streamName)``
* ``void writeAsText(final String filePath, final String streamName, final Serde keySerde, final Serde valSerde)``

These methods have been deprecated in favor of using the :ref:`Interactive Queries API `.
If you want to query the current content of the state store backing the ``KTable``, use the following approach:

* Make a call to ``KafkaStreams.store(String storeName, QueryableStoreType queryableStoreType)``, followed by a call to ``ReadOnlyKeyValueStore.all()``, to iterate over the keys of the ``KTable``.
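For example (a minimal sketch; the store name ``counts-store`` and the key and value types are placeholders, and ``streams`` is a running ``KafkaStreams`` instance):

.. sourcecode:: java

    final ReadOnlyKeyValueStore<String, Long> store =
        streams.store("counts-store", QueryableStoreTypes.keyValueStore());

    // iterate over all key-value pairs currently in the KTable's state store
    try (final KeyValueIterator<String, Long> iterator = store.all()) {
        while (iterator.hasNext()) {
            final KeyValue<String, Long> entry = iterator.next();
            System.out.println(entry.key + " -> " + entry.value);
        }
    }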
If you want to view the changelog stream of the ``KTable``, then you could do something along the lines of the following:

* Call ``KTable.toStream()``, then call ``KStream#print()``.

API changes (from |cp| 3.1 to |cp| 3.2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Kafka Streams and its API were improved and modified since the release of |cp| 3.1.x.
Some of these changes are breaking changes that require you to update the code of your Kafka Streams applications.
In this section we focus on only these breaking changes.

Handling Negative Timestamps and Timestamp Extractor Interface
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""

Kafka Streams behavior with regard to invalid (i.e., negative) timestamps was improved.
By default you will still get an exception on an invalid timestamp.
However, you can reconfigure your application to react more gracefully to invalid timestamps, which was not possible before.

Even if you do not use a custom timestamp extractor, you need to recompile your application code, because the ``TimestampExtractor`` interface was changed in an incompatible way.

The internal behavior of Kafka Streams with regard to negative timestamps was changed.
Instead of raising an exception if the timestamp extractor returns a negative timestamp, the corresponding record is dropped silently and not processed.
This makes it possible to process topics in which only a few records cannot provide a valid timestamp.

Furthermore, the ``TimestampExtractor`` interface was changed and now has one additional parameter.
This parameter provides a timestamp that can be used, for example, to return an estimated timestamp if no valid timestamp can be extracted from the current record.

The old default timestamp extractor ``ConsumerRecordTimestampExtractor`` was replaced with ``FailOnInvalidTimestamp``, and two new extractors, which both extract a record's built-in timestamp, were added (``LogAndSkipOnInvalidTimestamp`` and ``UsePreviousTimeOnInvalidTimestamp``).
The new default extractor (``FailOnInvalidTimestamp``) raises an exception in case of a negative built-in record timestamp, such that Kafka Streams' default behavior is kept (i.e., fail-fast on negative timestamps).
The two newly added extractors allow you to handle negative timestamps more gracefully by implementing a log-and-skip or timestamp-estimation strategy.

.. literalinclude:: upgrade-guide-3_2/upgrade-guide_timestamp-extractor.java
    :language: java

Metrics
"""""""

If you provide custom metrics by implementing interface ``StreamsMetrics``, you need to update your code, as the interface has many new methods that allow you to register finer-grained metrics than before.
More details are available in `KIP-114 `__.

.. literalinclude:: upgrade-guide-3_2/upgrade-guide_metrics.java
    :language: java

Scala
"""""

Starting with 0.10.2.0, if your application is written in Scala, you may need to declare types explicitly in order for the code to compile.
The :cp-examples:`StreamToTableJoinScalaIntegrationTest|src/test/scala/io/confluent/examples/streams/StreamToTableJoinScalaIntegrationTest.scala` has an example where the types of return variables are explicitly declared.

API changes (from |cp| 3.0 to |cp| 3.1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Stream grouping and aggregation
"""""""""""""""""""""""""""""""

Grouping (i.e., repartitioning) and aggregation of the ``KStream`` API was significantly changed to be aligned with the ``KTable`` API.
Instead of using a single method with many parameters, grouping and aggregation is now split into two steps.
First, a ``KStream`` is transformed into a ``KGroupedStream``, which is a repartitioned copy of the original ``KStream``.
Afterwards, an aggregation can be performed on the ``KGroupedStream``, resulting in a new ``KTable`` that contains the result of the aggregation.

Thus, the methods ``KStream#aggregateByKey(...)``, ``KStream#reduceByKey(...)``, and ``KStream#countByKey(...)`` were replaced by ``KStream#groupBy(...)`` and ``KStream#groupByKey(...)``, which return a ``KGroupedStream``.
While ``KStream#groupByKey(...)`` groups on the current key, ``KStream#groupBy(...)`` sets a new key and re-partitions the data to build groups on the new key.
The new class ``KGroupedStream`` provides the corresponding methods ``aggregate(...)``, ``reduce(...)``, and ``count(...)``.

.. literalinclude:: upgrade-guide-3_1/upgrade-guide_grouping.java
    :language: java

Auto Repartitioning
"""""""""""""""""""

Previously, when performing ``KStream#join(...)``, ``KStream#outerJoin(...)``, or ``KStream#leftJoin(...)`` operations after a key-changing operation, i.e., ``KStream#map(...)``, ``KStream#flatMap(...)``, or ``KStream#selectKey(...)``, the developer was required to call ``KStream#through(...)`` to repartition the mapped ``KStream``.
This is no longer required.
Repartitioning now happens automatically for all join operations.

.. literalinclude:: upgrade-guide-3_1/upgrade-guide_repartitioning.java
    :language: java

TopologyBuilder
"""""""""""""""

Two public method signatures have been changed on ``TopologyBuilder``: ``TopologyBuilder#sourceTopics(String applicationId)`` and ``TopologyBuilder#topicGroups(String applicationId)``.
These methods no longer take ``applicationId`` as a parameter; instead you should call ``TopologyBuilder#setApplicationId(String applicationId)`` before calling one of these methods.

.. literalinclude:: upgrade-guide-3_1/upgrade-guide_topology-builder.java
    :language: java

.. _streams_upgrade-guide_dsl-store-names:

DSL: New parameters to specify state store names
""""""""""""""""""""""""""""""""""""""""""""""""

Apache Kafka ``0.10.1`` introduces :ref:`Interactive Queries `, which allow you to directly query state stores of a Kafka Streams application.
This new feature required a few changes to the operators in the DSL.
Starting with Kafka ``0.10.1``, state stores must always be "named", which includes both explicitly used state stores (e.g., defined by the user) and internally used state stores (e.g., created behind the scenes by operations such as ``count()``).
This naming is a prerequisite to make state stores queryable.
As a result of this, the previous "operator name" is now the state store name.
This change affects ``KStreamBuilder#table(...)`` and *windowed* aggregates ``KGroupedStream#count(...)``, ``#reduce(...)``, and ``#aggregate(...)``.

.. literalinclude:: upgrade-guide-3_1/upgrade-guide_operator-names.java
    :language: java

Windowing
"""""""""

The API for ``JoinWindows`` was improved.
It is no longer possible to define a window with a default size (of zero).
Furthermore, windows are not named anymore.
Rather, any such naming is now done for state stores.
See the section :ref:`DSL: New parameters to specify state store names <streams_upgrade-guide_dsl-store-names>` above.

.. literalinclude:: upgrade-guide-3_1/upgrade-guide_windows.java
    :language: java