Kafka Streams
Kafka Streams is a client library for building applications and microservices, where the input and output data are stored in an Apache Kafka® cluster. It combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka’s server-side cluster technology.
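As a minimal sketch of what such an application looks like, the following Java program builds a topology that reads records from one topic, upper-cases each value, and writes the results to another topic. The application id, broker address (`localhost:9092`), and topic names are placeholder assumptions for illustration, not values prescribed by the documentation.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PassThroughApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The application id also serves as the consumer group id (placeholder value).
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pass-through-example");
        // Assumed broker address; point this at your own cluster.
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Define the processing topology: input-topic -> uppercase -> output-topic.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(value -> value.toUpperCase()).to("output-topic");

        // Start the Streams runtime; it runs as a normal Java process on the client side.
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Close cleanly on shutdown so state and offsets are committed.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Because the library is just a client-side dependency, this program is packaged and run like any other Java application; no processing code is deployed to the brokers.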
- Quick Start Guide
The Kafka Streams Quick Start demonstrates how to run your first Java application that uses the Kafka Streams library by showcasing a simple end-to-end data pipeline powered by Kafka.
- Streams Podcasts
Streaming Audio is a podcast from Confluent, the team that built Kafka. Host Tim Berglund (Senior Director of Developer Experience, Confluent) and guests unpack a variety of topics surrounding Kafka, event stream processing, and real-time data.
- Recommended Reading
- Streams API Screencasts
Watch the Intro to Streams API on YouTube.
Contents
- Introduction
- Kafka Streams Quick Start
- Tutorial: Introduction to Streaming Application Development
- Kafka Streams Demo Application
- Connecting Kafka Streams to Confluent Cloud
- Streams Concepts
- Kafka 101
- Stream
- Stream Processing Application
- Processor Topology
- Stream Processor
- Stateful Stream Processing
- Duality of Streams and Tables
- KStream
- KTable
- GlobalKTable
- Time
- Aggregations
- Joins
- Windowing
- Interactive Queries
- Processing Guarantees
- Out-of-Order Handling
- Out-of-Order Terminology
- Suggested Reading
- Streams Architecture
- Streams Code Examples
- Streams Developer Guide
- Writing a Streams Application
- Testing Streams Code
- Configuring a Streams Application
- Streams DSL
- Naming Kafka Streams DSL Topologies
- Optimizing Kafka Streams Topologies
- Processor API
- Data Types and Serialization
- Interactive Queries
- Memory Management
- Running Streams Applications
- Managing Streams Application Topics
- Streams Security
- Application Reset Tool
- Pipelining with Kafka Connect and Kafka Streams
- Overview
- Description of Data
- Demo Prerequisites
- Run the demo
- Example 1: Kafka console producer -> Key:String and Value:String
- Example 2: JDBC source connector with Single Message Transformations -> Key:Long and Value:JSON
- Example 3: JDBC source connector with SpecificAvro -> Key:String(null) and Value:SpecificAvro
- Example 4: JDBC source connector with GenericAvro -> Key:String(null) and Value:GenericAvro
- Example 5: Java client producer with SpecificAvro -> Key:Long and Value:SpecificAvro
- Example 6: JDBC source connector with Avro to ksqlDB -> Key:Long and Value:Avro
- Technical Notes
- Streams Operations
- Streams Upgrade Guide
- Upgrading from Confluent Platform 5.4.x to Confluent Platform 5.5.15
- Upgrading older Kafka Streams applications to Confluent Platform 5.5.x
- Streams API changes
- Streams API changes in Confluent Platform 5.3.0
- Streams API changes in Confluent Platform 5.2.0
- Streams API changes in Confluent Platform 5.1.0
- Streams API changes in Confluent Platform 5.0
- Streams API changes in Confluent Platform 4.1
- Streams API changes in Confluent Platform 4.0
- Streams API changes in Confluent Platform 3.3
- Streams API changes in Confluent Platform 3.2
- Streams API changes in Confluent Platform 3.1
- Streams FAQ
- General
- Is Kafka Streams a project separate from Kafka?
- Is Kafka Streams a proprietary library of Confluent?
- Do Kafka Streams applications run inside the Kafka brokers?
- Why does my Kafka Streams application use so much memory?
- What are the system dependencies of Kafka Streams?
- How do I migrate my older Kafka Streams applications to the latest Confluent Platform version?
- Which versions of Kafka clusters are supported by Kafka Streams?
- What programming languages are supported?
- Why is my application re-processing data from the beginning?
- Scalability
- Processing
- How do I keep my Streams application’s processing results from being cleaned up?
- Accessing record metadata such as topic, partition, and offset information?
- Difference between `map`, `peek`, `foreach` in the DSL?
- How to avoid data repartitioning if you know it’s not required?
- Serdes `config` method
- How can I replace RocksDB with a different store?
- Failure and exception handling
- Interactive Queries
- Security
- Troubleshooting and debugging
- Easier to interpret Java stacktraces?
- Visualizing topologies?
- Inspecting streams and tables?
- Invalid Timestamp Exception
- Why do I get an `IllegalStateException` when accessing record metadata?
- Why is `punctuate()` not called?
- Scala: compile error “no type parameter”, “Java-defined trait is invariant in type T”
- How can I convert a KStream to a KTable without an aggregation step?
- RocksDB behavior in 1-core environments
- Streams Javadocs