Streaming Agents with Confluent Intelligence in Confluent Cloud
Confluent Intelligence provides Streaming Agents to create intelligent streaming workflows.
Note
Streaming Agents are available as an Open Preview feature in Confluent Cloud.
A Preview feature is a Confluent Cloud component that is being introduced to gain early feedback from developers. Preview features can be used for evaluation and non-production testing purposes or to provide feedback to Confluent. The warranty, SLA, and Support Services provisions of your agreement with Confluent do not apply to Preview features. Confluent may discontinue providing preview releases of the Preview features at any time in Confluent's sole discretion.
Streaming Agents on Confluent Cloud enable you to build, deploy, and orchestrate event-driven agents natively on Flink. Embedded in data streams, streaming agents monitor and act on what’s happening in the business in real time to power intelligent context-aware automation.
Streaming Agents use tools to interact with external systems, perform actions, or retrieve information as part of an AI workflow. By invoking tools, agents extend their capabilities beyond simple data processing, enabling more complex and dynamic workflows in streaming applications.
Streaming Agents provide a comprehensive platform for building AI-powered streaming applications, with high-level abstractions, debugging capabilities, and robust agent orchestration that make it easier to build, test, and deploy intelligent streaming workflows.
Streaming Agents include these capabilities:
Function call agent: High-level agent definition that abstracts away orchestration logic
Tool Calling with MCP: Enable agents to use the right tools at the right time with Model Context Protocol (MCP) integration
Agent Runtime: Robust execution engine with MCP integration and tool calling
Replayability: First-class debugging and testing capabilities for agent sessions
Function call agent
The function call agent provides a declarative way to define AI agents by using familiar constructs:
Role: Define what the agent does through prompts.
Capabilities: Specify tools the agent can use (functions, MCP servers).
Model: Reference to the large language model (LLM) that powers the agent’s reasoning.
This abstraction eliminates the need to manually chain SQL operators or Table API calls, making it much easier to express dynamic reasoning loops and build modular, testable agents.
The function call agent on Confluent Cloud has these benefits:
Intuitive agent definition using SQL DDL statements
Automatic orchestration of reasoning and tool invocation loops
Native integration with Flink’s event-driven runtime
Support for both function-based and MCP-based tools
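For example, a minimal agent definition might look like the following sketch. The exact CREATE AGENT clause names and options are illustrative and may differ from the actual syntax, and support_agent, support_model, route_ticket, and lookup_customer are hypothetical placeholder names.

```sql
-- Hypothetical sketch of a function call agent definition.
-- Clause names and options are illustrative; all object names are placeholders.
CREATE AGENT support_agent
  USING MODEL support_model                  -- LLM that powers the agent's reasoning
  PROMPT 'You are a support triage agent. Classify each ticket and call the right tool.'
  TOOLS (route_ticket, lookup_customer)      -- function-based or MCP-based tools
  WITH (
    'max_iterations' = '5'                   -- illustrative bound on the reasoning/tool-call loop
  );
```

The role is carried by the prompt, the capabilities by the tool list, and the reasoning by the referenced model; the orchestration loop itself is handled by the agent runtime rather than by hand-written SQL operators.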
Tool calling with MCP
Diagram: Agent calling tools with Confluent Intelligence
Streaming Agents invoke the right tools at the right time by using MCP integration, which brings tool calling into data streams for fast, reliable, and context-aware automation.
MCP Client Support: Use MCP client support in Flink for contextual tool invocation
Flexible Tool Definition: Define tools in an MCP server or as UDFs
Traceability and Auditability: All tool interactions are logged for complete visibility
Contextual Decision Making: Agents select appropriate tools based on real-time business context
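As a rough sketch, tools might be registered as follows. The option keys and the MCP connection reference are assumptions for illustration; route_ticket, lookup_customer, ticket_mcp, and LOOKUP_CUSTOMER are placeholder names.

```sql
-- Hypothetical sketch of tool registration. Option keys are illustrative;
-- `ticket_mcp` refers to a separately created connection to an MCP server.
CREATE TOOL route_ticket
  WITH (
    'type' = 'mcp',                     -- tool exposed by an MCP server
    'connection' = 'ticket_mcp'         -- connection holding the server endpoint and credentials
  );

CREATE TOOL lookup_customer
  WITH (
    'type' = 'function',                -- tool backed by a registered UDF
    'function.name' = 'LOOKUP_CUSTOMER'
  );
```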
Replayability
Replayability makes debugging and testing AI agents a first-class feature by capturing every agent interaction in real time:
Event Traceability: Complete timeline of events, decisions, and tool calls
Safe Replay: Test new agent versions against historical data without side effects
Comparative Analysis: Compare outputs between different agent versions
The replayability feature on Confluent Cloud has these benefits:
Full visibility into agent behavior and decision-making
Safe testing of agent changes against real-world event streams
Time-travel debugging capabilities
A/B testing
Regression validation
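For example, a new agent version could be replayed over historical input and compared with the recorded output of the previous version. This is a sketch only: the AI_RUN_AGENT argument list, the table names (tickets, agent_output_v1, agent_output_v2), the agent names, and the scan hint are assumptions for illustration.

```sql
-- Sketch: re-run a new agent version over historical events and compare its
-- responses with the recorded output of the previous version.
-- Table names and the AI_RUN_AGENT signature are illustrative placeholders.
INSERT INTO agent_output_v2
SELECT
  ticket_id,
  AI_RUN_AGENT('support_agent_v2', description) AS response
FROM tickets /*+ OPTIONS('scan.startup.mode' = 'earliest-offset') */;  -- assumes a Kafka-backed table

-- Comparative analysis: line up v1 and v2 responses for the same events.
SELECT
  v1.ticket_id,
  v1.response AS v1_response,
  v2.response AS v2_response
FROM agent_output_v1 v1
JOIN agent_output_v2 v2 ON v1.ticket_id = v2.ticket_id
WHERE v1.response IS DISTINCT FROM v2.response;
```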
Agent runtime
The agent runtime provides a robust execution engine that handles:
Tool Integration: Seamless integration with MCP servers and Flink functions
State Management: Maintains conversation state so agents can recover from errors and resume conversations
Error Handling: Robust error handling and retry mechanisms
Observability: Comprehensive logging and metrics for agent execution
Scalability: Scalable execution across Flink clusters
Architecture overview
Streaming Agents build on the Confluent Cloud for Apache Flink infrastructure. The system processes streaming events through agent workflows, capturing all interactions for debugging and replay capabilities.
Key components include:
Agent Runtime: Orchestrates iterative reasoning loops and tool execution
Model Inference: Handles LLM interactions and reasoning
Tool Integration: Manages function-based and MCP-based tools
Session Store: Maintains conversation state and context
Replay Engine: Enables debugging and testing capabilities
Scaling and multi-agent systems
Streaming Agents scale to power a wide range of use cases without requiring you to reinvent your architecture. The platform supports multiple architectural patterns for complex agent workflows:
- Event-Driven Architecture
Decoupled architecture built on an immutable event log, with robust security controls
Real-time, performant processing on a fully managed, cloud-native service
120+ pre-built connectors for seamless data integration
- Integration Ecosystem
Data Sources: Browser, mobile, telemetry, data warehouses, data lakes, SaaS apps
Stream Processing: Event-driven processing with governance and security
Data Sinks: Data warehouses, data lakes, CRM, CDP, apps, and microservices
AI Services: Tools (MCP), LLMs, vector stores, and external services
Use cases
Streaming Agents enable a wide range of intelligent streaming applications.
- Customer service automation
Real-time customer query processing
Automated ticket routing and resolution
Multi-agent workflows for complex issues
- Financial services
Real-time fraud detection and prevention
Automated trading decisions
Risk assessment and compliance monitoring
- IoT and edge computing
Intelligent sensor data processing
Predictive maintenance workflows
Automated response to equipment failures
- Content processing
Real-time content moderation
Automated content categorization
Dynamic content personalization
Model support
Streaming Agents support the following models:
Anthropic
Gemini
OpenAI
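The model referenced by an agent is registered as a Flink model resource and then named in the agent definition. The following sketch shows the general shape of such a definition; support_model and openai-connection are placeholder names, and the exact option keys can vary by provider.

```sql
-- Sketch: register an LLM as a Flink model resource that agents can reference.
-- `support_model` and `openai-connection` are placeholders; the connection is
-- created separately with the provider endpoint and API key, and option keys
-- differ for Anthropic, Gemini, and OpenAI.
CREATE MODEL support_model
  INPUT (prompt STRING)
  OUTPUT (response STRING)
  WITH (
    'provider' = 'openai',
    'task' = 'text_generation',
    'openai.connection' = 'openai-connection'
  );
```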
Getting started
To get started with Streaming Agents:
Define Tools: Create function-based or MCP-based tools by using the CREATE TOOL statement.
Create Agents: Define agents with models, prompts, and tools by using the CREATE AGENT statement.
Execute Agents: Run agents on streaming data by using the AI_RUN_AGENT function, as shown in the sketch after this list.
Debug and Replay: Use replay capabilities to test and debug agent behavior.
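The execution step might look like the following sketch. The AI_RUN_AGENT argument list is an assumption for illustration, and support_agent, tickets, and description are placeholder names.

```sql
-- Sketch of step 3: invoke an agent on each event of a streaming table.
-- The AI_RUN_AGENT signature and all names are illustrative placeholders.
SELECT
  ticket_id,
  AI_RUN_AGENT('support_agent', description) AS triage_result
FROM tickets;
```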
Quick start
Use the Streaming Agents Quickstart repo to build your first Streaming Agent in minutes.