Security Controls in Confluent Cloud for Apache Flink

Confluent Cloud for Apache Flink® implements comprehensive security controls to protect your data and ensure isolation between tenants. These controls span network security, compute isolation, and secure execution of user-defined code.

This topic describes the security architecture and controls for:

  • Network security for public and private endpoints

  • Compute isolation and data plane security

  • User-defined functions (UDFs) and custom connector plugin security

Network security controls

Flink can execute statements against multiple cluster resources running in the same or different Confluent Cloud environments. Access patterns depend on the type of Flink endpoint deployed.

Public endpoints

Flink statements issued from public endpoints in a supported cloud service provider region can read and write Confluent Cloud cluster resources running on public endpoints in the same or other Confluent Cloud environments within the same Confluent Cloud organization.

Statements running on public endpoints cannot access Confluent Cloud cluster resources running on private endpoints.

Private endpoints

Flink statements issued from private endpoints can read and write Confluent Cloud cluster resources running on private endpoints inside the same and other Confluent Cloud environments within the same organization.

All Confluent Cloud environments accessed by Flink using private networking must be in the same cloud region.

Flink statements issued on private endpoints can read, but not write, Confluent Cloud cluster resources running on public endpoints in the same or a different Confluent Cloud organization. This policy mitigates data exfiltration risks.
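The endpoint rules above form a simple access matrix. The following Python sketch summarizes them; the function and parameter names are illustrative only and are not part of any Confluent API.

```python
# Conceptual model of the endpoint access rules described above.
# All names here are hypothetical; access is enforced by Confluent Cloud.

def allowed_operations(statement_endpoint: str, resource_endpoint: str,
                       same_org: bool, same_region: bool) -> set:
    """Return the operations a Flink statement may perform on a cluster resource."""
    if statement_endpoint == "public":
        # Public statements can reach public resources in the same org only.
        if resource_endpoint == "public" and same_org:
            return {"read", "write"}
        return set()
    if statement_endpoint == "private":
        # Private-to-private access requires the same org and the same region.
        if resource_endpoint == "private" and same_org and same_region:
            return {"read", "write"}
        # Private-to-public access is read-only, regardless of organization,
        # which limits data exfiltration paths.
        if resource_endpoint == "public":
            return {"read"}
    return set()
```

For example, a statement on a private endpoint targeting a public cluster in another organization is limited to `{"read"}`, while a public statement cannot reach private resources at all.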

For more information about private networking, see Private Networking with Confluent Cloud for Apache Flink.

Compute isolation and security controls

Flink implements a multi-layered security architecture with strict isolation between tenants at both the control plane and data plane levels.

Control plane architecture

The Flink control plane is fully multi-tenant and runs in-region across three Availability Zones (AZs). It manages resources, metadata, and operations for all customers.

Data plane architecture

The Flink data plane is currently scoped to a single compute pool. While underlying compute nodes are shared, the actual Flink processes run as dedicated, single-tenant pods that process data only for that specific pool.

The data plane enforces strict resource isolation and usage controls through cgroups, dedicated pods per compute pool, and single-tenant processing per pool.
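Cgroup-based isolation works by writing per-pod limits to kernel control files. The sketch below models that mechanism using the cgroup v2 interface; the paths and limit values are hypothetical, and these controls are managed internally by Confluent rather than being user-configurable.

```python
# Illustrative model of cgroup v2 resource caps for a dedicated pod.
# The cgroup root, pod naming, and limits are hypothetical examples.
from pathlib import Path

def pod_limit_writes(cgroup_root: str, pod_id: str,
                     memory_bytes: int, cpu_quota_us: int,
                     cpu_period_us: int) -> dict:
    """Build the cgroup v2 control-file writes that would cap a pod's usage."""
    base = Path(cgroup_root) / f"pod-{pod_id}"
    return {
        # Hard ceiling on memory; allocations beyond this are reclaimed or killed.
        str(base / "memory.max"): str(memory_bytes),
        # CPU bandwidth: quota microseconds per period (here, 2 CPUs' worth).
        str(base / "cpu.max"): f"{cpu_quota_us} {cpu_period_us}",
    }

limits = pod_limit_writes("/sys/fs/cgroup", "pool-abc",
                          memory_bytes=4 * 2**30,
                          cpu_quota_us=200_000, cpu_period_us=100_000)
```

The design point is that the kernel, not the tenant workload, enforces the ceiling, so a misbehaving statement in one pool cannot starve pods belonging to other pools on the same node.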

Component isolation

Compute pools

Each compute pool can span multiple compute nodes, each running multiple task managers that are managed by a single job manager.

Job managers

Job managers oversee resource management across applications and are scoped by compute pool. Job managers are designed for multi-tenant functionality but are currently deployed single-tenant.

Task managers

Task managers execute the actual data processing tasks and operate in a single-tenant capacity.

RocksDB state storage

Each task manager uses a dedicated, isolated Kubernetes volume for RocksDB persistence, enforcing data segregation on shared compute nodes. State is uploaded to object storage for resiliency as part of the 60-second checkpointing interval.
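The fixed checkpointing interval bounds recovery cost: in the worst case, roughly one interval of processing must be replayed after a failure. A minimal sketch of the schedule, with illustrative names only:

```python
# Illustrative fixed-interval checkpoint schedule. Confluent Cloud manages
# checkpointing internally; this only models the 60-second cadence.
CHECKPOINT_INTERVAL_S = 60.0

def checkpoint_times(start: float, end: float,
                     interval: float = CHECKPOINT_INTERVAL_S) -> list:
    """Timestamps at which state would be uploaded between start and end.
    Worst-case replay after a failure is at most one full interval."""
    times = []
    t = start + interval
    while t <= end:
        times.append(t)
        t += interval
    return times
```

For a three-minute window this yields uploads at 60, 120, and 180 seconds, so a failure just before an upload loses at most about 60 seconds of uncheckpointed progress, which the job then reprocesses from the last checkpoint.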

Object storage for state

Checkpointing state is stored in encrypted object storage. Authentication is performed by using regional IAM roles. Only task managers have access to object store state. Users can’t initiate checkpoints or access object storage through arbitrary object store paths.

For more information about compute pools, see Compute Pools in Confluent Cloud for Apache Flink.

User-defined functions and custom connector plugin security

UDFs and custom connector plugins execute in a secure, multi-zone environment with isolated cloud accounts and Kubernetes pods. Each zone applies strict security measures aligned with Confluent security practices.

Isolated execution environment

UDFs and plugins execute in dedicated cloud accounts under least privilege access with zero-trust relationships to Confluent Cloud production environments. Execution occurs in isolated, untrusted virtual machines with seccomp-jailing for system call filtering and strict network policies.
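Seccomp filtering restricts which system calls a process may invoke. Real seccomp filters are BPF programs installed in the kernel; the sketch below only mirrors the allow/deny decision, and the allowlist shown is hypothetical, not Confluent's actual policy.

```python
# Conceptual model of seccomp-style syscall filtering for untrusted code.
# The allowlist is a hypothetical example; the SECCOMP_RET_* names match
# the action constants defined by the Linux seccomp interface.
ALLOWED_SYSCALLS = {"read", "write", "mmap", "futex", "exit_group"}

def filter_syscall(name: str) -> str:
    """Return the filter action for a requested system call."""
    if name in ALLOWED_SYSCALLS:
        return "SECCOMP_RET_ALLOW"
    # Anything outside the allowlist (e.g. ptrace, mount) kills the process.
    return "SECCOMP_RET_KILL_PROCESS"
```

An allowlist (default-deny) posture is what makes this suitable for untrusted UDF code: any syscall the sandbox designer did not anticipate is rejected rather than permitted.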

Security monitoring

Workload protection agents monitor for security threats, and full audit logging is enabled. UDFs and plugins are stored with unique pod mappings, and pre-signed, time-limited URLs prevent unauthorized access.
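A pre-signed, time-limited URL embeds an expiry and a keyed signature, so possession of the link alone grants access only until it expires and only to the exact path that was signed. The following sketch uses a standard HMAC construction; the signing key, parameter names, and URL layout are hypothetical, and Confluent's actual format is internal.

```python
# Illustrative HMAC-based pre-signed URL scheme with a time limit.
# Key, parameter names, and path are hypothetical examples.
import hashlib
import hmac
from urllib.parse import urlencode

def presign(path: str, secret: bytes, expires_at: int) -> str:
    """Sign a path together with its expiry timestamp."""
    msg = f"{path}?expires={expires_at}".encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires_at, 'sig': sig})}"

def verify(url: str, secret: bytes, now: int) -> bool:
    """Accept only unexpired URLs whose signature matches the path and expiry."""
    path, query = url.split("?", 1)
    params = dict(p.split("=", 1) for p in query.split("&"))
    if now >= int(params["expires"]):
        return False  # link has expired
    msg = f"{path}?expires={params['expires']}".encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["sig"])
```

Because the expiry is covered by the signature, a client cannot extend a link's lifetime by editing the `expires` parameter, and the constant-time comparison avoids leaking signature bytes.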

For more information about UDFs, see User-defined Functions in Confluent Cloud for Apache Flink.