Compute Pools in Confluent Cloud for Apache Flink

A compute pool in Confluent Cloud for Apache Flink® represents a set of compute resources, bound to a region, that runs your SQL statements. The resources provided by a compute pool are shared among all statements that use it.

The capacity of a compute pool is measured in CFUs. Compute pools expand and shrink automatically based on the resources required by the statements using them. A compute pool without any running statements scales down to zero. The maximum size of a compute pool is configured during creation.

Each compute pool has a configurable maximum capacity of up to 50 CFUs. All statements in the pool share this capacity, and autoscaling adjusts each statement's CFU usage within it. A single statement can scale up to the full pool capacity when it is the only statement running in the pool, so the practical maximum for an individual statement is 50 CFUs.
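The maximum capacity is set when the pool is created. As a sketch, a pool capped at 10 CFUs can be created with the Confluent CLI's `confluent flink compute-pool create` command; the pool name, cloud, and region below are placeholders:

```shell
# Create a compute pool named "my-pool" with a 10-CFU cap.
# Cloud and region are placeholders; substitute your own values.
confluent flink compute-pool create my-pool \
  --cloud aws \
  --region us-east-1 \
  --max-cfu 10
```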

To run larger overall workloads, you can:

  • Create multiple compute pools and distribute statements across them, for example using a 1:1 mapping between critical statements and pools when you want to dedicate the full 50 CFUs to a single job.

  • Optimize statement design and query patterns so they stay within the 50-CFU per-pool or per-statement envelope, which is chosen to balance performance with cluster stability.

If you have workloads that appear to require more capacity than these limits, consider engaging Confluent Support or your account team to review your use case, sizing, and statement design before making architectural decisions.

Note

Compute pools with up to 1,000 CFU capacity are available as a Limited Availability feature for customers with large Flink job fleets. If you would like to participate in the Limited Availability Program, sign up at Scale Apache Flink® compute pools to 1,000 CFUs.

The maximum size of a single job remains 50 CFUs. The total CFU usage of all concurrently running jobs in a pool cannot exceed the pool’s CFU capacity. For example, a 1,000 CFU pool can run up to twenty 50-CFU jobs concurrently, or a larger number of smaller jobs, as long as the total CFUs in use do not exceed 1,000.
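The packing arithmetic in this example is simple division of the pool capacity by the 50-CFU per-statement cap; a one-line sanity check:

```shell
# A 1,000-CFU pool divided by the 50-CFU per-statement cap gives the
# maximum number of full-size statements that can run concurrently.
echo $(( 1000 / 50 ))   # prints 20
```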

With higher CFU limits available during Limited Availability, you can consolidate multiple smaller compute pools into a single larger pool to simplify architecture and management. For information on moving statements between pools, see Update metadata for a statement.

A compute pool is provisioned in a specific region. The statements using a compute pool can only read and write Apache Kafka® topics in the same region as the compute pool.

Compute pools fulfill two roles:

  • Workload Isolation: Statements in different compute pools are isolated from each other.

  • Budgeting: Statements within a compute pool can’t use more than the configured maximum number of CFUs.

Compute pools and isolation

All statements using the same compute pool compete for resources. Although Confluent Cloud's Autopilot aims to provide each statement with the resources it needs, this might not always be possible, in particular when the maximum resources of the compute pool are exhausted.

To avoid situations in which statements with different latency and availability requirements compete for resources, Confluent recommends using separate compute pools for different use cases, for example, ad-hoc exploration vs. mission-critical, long-running queries. Because statements may affect each other, Confluent recommends sharing compute pools only between statements with comparable requirements.

Manage compute pools

You can create and manage compute pools with Confluent tools such as the Confluent Cloud Console, the Confluent CLI, the REST API, and the Confluent Terraform provider.
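For example, the Confluent CLI can list, inspect, and resize pools. The pool ID below is a placeholder, and the exact flags may vary by CLI version; run each command with `--help` to confirm:

```shell
# List the compute pools in your current environment.
confluent flink compute-pool list

# Show details for a specific pool (placeholder ID).
confluent flink compute-pool describe lfcp-xxxxxx

# Raise the pool's maximum capacity to 20 CFUs.
confluent flink compute-pool update lfcp-xxxxxx --max-cfu 20
```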

Authorization

You must be authorized to create, update, or delete (FlinkAdmin) or to use (FlinkDeveloper) a compute pool. For more information, see Grant Role-Based Access in Confluent Cloud for Apache Flink.

Move statements between compute pools

You can move a statement from one compute pool to another, which is useful when a pool is close to its maximum capacity. To move a running statement, stop the statement, change its compute pool, and then restart the statement.
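As a sketch of the stop/change/restart sequence with the Confluent CLI: the statement name and target pool ID below are placeholders, and the `--compute-pool` flag on `statement update` is an assumption — check `confluent flink statement update --help` for the exact flag in your CLI version:

```shell
# Stop the running statement before changing its pool.
confluent flink statement stop my-statement

# Assign the statement to a different compute pool
# (--compute-pool flag is an assumption; verify with --help).
confluent flink statement update my-statement --compute-pool lfcp-yyyyyy

# Restart the statement in its new pool.
confluent flink statement resume my-statement
```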