Compute Pools in Confluent Cloud for Apache Flink

A compute pool in Confluent Cloud for Apache Flink® represents a set of compute resources bound to a region that runs your SQL statements. All statements that use a compute pool share its resources.

Default compute pools

When default compute pools are enabled and you create a workspace or run a Flink SQL statement, Confluent Cloud for Apache Flink automatically creates and manages a default compute pool in your environment and region. These default pools are:

  • Automatic: Created on demand the first time you use Flink in a given environment and region.

  • Shared: All users in the environment use the default pool unless they specify an explicit pool. Although users share pools, statements remain visible only to the users who submit them and to administrators.

  • Visible: The compute pools list in the Cloud Console shows default pools with a “default” label. You can view metrics and filter by default pools in the UI and API.

  • Elastic: Confluent Cloud for Apache Flink scales default pools automatically based on your workload needs, up to a maximum of 50 CFUs by default. Users with the OrganizationAdmin role can modify this limit.

You can start running SQL statements immediately without creating a compute pool.

Note

Users with the OrganizationAdmin role can enable or disable default compute pools at the organization level. When disabled, all users must create compute pools manually to run Flink SQL statements. For more information, see Manage Compute Pools.

User-created compute pools

You can create compute pools manually for advanced use cases such as these:

  • Workload isolation: Separate production workloads from development or ad hoc queries to prevent resource contention.

  • Budgeting: Set specific CFU limits for different teams or projects. Statements within a compute pool can’t use more than the configured maximum number of CFUs.

  • Security isolation: For fine-grained access control of statements, separate different workloads, for example, by team, and grant the FlinkDeveloper role at the pool level.

For more information, see Manage Compute Pools in Confluent Cloud for Apache Flink.

Note

Users with the FlinkAdmin role or higher can create a default compute pool manually by using the --default-pool flag in the Confluent CLI, the default_pool parameter in Terraform, or the spec.default_pool field in the API. This enables you to pre-provision a default pool with specific configurations before users start running statements.
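
As a sketch, pre-provisioning pools with the Confluent CLI might look like the following. The pool names, cloud, region, and CFU values are placeholders; the `--default-pool` flag comes from the note above, while the `--cloud`, `--region`, and `--max-cfu` flags should be verified against `confluent flink compute-pool create --help` for your CLI version.

```shell
# Create a user-managed compute pool (all values are illustrative).
confluent flink compute-pool create my-pool \
  --cloud aws \
  --region us-east-1 \
  --max-cfu 10

# Pre-provision a default pool (requires FlinkAdmin or higher),
# using the --default-pool flag described in the note above.
confluent flink compute-pool create my-default-pool \
  --cloud aws \
  --region us-east-1 \
  --default-pool
```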

Compute pool properties

CFUs measure the capacity of a compute pool. For more information, see CFUs. Compute pools expand and shrink automatically based on the resources that the statements using them require. A compute pool without any running statements scales down to zero. You configure the maximum size of a compute pool during creation.

  • Default compute pools: Have a platform-managed maximum of 50 CFUs by default. Users with the OrganizationAdmin role can modify this limit.

  • User-created compute pools: Have a maximum size that you configure during creation.

Each compute pool has a configurable maximum capacity, up to 50 CFUs per pool. All statements in the pool share this capacity, and autoscaling adjusts each statement’s CFU usage within that pool. A single statement can scale up to the full pool capacity when it is the only statement running there, which means the practical maximum for an individual statement is 50 CFUs.

To run larger overall workloads, you can:

  • Create multiple compute pools and distribute statements across them, for example using a 1:1 mapping between critical statements and pools when you want to dedicate the full 50 CFUs to a single job.

  • Optimize statement design and query patterns so that each statement stays within the 50-CFU per-pool and per-statement limits, which balances performance with stability.

If you have workloads that appear to require more capacity than these limits, consider engaging Confluent Support or your account team to review your use case, sizing, and statement design before making architectural decisions.

Note

Compute pools with up to 1,000 CFU capacity are available as a Limited Availability feature for customers with large Flink job fleets. To participate in the Limited Availability Program, sign up at Scale Apache Flink® compute pools to 1,000 CFUs.

The maximum size of a single job remains 50 CFUs. The total CFU usage of all concurrently running jobs in a pool cannot exceed the pool’s CFU capacity. For example, a 1,000 CFU pool can run up to twenty 50-CFU jobs concurrently, or a larger number of smaller jobs, as long as the total CFUs in use do not exceed 1,000.
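
The capacity arithmetic above can be checked directly; the values below are the illustrative ones from the example, not limits read from any API.

```shell
# Worked example of the pool-capacity arithmetic above (illustrative values).
POOL_CAPACITY_CFU=1000   # Limited Availability pool capacity
MAX_STATEMENT_CFU=50     # per-statement (per-job) ceiling

# Maximum number of jobs running concurrently at the full 50-CFU ceiling:
echo $((POOL_CAPACITY_CFU / MAX_STATEMENT_CFU))   # prints 20

# A mixed workload fits as long as total CFU usage stays within capacity:
IN_USE=$((50 + 50 + 10 + 5 + 5))                  # CFUs used per running job
echo $((POOL_CAPACITY_CFU - IN_USE))              # prints 880 (remaining headroom)
```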

With higher CFU limits available during Limited Availability, you can consolidate multiple smaller compute pools into a single larger pool to simplify architecture and management. For more information about moving statements between pools, see Update metadata for a statement.

A compute pool is provisioned in a specific region. The statements using a compute pool can only read and write Apache Kafka® topics in the same region as the compute pool.

Isolation and resource sharing

All statements that use the same compute pool compete for resources. Although Confluent Cloud’s Autopilot aims to provide each statement with the resources it needs, this might not always be possible, particularly when the compute pool has reached its maximum CFU capacity.

Default compute pools are suitable for most use cases, including development, testing, and many production workloads. However, if you have statements with different latency and availability requirements, you should create separate compute pools manually. For example, separate ad hoc exploration queries from mission-critical, long-running production queries to prevent resource contention. Because statements can affect each other, you should share compute pools only between statements with comparable requirements.

Manage compute pools

You can create and manage compute pools by using the Cloud Console, the Confluent CLI, the REST API, or Terraform.

Authorization

When default compute pools are enabled, all users can use Flink. Confluent Cloud for Apache Flink creates a default pool if one doesn’t exist in the region and environment the user needs, and the user can run and manage their own Flink statements.

For non-default pools, the following rules apply.

To create, update, or delete user-created compute pools, you need the FlinkAdmin, EnvironmentAdmin, or OrganizationAdmin role.

You can grant the FlinkDeveloper role at the organization level or at the environment level. You can also grant it at the compute-pool level to restrict a user’s access to specific pools, which is useful when you want to limit which pools a user can access instead of granting environment-wide permissions.
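
A pool-scoped grant might be sketched with the CLI as follows. The principal, environment, region, and pool IDs are placeholders, and the scoping flags after `--role` are assumptions that may differ by CLI version; check `confluent iam rbac role-binding create --help` for the exact syntax.

```shell
# Sketch: grant FlinkDeveloper scoped to a single compute pool.
# IDs are placeholders; the --flink-region and --resource flags are
# assumptions, not confirmed syntax.
confluent iam rbac role-binding create \
  --principal User:u-abc123 \
  --role FlinkDeveloper \
  --environment env-xyz789 \
  --flink-region aws.us-east-1 \
  --resource "ComputePool:lfcp-000000"
```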

The FlinkFunctionDeveloper role enables users to create and manage user-defined function (UDF) artifacts and configure external connectivity, but does not provide access to compute pools or statements.

For more information, see Grant Role-Based Access in Confluent Cloud for Apache Flink.

Move statements between compute pools

You can move a statement from one compute pool to another. This can be useful if you’re close to maxing out the resources in one pool. To move a running statement, you must stop the statement, change its compute pool, then restart the statement.
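
The stop, re-assign, and restart steps above might look like the following with the Confluent CLI. The statement name and pool ID are placeholders; `confluent flink statement stop` and `resume` are documented subcommands, but the `--compute-pool` flag on `update` is an assumption to verify with `confluent flink statement --help` for your CLI version.

```shell
# 1. Stop the running statement.
confluent flink statement stop my-statement

# 2. Point the statement at the new compute pool
#    (the --compute-pool flag is an assumption).
confluent flink statement update my-statement --compute-pool lfcp-000000

# 3. Resume the statement so it runs in the new pool.
confluent flink statement resume my-statement
```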