Frequently Asked Questions for Confluent Cloud for Apache Flink

This topic provides answers to frequently asked questions about Confluent Cloud for Apache Flink®.

What is Confluent Cloud for Apache Flink?

Confluent Cloud for Apache Flink is a fully managed, cloud-native service for stream processing using Flink SQL. It enables you to process, analyze, and transform data in real time directly on your Confluent Cloud-managed Kafka clusters.
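For example, a single Flink SQL statement can aggregate events as they arrive. The following is a minimal sketch that assumes a hypothetical orders topic, which Confluent Cloud for Apache Flink exposes automatically as a table; $rowtime is the system column that Confluent Cloud provides for the Kafka record timestamp.

    -- Count orders per one-minute window, keyed on the Kafka
    -- record timestamp exposed as the $rowtime system column.
    SELECT window_start, window_end, COUNT(*) AS order_count
    FROM TABLE(
      TUMBLE(TABLE orders, DESCRIPTOR($rowtime), INTERVAL '1' MINUTE))
    GROUP BY window_start, window_end;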

How do I get started with Confluent Cloud for Apache Flink?

Get started by clicking SQL Workspaces in the Confluent Cloud Console. For more information, see Flink SQL Quick Start with Confluent Cloud Console.

You can also run the confluent flink shell command to start the Flink SQL shell. For more information, see Flink SQL Shell Quick Start.
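A typical invocation looks like the following sketch; the pool and environment IDs are placeholders, and the exact flags may vary by CLI version, so run confluent flink shell --help to confirm.

    confluent flink shell --compute-pool <compute-pool-id> --environment <environment-id>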

What is a compute pool?

A compute pool is a dedicated set of resources, measured in CFUs, that runs your Flink SQL statements. You must create a compute pool before running statements. Multiple statements can share a compute pool, and you can scale pools up or down as needed. For more information, see Compute Pools.
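You can create a compute pool in the Cloud Console or with the Confluent CLI. The following sketch assumes the confluent flink compute-pool create command with placeholder cloud, region, and size values; check confluent flink compute-pool create --help for the flags your CLI version supports.

    confluent flink compute-pool create my-pool \
      --cloud aws \
      --region us-east-1 \
      --max-cfu 10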

How is Confluent Cloud for Apache Flink billed?

Billing is based on the number of CFUs provisioned in your compute pools and the duration for which they are running. You are charged for the resources allocated, not per statement. For more information, see Billing.
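For example, a compute pool running at 5 CFUs for 10 hours accrues 50 CFU-hours. The price per CFU-hour depends on your cloud provider and region, so check the Billing page for current rates.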

What are the prerequisites for using Confluent Cloud for Apache Flink?

  • You need a Confluent Cloud account and an environment with Stream Governance enabled.
  • You must have the appropriate roles and permissions, for example, the FlinkDeveloper role to run statements.
  • You need access to at least one compute pool.

What sources and sinks are supported?

Confluent Cloud for Apache Flink supports reading from and writing to Kafka topics in your Confluent Cloud environment. In addition, you can use Confluent’s AI/ML features to perform searches on external tables, and you can use Confluent Tableflow to materialize streams as external tables.
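Because each Kafka topic is exposed as a table, reading and writing look like ordinary SQL. The following sketch assumes hypothetical clicks and checkout_clicks topics that already exist in your environment.

    -- Read from the clicks topic.
    SELECT user_id, url FROM clicks;

    -- Write a filtered stream to the checkout_clicks topic.
    INSERT INTO checkout_clicks
    SELECT user_id, url
    FROM clicks
    WHERE url LIKE '%/checkout%';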

What happens if my statement fails?

If a statement fails, an error message appears in the Cloud Console. You can view logs and metrics to diagnose the issue. After you resolve the underlying problem, you can restart the statement.
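You can also inspect statement status from the Confluent CLI. The following commands are a sketch with a placeholder statement name; run confluent flink statement --help to confirm the subcommands available in your CLI version.

    confluent flink statement list
    confluent flink statement describe <statement-name>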

How do I manage schema evolution?

Flink SQL integrates with Confluent’s Schema Registry. When reading from or writing to topics with Avro, Protobuf, or JSON Schema, Flink SQL uses the registered schemas and handles compatible schema evolution.
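For example, you can inspect the table schema that Flink derives from the registered schema of a hypothetical orders topic:

    -- Show the columns inferred from the topic's registered schema.
    DESCRIBE orders;

    -- Show the full inferred table definition.
    SHOW CREATE TABLE orders;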

How do I control access to Flink resources?

Access to Flink resources is managed using Role-Based Access Control (RBAC) in Confluent Cloud. Assign users and service accounts the appropriate roles, such as FlinkAdmin or FlinkDeveloper, to control what actions they can perform. For more information, see Grant Role-Based Access.
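For example, granting the FlinkDeveloper role to a service account with the Confluent CLI might look like the following sketch; the IDs are placeholders, and the exact scoping flags may vary by CLI version.

    confluent iam rbac role-binding create \
      --principal User:sa-abc123 \
      --role FlinkDeveloper \
      --environment env-xyz789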

How do I move my SQL statements to production?

To move your Flink SQL statements to production, follow best practices such as using service accounts, applying least-privilege permissions, and thoroughly testing your statements in a development environment before deploying them to production compute pools. For detailed guidance, see Best Practices for Moving SQL Statements to Production.

You can use GitHub Actions and Terraform to deploy your Flink SQL statements to production. For more information, see Deploy a Flink SQL Statement Using CI/CD.
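As a sketch, the Confluent Terraform provider includes a confluent_flink_statement resource. The following minimal, hypothetical example uses placeholder IDs and credentials; verify the resource's required arguments against the provider documentation before use.

    resource "confluent_flink_statement" "checkout_filter" {
      environment {
        id = "env-xyz789"    # placeholder environment ID
      }
      compute_pool {
        id = "lfcp-abc123"   # placeholder compute pool ID
      }
      principal {
        id = "sa-abc123"     # placeholder service account ID
      }
      statement = "INSERT INTO checkout_clicks SELECT user_id, url FROM clicks WHERE url LIKE '%/checkout%';"

      rest_endpoint = "https://flink.us-east-1.aws.confluent.cloud"  # placeholder regional endpoint
      credentials {
        key    = var.flink_api_key      # placeholder Flink API key
        secret = var.flink_api_secret   # placeholder Flink API secret
      }
    }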

Where can I get help or support?

If you have questions or need support, use the in-product help in the Confluent Cloud Console or visit the Flink documentation. You can also ask questions in the Confluent Community forums or contact Confluent Support if you have a support plan.