Deploy and Manage Statements in Confluent Manager for Apache Flink

Statements are Confluent Manager for Apache Flink (CMF) resources that represent SQL queries, including their configuration, properties, and potential results. SQL queries must use the Flink SQL syntax. Statements are created within an Environment and must be linked to a Compute Pool.

A Statement has access to all catalogs, databases, and tables that its Environment has access to. An Environment can also define default Flink configuration properties, which take precedence over properties specified directly on the Statement. The Compute Pool referenced by a Statement provides the specification and configuration of the Flink cluster that executes the Statement's SQL query.
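The precedence rule above can be sketched as a simple merge in which the Environment's defaults win over properties set directly on the Statement. This is only an illustration of the override behavior; the property keys below are hypothetical examples, not actual CMF configuration keys.

```python
def effective_properties(statement_props: dict, environment_defaults: dict) -> dict:
    """Merge a Statement's properties with its Environment's defaults.

    Per the precedence described above, the Environment's values replace
    any conflicting values set on the Statement.
    """
    merged = dict(statement_props)
    merged.update(environment_defaults)  # environment defaults win on conflict
    return merged

# Hypothetical property keys, for illustration only.
statement_props = {"parallelism.default": "2", "pipeline.name": "orders-enrichment"}
environment_defaults = {"parallelism.default": "4"}

print(effective_properties(statement_props, environment_defaults))
# → {'parallelism.default': '4', 'pipeline.name': 'orders-enrichment'}
```

Here the Environment's parallelism.default (4) replaces the value the Statement asked for (2), while properties the Environment does not set pass through unchanged.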

CMF handles four types of statements differently:

Statements that read catalog metadata, such as SHOW TABLES.

These statements are immediately executed by CMF without creating a Flink deployment. They are typically used in ad-hoc, interactive scenarios.

Table operation statements (CREATE TABLE, ALTER TABLE, DROP TABLE).

These statements allow you to create, customize, and drop tables backed by Kafka topics. Table operations are executed by CMF without creating a Flink deployment. For details, see Table Operations in Confluent Manager for Apache Flink.

Interactive SELECT statements.

These statements are executed on Flink clusters and collect results that can be retrieved from CMF via the Statement Results endpoint. SELECT statements are typically used in ad-hoc, interactive scenarios to explore data or develop production statements.

Detached INSERT INTO statements.

These statements are executed on Flink clusters and write results into a table (backed by a Kafka topic). INSERT INTO statements are typically used to deploy data pipeline jobs in production scenarios.
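As an illustration of the production case, a detached INSERT INTO Statement could be described with a JSON body along the following lines before being submitted to CMF. The resource fields shown (apiVersion, kind, spec.statement, spec.computePoolName, and so on) are assumptions made for this sketch, as are all names and values; consult the CMF REST API reference for the authoritative Statement schema.

```python
import json

# Hedged sketch of a Statement resource for a detached INSERT INTO job.
# Field names and values are illustrative assumptions, not a verified schema.
insert_statement = {
    "apiVersion": "cmf.confluent.io/v1",          # assumed API group/version
    "kind": "Statement",
    "metadata": {"name": "orders-to-enriched"},   # hypothetical Statement name
    "spec": {
        # The SQL query the Flink cluster runs; results are written to the
        # target table (backed by a Kafka topic), not returned through CMF.
        "statement": "INSERT INTO enriched_orders SELECT * FROM orders;",
        "computePoolName": "prod-pool",           # hypothetical Compute Pool
        "properties": {"parallelism.default": "4"},
    },
}

print(json.dumps(insert_statement, indent=2))
```

In practice a body like this would be submitted to the Statements endpoint of the Environment, after which the Statement runs detached on the Compute Pool's Flink cluster with no client session required.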

See the following topics to learn more about working with Statements: