Statements in Confluent Cloud

Learn how to use statements for your SQL queries and data processing needs.

Important

Confluent Cloud for Apache Flink® is currently available for Preview. A Preview feature is a Confluent Cloud component that is being introduced to gain early feedback from developers. Preview features can be used for evaluation and non-production testing purposes or to provide feedback to Confluent. The warranty, SLA, and Support Services provisions of your agreement with Confluent do not apply to Preview features. Confluent may discontinue providing Preview releases of the Preview features at any time in Confluent’s sole discretion. Check out Getting Help for questions, feedback and requests.

For Flink SQL features and limitations in the preview program, see Notable Limitations in Public Preview.

A statement is a high-level resource that’s created by Confluent Cloud for Apache Flink® when you enter a SQL query.

Each statement has a property that holds the SQL query you entered. Depending on the query, a statement can be one of several kinds: a metadata operation, a statement that writes data back to a table/topic while running in the background, or a statement that returns results to the UI or client. A statement can be any kind of SQL statement, including Data Definition Language (DDL), Data Manipulation Language (DML), and Data Query Language (DQL).
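
For example, each of the following queries creates a statement of a different kind. The table names are hypothetical placeholders.

    -- A metadata operation
    SHOW TABLES;

    -- A statement that runs in the background, writing results to a table/topic
    INSERT INTO pageviews_filtered SELECT * FROM pageviews WHERE view_time > 10;

    -- A statement that returns results directly to the UI or client
    SELECT * FROM pageviews;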

When you submit a SQL query, Confluent Cloud creates a statement resource. You can create a statement resource from any Confluent-supported interface, including the SQL shell, Confluent CLI, Cloud Console, the REST API, and Terraform.
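
For example, you can open the SQL shell from the Confluent CLI. The flag names shown here are assumptions and may differ across CLI versions; run confluent flink shell --help for the exact options.

    confluent flink shell --compute-pool <pool-id> --environment <env-id>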

The SQL query within a statement is immutable, which means that you can’t change it after it has been submitted. To edit a statement, stop the running statement and create a new one.

Statement lifecycle operations

These are the supported lifecycle operations for a statement, with a CLI sketch after the list.

Submit a statement

Describe a statement

Delete a statement

List statement exceptions
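
The following is a minimal sketch of these operations using the Confluent CLI. The subcommand and flag names are assumptions based on the CLI’s layout, and “my-statement” is a placeholder; confirm the exact syntax with confluent flink statement --help.

    # Submit a statement
    confluent flink statement create my-statement --sql "SELECT * FROM pageviews;"

    # Describe a statement
    confluent flink statement describe my-statement

    # Delete a statement
    confluent flink statement delete my-statement

    # List statement exceptions
    confluent flink statement exception list my-statement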

Queries in Flink

Flink enables issuing queries in ANSI-standard SQL on data at rest (batch) and data in motion (streams).

These are the kinds of queries that are possible with Flink SQL, with examples after the list.

Metadata queries
CRUD operations on catalogs, databases, tables, and other metadata. Because Flink implements ANSI-standard SQL, it uses the database concepts of catalogs, databases, and tables. In Confluent Cloud, these concepts map to environments, Kafka clusters, and topics, respectively.
Ad-hoc / exploratory queries
You can issue queries on a topic and see the results immediately. A query can be a batch query (“show me what happened up to now”) or a transient streaming query (“show me what happened up to now and keep giving me updates”). In either case, when the query or the session ends, no more compute is needed.
Streaming queries
These queries run continuously, reading data from one or more tables/topics and writing their results to one table/topic.
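
The following hypothetical examples illustrate each kind of query; the clicks and checkout_clicks tables are placeholders.

    -- Metadata query: CRUD on catalogs, databases, and tables
    CREATE TABLE clicks (user_id STRING, url STRING);

    -- Ad-hoc / exploratory query: results stream back to the client until the
    -- query or session ends, after which no more compute is needed
    SELECT url, COUNT(*) AS hits FROM clicks GROUP BY url;

    -- Streaming query: runs continuously, reading from clicks and writing its
    -- results to checkout_clicks
    INSERT INTO checkout_clicks SELECT user_id, url FROM clicks WHERE url LIKE '%/checkout%';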

In general, Flink supports both batch and stream processing, but the exact subset of allowed operations differs slightly depending on the type of query. For more information, see Flink SQL Queries in Confluent Cloud.

All queries are executed in streaming execution mode, whether the sources are bounded or unbounded.

Data lifecycle

Broadly speaking, the Flink SQL lifecycle is as follows, with a SQL sketch after the list:

  • Data is read into a Flink table from Kafka via the Flink connector for Kafka.

  • Data is processed using SQL statements.

  • Data is processed by Flink task managers (managed by Confluent and not exposed to users), which are part of the Flink runtime. Some data may be stored temporarily as state in Flink while it’s being processed.

  • Data is returned to the user as a result-set.

    • The result-set may be bounded, in which case the query terminates.
    • The result-set may be unbounded, in which case the query runs until canceled manually.

    OR

  • Data is written back out to one or more tables.

    • Data is stored in Kafka topics.
    • The schema for the table is stored in the Flink Metastore and synchronized to Schema Registry.
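
The following sketch shows the possible endings, reusing the hypothetical clicks and checkout_clicks tables from the earlier examples.

    -- Bounded result set: the query terminates after ten rows
    SELECT * FROM clicks LIMIT 10;

    -- Unbounded result set: the query runs until canceled manually
    SELECT url, COUNT(*) AS hits FROM clicks GROUP BY url;

    -- Writes back out to a table: rows land in the checkout_clicks topic, and
    -- the table's schema is synchronized to Schema Registry
    INSERT INTO checkout_clicks SELECT user_id, url FROM clicks WHERE url LIKE '%/checkout%';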

Available DDL statements

These are the available DDL statements in Confluent Cloud for Flink SQL, with a hypothetical example of each after the list.

ALTER
CREATE
DESCRIBE
RESET
SET
SHOW
USE
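
One example per statement, using placeholder names. The option key shown with SET and RESET is an assumption, and the exact capabilities of each statement vary; see each statement’s reference page for the options it supports.

    ALTER TABLE clicks ADD (browser STRING);          -- ALTER
    CREATE TABLE clicks (user_id STRING, url STRING); -- CREATE
    DESCRIBE clicks;                                  -- DESCRIBE
    RESET 'client.statement-name';                    -- RESET a session option
    SET 'client.statement-name' = 'my-statement';     -- SET a session option
    SHOW TABLES;                                      -- SHOW
    USE CATALOG my_environment;                       -- USE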