
Flink SQL Quick Start with Confluent Cloud Console

Important

Confluent Cloud for Apache Flink® is currently available for Preview. A Preview feature is a Confluent Cloud component that is being introduced to gain early feedback from developers. Preview features can be used for evaluation and non-production testing purposes or to provide feedback to Confluent. The warranty, SLA, and Support Services provisions of your agreement with Confluent do not apply to Preview features. Confluent may discontinue providing Preview releases of the Preview features at any time in Confluent’s sole discretion. Check out Getting Help for questions, feedback, and requests.

For Flink SQL features and limitations in the preview program, see Notable Limitations in Public Preview.

This quick start gets you up and running with Confluent Cloud for Apache Flink®. The following steps show how to create a compute pool for running SQL statements on streaming data.

In this quick start guide, you perform the following steps:

  • Step 1: Create a Flink compute pool
  • Step 2: Create a workspace
  • Step 3: Run a SQL statement
  • Step 4a: (Optional) Query existing topics
  • Step 4b: (Optional) Use sample data

Prerequisites

You need the OrganizationAdmin, EnvironmentAdmin, or FlinkAdmin role to create compute pools, or the FlinkDeveloper role if you already have a compute pool. If you don’t have the appropriate role, reach out to your OrganizationAdmin or EnvironmentAdmin.

Step 1: Create a Flink compute pool

A compute pool represents the compute resources that are used to run your SQL statements. The resources provided by a compute pool are shared among all statements that use it. Compute pools let you limit or guarantee resources as your use cases require. A compute pool is bound to a region.

  1. Log in to Confluent Cloud Console at https://confluent.cloud/login.

  2. In the navigation menu, click Environments and click the tile for the environment where you want to use Flink SQL.

  3. In the environment details page, click Flink (preview).

  4. In the Flink (preview) page, click Compute pools if it is not already selected.

  5. Click Create compute pool to open the Create compute pool page.

  6. In the Region dropdown, select the region that hosts the data you want to process with Flink SQL, or use any region if you just want to try out Flink using sample data. Click Continue.

  7. In the Pool name textbox, enter “my-compute-pool”.

  8. In the Max Confluent Flink Units (CFU) dropdown, select 10. For more information, see Confluent Flink Units (CFUs).

  9. Click Continue, and on the Review and create page, click Finish.

    A tile for your compute pool appears on the Flink (preview) page. It shows the pool in the Provisioning state. It may take a few minutes for the pool to enter the Running state.

    Tip

    The tile for your compute pool provides the Confluent CLI command for using the pool from the CLI. Learn more about the CLI in the Flink SQL Quick Start with the SQL Shell.

Step 2: Create a workspace

When your compute pool is in the Running state, you can create a SQL workspace. SQL workspaces provide an intuitive, flexible UI for dynamically exploring and interacting with all of your data on Confluent Cloud using Flink SQL. In workspaces, you can save your queries, run multiple queries simultaneously in a single view, and browse your catalogs, databases and tables.

  • In the my-compute-pool tile, click Open SQL workspace.

    A new workspace page opens, containing a SQL code editor.

Step 3: Run a SQL statement

In the code editor, or cell, of the new workspace, you can start running SQL statements.

  1. Copy the following SQL and paste it into the cell.

    SELECT CURRENT_TIMESTAMP;
    
  2. Click Run.

    Information about the statement is displayed, including its status and unique identifier. Click the Statement ID link to open the statement details pane.

    After a few seconds, the result from the statement is displayed.

    Your output should resemble:

    CURRENT_TIMESTAMP
    2023-09-14 17:35:48.322
    

At this point, you have verified that Confluent Cloud for Flink is working as expected. Now you have two options for how to proceed.

Step 4a: (Optional) Query existing topics

If you’ve created the compute pool in a region where you already have Kafka clusters and topics, you can explore this data with Flink SQL. If not, proceed to Step 4b.

In Flink SQL, catalog objects (e.g. tables) are scoped by catalog and database.

  • A catalog is a collection of databases that share the same namespace.
  • A database is a collection of tables that share the same namespace.

Tip

In Confluent Cloud, an environment is mapped to a Flink catalog, and a Kafka cluster is mapped to a Flink database.
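
If you want to see this mapping from a workspace cell, the following is a minimal sketch using SHOW statements (covered in the Statements reference): SHOW CATALOGS lists your environments, and SHOW DATABASES lists the Kafka clusters in whichever catalog is currently selected.

    -- Sketch only: list environments (catalogs), then the clusters (databases) in the current catalog.
    SHOW CATALOGS;
    SHOW DATABASES;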

  1. Set a default catalog and database. While you can always use three-part identifiers for your tables (like catalog.database.table), it’s more convenient to set a default.

    You can do this by using the dropdown menus (Use catalog and Use database) in the top-right corner of the SQL workspace. First, choose an environment as your default catalog, then set the database to one of the Kafka clusters in that environment. A SQL alternative using USE statements is sketched at the end of this list.

  2. Verify this worked by creating a new cell and running the following statement.

    SHOW TABLES;
    

    This statement lists all the tables of the Kafka cluster that you have just selected as the default.

  3. In another cell, you can now browse any of these tables by running a SELECT statement.

    SELECT * FROM <table_name>;
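
If you prefer SQL over the dropdowns, you can set the default catalog and database with the USE CATALOG and USE statements from the Statements reference, and you can bound an exploratory query with the LIMIT clause from the Queries reference. The following is a minimal sketch; the catalog and database names are placeholders for your environment and cluster.

    -- Sketch only: set the default catalog (environment) and database (cluster).
    USE CATALOG `my-environment`;
    USE `my-kafka-cluster`;

    -- Sketch only: preview at most 10 rows from a table.
    SELECT * FROM <table_name> LIMIT 10;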
    

Step 4b: (Optional) Use sample data

If you don’t have any existing clusters and topics in the region where you created the compute pool, you can use Flink to create a topic, populate it with data, and then query it.

  1. Follow the steps in Manage Kafka Clusters on Confluent Cloud to create a Kafka cluster named "my-kafka-cluster" in the same region where you created the compute pool.

  2. Now you can set the default catalog and database. While you can always use three-part identifiers for your tables (like catalog.database.table), it’s more convenient to set a default.

    You can do this by using the dropdown menus in the top-right corner of the SQL workspace. First, choose an environment as your default catalog, then choose “my-kafka-cluster” as your default database.

Tip

In Confluent Cloud, an environment is mapped to a Flink catalog, and a Kafka cluster is mapped to a Flink database.

  3. Create a new cell and run the following statement to create a table in the default database and catalog. Flink SQL automatically creates the backing topic and schemas in Kafka and Schema Registry.

    CREATE TABLE random_int_table(
      ts TIMESTAMP_LTZ(3),
      random_value INT
    );
    

    Once the statement completes, you should see the table and its schema in the left-side catalog browser.

  4. In the next cell, run the following INSERT VALUES statement to populate random_int_table with records that have a timestamp field and an integer field. Timestamp values are generated by the CURRENT_TIMESTAMP function, and integer values are generated by the RAND_INTEGER(INT) function.

    INSERT INTO random_int_table VALUES
      (CURRENT_TIMESTAMP, RAND_INTEGER(100)),
      (CURRENT_TIMESTAMP, RAND_INTEGER(1000)),
      (CURRENT_TIMESTAMP, RAND_INTEGER(10000)),
      (CURRENT_TIMESTAMP, RAND_INTEGER(100000)),
      (CURRENT_TIMESTAMP, RAND_INTEGER(1000000));
    
  5. Finally, in the next cell, run the following statement to query random_int_table for all of its records.

    SELECT * FROM random_int_table;
    
  6. The statement continues to run, waiting for more records to be produced to the table/topic it reads from. You can stop the query by clicking Stop.
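
If you want to keep exploring the sample data, here are two more statements as a minimal sketch, each meant to run in its own cell: DESCRIBE (see the Statements reference) shows the schema that was created for random_int_table, and a group aggregation (see Group Aggregation in the Queries reference) produces a continuously updating summary of the inserted values.

    -- Sketch only: inspect the schema of the new table.
    DESCRIBE random_int_table;

    -- Sketch only: a continuous aggregation that updates as new rows arrive.
    SELECT COUNT(*) AS record_count, AVG(random_value) AS avg_random_value
    FROM random_int_table;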

Next steps

  • Flink SQL Quick Start with the SQL Shell
  • How-to Guides

Related content

  • Statements in Confluent Cloud
  • Stream Processing Concepts
  • Built-in Functions

Note

This website includes content developed at the Apache Software Foundation under the terms of the Apache License v2.
