Confluent Platform for Apache Flink Features and Support¶
Confluent Platform for Apache Flink® is compatible with open-source Flink. The following sections list all components that are supported by Confluent. Only configurations related to the listed components are supported; other or custom configurations are not.
For requirement and compatibility details, see Confluent Platform for Apache Flink compatibility.
Core components¶
The following core components are supported with Confluent Platform for Apache Flink:
- Runtime
- REST API
- Web UI
- Flink CLI
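As an illustration of the REST API listed above, the following is a minimal Java sketch that queries a cluster's `/jobs` endpoint. The host and port (8081 is the Flink default) are placeholder values for a locally running cluster.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestApiExample {
    public static void main(String[] args) throws Exception {
        // Query the Flink REST API for the jobs known to a cluster.
        // Host and port are placeholders for a local cluster.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/jobs"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```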
State backends¶
The following state backends are supported with Confluent Platform for Apache Flink:
- RocksDB - recommended as the default state backend.
- Memory - recommended for small state.
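The following is a minimal Java sketch of selecting the recommended RocksDB state backend programmatically. The checkpoint interval, checkpoint storage path, and placeholder pipeline are illustrative only, and the RocksDB state backend dependency (flink-statebackend-rocksdb) is assumed to be on the job's classpath.

```java
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Use RocksDB (the recommended default) for keyed state; incremental
        // checkpoints keep checkpoint sizes manageable for large state.
        env.setStateBackend(new EmbeddedRocksDBStateBackend(true));

        // Checkpoint every 60 seconds; the storage path is a placeholder.
        env.enableCheckpointing(60_000);
        env.getCheckpointConfig().setCheckpointStorage("file:///tmp/flink-checkpoints");

        // Placeholder pipeline so the job graph is non-empty.
        env.fromElements("a", "b", "c").print();

        env.execute("state-backend-example");
    }
}
```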
FileSystem implementations¶
The following FileSystem implementations are supported with Confluent Platform for Apache Flink:
- AWS S3 - includes Presto and Hadoop variants
- Azure Blob Storage
- Google Cloud Storage
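As a sketch of writing to one of the FileSystem implementations above, the following Java snippet sends a stream to S3 with the FileSink. The bucket path is a placeholder, and it assumes an S3 filesystem plugin (Presto or Hadoop variant) is available in the Flink plugins directory.

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3SinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder source; in a real job this would be Kafka, files, etc.
        DataStream<String> events = env.fromElements("a", "b", "c");

        // Write to S3; the "s3://" scheme is served by the S3 filesystem
        // plugin (Presto or Hadoop variant) in the Flink plugins directory.
        FileSink<String> sink = FileSink
                .forRowFormat(new Path("s3://my-bucket/output"),
                        new SimpleStringEncoder<String>("UTF-8"))
                .build();

        events.sinkTo(sink);
        env.execute("s3-filesystem-example");
    }
}
```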
Data formats¶
The following data formats are supported with Confluent Platform for Apache Flink:
- Avro
- Avro (CSR) - Avro with Confluent Schema Registry
- CSV
- ORC
- Parquet
- Protobuf
- JSON
- Debezium JSON
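For example, a Table API program can declare a table that uses one of the formats above. The following sketch uses the avro-confluent format with the Kafka connector; the topic, broker address, columns, and Schema Registry URL are placeholders, and exact format option names can vary by Flink version.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AvroFormatExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Table backed by a Kafka topic in Avro, with schemas resolved through
        // Confluent Schema Registry. Topic, servers, and registry URL are placeholders.
        tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  order_id STRING," +
                "  amount DOUBLE" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'orders'," +
                "  'properties.bootstrap.servers' = 'broker:9092'," +
                "  'format' = 'avro-confluent'," +
                "  'avro-confluent.url' = 'http://schema-registry:8081'" +
                ")");
    }
}
```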
Flink APIs, libraries and metadata catalogs¶
The following table lists Flink APIs and their Confluent Platform for Apache Flink support.
| Flink Component | Supported by Confluent | Notes |
|---|---|---|
| Flink SQL | Yes | SQL Shell is not currently supported. |
| Table API - Java | Yes | The Python Table API is not currently supported. |
| DataStream API - Java | Yes | The Python DataStream API is not currently supported. |
| DataSet API | No | Deprecated in open-source Flink and not supported in Confluent Platform for Apache Flink. |
In addition:
- Libraries: Complex Event Processing (CEP) is the only library supported with Confluent Platform for Apache Flink, for use with SQL. PyFlink, Flink ML, Stateful Functions, and Queryable State are not supported.
- Catalogs: GenericInMemoryCatalog and JdbcCatalog are supported with Confluent Platform for Apache Flink (see the registration sketch below). The Hive catalog is not currently supported.
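A minimal Java sketch of registering one of the supported catalogs, GenericInMemoryCatalog, in the Table API; the catalog name is illustrative.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.GenericInMemoryCatalog;

public class CatalogExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Register an in-memory catalog and make it the current catalog.
        // The catalog name is illustrative.
        tEnv.registerCatalog("my_catalog", new GenericInMemoryCatalog("my_catalog"));
        tEnv.useCatalog("my_catalog");

        // Tables created from here on are tracked by my_catalog for the
        // lifetime of the session (GenericInMemoryCatalog is not persistent).
    }
}
```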
Connectors¶
All Flink connectors are compatible with Confluent Platform for Apache Flink. However, support is limited to the connectors listed in the following table:
| Connector | Supported by Confluent | Distribution channel | Notes |
|---|---|---|---|
| Kafka Source and Sink | Yes | Maven via packages.confluent.io | Java and SQL support only. Bundle with your user code JAR. |
| FileSystem Source and Sink | Yes | Open-source Flink | Java and SQL support only. Additional support charges apply. Not bundled with the Confluent Docker image. |
| JDBC | Yes | Open-source Flink | Java and SQL support only. |
| CDC Source | Yes | Open-source Flink | Databases: DB2, MySQL, Oracle, Postgres, SQL Server. Java and SQL support only. |
| All other connectors | No | Open-source Flink | No additional connectors are currently supported. |
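The following Java sketch shows the Kafka Source from the table above in a DataStream program. The broker address, topic, and consumer group are placeholders, and the Kafka connector dependency is assumed to be bundled with the user code JAR as noted in the table.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka source built with the Kafka Source and Sink connector above.
        // Broker address, topic, and group id are placeholders.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("orders")
                .setGroupId("flink-orders-consumer")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-orders")
           .print();

        env.execute("kafka-connector-example");
    }
}
```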
Deployment and monitoring¶
Note the following about deploying Flink jobs with Confluent Platform for Apache Flink:
- Confluent Platform for Apache Flink supports Application Mode only.
- Confluent Platform for Apache Flink supports high-availability deployment via Kubernetes, which is the only supported deployment solution. The default deployment mode on Kubernetes is native.
- ZooKeeper is not supported.
- The following metrics reporters are supported with Confluent Platform for Apache Flink:
- Datadog
- Prometheus
- InfluxDB
- JMX
- StatsD
- Graphite
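As an illustration, one of the reporters above (Prometheus) can be enabled through Flink configuration keys. In a real deployment these keys typically live in the Flink configuration file; the reporter name, port range, and factory-based key layout in this Java sketch are assumptions that may vary by Flink version.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PrometheusReporterExample {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();

        // Enable the Prometheus reporter; the reporter name ("prom") and port
        // range are placeholders. In a cluster deployment these keys usually
        // belong in the Flink configuration file rather than in code.
        config.setString("metrics.reporter.prom.factory.class",
                "org.apache.flink.metrics.prometheus.PrometheusReporterFactory");
        config.setString("metrics.reporter.prom.port", "9249-9259");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(config);

        // ... define sources, transformations, and sinks, then execute the job ...
    }
}
```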