Deploy and Manage Confluent Manager for Apache Flink Applications¶
Confluent Manager for Apache Flink® defines a Flink application resource, which closely mirrors an Apache Flink FlinkDeployment in structure and purpose. The Flink application resource exists to give Confluent customers strong compatibility guarantees, so that Flink applications won't break with future version upgrades. Confluent might provide additional features in the Flink application abstraction. FlinkDeployment is exposed as a Kubernetes Custom Resource (CR), and a FlinkApplication follows the same principles, but it is only available as a CR when used with Confluent for Kubernetes (CFK). In CMF, it is exposed through the REST API, Confluent Control Center (C3), and the Confluent CLI. A Custom Resource is an extension of the Kubernetes API and provides a way to define new object types.
A Flink application enables you to define and manage Flink clusters using Confluent Manager for Apache Flink (CMF). It describes the desired state of a Flink application, including configuration for the job manager, the task manager, and the job itself, and it exposes the actual status of the application (which runs as a Flink cluster).
After CMF is installed and running, it continuously watches Flink applications. To learn more about CMF, see Confluent Manager for Apache Flink. To install CMF, see Install Confluent Manager for Apache Flink with Helm.
Running and suspending a Flink application¶
The desired job state of a Flink application controls the physical deployment of the underlying Flink cluster:
- Running: A Flink application with the desired job state running is deployed according to its specified configuration. When there are no errors, the Flink cluster associated with the Flink application consumes the configured amount of physical resources (such as CPU and memory).
- Suspended: A Flink application with the desired job state suspended is inactive. When there are no errors, the Flink cluster associated with the Flink application consumes no physical resources (such as CPU and memory). Depending on the configured upgradeMode, the state of a suspended Flink application is preserved and restored when it transitions back into the running state, as shown in the sketch below.
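For example, to suspend an application while preserving its state, you could change the job section of its definition as in the following sketch. The savepoint upgrade mode is assumed here for illustration; with the stateless upgrade mode used in the full example later in this section, the job would instead restart from a clean state when resumed.

job:
  jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
  state: suspended        # transition from running to suspended; the cluster releases its resources
  parallelism: 1
  upgradeMode: savepoint  # take a savepoint on suspend so the job state can be restored on resume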
Relationship to Environment¶
When you deploy a Flink application with CMF, CMF manages the lifecycle of the application together with the Environment that contains it.
The Environment controls the Kubernetes namespace in which the Flink job is deployed.
In addition, the Environment sets configuration options that take precedence over the configuration options specified in the Flink application.
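For illustration, an Environment that pins the Kubernetes namespace and sets a configuration option for its applications might look like the following sketch. The environment name, namespace, and the kubernetesNamespace and flinkApplicationDefaults field names are assumptions for this example; consult the CMF Environment reference for the exact schema.

{
  "name": "env1",
  "kubernetesNamespace": "flink-dev",
  "flinkApplicationDefaults": {
    "spec": {
      "flinkConfiguration": {
        "taskmanager.numberOfTaskSlots": "2"
      }
    }
  }
}

In this sketch, every Flink application deployed to env1 runs in the flink-dev namespace, and the configuration set under flinkApplicationDefaults takes precedence over the corresponding options in the individual application definitions.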
Isolation between applications¶
Each Flink application is executed in a dedicated Flink cluster. This deployment mode isolates each Flink application at the process level. When a Flink application is in the running state, its physical resources (such as CPU and memory) are exclusively allocated to that Flink application.
Flink application definition example¶
Flink application objects are defined in YAML or JSON. The following shows a YAML example of a Flink application definition, followed by the equivalent definition in JSON.
apiVersion: cmf.confluent.io/v1
kind: FlinkApplication
metadata:
  name: curl-example
spec:
  image: confluentinc/cp-flink:1.19.1-cp2
  flinkVersion: v1_19
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "1"
  serviceAccount: flink
  jobManager:
    resource:
      memory: 1024m
      cpu: 1
  taskManager:
    resource:
      memory: 1024m
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    state: running
    parallelism: 1
    upgradeMode: stateless
{
  "apiVersion": "cmf.confluent.io/v1",
  "kind": "FlinkApplication",
  "metadata": {
    "name": "curl-example"
  },
  "spec": {
    "image": "confluentinc/cp-flink:1.19.1-cp2",
    "flinkVersion": "v1_19",
    "flinkConfiguration": {
      "taskmanager.numberOfTaskSlots": "1"
    },
    "serviceAccount": "flink",
    "jobManager": {
      "resource": {
        "memory": "1024m",
        "cpu": 1
      }
    },
    "taskManager": {
      "resource": {
        "memory": "1024m",
        "cpu": 1
      }
    },
    "job": {
      "jarURI": "local:///opt/flink/examples/streaming/StateMachineExample.jar",
      "state": "running",
      "parallelism": 1,
      "upgradeMode": "stateless"
    }
  }
}
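Once saved to a file, the application definition can be submitted to CMF through the REST API, Control Center, or the Confluent CLI. The following is a minimal sketch using curl, assuming the CMF REST API has been port-forwarded to localhost:8080 and an Environment named env1 already exists; verify the endpoint path against the CMF REST API reference for your version.

# Submit the application definition to CMF.
# Assumes: CMF reachable at localhost:8080, an Environment named "env1" exists,
# and application.json contains the JSON definition shown above.
curl -X POST http://localhost:8080/cmf/api/v1/environments/env1/applications \
  -H "Content-Type: application/json" \
  -d @application.json

The Confluent CLI provides equivalent commands for creating, updating, and deleting Flink applications in an Environment.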