Grant Role-Based Access in Confluent Cloud for Apache Flink¶
Confluent Cloud for Apache Flink® supports Role-based Access Control (RBAC) for managing Flink resources. These roles are supported:
- FlinkAdmin: Grant the FlinkAdmin role to a user account to enable full access to Flink resources in an environment. The FlinkAdmin role is bound at the environment level.
- FlinkDeveloper: Grant the FlinkDeveloper role to a user account to enable limited access to Flink resources in an environment. It should be given to users who run Flink statements but don’t manage compute pools. The FlinkDeveloper role is bound at the environment level.
- Assigner: Grant the Assigner role to a user account that needs a role binding on a service account to run Flink statements with a service principal.
The account that gets the role is referred to as the principal. For more information, see Manage RBAC role bindings on Confluent Cloud.
Use the confluent iam rbac role-binding create command in the Confluent CLI to grant permissions on Flink resources to users.
Authorization¶
The principal that’s used for statement submission requires these permissions:
- Consumer group permissions, so Flink can commit its offsets back
- Transactional-Id permissions for Flink to create and complete transactions
Run the following commands to grant these permissions to a principal. An example USER_NAME for a user account is u-123456, and an example for a service account is sa-1a2b3c.
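Because user and service-account IDs use distinct prefixes, you can sanity-check a principal ID before creating role bindings. The following is a minimal shell sketch, assuming only the documented ID formats (u-123456 for users, sa-1a2b3c for service accounts):

```shell
# Classify a principal ID by its prefix (assumes the documented
# formats: u-123456 for users, sa-1a2b3c for service accounts).
principal_type() {
  case "$1" in
    u-*)  echo "user" ;;
    sa-*) echo "service-account" ;;
    *)    echo "unknown" ;;
  esac
}

principal_type u-123456      # → user
principal_type sa-1a2b3c     # → service-account
```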
confluent iam rbac role-binding create \
--role DeveloperManage \
--principal User:${USER_NAME} \
--environment ${ENVIRONMENT_ID} \
--cloud-cluster ${KAFKA_ID} \
--kafka-cluster ${KAFKA_ID} \
--resource Group:_confluent-flink_ \
--prefix
confluent iam rbac role-binding create \
--role DeveloperRead \
--principal User:${USER_NAME} \
--environment ${ENVIRONMENT_ID} \
--cloud-cluster ${KAFKA_ID} \
--kafka-cluster ${KAFKA_ID} \
--resource Transactional-Id:_confluent-flink_ \
--prefix
confluent iam rbac role-binding create \
--role DeveloperWrite \
--principal User:${USER_NAME} \
--environment ${ENVIRONMENT_ID} \
--cloud-cluster ${KAFKA_ID} \
--kafka-cluster ${KAFKA_ID} \
--resource Transactional-Id:_confluent-flink_ \
--prefix
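After creating the bindings, you can confirm that they exist by listing the role bindings for the principal. The following is a sketch using the Confluent CLI's role-binding list command; flag availability may vary by CLI version:

```shell
# List the role bindings for the principal to confirm the grants
confluent iam rbac role-binding list \
--principal User:${USER_NAME} \
--environment ${ENVIRONMENT_ID}
```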
Users¶
Users must be authorized to use a compute pool.
Compute pools¶
Compute pools are a resource that can be authorized; for example, a user with the EnvironmentAdmin role can edit a compute pool. But a compute pool is not a principal: a compute pool like lfcp-abc123 doesn't make calls to other resources, so the compute pool itself doesn't need an identity with role bindings, in contrast with statements.
A compute pool has no permissions and isn't a security principal, because a compute pool knows nothing about the workloads that run on it.
Statements¶
The FlinkAdmin and FlinkDeveloper roles control access to compute resources in a compute pool. Access to data is controlled by the Apache Kafka® data access model, and data permissions are determined by the roles granted to the principal.
The hierarchy at which a statement exists doesn’t constrain the data it can access. A statement’s access level is determined entirely by the permissions that you attach to the statement. For more information, see Grant Role-Based Access in Confluent Cloud for Apache Flink.
Statements can access any data, across environments (and eventually organizations), that the permissions attached by the user are authorized to access.
This includes all statements:
- DML statements that run on Flink, like SELECT * FROM …
- DDL statements, like CREATE TABLE
- Metadata queries, like SHOW TABLES
Administrator¶
Run the following command to log in to Confluent Cloud with an EnvironmentAdmin or FlinkAdmin account by using the Confluent CLI.
confluent login --save --organization ${ORG_ID}
Grant the FlinkAdmin role to a user¶
Run the following Confluent CLI command as an Admin to grant the FlinkAdmin role to a user with the identifier USER_ID.
confluent iam rbac role-binding create \
--environment ${ENV_ID} \
--principal User:${USER_ID} \
--role FlinkAdmin
Grant a user access to all compute pools in an environment¶
Run the following Confluent CLI command as an Admin to grant the FlinkDeveloper role to a user for broad access to all compute pools in an environment.
confluent iam rbac role-binding create \
--environment ${ENV_ID} \
--principal User:${USER_ID} \
--role FlinkDeveloper
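To verify which compute pools the granted user can now reach, you can list the pools in the environment. The following is a sketch, assuming the Confluent CLI's flink compute-pool subcommand is available in your CLI version:

```shell
# List compute pools in the environment (run as the granted user)
confluent flink compute-pool list \
--environment ${ENV_ID}
```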
Grant a service account and user permission to run SQL statements¶
Run the following Confluent CLI commands as an Admin to grant permission for a service account to run Flink SQL statements.
In this case, you grant a service account EnvironmentAdmin permissions, then you grant a user Assigner permissions to the service account.
Create the service account. Optionally, you can use an existing service account in the environment. Note the ID returned by the command, which resembles sa-1a2b3c, and save it in an environment variable named SERVICE_ACCOUNT_ID.
confluent iam service-account create ${SA_NAME} \
--description ${SA_DESCRIPTION}
Grant EnvironmentAdmin permission for the service account to access all resources in the environment.
confluent iam rbac role-binding create \
--environment ${ENV_ID} \
--principal User:${SERVICE_ACCOUNT_ID} \
--role EnvironmentAdmin
Grant a user the Assigner role to access the service account. In this context, the service account is acting as a resource.
confluent iam rbac role-binding create \
--principal User:${USER_ID} \
--resource service-account:${SERVICE_ACCOUNT_ID} \
--role Assigner
Submit long-running statements¶
To start long-running statements, Confluent recommends starting the Flink SQL shell and specifying a service account which has permissions on the topics you want to query. This approach safeguards against a user account expiring when an employee leaves or moves to another organization.
Specify the --service-account option when you start the Flink SQL shell to enable submitting long-running statements. You must have the Assigner role on this service account, as shown in Grant a service account and user permission to run SQL statements.
confluent flink shell \
--compute-pool ${COMPUTE_POOL_ID} \
--environment ${ENV_ID} \
--service-account ${SERVICE_ACCOUNT_ID}
If you start the Flink SQL shell with your user account, you can still start long-running statements by setting the client.service-account shell property.
SET 'client.service-account' = '<service-account-id>';
The specified service account identifier is cached in the shell client and isn't validated when you make the assignment. If the identifier references a service account that doesn't have access to the topics referenced in the query, you'll get an error when you try to submit a statement.
Topic schemas¶
If a principal without OrganizationAdmin or EnvironmentAdmin privileges needs to access a topic that's created by a CREATE TABLE statement in the Flink SQL shell, you must manually grant the DeveloperWrite role on the corresponding Schema Registry subject to enable reading from or writing to the topic.
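As an illustration, such a grant might look like the following sketch. The SCHEMA_REGISTRY_ID variable and the ${TABLE_NAME}-value subject name are assumptions (the subject name follows the default TopicNameStrategy naming convention), and flag names may differ across CLI versions; adjust for your setup:

```shell
# Grant DeveloperWrite on the subject backing the table's value schema
confluent iam rbac role-binding create \
--role DeveloperWrite \
--principal User:${USER_ID} \
--environment ${ENV_ID} \
--schema-registry-cluster ${SCHEMA_REGISTRY_ID} \
--resource Subject:${TABLE_NAME}-value
```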
Audit log events¶
Auditable event methods for the FLINK_WORKSPACE and STATEMENT resource types are triggered by operations on a Flink workspace. They generate event messages that are sent to the audit log cluster, where they are stored as event records in a Kafka topic.
For more information, see Auditable Event Methods for Apache Flink on Confluent Cloud.
User defined functions (UDFs)¶
To create and use a UDF in Confluent Cloud for Apache Flink, you must have write access to a database and Flink permissions.
In the Early Access Program, the minimum required roles are CloudClusterAdmin and FlinkDeveloper. The EnvironmentAdmin role also enables all UDF management operations.
The following table shows RBAC roles for managing Flink SQL UDFs.
| Role | Upload artifact | Delete/Describe artifact | List artifacts | CREATE FUNCTION | Invoke UDF | List/Describe UDFs | Delete UDF |
|---|---|---|---|---|---|---|---|
| CloudClusterAdmin | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| EnvironmentAdmin | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| FlinkAdmin | No | No | No | No | No | No | No |
| FlinkDeveloper | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Operator | No | No | No | No | No | No | No |
| OrganizationAdmin | No | No | No | No | No | No | No |