Role-Based Access Control Quick Start
Role-based access control (RBAC) is administered by a super user using the Confluent CLI and distributed across an organization. This quick start demonstrates how to create roles and interact with Kafka topics in an RBAC environment.
See also
To get started, try the automated RBAC example that showcases the RBAC functionality in Confluent Platform.
Prerequisites
- Confluent Platform.
- Confluent CLI (included with Confluent Platform since 5.4.x).
- The Confluent Platform commercial component confluent-server. For more information, see Migrate to Confluent Server.
- An RBAC-enabled cluster that you have super user access to, or the SystemAdmin role. For more information, see Role-Based Access Control Predefined Roles.
Log in to Cluster
Log in to the Confluent CLI with the RBAC Metadata Service (MDS) URL (<url>) specified. For more information, see confluent login.

confluent login --url <url>
Specify the super user credentials when prompted.
Important
The user ID specified in group role bindings is case-sensitive, and must match the case specified in the AD record. Likewise, when logging in as a super user, the login ID is case-sensitive and must match the case specified for the user ID in role bindings.
Enter your Confluent credentials:
Username:
Password:
Your output should resemble:
Logged in as user
Grant the SystemAdmin Role
The user you are creating role bindings for (<my-user-name>) must be created in LDAP before you can log in to the system as that user.
Note
Confluent Platform cluster registry provides a way for Kafka cluster administrators to centrally register Kafka clusters in the metadata service (MDS) to enable a more user-friendly RBAC role binding experience. For details, refer to Cluster Registry.
Grant the SystemAdmin role (SystemAdmin) to a user (<my-user-name>) on the Kafka cluster (<kafka-cluster-id>). For more information, see confluent iam rbac role-binding create.

Tip
- This user must also be created in LDAP before they can log in to the system, but that is not required for defining the role binding.
- You can find the cluster ID by running this command: ./bin/zookeeper-shell <host>:2181 get /cluster/id

confluent iam rbac role-binding create \
  --principal User:<my-user-name> \
  --role SystemAdmin \
  --kafka-cluster-id <kafka-cluster-id>
Grant the SystemAdmin role for the Confluent Platform components to a user.
Tip
For more information about how to find the cluster ID, see Discover Identifiers for Clusters.
Confluent Control Center

confluent iam rbac role-binding create \
  --principal User:<my-user-name> \
  --role SystemAdmin \
  --kafka-cluster-id <kafka-cluster-id>

Connect

confluent iam rbac role-binding create \
  --principal User:<my-user-name> \
  --role SystemAdmin \
  --kafka-cluster-id <kafka-cluster-id> \
  --connect-cluster-id <connect-cluster-id>

ksqlDB

confluent iam rbac role-binding create \
  --principal User:<my-user-name> \
  --role SystemAdmin \
  --kafka-cluster-id <kafka-cluster-id> \
  --ksql-cluster-id <ksql-cluster-id>

Schema Registry

confluent iam rbac role-binding create \
  --principal User:<my-user-name> \
  --role SystemAdmin \
  --kafka-cluster-id <kafka-cluster-id> \
  --schema-registry-cluster-id <schema-registry-cluster-id>
If the cluster you are referencing is defined in the cluster registry, you can specify the role binding using the cluster name only (without the cluster ID) as follows:
confluent iam rbac role-binding create \
  --principal User:<my-user-name> \
  --role SystemAdmin \
  --cluster-name <exampleConnect>
Grant the UserAdmin Role on the Kafka Cluster
The UserAdmin role grants the user permission to manage permissions for other users. This command grants UserAdmin to a user.
confluent iam rbac role-binding create \
--principal User:<my-user-name> \
--role UserAdmin \
--kafka-cluster-id <kafka-cluster-id>
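To confirm a binding took effect, you can list the role bindings for the principal with the CLI's list subcommand. A minimal sketch; the user name and cluster ID are placeholders you substitute for your environment:

```shell
# List the role bindings currently assigned to this principal
# on the specified Kafka cluster.
confluent iam rbac role-binding list \
  --principal User:<my-user-name> \
  --kafka-cluster-id <kafka-cluster-id>
```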
Grant Topic Permissions
To interact with topics using the Kafka CLI tools, you must
provide a JAAS configuration that enables Kafka CLI tools to authenticate with a broker.
You can provide the JAAS configuration using a file (--command-config
) or using the
command line options --producer-property
or --consumer-property
for the producer or
consumer. This configuration is required for creating topics, producing, consuming,
and more. For example:
kafka-console-producer --producer-property sasl.mechanism=OAUTHBEARER
The value you specify in sasl.mechanism
depends on your broker’s security
configuration for the port. In this case, OAUTHBEARER is used because it is the
default configuration in the automated RBAC demo. However, you can use any
authentication mechanism exposed by your broker.
Important
Do not use token services or the OAUTHBEARER
SASL mechanism
(listener.name.rbac.sasl.enabled.mechanisms=OAUTHBEARER
)
for external client communications. With RBAC enabled, token services are
intended for internal communication between Confluent Platform components only (for
example, it is valid for a Schema Registry licensed client), and not for
long-running service principals or client authentication. The OAUTHBEARER
setting is for internal use and subject to change, and does not implement a
full-featured OAuth protocol. Therefore, use one of the supported authentication
methods like SASL or mTLS (mutual TLS) for long-lived or client use cases. For
details, refer to
Authentication Methods Overview.
To identify which roles provide write access to a topic, see Predefined roles.
Create a configuration file named my-user-name.properties, and specify the MDS service (<metadata_server_urls>), username (<my-user-name>), and password (<my-password>). You can also specify these same properties using the command line.

sasl.mechanism=OAUTHBEARER
# Use SASL_SSL for production environments
security.protocol=SASL_PLAINTEXT
sasl.login.callback.handler.class=io.confluent.kafka.clients.plugins.auth.token.TokenUserLoginCallbackHandler
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  username="<my-user-name>" \
  password="<my-password>" \
  metadataServerUrls="<metadata_server_urls>";
Grant a user permissions to a topic (<topic-name>). For example, you can grant topic permissions through the ResourceOwner or DeveloperManage roles at the resource level, or through the SystemAdmin role at the cluster level. To create a topic, the DeveloperManage role is required on the topic resource. Other roles that grant the ability to create topics are ResourceOwner on the topic resource, or SystemAdmin at the cluster level.

confluent iam rbac role-binding create \
  --principal User:<my-user-name> \
  --role ResourceOwner \
  --resource Topic:test-topic- \
  --prefix \
  --kafka-cluster-id <kafka-cluster-id>
Create a topic using the Kafka kafka-topics tool with my-user-name.properties specified.

<path-to-confluent>/kafka-topics \
  --bootstrap-server <kafka-hostname>:9092 \
  --command-config ~/my-user-name.properties \
  --topic test-topic-1 \
  --create \
  --replication-factor 1 \
  --partitions 3
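To verify the topic was created with the expected settings, you can describe it with the same authenticated client configuration. A sketch, assuming the same placeholders used above:

```shell
# Describe the new topic; requires at least read-level
# metadata access on the topic for this principal.
<path-to-confluent>/kafka-topics \
  --bootstrap-server <kafka-hostname>:9092 \
  --command-config ~/my-user-name.properties \
  --describe \
  --topic test-topic-1
```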
Produce to a Topic
To produce to a topic, you must have at least the DeveloperWrite role for that topic. You do not need to be an owner of the topic to produce to it.
To view which privileges have already been granted for each role, run the following command:
confluent iam rbac role describe <rolename>
Grant the user permissions to access the topic. Use DeveloperWrite role on the topic resource.
confluent iam rbac role-binding create \
  --principal User:<my-producer-user> \
  --role DeveloperWrite \
  --resource Topic:test-topic- \
  --prefix \
  --kafka-cluster-id <kafka-cluster-id>
Produce to a topic using the kafka-console-producer tool.
echo "test_message" | ./kafka-console-producer \
  --broker-list <kafka-hostname>:9092 \
  --topic test-topic-1 \
  --producer.config ~/my-producer-user.properties \
  --property parse.key=false
Any principal used by idempotent producers must be authorized for Write on the cluster. Binding either the DeveloperWrite or ResourceOwner RBAC role on the Kafka cluster grants Write permission. Note that DeveloperWrite is the less permissive of the two roles, and is the first recommendation. The following role binding grants the principal Write access to the cluster:
confluent iam rbac role-binding create \
--principal $PRINCIPAL \
--role DeveloperWrite \
--resource Cluster:kafka-cluster \
--kafka-cluster-id $KAFKA_CLUSTER_ID
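For context, idempotence is what triggers this cluster-level Write check: an idempotent producer requests a producer ID from the cluster at startup, and the broker authorizes that request at the cluster scope. A minimal sketch of the client-side producer settings involved (these go in the producer's properties file, not in the role binding):

```properties
# Enable exactly-once, in-order delivery per partition;
# the broker authorizes the producer-ID request as Write on the cluster.
enable.idempotence=true
acks=all
```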
Consume from a Topic
To consume from a topic, you must have DeveloperRead access to both the topic resource and the consumer group resource. If users require the ability to delete consumer groups, also assign the ResourceOwner role on the consumer group prefix. Note that consumers do not require additional cluster-level Kafka permissions for idempotence (as producers do).
To view which privileges have already been granted for each role, run the following command:
confluent iam rbac role describe <rolename>
Grant permission to topic resource.
confluent iam rbac role-binding create \
  --principal User:<my-consumer-user> \
  --role DeveloperRead \
  --resource Topic:test-topic- \
  --prefix \
  --kafka-cluster-id <kafka-cluster-id>
Grant permission to the consumer group.
To consume from a topic as a member of a consumer group, you must have access to the topic resource and consumer group resource. Grant the DeveloperRead role to the topic resource and consumer group resource. As mentioned in the previous step, assign the ResourceOwner role only if users require the ability to delete consumer groups.
confluent iam rbac role-binding create \
  --principal User:<my-consumer-user> \
  --role DeveloperRead \
  --resource Group:console-consumer- \
  --prefix \
  --kafka-cluster-id <kafka-cluster-id>
Consume using the kafka-console-consumer tool.
./kafka-console-consumer \
  --bootstrap-server <kafka-hostname>:9092 \
  --topic test-topic-1 \
  --consumer.config ~/my-consumer-user.properties \
  --from-beginning \
  --property parse.key=false
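Because the role binding above covers the console-consumer- prefix, you can check which consumer groups this principal can see by listing groups with the same client configuration. A sketch, assuming the placeholders used earlier:

```shell
# List consumer groups visible to this principal; console consumers
# default to group names beginning with "console-consumer-".
./kafka-consumer-groups \
  --bootstrap-server <kafka-hostname>:9092 \
  --command-config ~/my-consumer-user.properties \
  --list
```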