RBAC Demo¶
This demo showcases the role-based access control (RBAC) functionality in Confluent Platform. It serves as a reference for a workflow that enables the new RBAC feature across the services in Confluent Platform.
There are two ways to run the demo.
Run demo on local install of Confluent Platform¶
This method of running the demo is for users who have downloaded Confluent Platform to their local hosts.
Caveats¶
- For simplicity, this demo does not use LDAP; instead it uses the Hash Login service with statically defined users and passwords. Additional configuration is required if you want to augment the demo to connect to your LDAP server.
- The RBAC configurations and role bindings in this demo are not comprehensive; they are for development only, to set up the minimum RBAC functionality across all the services in Confluent Platform. Refer to the RBAC documentation for comprehensive configuration and production guidance.
Prerequisites¶
- Confluent CLI: the Confluent CLI must be installed on your machine, version `v0.127.0` or higher (note: as of Confluent Platform 5.3, the Confluent CLI is a separate download).
- Confluent Platform 5.3 or higher: use a clean install of Confluent Platform without modified properties files, or else the demo is not guaranteed to work.
- jq tool must be installed on your machine.
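A quick sanity check of these prerequisites before starting (a sketch; the exact version strings printed vary by install):

```bash
# Verify the demo prerequisites are on PATH before running any scripts
for tool in confluent jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before running the demo"
  fi
done
```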
Run the demo¶
Clone the confluentinc/examples repository from GitHub and check out the `5.3.1-post` branch.

```bash
git clone git@github.com:confluentinc/examples.git
cd examples
git checkout 5.3.1-post
```

Navigate to the `security/rbac/scripts` directory.

```bash
cd security/rbac/scripts
```
You have two options to run the demo.

Option 1: run the demo end-to-end for all services:

```bash
./run.sh
```

Option 2: step through it one service at a time:

```bash
./init.sh
./enable-rbac-broker.sh
./enable-rbac-schema-registry.sh
./enable-rbac-connect.sh
./enable-rbac-rest-proxy.sh
./enable-rbac-ksql-server.sh
./enable-rbac-control-center.sh
```
After you run the demo, view the configuration files:

```bash
# The original configuration bundled with Confluent Platform
ls /tmp/original_configs/

# Configurations added to each service's properties file
ls ../delta_configs/

# The modified configuration = original + delta
ls /tmp/rbac_configs/
```
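To see exactly what the demo changed for a given service, you can diff the original and modified properties files. A sketch; `server.properties` is an assumed filename under these directories, substitute the service you care about:

```bash
# Show only the RBAC-related lines the demo added or changed for the broker
# (server.properties is an assumed filename under these directories)
diff /tmp/original_configs/server.properties /tmp/rbac_configs/server.properties
```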
After you run the demo, view the log files for each of the services. Since this demo uses the Confluent CLI, all logs are saved in a temporary directory specified by `confluent local current`.

```bash
ls `confluent local current | tail -1`
```
In that directory, you can step through the configuration properties for each of the services:

```
connect
control-center
kafka
kafka-rest
ksql-server
schema-registry
zookeeper
```
In this demo, the metadata service (MDS) logs are saved in a temporary directory.

```bash
cat `confluent local current | tail -1`/kafka/logs/metadata-service.log
```
Stop the demo¶
To stop the demo, stop Confluent Platform and delete the files in `/tmp/`.

```bash
cd scripts
./cleanup.sh
```
Summary of Configurations and Role Bindings¶
Here is a summary of the delta configurations and required role bindings, by service.
Note
For simplicity, this demo uses the Hash Login service instead of LDAP. If you are using LDAP in your environment, extra configurations are required.
Broker¶
Additional RBAC configurations required for server.properties
Role bindings:
```bash
# Broker Admin
confluent iam rolebinding create --principal User:$USER_ADMIN_SYSTEM --role SystemAdmin --kafka-cluster-id $KAFKA_CLUSTER_ID

# Producer/Consumer
confluent iam rolebinding create --principal User:$USER_CLIENT_A --role ResourceOwner --resource Topic:$TOPIC1 --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_CLIENT_A --role DeveloperRead --resource Group:console-consumer- --prefix --kafka-cluster-id $KAFKA_CLUSTER_ID
```
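After creating these bindings, you can confirm what the client principal was granted using the listing syntax this demo covers:

```bash
# List all role bindings for the producer/consumer principal on this Kafka cluster
confluent iam rolebinding list --principal User:$USER_CLIENT_A --kafka-cluster-id $KAFKA_CLUSTER_ID
```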
Schema Registry¶
Additional RBAC configurations required for schema-registry.properties
Role bindings:
```bash
# Schema Registry Admin
confluent iam rolebinding create --principal User:$USER_ADMIN_SCHEMA_REGISTRY --role ResourceOwner --resource Topic:_schemas --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_SCHEMA_REGISTRY --role SecurityAdmin --kafka-cluster-id $KAFKA_CLUSTER_ID --schema-registry-cluster-id $SCHEMA_REGISTRY_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_SCHEMA_REGISTRY --role ResourceOwner --resource Group:$SCHEMA_REGISTRY_CLUSTER_ID --kafka-cluster-id $KAFKA_CLUSTER_ID

# Client connecting to Schema Registry
confluent iam rolebinding create --principal User:$USER_CLIENT_A --role ResourceOwner --resource Subject:$SUBJECT --kafka-cluster-id $KAFKA_CLUSTER_ID --schema-registry-cluster-id $SCHEMA_REGISTRY_CLUSTER_ID
```
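With the `Subject:$SUBJECT` binding in place, the client can register schemas over Schema Registry's REST API using HTTP basic auth. A minimal sketch, assuming Schema Registry on `localhost:8081`; `$CLIENT_A_PASSWORD` is a hypothetical variable holding the statically defined Hash Login password:

```bash
# Register a trivial string schema under $SUBJECT as the authorized client
# ($CLIENT_A_PASSWORD is a placeholder for the statically defined password)
curl -s -u "$USER_CLIENT_A:$CLIENT_A_PASSWORD" \
  -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "\"string\""}' \
  "http://localhost:8081/subjects/$SUBJECT/versions"
```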
Connect¶
Additional RBAC configurations required for connect-avro-distributed.properties
Additional RBAC configurations required for a source connector
Additional RBAC configurations required for a sink connector
Role bindings:
```bash
# Connect Admin
confluent iam rolebinding create --principal User:$USER_ADMIN_CONNECT --role ResourceOwner --resource Topic:connect-configs --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_CONNECT --role ResourceOwner --resource Topic:connect-offsets --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_CONNECT --role ResourceOwner --resource Topic:connect-statuses --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_CONNECT --role ResourceOwner --resource Group:connect-cluster --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_CONNECT --role ResourceOwner --resource Topic:_secrets --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_CONNECT --role ResourceOwner --resource Group:secret-registry --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_CONNECT --role SecurityAdmin --kafka-cluster-id $KAFKA_CLUSTER_ID --connect-cluster-id $CONNECT_CLUSTER_ID

# Connector Submitter
confluent iam rolebinding create --principal User:$USER_CONNECTOR_SUBMITTER --role ResourceOwner --resource Connector:$CONNECTOR_NAME --kafka-cluster-id $KAFKA_CLUSTER_ID --connect-cluster-id $CONNECT_CLUSTER_ID

# Connector
confluent iam rolebinding create --principal User:$USER_CONNECTOR --role ResourceOwner --resource Topic:$TOPIC2_AVRO --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_CONNECTOR --role ResourceOwner --resource Subject:${TOPIC2_AVRO}-value --kafka-cluster-id $KAFKA_CLUSTER_ID --schema-registry-cluster-id $SCHEMA_REGISTRY_CLUSTER_ID
```
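The submitter principal can then create the connector through the Connect REST API. A hedged sketch, assuming Connect on `localhost:8083`; `$SUBMITTER_PASSWORD` is a hypothetical variable and the stock FileStreamSource connector is used as a stand-in config:

```bash
# Submit a connector named $CONNECTOR_NAME as the submitter principal
# (password variable and connector config are illustrative placeholders)
curl -s -u "$USER_CONNECTOR_SUBMITTER:$SUBMITTER_PASSWORD" \
  -X POST -H "Content-Type: application/json" \
  --data '{
    "name": "'"$CONNECTOR_NAME"'",
    "config": {
      "connector.class": "FileStreamSource",
      "file": "/tmp/data.txt",
      "topic": "'"$TOPIC2_AVRO"'"
    }
  }' \
  http://localhost:8083/connectors
```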
REST Proxy¶
Additional RBAC configurations required for kafka-rest.properties
Role bindings:
```bash
# REST Proxy Admin: no additional administrative rolebindings required
# because REST Proxy just does impersonation

# Producer/Consumer
confluent iam rolebinding create --principal User:$USER_CLIENT_RP --role ResourceOwner --resource Topic:$TOPIC3 --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_CLIENT_RP --role DeveloperRead --resource Group:$CONSUMER_GROUP --kafka-cluster-id $KAFKA_CLUSTER_ID
```
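Because REST Proxy impersonates the authenticated user, the client's own bindings are what authorize the request. A sketch of producing JSON records, assuming REST Proxy on its default port 8082; `$CLIENT_RP_PASSWORD` is a hypothetical variable:

```bash
# Produce a JSON record to $TOPIC3 through REST Proxy as the client principal
curl -s -u "$USER_CLIENT_RP:$CLIENT_RP_PASSWORD" \
  -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" \
  --data '{"records": [{"value": {"greeting": "hello"}}]}' \
  "http://localhost:8082/topics/$TOPIC3"
```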
KSQL¶
Additional RBAC configurations required for ksql-server.properties
Role bindings:
```bash
# KSQL Server Admin
confluent iam rolebinding create --principal User:$USER_ADMIN_KSQL --role ResourceOwner --resource Topic:_confluent-ksql-${KSQL_SERVICE_ID}_command_topic --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_KSQL --role ResourceOwner --resource Topic:${KSQL_SERVICE_ID}ksql_processing_log --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_KSQL --role SecurityAdmin --kafka-cluster-id $KAFKA_CLUSTER_ID --ksql-cluster-id $KSQL_SERVICE_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_KSQL --role ResourceOwner --resource KsqlCluster:ksql-cluster --kafka-cluster-id $KAFKA_CLUSTER_ID --ksql-cluster-id $KSQL_SERVICE_ID

# KSQL CLI queries
confluent iam rolebinding create --principal User:${USER_KSQL} --role DeveloperWrite --resource KsqlCluster:ksql-cluster --kafka-cluster-id $KAFKA_CLUSTER_ID --ksql-cluster-id $KSQL_SERVICE_ID
confluent iam rolebinding create --principal User:${USER_KSQL} --role DeveloperRead --resource Topic:$TOPIC1 --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_KSQL} --role DeveloperRead --resource Group:_confluent-ksql-${KSQL_SERVICE_ID} --prefix --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_KSQL} --role DeveloperRead --resource Topic:${KSQL_SERVICE_ID}ksql_processing_log --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_ADMIN_KSQL} --role DeveloperRead --resource Group:_confluent-ksql-${KSQL_SERVICE_ID} --prefix --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_ADMIN_KSQL} --role DeveloperRead --resource Topic:$TOPIC1 --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_KSQL} --role ResourceOwner --resource Topic:_confluent-ksql-${KSQL_SERVICE_ID}transient --prefix --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_ADMIN_KSQL} --role ResourceOwner --resource Topic:_confluent-ksql-${KSQL_SERVICE_ID}transient --prefix --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_KSQL} --role ResourceOwner --resource Topic:${CSAS_STREAM1} --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_ADMIN_KSQL} --role ResourceOwner --resource Topic:${CSAS_STREAM1} --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_KSQL} --role ResourceOwner --resource Topic:${CTAS_TABLE1} --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_ADMIN_KSQL} --role ResourceOwner --resource Topic:${CTAS_TABLE1} --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:${USER_ADMIN_KSQL} --role ResourceOwner --resource Topic:_confluent-ksql-${KSQL_SERVICE_ID} --prefix --kafka-cluster-id $KAFKA_CLUSTER_ID
```
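The `CSAS_STREAM1` and `CTAS_TABLE1` bindings above cover the output topics of persistent queries. A sketch of submitting such a query over the KSQL REST API, assuming KSQL on `localhost:8088`; `$KSQL_PASSWORD` and the pre-existing stream `source_stream` are hypothetical:

```bash
# Create the persistent query whose output topic is $CSAS_STREAM1
# ($KSQL_PASSWORD and source_stream are illustrative placeholders)
curl -s -u "$USER_KSQL:$KSQL_PASSWORD" \
  -X POST -H "Content-Type: application/vnd.ksql.v1+json" \
  --data '{"ksql": "CREATE STREAM '"$CSAS_STREAM1"' AS SELECT * FROM source_stream;", "streamsProperties": {}}' \
  http://localhost:8088/ksql
```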
Control Center¶
Additional RBAC configurations required for control-center-dev.properties
Role bindings:
```bash
# Control Center Admin
confluent iam rolebinding create --principal User:$USER_ADMIN_C3 --role SystemAdmin --kafka-cluster-id $KAFKA_CLUSTER_ID

# Control Center user
confluent iam rolebinding create --principal User:$USER_CLIENT_C --role DeveloperRead --resource Topic:$TOPIC1 --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_CLIENT_C --role DeveloperRead --resource Topic:$TOPIC2_AVRO --kafka-cluster-id $KAFKA_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_CLIENT_C --role DeveloperRead --resource Subject:${TOPIC2_AVRO}-value --kafka-cluster-id $KAFKA_CLUSTER_ID --schema-registry-cluster-id $SCHEMA_REGISTRY_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_ADMIN_C3 --role ClusterAdmin --kafka-cluster-id $KAFKA_CLUSTER_ID --schema-registry-cluster-id $SCHEMA_REGISTRY_CLUSTER_ID
confluent iam rolebinding create --principal User:$USER_CLIENT_C --role DeveloperRead --resource Connector:$CONNECTOR_NAME --kafka-cluster-id $KAFKA_CLUSTER_ID --connect-cluster-id $CONNECT_CLUSTER_ID
```
General Rolebinding Syntax¶
The general rolebinding syntax is:

```bash
confluent iam rolebinding create --role [role name] --principal User:[username] --resource [resource type]:[resource name] --[cluster type]-cluster-id [insert cluster id]
```
Available role types and permissions can be found here.
Resource types include: Cluster, Group, Subject, Connector, TransactionalId, Topic.
Listing a User's Roles¶
General listing syntax:

```bash
confluent iam rolebinding list User:[username] [clusters and resources you want to view their roles on]
```

For example, list the roles of `User:bender` on Kafka cluster `KAFKA_CLUSTER_ID`:

```bash
confluent iam rolebinding list --principal User:bender --kafka-cluster-id $KAFKA_CLUSTER_ID
```
Run demo in Docker¶
This method of running the demo is for users who have Docker. This demo setup includes:
- ZooKeeper
- Kafka with MDS, connected to OpenLDAP
- Schema Registry
- KSQL
- Kafka Connect
- REST Proxy
- Confluent Control Center
- OpenLDAP
Prerequisites¶
- Docker (validated on Docker for Mac version 18.03.0-ce-mac60)
- `zookeeper-shell` must be on your `PATH`
- Confluent CLI: the Confluent CLI must be installed on your machine, version `v0.127.0` or higher (note: as of Confluent Platform 5.3, the Confluent CLI is a separate download)
Image Versions¶
- You can use production or pre-production images. This is configured via the environment variables `PREFIX` and `TAG`. `PREFIX` is prepended to the image name, before the `/`; `TAG` is a Docker tag, appended after the `:`.
- E.g., with `PREFIX=confluentinc` and `TAG=5.3.1`, Kafka will use the following image: `confluentinc/cp-server:5.3.1`.
- If these variables are not set in the shell, they will be read from the `.env` file. Shell variables override whatever is set in the `.env` file.
- You can also edit the `.env` file directly.
- All images use the same tag and prefix. If you need to customize this behavior, edit the `docker-compose.yml` file.
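For example, to pin the demo to the released 5.3.1 images by exporting the variables before starting (shell variables take precedence over `.env`):

```bash
# Shell variables override .env; Kafka will then use confluentinc/cp-server:5.3.1
export PREFIX=confluentinc
export TAG=5.3.1
./confluent-start.sh
```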
Run the demo¶
Clone the confluentinc/examples repository from GitHub and check out the `5.3.1-post` branch.

```bash
git clone git@github.com:confluentinc/examples.git
cd examples
git checkout 5.3.1-post
```

Navigate to the `security/rbac/rbac-docker` directory.

```bash
cd security/rbac/rbac-docker
```
To start Confluent Platform, run:

```bash
./confluent-start.sh
```

You can optionally pass in `-p project-name` to name the docker-compose project; otherwise it defaults to `rbac`. You can use standard docker-compose commands, such as listing all containers:

```bash
docker-compose -p rbac ps
```

or tailing Confluent Control Center logs:

```bash
docker-compose -p rbac logs --tail 200 -f control-center
```

The Kafka broker is available at `localhost:9094`, not `localhost:9092`.
| Service | Host:Port |
|---|---|
| Kafka | localhost:9094 |
| MDS | localhost:8090 |
| C3 | localhost:9021 |
| Connect | localhost:8083 |
| KSQL | localhost:8088 |
| OpenLDAP | localhost:389 |
| Schema Registry | localhost:8081 |
Grant Rolebindings¶
Log in to the MDS URL as `professor:professor`, the configured super user, to grant initial role bindings.

```bash
confluent login --url http://localhost:8090
```
Set `KAFKA_CLUSTER_ID`.

```bash
ZK_HOST=localhost:2181
KAFKA_CLUSTER_ID=$(zookeeper-shell $ZK_HOST get /cluster/id 2> /dev/null | grep version | jq -r .id)
```
Grant `User:bender` ResourceOwner on the prefixed resource `Topic:foo` on Kafka cluster `KAFKA_CLUSTER_ID`.

```bash
confluent iam rolebinding create --principal User:bender --role ResourceOwner --kafka-cluster-id $KAFKA_CLUSTER_ID --resource Topic:foo --prefix
```
List the roles of `User:bender` on Kafka cluster `KAFKA_CLUSTER_ID`.

```bash
confluent iam rolebinding list --principal User:bender --kafka-cluster-id $KAFKA_CLUSTER_ID
```
The general listing syntax is:

```bash
confluent iam rolebinding list User:[username] [clusters and resources you want to view their roles on]
```

The general rolebinding syntax is:

```bash
confluent iam rolebinding create --role [role name] --principal User:[username] --resource [resource type]:[resource name] --[cluster type]-cluster-id [insert cluster id]
```
Available role types and permissions can be found here.
Resource types include: Cluster, Group, Subject, Connector, TransactionalId, Topic.
Users¶
| Description | Name | Role |
|---|---|---|
| Super User | User:professor | SystemAdmin |
| Connect | User:fry | SystemAdmin |
| Schema Registry | User:leela | SystemAdmin |
| KSQL | User:zoidberg | SystemAdmin |
| C3 | User:hermes | SystemAdmin |
| Test User | User:bender | <none> |
User `bender:bender` doesn't have any role bindings set up and can be used as a user under test.
- You can use the `./client-configs/bender.properties` file to authenticate as `bender` from Kafka console commands (like `kafka-console-producer`, `kafka-console-consumer`, `kafka-topics`, and the like).
- This file is also mounted into the broker Docker container, so you can run `docker-compose -p [project-name] exec broker /bin/bash` to open bash on the broker and then use console commands with `/etc/client-configs/bender.properties`.
- When running console commands from inside the broker container, use `localhost:9092`.
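Putting the pieces together, a sketch of producing as `bender` from inside the broker container, to a topic under the `foo` prefix that bender owns (consuming would additionally require a `Group` role binding, as shown in the local demo's role bindings):

```bash
# Inside the broker container (docker-compose -p rbac exec broker /bin/bash):
# produce to a topic covered by bender's Topic:foo prefix binding
kafka-console-producer --broker-list localhost:9092 --topic foo.bar \
  --producer.config /etc/client-configs/bender.properties
```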