Authorization with Confluent Operator¶
Role-based Access Control¶
This guide walks you through an end-to-end setup of role-based access control (RBAC) with mTLS for Confluent Platform with Operator. Note that there are many ways you can modify this process for your own use cases and environments.
Note
Currently, RBAC with Operator is available for new installations, only supports mTLS mechanism for client authentication, and does not support central management of RBAC across multiple Kafka clusters.
In addition to the general prerequisites for Confluent Operator, you must have an LDAP server that Confluent Platform can use for authentication while using the Confluent Platform RBAC feature.
The examples in this guide use the following assumptions:
- $VALUES_FILE refers to the configuration file you set up in Create the global configuration file. To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file ($VALUES_FILE). However, in your production deployments, use the --set or --set-file option when applying sensitive data with Helm. For example:

  helm upgrade --install kafka \
    --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds,dc=test,dc=com" \
    --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
    --set kafka.enabled=true
- operator is the namespace that Confluent Platform is deployed in.
- All commands are executed in the helm directory under the directory Confluent Operator was downloaded to.
The following assumptions are specific to the examples used in this guide:
- You will use the external-dns service (provided with the Confluent Operator bundle) to manage DNS records that will be needed for external access to the platform.
- You have created the LDAP user/password for an example user who will be able to log into Control Center and successfully view all Confluent Platform components: user testadmin with password testadmin.
- You have created the LDAP user/password for a user who has a minimum of LDAP read-only permissions to allow the Metadata Service (MDS) to query LDAP about other users: user mds with password Developer!
- You have created the following LDAP users/passwords for all Confluent Platform components:
  - Kafka: kafka / kafka-secret
  - Confluent Control Center: c3 / c3-secret
  - ksqlDB: ksql / ksql-secret
  - Schema Registry: sr / sr-secret
  - Replicator: replicator / replicator-secret
  - Connect: connect / connect-secret
- You defined a super user for bootstrapping RBAC within Confluent Platform: kafka
- You enabled mTLS for authenticating external clients of the Confluent Platform.
- You are familiar with the concepts and use cases of the Confluent Platform RBAC feature, as described in Authorization using Role-Based Access Control.
The summary of the configuration and deployment process is:
- Enable and set the required fields in the Confluent Operator configuration file ($VALUES_FILE).
- Deploy Confluent Operator, ZooKeeper, and Kafka.
- Add the required role bindings for Confluent Platform components: Schema Registry, Connect, Replicator, ksqlDB, and Control Center.
- Deploy Confluent Platform components.
Global configuration¶
In your Operator configuration file ($VALUES_FILE), set the following:
global:
sasl:
plain:
username: kafka # User for inter-broker communication and internal communication
password: kafka-secret
authorization:
superUsers:
- User:kafka
rbac:
enabled: true
dependencies:
mds:
      endpoint: # See "MDS endpoint and ports" below
publicKey: |-
-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----
See Create a PEM key pair for details on creating a public-private key pair.
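Where openssl is available, one common way to produce such a key pair is sketched below. The file names are arbitrary examples, not names the Operator requires; see Create a PEM key pair for the documented steps.

```shell
# Sketch: generate a 2048-bit RSA private key, then extract the matching public key.
# The public key is pasted under global.dependencies.mds.publicKey; the private
# key is used later for kafka.services.mds.tokenKeyPair.
openssl genrsa -out mds-token-key.pem 2048
openssl rsa -in mds-token-key.pem -pubout -out mds-public-key.pem
```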
MDS endpoint and ports¶
The Metadata Service (MDS) runs as part of each Kafka broker. The
global.dependencies.mds.endpoint
property determines the address other Confluent Platform
components will use to communicate with MDS. Decide whether you want MDS to
serve HTTP or HTTPS traffic, and whether Confluent Platform components should
communicate with MDS over the internal Kubernetes network or externally through
a load balancer.
To have MDS serve HTTP, and to have Confluent Platform components access MDS over the internal Kubernetes network:
global:
  dependencies:
    mds:
      endpoint: http://<kafka_name>.svc.cluster.local:8091
kafka:
  services:
    mds:
      https: false
To have MDS serve HTTPS, and to have Confluent Platform components access MDS using the Kubernetes network:
global:
  dependencies:
    mds:
      endpoint: https://<kafka_name>.svc.cluster.local:8090
kafka:
  services:
    mds:
      https: true
To have MDS serve HTTP, and to have Confluent Platform components access MDS using the load balancer:
global:
  dependencies:
    mds:
      endpoint: http://<kafka_bootstrap_endpoint>
kafka:
  services:
    mds:
      https: false
To have MDS serve HTTPS, and to have Confluent Platform components access MDS using the load balancer:
global:
  dependencies:
    mds:
      endpoint: https://<kafka_bootstrap_endpoint>
kafka:
  services:
    mds:
      https: true
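For instance, with hypothetical values — a Kafka cluster whose internal bootstrap service resolves to kafka.operator.svc.cluster.local (substitute your own cluster's service DNS name) and HTTPS enabled — the internal-network variant might look like:

```yaml
global:
  dependencies:
    mds:
      endpoint: https://kafka.operator.svc.cluster.local:8090
kafka:
  services:
    mds:
      https: true
```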
Kafka configuration¶
In the Confluent Operator configuration file ($VALUES_FILE), set the following values for Kafka:
kafka:
services:
mds:
https: ----- [1]
tokenKeyPair: |- ----- [2]
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
ldap:
address: ldaps://ldap.operator.svc.cluster.local:636 ----- [3]
authentication:
simple: ----- [4]
principal: cn=mds,dc=test,dc=com
credentials: Developer!
configurations: ----- [5]
groupNameAttribute: cn
groupObjectClass: group
groupMemberAttribute: member
groupMemberAttributePattern: CN=(.*),DC=test,DC=com
groupSearchBase: dc=test,dc=com
userNameAttribute: cn
userMemberOfAttributePattern: CN=(.*),DC=test,DC=com
userObjectClass: organizationalRole
userSearchBase: dc=test,dc=com
tls:
enabled: true
authentication:
type: tls
principalMappingRules: ----- [6]
- RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L
- DEFAULT
    cacerts: |-
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
- [1] Set https: true if you want MDS to serve HTTPS traffic. See MDS endpoint and ports.
- [2] Set tokenKeyPair to a PEM-encoded RSA key pair that MDS can use for signing tokens. These are required for token authentication between Confluent Platform components. See Create a PEM key pair for details on creating a public-private key pair.
- [3] address is the URL of the LDAP server Confluent Platform uses for authentication. If you provide a secure LDAPS URL, kafka.tls.cacerts must be configured so that MDS trusts the certificate presented by your LDAP server.
- [4] principal and credentials are used by MDS to authenticate itself with the LDAP server.
- [5] configurations should be set according to your LDAP settings.
- [6] For mTLS, the principal is retrieved from the certificate, and the user must exist in LDAP. For details about using principal mapping rules, see Principal Mapping Rules for SSL Listeners (Extract a Principal from a Certificate).
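The effect of the mapping rule in [6] can be sketched outside Kafka with sed and tr. This is a rough illustration only, using a hypothetical distinguished name; Kafka's own rule engine is what actually applies RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L.

```shell
# Illustration: extract the CN component of a certificate DN and lowercase it,
# mimicking RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L (the trailing /L lowercases).
dn='CN=TestAdmin,OU=Engineering,DC=test,DC=com'   # hypothetical subject
principal=$(printf '%s' "$dn" | sed -E 's/^CN=([a-zA-Z0-9.]*).*$/\1/' | tr '[:upper:]' '[:lower:]')
echo "$principal"
```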
Confluent Platform component configuration¶
In the Confluent Operator configuration file ($VALUES_FILE), set the following values for each Confluent Platform component:
<component>:
dependencies:
mds:
authentication:
username:
password:
The usernames and passwords must be already set on your LDAP server, as described in the assumptions.
If you do not want to enter this sensitive data into your $VALUES_FILE, use the --set flag when applying the configuration with the helm upgrade --install command for each Confluent Platform component.
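For example, using the Schema Registry credentials assumed in this guide (sr / sr-secret), the fragment might look like:

```yaml
schemaregistry:
  dependencies:
    mds:
      authentication:
        username: sr
        password: sr-secret
```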
Deploy Kafka with Metadata Service (MDS)¶
With MDS now configured, deploy Kafka components in the following order as described in Install Confluent Operator and Confluent Platform:
- Operator
- ZooKeeper
- Kafka
Verify MDS configuration¶
Log into MDS to verify the correct configuration and to get the Kafka cluster ID. You need the Kafka cluster ID for component role bindings.
Replace https://<kafka_bootstrap_endpoint> in the commands below with the value you set in your config file ($VALUES_FILE) for global.dependencies.mds.endpoint. See MDS endpoint and ports for details.
Log into MDS as the Kafka super user as below:
confluent login \
  --url https://<kafka_bootstrap_endpoint> \
  --ca-cert-path <path-to-cacerts.pem>
You need to pass the --ca-cert-path flag if:

- You have configured MDS to serve HTTPS traffic (kafka.services.mds.https: true) in MDS endpoint and ports.
- The CA used to issue the MDS certificates is not trusted by the system where you are running these commands.

Provide the Kafka username and password when prompted, in this example, kafka and kafka-secret. You get a response confirming a successful login.
Verify that the advertised listeners are correctly configured using the following command:
curl -ik \
  -u '<kafka-user>:<kafka-user-password>' \
  https://<kafka_bootstrap_endpoint>/security/1.0/activenodes/https
Get the Kafka cluster ID using one of the following commands:
confluent cluster describe --url https://<kafka_bootstrap_endpoint>
curl -ik \
  https://<kafka_bootstrap_endpoint>/v1/metadata/id
For the examples in the remainder of this topic, save the Kafka cluster ID output by the above commands as an environment variable, $KAFKA_ID.
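One way to capture the cluster ID into $KAFKA_ID is sketched below. The JSON response is mocked here with a made-up ID; in practice, substitute the output of the curl command above.

```shell
# Mocked response; the real value comes from
#   curl -sk https://<kafka_bootstrap_endpoint>/v1/metadata/id
response='{"id":"2cFKGiiMQdWFXCU3o_eYDQ"}'   # hypothetical cluster ID
KAFKA_ID=$(printf '%s' "$response" | sed -E 's/.*"id"[[:space:]]*:[[:space:]]*"([^"]+)".*/\1/')
echo "$KAFKA_ID"
```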
Grant roles to Confluent Platform component principals¶
This section walks you through the workflow to create role bindings so that Confluent Platform components are deployed and function correctly.
Log into MDS as described in the first step in Verify MDS configuration, and run the confluent iam rolebinding commands as specified in the following sections.

Set the --principal option to User:<component-ldap-user> using the component LDAP users. The commands in this section use the example component users listed in the assumptions.
Schema Registry role binding¶
Grant the required roles to the Schema Registry user to deploy the Schema Registry service.
Set --schema-registry-cluster-id to id_schemaregistry_operator, or more generally to id_<SR-cluster-name>_<namespace>, where <SR-cluster-name> is the value of schemaregistry.name in your config file ($VALUES_FILE) and <namespace> is the Kubernetes namespace where you want to deploy Schema Registry.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:sr \
--role SecurityAdmin \
--schema-registry-cluster-id id_schemaregistry_operator
Set --resource to Group:id_schemaregistry_operator, or more generally to Group:id_<SR-cluster-name>_<namespace>.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:sr \
--role ResourceOwner \
--resource Group:id_schemaregistry_operator
Set --resource to Topic:_schemas_schemaregistry_operator, or more generally to Topic:_schemas_<SR-cluster-name>_<namespace>.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:sr \
--role ResourceOwner \
--resource Topic:_schemas_schemaregistry_operator
Kafka Connect role binding¶
Grant the required roles to the Connect user to deploy the Connect service.
Set --resource to Group:operator.connectors, or more generally to Group:<namespace>.<Connect-cluster-name>, where <Connect-cluster-name> is the value of connect.name in your config file ($VALUES_FILE) and <namespace> is the Kubernetes namespace where you want to deploy Connect.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:connect \
--role ResourceOwner \
--resource Group:operator.connectors
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:connect \
--role DeveloperWrite \
--resource Topic:_confluent-monitoring \
--prefix
Set --resource to Topic:operator.connectors-, or more generally to Topic:<namespace>.<Connect-cluster-name>-, where <Connect-cluster-name> is the value of connect.name in your config file ($VALUES_FILE) and <namespace> is the Kubernetes namespace where you want to deploy Connect.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:connect \
--role ResourceOwner \
--resource Topic:operator.connectors- \
--prefix
Confluent Replicator role binding¶
Grant the required roles to the Replicator user to deploy the Replicator service.
Set --resource to Group:operator.replicator, or more generally to Group:<namespace>.<Replicator-cluster-name>, where <Replicator-cluster-name> is the value of replicator.name in your config file ($VALUES_FILE) and <namespace> is the Kubernetes namespace where you want to deploy Replicator.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:replicator \
--role ResourceOwner \
--resource Group:operator.replicator
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:replicator \
--role DeveloperWrite \
--resource Topic:_confluent-monitoring \
--prefix
Set --resource to Topic:operator.replicator-, or more generally to Topic:<namespace>.<Replicator-cluster-name>-, where <Replicator-cluster-name> is the value of replicator.name in your config file ($VALUES_FILE) and <namespace> is the Kubernetes namespace where you want to deploy Replicator.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:replicator \
--role ResourceOwner \
--resource Topic:operator.replicator- \
--prefix
ksqlDB role binding¶
Grant the required roles to the ksqlDB user to deploy the ksqlDB service.
Set --ksql-cluster-id to operator.ksql_, or more generally to <namespace>.<ksql-cluster-name>_, where <ksql-cluster-name> is the value of ksql.name in your config file ($VALUES_FILE) and <namespace> is the Kubernetes namespace where you want to deploy ksqlDB.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:ksql \
--role ResourceOwner \
--ksql-cluster-id operator.ksql_ \
--resource KsqlCluster:ksql-cluster
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:ksql \
--role ResourceOwner \
--resource Topic:_confluent-ksql-operator.ksql_ \
--prefix
Confluent Control Center role binding¶
Grant the required roles to the Control Center user to deploy the Control Center service.
confluent iam rolebinding create \
--principal User:c3 \
--role SystemAdmin \
--kafka-cluster-id $KAFKA_ID
Deploy the remaining Confluent Platform components¶
After granting roles to the various Confluent Platform components, you can successfully deploy those components with Confluent Operator, and the components can be authorized to communicate with each other as necessary.
Follow Install Confluent Operator and Confluent Platform to deploy the rest of Confluent Platform.
Grant roles to the Confluent Control Center user to be able to administer Confluent Platform¶
Control Center users require separate roles for each Confluent Platform component and resource they wish to view and administer in the Control Center UI. Grant explicit permissions to the users as shown below.
In the following examples, the testadmin principal is used as a Control Center UI user. Since no access has been granted to testadmin yet, this user will be able to log into Control Center, but nothing will be visible until the appropriate permissions are granted as described in the following sections.
Grant permission to view and administer the Kafka cluster¶
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--role ClusterAdmin \
--principal User:testadmin
Grant permission to view and administer Schema Registry information¶
Set --schema-registry-cluster-id using the pattern id_<Schema-Registry-component-name>_<namespace>. The following example uses id_schemaregistry_operator.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--schema-registry-cluster-id id_schemaregistry_operator \
--principal User:testadmin \
--role SystemAdmin
Grant permission to view and administer the Connect cluster¶
Set --connect-cluster-id using the pattern <namespace>.<Connect-component-name>. The following example uses operator.connectors.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--connect-cluster-id operator.connectors \
--principal User:testadmin \
--role SystemAdmin
Grant permission to view and administer the Replicator cluster¶
Set --connect-cluster-id using the pattern <namespace>.<Connect-component-name>. You need a Connect cluster ID for the Replicator role binding because Replicator runs in a Connect cluster. The following example uses operator.replicator.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--connect-cluster-id operator.replicator \
--principal User:testadmin \
--role SystemAdmin
Grant permission to view and administer the ksqlDB cluster¶
Set --ksql-cluster-id using the pattern <namespace>.<ksqldb-component-name>_. The following example uses operator.ksql_.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--ksql-cluster-id operator.ksql_ \
--resource KsqlCluster:ksql-cluster \
--principal User:testadmin \
--role ResourceOwner
Access Control Lists¶
To deploy Kafka with Access Control Lists (ACLs) authorization, specify the settings in your configuration file ($VALUES_FILE).
The following settings are the recommended way to configure Confluent Platform to use ACLs for authorization.
global:
authorization:
simple:
enabled: true
superUsers:
- User:test
The following settings are still supported for backward compatibility. If you enable ACLs using both the configurations above and the configurations below, the configuration above takes precedence.
kafka:
options:
acl:
enabled: true
# Value for super.users server property in the format, User:UserName;
supers: "User:test"
Note
Any change to options:acl:supers triggers a Kafka cluster rolling upgrade.
Both examples above configure a superuser test that all Confluent Platform components use when communicating with the Kafka cluster. This is required so that components can create internal topics.
mTLS authentication with ACLs enabled¶
When mTLS authentication is enabled, Confluent Platform determines the identity of the authenticated principal from data extracted from the client certificate. The Subject section of the certificate is used to determine the username. For example, for the following certificate's subject (C=US, ST=CA, L=Palo Alto), the username is extracted as User:L=Palo Alto,ST=CA,C=US.
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
omitted...
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=Palo Alto, L=CA, O=Company, OU=Engineering, CN=TestCA
Validity
Not Before: Mar 28 16:37:00 2019 GMT
Not After : Mar 26 16:37:00 2024 GMT
Subject: C=US, ST=CA, L=Palo Alto
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
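The reordering of the subject into the principal shown above can be sketched as follows. This is an illustration of the RFC 2253 ordering (most-specific RDN first), not the code Kafka runs.

```shell
# Reverse the comma-separated RDNs of the subject to get the principal form.
subject='C=US, ST=CA, L=Palo Alto'
principal="User:$(printf '%s' "$subject" | awk -F', ' '{ for (i = NF; i >= 1; i--) printf "%s%s", $i, (i > 1 ? "," : "") }')"
echo "$principal"
```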
User:ANONYMOUS is the default user for internal clients and inter-broker communication. When Kafka is configured with mTLS authentication, the certificate username is required in addition to User:ANONYMOUS.
Typically, you will have many different users and clients of Confluent Platform. In some cases, you may have shared client certificates across multiple users/clients, but the best practice is to issue a unique client certificate for each user and client, with each certificate having a unique subject/username. For each user/client, decide what they should be authorized to access within Confluent Platform, and ensure that the corresponding ACLs have been created for the subject/username of their client certificates. Follow the instructions in Authorization using ACLs to enable ACL authorization for Kafka objects.