Configure Authorization with Confluent Operator¶
Role-based Access Control¶
This guide walks you through an end-to-end setup of role-based access control (RBAC) for Confluent Platform with Operator. Note that there are many ways you can modify this process for your own use cases and environments.
Note
- Currently, RBAC with Operator is available only for new installations.
- Operator does not support central management of RBAC across multiple Kafka clusters.
The examples in this guide use the following assumptions:
- $VALUES_FILE refers to the configuration file you set up in Create the global configuration file. To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file ($VALUES_FILE). However, in your production deployments, use the --set or --set-file option when applying sensitive data with Helm. For example:

  helm upgrade --install kafka \
    --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds\,dc=test\,dc=com" \
    --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
    --set kafka.enabled=true

- operator is the namespace that Confluent Platform is deployed in.
- All commands are executed in the helm directory under the directory Confluent Operator was downloaded to.
The following assumptions are specific to the examples used in this guide:
- You have created the LDAP user/password for an example user who will be able to log into Control Center and successfully view all Confluent Platform components: user testadmin with password testadmin.
- You have created the LDAP user/password for a user who has a minimum of LDAP read-only permissions to allow Metadata Service (MDS) to query LDAP about other users: user mds with password Developer!.
- You have created the following LDAP users/passwords for all Confluent Platform components:
  - Kafka: kafka / kafka-secret
  - Confluent REST API: erp / erp-secret
  - Confluent Control Center: c3 / c3-secret
  - ksqlDB: ksql / ksql-secret
  - Schema Registry: sr / sr-secret
  - Replicator: replicator / replicator-secret
  - Connect: connect / connect-secret
- You defined a super user for bootstrapping RBAC within Confluent Platform: kafka.
- You are familiar with the concepts and use cases of the Confluent Platform RBAC feature, as described in Authorization using Role-Based Access Control.
The summary of the configuration and deployment process is:
- Enable and set the required fields in the Confluent Operator configuration file ($VALUES_FILE).
- Deploy Confluent Operator, ZooKeeper, and Kafka.
- Add the required role bindings for Confluent Platform components: Schema Registry, Connect, Replicator, ksqlDB, and Control Center.
- Deploy Confluent Platform components.
Requirements¶
The following are the requirements for enabling and using RBAC with Operator:
- You must have an LDAP server that Confluent Platform can use for authentication.
- Confluent REST API is automatically enabled for RBAC and cannot be disabled when RBAC is enabled.
  - Use the Kafka bootstrap endpoint (same as the MDS endpoint) to access Confluent REST API.
  - You must create a user for Confluent REST API in LDAP and provide the username and password as described in Kafka configuration.
- You must provide a valid Schema Registry license as described in Add a license key.
Global configuration¶
In your Operator configuration file ($VALUES_FILE), set the following:
global:
  sasl:
    plain:
      username: kafka ----- [1]
      password: kafka-secret ----- [2]
  authorization:
    superUsers: ----- [3]
      - User:kafka
    rbac:
      enabled: true ----- [4]
  dependencies:
    mds:
      endpoint: ----- [5]
      publicKey: |- ----- [6]
        -----BEGIN PUBLIC KEY-----
        ...
        -----END PUBLIC KEY-----
[1] Set username to the user ID used for inter-broker communication and internal communication, e.g. kafka.

[2] Set password to the password of the user in [1], e.g. kafka-secret.

[3] Provide the super users to bootstrap the Kafka cluster, e.g. kafka.

[4] Set enabled: true to enable RBAC.

[5] Set endpoint to the Kafka bootstrap endpoint as below. For load balancer and static host-based routing, <kafka-bootstrap-host> is derived from the bootstrap prefix and the domain. For others, you provide <kafka-bootstrap-host>.

- To internally access MDS over HTTPS: https://<kafka_name>.<namespace>.svc.cluster.local:8090
- To internally access MDS over HTTP: http://<kafka_name>.<namespace>.svc.cluster.local:8091
- To externally access MDS over HTTPS using a load balancer: https://<kafka_bootstrap_host>:443
- To externally access MDS over HTTP using a load balancer: http://<kafka_bootstrap_host>:80
- To externally access MDS using NodePort: http(s)://<kafka_bootstrap_host>:<portOffset+1>
- To externally access MDS with static host-based routing: https://<kafka_bootstrap_host>:8090
- To externally access MDS with static port-based routing: http(s)://<kafka_bootstrap_host>:<portOffset+1>

This endpoint is also used to access Confluent REST API.

[6] See Create a PEM key pair for details on creating a public-private key pair.
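If you do not already have a key pair, the following is a minimal sketch of generating one with OpenSSL; the file names are placeholders, and the Create a PEM key pair section remains the authoritative procedure:

# Generate a 2048-bit RSA private key (used as the MDS token key pair).
openssl genrsa -out mds-tokenkeypair.pem 2048

# Extract the matching public key (the value for global.dependencies.mds.publicKey).
openssl rsa -in mds-tokenkeypair.pem -pubout -out mds-publickey.pem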
Kafka configuration¶
In the Confluent Operator configuration file ($VALUES_FILE), set the following values for Kafka:
kafka:
  services:
    restProxy:
      authentication:
        username: ----- [1]
        password: ----- [2]
    mds:
      https: ----- [3]
      tokenKeyPair: |- ----- [4]
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
      ldap:
        address: ldaps://ldap.operator.svc.cluster.local:636 ----- [5]
        authentication:
          simple: ----- [6]
            principal: cn=mds,dc=test,dc=com
            credentials: Developer!
        configurations: ----- [7]
          groupNameAttribute: cn
          groupObjectClass: group
          groupMemberAttribute: member
          groupMemberAttributePattern: CN=(.*),DC=test,DC=com
          groupSearchBase: dc=test,dc=com
          userNameAttribute: cn
          userMemberOfAttributePattern: CN=(.*),DC=test,DC=com
          userObjectClass: organizationalRole
          userSearchBase: dc=test,dc=com
  tls:
    enabled: true
    internalTLS: true ----- [8]
    authentication:
      principalMappingRules: ----- [9]
        - RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L
        - DEFAULT
    cacerts: |-
- [1] Set username to the Confluent REST API user you set up in LDAP.
- [2] Set password to the password of the user specified in [1].
- [3] Set https: true if you want MDS to serve HTTPS traffic.
- [4] Set tokenKeyPair to a PEM-encoded RSA key pair that MDS can use for signing tokens. These are required for token authentication between Confluent Platform components. See Create a PEM key pair for details on creating a public-private key pair.
- [5] address is the URL for the LDAP server Confluent Platform uses for authentication. If you provide a secure LDAPS URL, kafka.tls.cacerts must be configured to allow MDS to trust the certificate presented by your LDAP server.
- [6] principal and credentials are used by MDS to authenticate itself with the LDAP server.
- [7] configurations should be set according to your LDAP settings.
- [8] When using NodePort or static routing for external access, set internalTLS: true.
- [9] For mTLS (kafka.tls.authentication.type: tls), the principal is retrieved from the certificate, and the user must exist in LDAP. For details about using principal mapping rules, see Principal Mapping Rules for SSL Listeners (Extract a Principal from a Certificate).
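Before deploying, you can optionally confirm that the simple bind principal in [6] and the search bases in [7] match your directory. The following is a hedged sketch using the standard ldapsearch tool with the example values above, run from a host that can reach the LDAP server:

ldapsearch -H ldaps://ldap.operator.svc.cluster.local:636 \
  -D "cn=mds,dc=test,dc=com" -w 'Developer!' \
  -b "dc=test,dc=com" "(objectClass=organizationalRole)" cn

If the bind succeeds and the expected user entries are returned, MDS should be able to query LDAP with the same settings.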
Configure RBAC with static host-based routing¶
You need to provide additional configuration information when you enable RBAC with static host-based routing.
To configure Kafka with RBAC and external access with static host-based routing, perform the following steps:
Configure and deploy Kafka as described in Kafka configuration.
Enable internalTLS:

kafka:
  tls:
    internalTLS: true
Deploy an Ingress controller.
When externally exposing RBAC-enabled Kafka, you need to deploy an Ingress controller with the SSL Passthrough feature enabled.
For example, the following command installs an NGINX controller for Kafka:
helm upgrade --install ingress-with-sni stable/nginx-ingress \
  --set rbac.create=true \
  --set controller.publishService.enabled=true \
  --set controller.extraArgs.enable-ssl-passthrough="true" \
  --set tcp.8090="operator/kafka-bootstrap:8090"
The --set tcp.8090="operator/kafka-bootstrap:8090" flag is set for accessing MDS externally.

Create a bootstrap service of the ClusterIP type with the MDS port in addition to the external port.
For example:
Create the bootstrap.yaml file with the following. Note that it exposes the MDS port (8090) in addition to the external port (9092), as the Ingress resource below routes to both:

apiVersion: v1
kind: Service
metadata:
  name: kafka-bootstrap
  namespace: operator
  labels:
    app: kafka-bootstrap
spec:
  ports:
    - name: external
      port: 9092
      protocol: TCP
      targetPort: 9092
    - name: metadata
      port: 8090
      protocol: TCP
      targetPort: 8090
  selector:
    physicalstatefulcluster.core.confluent.cloud/name: kafka
    physicalstatefulcluster.core.confluent.cloud/version: v1
  type: ClusterIP
Run the following command to create a bootstrap service with the above settings:
kubectl apply -f bootstrap.yaml -n <namespace>
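To confirm that the service was created and that its selector matches the broker pods, you can run:

kubectl get svc kafka-bootstrap -n operator
kubectl get endpoints kafka-bootstrap -n operator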
Add the MDS ports to the Ingress resource for all the brokers and the bootstrap services.
For example:
Create ingress-resource.yaml for an Ingress resource for the NGINX Ingress controller to expose a Kafka bootstrap service and three brokers. The domain name is platformops.dev.gcp.devel.cpdev.cloud, and the bootstrap prefix/brokerPrefix/port are set to the default values. The rules apply to the Kafka broker and bootstrap service hosts specified.

All hosts should resolve to the same Ingress load balancer external IP.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-sni
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  tls:
    - hosts:
        - b0.platformops.dev.gcp.devel.cpdev.cloud
        - b1.platformops.dev.gcp.devel.cpdev.cloud
        - b2.platformops.dev.gcp.devel.cpdev.cloud
        - kafka.platformops.dev.gcp.devel.cpdev.cloud
  rules:
    - host: kafka.platformops.dev.gcp.devel.cpdev.cloud
      http:
        paths:
          - backend:
              serviceName: kafka-bootstrap
              servicePort: 9092
          - backend:
              serviceName: kafka-bootstrap
              servicePort: 8090
    - host: b0.platformops.dev.gcp.devel.cpdev.cloud
      http:
        paths:
          - backend:
              serviceName: kafka-0-internal
              servicePort: 9092
    - host: b1.platformops.dev.gcp.devel.cpdev.cloud
      http:
        paths:
          - backend:
              serviceName: kafka-1-internal
              servicePort: 9092
    - host: b2.platformops.dev.gcp.devel.cpdev.cloud
      http:
        paths:
          - backend:
              serviceName: kafka-2-internal
              servicePort: 9092
Run the following command to create an Ingress resource with the above settings:
kubectl apply -f ingress-resource.yaml -n <namespace>
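To verify that SSL passthrough is routing on SNI as expected, one option is to open a TLS connection through the controller with openssl, using the external IP of the Ingress load balancer (shown here as a placeholder):

openssl s_client -connect <ingress-lb-external-ip>:443 \
  -servername kafka.platformops.dev.gcp.devel.cpdev.cloud </dev/null

The certificate returned should be the one presented by the Kafka bootstrap backend, not a certificate from the Ingress controller itself.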
Configure RBAC with static port-based routing¶
You need to provide additional configuration information when you enable RBAC with static port-based routing.
To configure Kafka with RBAC and external access with static port-based routing, perform the following steps:
Note: If you want MDS to serve HTTP traffic instead of HTTPS, replace port 8090 with port 8091 in the following steps.
Configure and deploy Kafka as described in Kafka configuration.
Install an Ingress controller with additional MDS ports mapped.
For example, the following command installs the NGINX controller using Helm:
helm install <release name> stable/nginx-ingress \
  --set controller.ingressClass=kafka \
  --set tcp.9093="operator/kafka-bootstrap:9092" \
  --set tcp.9095="operator/kafka-0-internal:9092" \
  --set tcp.9097="operator/kafka-1-internal:9092" \
  --set tcp.9099="operator/kafka-2-internal:9092" \
  --set tcp.9094="operator/kafka-bootstrap:8090" \
  --set tcp.9096="operator/kafka-0-internal:8090" \
  --set tcp.9098="operator/kafka-1-internal:8090" \
  --set tcp.9100="operator/kafka-2-internal:8090"
Create a bootstrap service with the MDS port.
For example:
Create the bootstrap.yaml file with the following:

apiVersion: v1
kind: Service
metadata:
  name: kafka-bootstrap
  namespace: operator
  labels:
    app: kafka-bootstrap
spec:
  ports:
    - name: external
      port: 9092
      protocol: TCP
      targetPort: 9092
    - name: metadata
      port: 8090
      protocol: TCP
      targetPort: 8090
  selector:
    physicalstatefulcluster.core.confluent.cloud/name: kafka
    physicalstatefulcluster.core.confluent.cloud/version: v1
  type: ClusterIP
Run the following command to create a bootstrap service with the above settings:
kubectl apply -f bootstrap.yaml -n <namespace>
Add MDS ports in the Ingress resource for all the brokers and the bootstrap services.
For example:
Create ingress-resource.yaml for an Ingress resource for the NGINX Ingress controller to expose a Kafka bootstrap service and three brokers, with the bootstrap prefix, brokerPrefix, and ports set to the default values. The rules apply to the Kafka broker and bootstrap hosts specified.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: <ingress resource name>
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: <host name>
      http:
        paths:
          - path:
            backend:
              serviceName: kafka-bootstrap
              servicePort: 9092
          - path:
            backend:
              serviceName: kafka-0-internal
              servicePort: 9092
          - path:
            backend:
              serviceName: kafka-1-internal
              servicePort: 9092
          - path:
            backend:
              serviceName: kafka-2-internal
              servicePort: 9092
          - path:
            backend:
              serviceName: kafka-bootstrap
              servicePort: 8090
          - path:
            backend:
              serviceName: kafka-0-internal
              servicePort: 8090
          - path:
            backend:
              serviceName: kafka-1-internal
              servicePort: 8090
          - path:
            backend:
              serviceName: kafka-2-internal
              servicePort: 8090
Run the following command to create an Ingress resource:
kubectl apply -f ingress-resource.yaml -n <namespace>
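With the port mappings above, MDS is reachable externally on port 9094 of the Ingress host. As a quick check, assuming MDS serves HTTPS (use http:// if it serves HTTP on 8091):

curl -ik https://<host name>:9094/v1/metadata/id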
Confluent Platform components configuration¶
In the Confluent Operator configuration file ($VALUES_FILE), set the following values for each Confluent Platform component:
<component>:
  dependencies:
    mds:
      authentication:
        username:
        password:
The usernames and passwords must be already set on your LDAP server, as described in the assumptions.
If you do not want to enter this sensitive data into your $VALUES_FILE, use the --set flag when applying the configuration with the helm upgrade --install command for each Confluent Platform component.
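For example, a hedged sketch for Schema Registry, assuming the chart directory is ./confluent-operator and using the example sr user from the assumptions:

helm upgrade --install schemaregistry ./confluent-operator \
  --values $VALUES_FILE \
  --set schemaregistry.enabled=true \
  --set schemaregistry.dependencies.mds.authentication.username=sr \
  --set schemaregistry.dependencies.mds.authentication.password=sr-secret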
ksqlDB configuration with RBAC¶
When deploying Confluent Platform 5.5.x images with RBAC using Operator 1.6.x, ksqlDB pods fail the liveness and readiness probe checks. To work around the issue, configure /v1/metadata/id as the liveness and readiness probe in the <operator home directory>/helm/confluent-operator/charts/ksql/templates/ksql-psc.yaml file as shown in the following example:
liveness_probe:
  http:
    path: /v1/metadata/id
    port: 9088
    initial_delay_seconds: 30
    timeout_seconds: 10
readiness_probe:
  http:
    path: /v1/metadata/id
    port: 9088
    initial_delay_seconds: 30
    timeout_seconds: 10
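After redeploying ksqlDB, you can confirm that the new probes were picked up, assuming the first ksqlDB pod is named ksql-0:

kubectl describe pod ksql-0 -n operator | grep -E 'Liveness|Readiness'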
Deploy Kafka with Metadata Service (MDS)¶
With MDS now configured, deploy the following components, in order, as described in Install Confluent Operator and Confluent Platform:
- Operator
- ZooKeeper
- Kafka
Verify MDS configuration¶
Log into MDS to verify the correct configuration and to get the Kafka cluster ID. You need the Kafka cluster ID for component role bindings.
Replace https://<kafka_bootstrap_endpoint> in the below commands with the value you set in your config file ($VALUES_FILE) for global.dependencies.mds.endpoint.
Log into MDS as the Kafka super user as below:

confluent login \
  --url https://<kafka_bootstrap_endpoint> \
  --ca-cert-path <path-to-cacerts.pem>

You need to pass the --ca-cert-path flag if:

- You have configured MDS to serve HTTPS traffic (kafka.services.mds.https: true) in Kafka configuration.
- The CA used to issue the MDS certificates is not trusted by the system where you are running these commands.

Provide the Kafka username and password when prompted, in this example, kafka and kafka-secret. You get a response confirming a successful login.
Verify that the advertised listeners are correctly configured using the following command:
curl -ik \
  -u '<kafka-user>:<kafka-user-password>' \
  https://<kafka_bootstrap_endpoint>/security/1.0/activenodes/https
Get the Kafka cluster ID using one of the following commands:
confluent cluster describe --url https://<kafka_bootstrap_endpoint>
curl -ik \
  https://<kafka_bootstrap_endpoint>/v1/metadata/id
For the examples in the remainder of this topic, save the output Kafka ID from the above commands as an environment variable, $KAFKA_ID.
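For example, the following sketch extracts the ID with curl and jq, assuming the /v1/metadata/id response is a JSON object with an id field and that jq is installed:

export KAFKA_ID=$(curl -sk https://<kafka_bootstrap_endpoint>/v1/metadata/id | jq -r .id)
echo $KAFKA_ID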
Grant roles to Confluent Platform component principals¶
This section walks you through the workflow to create role bindings so that Confluent Platform components are deployed and function correctly.
Log into MDS as described in the first step in Verify MDS configuration, and run the confluent iam rolebinding commands as specified in the following sections.

Set the --principal option to User:<component-ldap-user> using the component LDAP users. The commands in this section use the example component users listed in the assumptions.
Schema Registry role binding¶
Grant the required roles to the Schema Registry user to deploy the Schema Registry service.
Set --schema-registry-cluster-id to id_schemaregistry_operator, or more generally to id_<SR-component-name>_<namespace> where <SR-component-name> is the value of schemaregistry.name in your config file ($VALUES_FILE), and <namespace> is the Kubernetes namespace where you want to deploy Schema Registry.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:sr \
--role SecurityAdmin \
--schema-registry-cluster-id id_schemaregistry_operator
Set --resource to Group:id_schemaregistry_operator, or more generally to Group:id_<SR-component-name>_<namespace>.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:sr \
--role ResourceOwner \
--resource Group:id_schemaregistry_operator
Set --resource to Topic:_schemas_schemaregistry_operator, or more generally to Topic:_schemas_<SR-component-name>_<namespace>.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:sr \
--role ResourceOwner \
--resource Topic:_schemas_schemaregistry_operator
The Schema Registry user also needs the ResourceOwner role on the license topic:

confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:sr \
--role ResourceOwner \
--resource Topic:_confluent-license
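You can confirm the bindings for the Schema Registry principal before deploying:

confluent iam rolebinding list \
  --kafka-cluster-id $KAFKA_ID \
  --principal User:sr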
Kafka Connect role binding¶
Grant the required roles to the Connect user to deploy the Connect service.
Set --connect-cluster-id to id_connect_operator, or more generally to id_<Connect-component-name>_<namespace> where <Connect-component-name> is the value of connect.name in your config file ($VALUES_FILE), and <namespace> is the Kubernetes namespace where you want to deploy Connect.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:connect \
--role SecurityAdmin \
--connect-cluster-id id_connect_operator
Set --resource to Group:operator.connectors, or more generally to Group:<namespace>.<Connect-component-name>.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:connect \
--role ResourceOwner \
--resource Group:operator.connectors
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:connect \
--role DeveloperWrite \
--resource Topic:_confluent-monitoring \
--prefix
Set --resource to Topic:operator.connectors-, or more generally to Topic:<namespace>.<Connect-component-name>-.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:connect \
--role ResourceOwner \
--resource Topic:operator.connectors- \
--prefix
Confluent Replicator role binding¶
Grant the required roles to the Replicator user to deploy the Replicator service.
Set --resource to Group:operator.replicator, or more generally to Group:<namespace>.<Replicator-component-name> where <Replicator-component-name> is the value of replicator.name in your config file ($VALUES_FILE), and <namespace> is the Kubernetes namespace where you want to deploy Replicator.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:replicator \
--role ResourceOwner \
--resource Group:operator.replicator
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:replicator \
--role DeveloperWrite \
--resource Topic:_confluent-monitoring \
--prefix
Set --resource to Topic:operator.replicator-, or more generally to Topic:<namespace>.<Replicator-component-name>-.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:replicator \
--role ResourceOwner \
--resource Topic:operator.replicator- \
--prefix
ksqlDB role binding¶
Grant the required roles to the ksqlDB user to deploy the ksqlDB service.
Set --ksql-cluster-id to operator.ksql_, or more generally to <namespace>.<ksql-component-name>_ where <ksql-component-name> is the value of ksql.name in your config file ($VALUES_FILE), and <namespace> is the Kubernetes namespace where you want to deploy ksqlDB.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:ksql \
--role ResourceOwner \
--ksql-cluster-id operator.ksql_ \
--resource KsqlCluster:ksql-cluster
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--principal User:ksql \
--role ResourceOwner \
--resource Topic:_confluent-ksql-operator.ksql_ \
--prefix
Confluent Control Center role binding¶
Grant the required roles to the Control Center user to deploy the Control Center service.
confluent iam rolebinding create \
--principal User:c3 \
--role SystemAdmin \
--kafka-cluster-id $KAFKA_ID
Deploy the remaining Confluent Platform components¶
After granting roles to the various Confluent Platform components, you can successfully deploy those components with Confluent Operator, and the components can be authorized to communicate with each other as necessary.
Follow Install Confluent Operator and Confluent Platform to deploy the rest of Confluent Platform.
Grant roles to the Confluent Control Center user to be able to administer Confluent Platform¶
Control Center users require separate roles for each Confluent Platform component and resource they wish to view and administer in the Control Center UI. Grant explicit permissions to the users as shown below.
In the following examples, the testadmin principal is used as a Control Center UI user. Since no access has been given to testadmin yet, this user will be able to log into Control Center, but nothing will be visible until the appropriate permissions are granted as described in the following sections.
Grant permission to view and administer the Kafka cluster¶
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--role ClusterAdmin \
--principal User:testadmin
Grant permission to view and administer Schema Registry information¶
Set --schema-registry-cluster-id using the pattern id_<Schema-Registry-component-name>_<namespace>. The following example uses id_schemaregistry_operator.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--schema-registry-cluster-id id_schemaregistry_operator \
--principal User:testadmin \
--role SystemAdmin
Grant permission to view and administer the Connect cluster¶
Set --connect-cluster-id using the pattern <namespace>.<Connect-component-name>. The following example uses operator.connectors.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--connect-cluster-id operator.connectors \
--principal User:testadmin \
--role SystemAdmin
Grant permission to view and administer the Replicator cluster¶
Set --connect-cluster-id using the pattern <namespace>.<Replicator-component-name>. You need a Connect cluster ID for the Replicator role binding because Replicator runs in a Connect cluster. The following example uses operator.replicator.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--connect-cluster-id operator.replicator \
--principal User:testadmin \
--role SystemAdmin
Grant permission to view and administer the ksqlDB cluster¶
Set --ksql-cluster-id using the pattern <namespace>.<ksql-component-name>_. The following example uses operator.ksql_.
confluent iam rolebinding create \
--kafka-cluster-id $KAFKA_ID \
--ksql-cluster-id operator.ksql_ \
--resource KsqlCluster:ksql-cluster \
--principal User:testadmin \
--role ResourceOwner
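To review what testadmin can now see on the Kafka cluster, list the role bindings for the principal:

confluent iam rolebinding list \
  --kafka-cluster-id $KAFKA_ID \
  --principal User:testadmin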
Access Control Lists¶
To deploy Kafka with Access Control List (ACL) authorization, specify the settings in your configuration file ($VALUES_FILE).
The following settings are the recommended way to configure Confluent Platform to use ACLs for authorization.
global:
  authorization:
    simple:
      enabled: true
    superUsers:
      - User:test
The following settings are still supported for backward compatibility. If you enable ACLs using both the configurations above and the configurations below, the configuration above takes precedence.
kafka:
  options:
    acl:
      enabled: true
      # Value for the super.users server property, in the format User:UserName;
      supers: "User:test"
Note
Any change to options:acl:supers triggers a rolling upgrade of the Kafka cluster.

Both examples above configure a super user, test, that all Confluent Platform components use when communicating with the Kafka cluster. This is required so that components can create internal topics.
mTLS authentication with ACLs enabled¶
When mTLS authentication is enabled, Confluent Platform determines the identity of the authenticated principal from data extracted from the client certificate. The Subject section of the certificate is used to determine the username. For example, for the following certificate's subject (C=US, ST=CA, L=Palo Alto), the username is extracted as User:L=Palo Alto,ST=CA,C=US.
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            omitted...
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=Palo Alto, L=CA, O=Company, OU=Engineering, CN=TestCA
        Validity
            Not Before: Mar 28 16:37:00 2019 GMT
            Not After : Mar 26 16:37:00 2024 GMT
        Subject: C=US, ST=CA, L=Palo Alto
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
User:ANONYMOUS is the default user for internal clients and inter-broker communication. When Kafka is configured with mTLS authentication, the certificate user name is required in addition to User:ANONYMOUS.
Typically, you will have many different users and clients of Confluent Platform. In some cases, you may share client certificates across multiple users/clients, but the best practice is to issue a unique client certificate for each user and client, with each certificate having a unique subject/username. For each user/client, decide what they should be authorized to access within Confluent Platform, and ensure that the corresponding ACLs have been created for the subject/username of their client certificates. Follow the instructions in Authorization using ACLs to enable ACL authorization for Kafka objects. An illustrative example follows.
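As a hedged sketch, the following grants read access on a hypothetical orders topic to the certificate-derived principal from the example above, using the standard kafka-acls tool; client.properties is a placeholder for a client configuration containing your mTLS settings:

kafka-acls --bootstrap-server <kafka_bootstrap_host>:9092 \
  --command-config client.properties \
  --add --allow-principal "User:L=Palo Alto,ST=CA,C=US" \
  --operation Read --topic orders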