Configure Authentication using Confluent for Kubernetes Blueprints¶
Authentication verifies the identity of users and applications that connect to Kafka and other Confluent components.
This document presents the supported authentication concepts and describes how to configure authentication and manage credentials for Confluent Platform when using Confluent for Kubernetes (CFK) Blueprints.
For details on security concepts in Confluent Platform, see Security in Confluent Platform.
For tutorial scenarios for configuring authentication, see the GitHub examples.
- Authentication to access Kafka

  CFK Blueprints supports the following authentication mechanisms for client applications and Confluent Platform components to access Kafka:

  - SASL/PLAIN authentication: Clients use a username/password for authentication. The username/password credentials are stored in a Kubernetes secret. For external and custom listeners, the credential can come from the ConfluentPlatformBlueprint CR or from the cluster deployment CR itself, based on the providerType configured in the ConfluentPlatformBlueprint CR.
  - SASL/PLAIN with LDAP authentication: Clients use a username/password for authentication. The username/password credentials are stored in an LDAP server. A username and password are required for inter-broker communication; no credential is required for the other listeners.
  - mTLS authentication: Clients use TLS certificates for authentication. For external and custom listeners, the principalMappingRules setting in the ConfluentPlatformBlueprint CR or in the cluster deployment CR is used to identify the principal name, based on the providerType configured in the ConfluentPlatformBlueprint CR.
- Authentication to access ZooKeeper
CFK Blueprints supports the following authentication mechanisms for Kafka to access ZooKeeper:
- SASL/DIGEST authentication: Clients use a hashed username/password for authentication.
- mTLS authentication: Clients use TLS certificates for authentication.
- Authentication to access other Confluent Platform components
- By default, CFK Blueprints manages internal communications between Confluent Platform components, but you have an option to provide credentials at the cluster deployment level in the component cluster CRs.
Use the following custom resources (CRs) to configure and manage authentication and credentials:
CredentialStoreConfig CR
Credentials are managed through the CredentialStoreConfig CR, which contains information about the credential secrets.
A CredentialStoreConfig CR must be deployed before the ConfluentPlatformBlueprint and cluster CRs.
One CredentialStoreConfig CR can only be used by one ConfluentPlatformBlueprint CR.
ConfluentPlatformBlueprint CR
The ConfluentPlatformBlueprint CR defines the global credential management strategy for all deployments.
Confluent cluster CRs
The component cluster CRs, namely, ConnectCluster, ControlCenterCluster, KafkaCluster, KafkaRestProxyCluster, KsqlDBCluster, SchemaRegistryCluster, and ZookeeperCluster.
These are also referred to as deployment CRs in this document.
CFK Blueprints allows changing the secret keys for server-side authentication configuration.
For client-side credentials, specific keys are required if you choose to provide them.
Configure authentication to access Kafka¶
This section describes the configuration steps for the following methods for the server-side and client-side Kafka authentication:
SASL/PLAIN authentication¶
SASL/PLAIN is a simple username/password mechanism that is typically used with TLS network encryption to implement secure authentication.
Server-side SASL/PLAIN authentication for Kafka¶
Create a .json file and add the server-side SASL/PLAIN credentials in the following format:

    {
      "username1": "password1",
      "username2": "password2",
      ...
      "usernameN": "passwordN"
    }
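As an illustration, the following shell sketch writes such a file for two placeholder users and verifies that it parses as JSON before you create a secret from it. The file name, usernames, and passwords are examples, not required values:

```shell
# Write a SASL/PLAIN users file; usernames and passwords are placeholders
cat > ./creds-kafka-sasl-users.json <<'EOF'
{
  "kafka": "kafka-secret",
  "client": "client-secret"
}
EOF

# Confirm the file is valid JSON before creating the Kubernetes secret from it
python3 -m json.tool ./creds-kafka-sasl-users.json > /dev/null \
  && echo "credentials file is valid JSON"
```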
Create a Kubernetes secret using the key and the value file you created in the previous step.

The default key is kafka-server-listener-<listener-name>-plain-users.json. For example:

- kafka-server-listener-external-plain-users.json
- kafka-server-listener-internal-plain-users.json
- kafka-server-listener-replication-plain-interbroker.txt

If using a different key, specify that key in credentialStoreRef.key in the ConfluentPlatformBlueprint CR.

    kubectl create secret generic <secret name> \
      --from-file=<key>=<path to the credential .json file> \
      --namespace <namespace>
The following example command creates a Kubernetes secret for the Kafka external listener, using the kafka-server-listener-external-plain-users.json key and the ./creds-kafka-sasl-users.json file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=kafka-server-listener-external-plain-users.json=./creds-kafka-sasl-users.json \
      --namespace confluent
Create a CredentialStoreConfig CR with reference to the secret created in the previous step.
Configure the Kafka server-side SASL/PLAIN authentication in the Blueprint.
Client-side SASL/PLAIN authentication for Kafka¶
CFK Blueprints automatically manages the client-side credentials, and you are not required to pass these. However, there are scenarios where you might need to pass your own credentials, such as:

- When providerType is set to deployment in the ConfluentPlatformBlueprint CR, each corresponding cluster has to pass the credential keys.
- When not using the Blueprint-managed credentials but allowing custom credentials.
To provide the client-side SASL/PLAIN credentials for other Confluent components to authenticate to Kafka:
Create a .txt file and add the client-side SASL/PLAIN credentials in the following format:

    username=<username>
    password=<password>
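For example, this sketch writes a client credentials file in that format and checks that both required keys are present. The file name and values are placeholders chosen for a ksqlDB client:

```shell
# Write a client-side SASL/PLAIN credentials file; values are placeholders
cat > ./creds-kafka-sasl-user.txt <<'EOF'
username=ksql-user
password=ksql-password
EOF

# Verify both required keys are present before creating the secret
grep -q '^username=' ./creds-kafka-sasl-user.txt \
  && grep -q '^password=' ./creds-kafka-sasl-user.txt \
  && echo "credentials file OK"
```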
Create a Kubernetes secret using the key and the value file you created in the previous step.

The expected key is in the <component>-client-plain.txt format. The client-side keys cannot be changed. For example:

- ksqldb-client-plain.txt
- connect-client-plain.txt
- schemaregistry-client-plain.txt
- controlcenter-client-plain.txt
- kafkarestproxy-client-plain.txt

    kubectl create secret generic <secret name> \
      --from-file=<key>=<path to the credential .txt file> \
      --namespace <namespace>
The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.txt file that contains the credentials for ksqlDB to authenticate with Kafka:

    kubectl create secret generic credential \
      --from-file=ksqldb-client-plain.txt=./creds-kafka-sasl-users.txt \
      --namespace confluent
Create a CredentialStoreConfig CR with reference to the secret created in the previous step.
In the component cluster CR, reference the CredentialStoreConfig CR you created in the previous step:

    kind: <component>Cluster
    spec:
      credentialStoreConfigRef:
        name: <CredentialStoreConfig name>
SASL/PLAIN with LDAP authentication¶
SASL/PLAIN with LDAP callback handler is a variation of SASL/PLAIN. When you use SASL/PLAIN with LDAP for authentication, the username principals and passwords are retrieved from your LDAP server.
Server-side SASL/PLAIN with LDAP for Kafka¶
You must set up an LDAP server, for example, Active Directory (AD), before configuring and starting up a Kafka cluster with the SASL/PLAIN with LDAP authentication. For more information, see Configuring Kafka Client Authentication with LDAP.
Create a .txt file and add the server-side SASL/PLAIN LDAP credentials in the following format:

    username=<user>
    password=<password>
The username and password must belong to a user that exists in LDAP. This is the user that each Kafka broker authenticates as when the cluster starts.
Create a Kubernetes secret using the key and the value file you created in the previous step.

The default key is kafka-server-listener-replication-plain-interbroker.txt.

If using a different key, specify that key in credentialStoreRef.key in the ConfluentPlatformBlueprint CR.

    kubectl create secret generic <secret name> \
      --from-file=<key>=<path to the credential file> \
      --namespace <namespace>
The following example command creates a Kubernetes secret with the default key, using the ./creds-kafka-ldap-users.txt file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=kafka-server-listener-replication-plain-interbroker.txt=./creds-kafka-ldap-users.txt \
      --namespace confluent
Create a CredentialStoreConfig CR with reference to the secret created in the previous step.
Configure Kafka to use SASL/PLAIN LDAP authentication in the Blueprint.
Client-side SASL/PLAIN with LDAP for Kafka¶
When Kafka is configured with SASL/PLAIN with LDAP, Confluent components and clients authenticate to Kafka as SASL/PLAIN clients. The clients must authenticate as users in LDAP.
See Client-side SASL/PLAIN authentication for Kafka for configuration details.
mTLS authentication¶
Server-side mTLS authentication¶
mTLS utilizes TLS certificates as an authentication mechanism. The certificate provides the identity.
- Enable TLS, and create certificates for Kafka.
- In the ConfluentPlatformBlueprint CR, configure Kafka to use the mTLS authentication.
Client-side mTLS authentication for Kafka¶
CFK Blueprints automatically manages the client-side credentials, and you are not required to pass these. There are two scenarios where you might need to pass your own credentials:

- When providerType is set to deployment in the ConfluentPlatformBlueprint CR, each corresponding cluster has to pass the credential keys.
- When not using the Blueprint-managed credentials but allowing custom credentials.
To provide your custom certificates for the Confluent components to authenticate to Kafka using mTLS, enable TLS and create certificates as needed.
Configure Kafka authentication in Blueprint¶
In the ConfluentPlatformBlueprint, configure the Kafka listeners to use a specific authentication mechanism:
    kind: ConfluentPlatformBlueprint
    spec:
      confluentPlatform:
        kafkalisteners:
          externalListener:
            authentication:
              type: --- [1]
              plain | ldap: --- [2]
                providerType: --- [3]
                blueprint:
                  credentialStoreRef:
                    name: --- [4]
                    key: --- [5]
                deployment:
                  credentialStoreRef:
                    key: --- [6]
          customListeners:
            authentication:
              type: --- [7]
              plain | ldap: --- [8]
                providerType: --- [9]
                blueprint:
                  credentialStoreRef:
                    name: --- [10]
                    key: --- [11]
                deployment:
                  credentialStoreRef:
                    name: --- [12]
                    key: --- [13]
          internalListener:
            authentication:
              type: --- [14]
              plain | ldap: --- [15]
                providerType: --- [16]
                blueprint:
                  credentialStoreRef:
                    name: --- [17]
                    key: --- [18]
- [1] [7] [14] Required to enable authentication for the respective Kafka listeners, namely, the internal, external, or custom listeners. The following method types are supported:
  - plain for SASL/PLAIN.
  - ldap for SASL/PLAIN with LDAP.
  - mtls for mTLS.
- [2] [8] [15] Required for the respective Kafka listeners. Specify the object, plain or ldap, based on the authentication type you set in [1], [7], or [14].
- [3] [9] Required. Specify where the credentials are provided. Specify one of:
  - blueprint if you will provide the credentials in this ConfluentPlatformBlueprint CR.
  - deployment if you will provide the credentials in the component cluster deployment CRs.
- [4] [10] [12] [17] The value of spec.credentialStoreConfigRefs.name in the ConfluentPlatformBlueprint CR.
- [5] [6] [11] [13] [18] The name of the key in the secret that credentialStoreConfigRefs.name refers to.
- [16] Required. Specify where the credentials are provided. Specify blueprint. For the internal listener, credentials can only be specified in the ConfluentPlatformBlueprint CR.
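As a concrete illustration of the annotations above, the following sketch secures the external listener with SASL/PLAIN using Blueprint-provided credentials. The CredentialStoreConfig name (credential-store) is an assumed example, and the nesting of providerType under the plain object follows the skeleton above; verify the exact schema against the CFK Blueprints API reference.

```yaml
kind: ConfluentPlatformBlueprint
spec:
  confluentPlatform:
    kafkalisteners:
      externalListener:
        authentication:
          type: plain                # [1] SASL/PLAIN
          plain:                     # [2] object matching the type
            providerType: blueprint  # [3] credentials come from this CR
            blueprint:
              credentialStoreRef:
                name: credential-store                                # [4] placeholder name
                key: kafka-server-listener-external-plain-users.json  # [5] default key
```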
Configure authentication to access ZooKeeper¶
This section describes the configuration steps for the following authentication mechanisms for Kafka to access ZooKeeper:
SASL/DIGEST authentication¶
Server-side SASL/DIGEST authentication for ZooKeeper¶
To configure ZooKeeper with the SASL/DIGEST-MD5 authentication:
Create a .json file and add the server-side SASL/DIGEST credentials in the following format:

    {
      "username1": "password1",
      "username2": "password2"
    }
Create a Kubernetes secret using the key and the value file you created in the previous step.

The default key for the server-side SASL/DIGEST credential is zookeeper-server-digest-users.json.

If using a different key, specify that key in spec.confluentPlatform.zookeeper.authentication.digest.credentialStoreRef.key in the ConfluentPlatformBlueprint CR.

    kubectl create secret generic <secret name> \
      --from-file=zookeeper-server-digest-users.json=<path to the credential file> \
      --namespace <namespace>
The following example command creates a Kubernetes secret, using the ./digest-users.json file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=zookeeper-server-digest-users.json=./digest-users.json \
      --namespace confluent
Create a CredentialStoreConfig CR with reference to the secret created in the previous step.
Configure ZooKeeper authentication in the ConfluentPlatformBlueprint CR:

    kind: ConfluentPlatformBlueprint
    spec:
      confluentPlatform:
        zookeeper:
          authentication:
            type: --- [1]
            digest: --- [2]
              providerType: --- [3]
              blueprint:
                credentialStoreRef:
                  name: --- [4]
                  key: --- [5]
- [1] Required to enable authentication for ZooKeeper. Set to digest to use SASL/DIGEST-MD5.
- [2] Required.
- [3] Required. Specify where the credentials are provided. The valid option is blueprint.
- [4] The value of spec.credentialStoreConfigRefs.name in the ConfluentPlatformBlueprint CR.
- [5] The name of the key in the secret that spec.credentialStoreConfigRefs.name refers to.
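For example, a Blueprint that enables SASL/DIGEST for ZooKeeper with Blueprint-provided credentials might look like the following sketch. The CredentialStoreConfig name (credential-store) is a placeholder, and the key is the default shown above; verify the exact schema against the CFK Blueprints API reference.

```yaml
kind: ConfluentPlatformBlueprint
spec:
  confluentPlatform:
    zookeeper:
      authentication:
        type: digest                 # [1] SASL/DIGEST-MD5
        digest:                      # [2]
          providerType: blueprint    # [3] only blueprint is valid here
          blueprint:
            credentialStoreRef:
              name: credential-store                   # [4] placeholder name
              key: zookeeper-server-digest-users.json  # [5] default key
```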
Client-side SASL/DIGEST authentication for ZooKeeper¶
CFK Blueprints automatically manages the client-side credentials, and you are not required to pass these. There are two scenarios where you might need to pass your own credentials:

- When providerType is set to deployment in the ConfluentPlatformBlueprint CR, each corresponding cluster has to pass the credential keys.
- When not using the Blueprint-managed credentials but allowing custom credentials.
Optionally, configure the client-side credentials for Kafka to authenticate to ZooKeeper using SASL/DIGEST:
Create a .txt file and add the client-side SASL/DIGEST credentials in the following format:

    username=<user>
    password=<password>
Create a Kubernetes secret with the key and the value file you created in the previous step.

The default client-side key is kafka-client-digest.txt.

If using a different key, specify that key in credentialStoreRef.key in the ConfluentPlatformBlueprint CR.

    kubectl create secret generic <secret name> \
      --from-file=kafka-client-digest.txt=<path to the credential file> \
      --namespace <namespace>
The following example command creates a Kubernetes secret, using the ./digest-users.txt file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=kafka-client-digest.txt=./digest-users.txt \
      --namespace confluent
Create a CredentialStoreConfig CR with reference to the secret created in the previous step.
In the KafkaCluster CR, reference the CredentialStoreConfig CR you created in the previous step:

    kind: KafkaCluster
    spec:
      credentialStoreConfigRef:
        name: <CredentialStoreConfig name>
mTLS authentication¶
Server-side mTLS authentication for ZooKeeper¶
To enable the server-side mTLS authentication for ZooKeeper:
Enable TLS, and create certificates for ZooKeeper.
In the ConfluentPlatformBlueprint CR, set the ZooKeeper authentication type to mTLS:

    kind: ConfluentPlatformBlueprint
    spec:
      confluentPlatform:
        zookeeper:
          authentication:
            type: mtls
Client-side mTLS authentication for ZooKeeper¶
CFK Blueprints automatically manages the client-side credentials, and you are not required to pass these. There are two scenarios where you might need to pass your own credentials or certificates:

- When providerType is set to deployment in the ConfluentPlatformBlueprint CR, each corresponding cluster has to pass the credential keys.
- When not using the Blueprint-managed credentials but allowing custom credentials.
To provide your custom certificates for Kafka to authenticate to ZooKeeper using mTLS authentication, enable TLS, and create certificates for Kafka.
Configure authentication to access Confluent Platform HTTP endpoints¶
This section describes the configuration steps for the following authentication methods for the Confluent Platform components (other than Kafka and ZooKeeper):
mTLS authentication for Confluent components¶
To configure Confluent components to authenticate to other Confluent Platform components using mTLS authentication:
Enable TLS, and create certificates for the Confluent components.
In the ConfluentPlatformBlueprint, configure the HTTP authentication type to mTLS:
    kind: ConfluentPlatformBlueprint
    spec:
      confluentPlatform:
        http:
          authentication:
            type: mtls
Bearer authentication for Confluent components¶
When RBAC is enabled (spec.confluentPlatform.authorization.type: rbac) in the ConfluentPlatformBlueprint CR, CFK Blueprints uses bearer authentication for Confluent components, ignoring other authentication settings. For example, it is not possible to set the component authentication type to mTLS when RBAC is enabled.
The bearer authentication method is supported only when RBAC is enabled.
LDAP is used to store the server-side credentials, and you do not need to separately configure those credentials.
Client-side bearer authentication for Confluent components¶
Configure the ConfluentPlatformBlueprint CR to use bearer authentication as follows:

For Confluent components:

    kind: ConfluentPlatformBlueprint
    spec:
      confluentPlatform:
        http:
          authentication:
            type: bearer --- [1]
For MDS client:

    kind: ConfluentPlatformBlueprint
    spec:
      confluentPlatform:
        authorization:
          mdsClient:
            authentication:
              type: --- [1]
              bearer: --- [2]
                providerType: --- [3]
                blueprint:
                  credentialStoreRef:
                    name: --- [4]
                    key: --- [5]
                deployment:
                  credentialStoreRef:
                    key: --- [6]
- [1] Required. Set to bearer.
- [2] Required.
- [3] Required. Specify where the credentials are provided. Valid options are:
  - blueprint if you will provide the credentials in the ConfluentPlatformBlueprint CR.
  - deployment if you will provide the credentials in the component cluster deployment CRs.
- [4] The value of spec.credentialStoreConfigRefs.name in the ConfluentPlatformBlueprint CR.
- [5] [6] The name of the key in the secret that spec.credentialStoreConfigRefs.name refers to.
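For example, an MDS client configured with Blueprint-provided bearer credentials might look like the following sketch. Both the CredentialStoreConfig name (credential-store) and the secret key (mds-client-bearer.txt) are placeholders, not documented defaults; verify the exact schema and key against the CFK Blueprints API reference.

```yaml
kind: ConfluentPlatformBlueprint
spec:
  confluentPlatform:
    authorization:
      mdsClient:
        authentication:
          type: bearer               # [1]
          bearer:                    # [2]
            providerType: blueprint  # [3] credentials provided in this CR
            blueprint:
              credentialStoreRef:
                name: credential-store      # [4] placeholder name
                key: mds-client-bearer.txt  # [5] placeholder key
```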