Configure Authentication for Kafka using Confluent for Kubernetes¶
This document presents the supported authentication concepts and describes how to configure authentication for Kafka using Confluent for Kubernetes (CFK). Kafka is configured without authentication by default.
For more details on security concepts in Confluent Platform, see Security in Confluent Platform.
For a comprehensive tutorial scenario for configuring authentication, see Deploy Secure Confluent Platform.
Configure authentication to access Kafka¶
This section describes the following methods for server-side and client-side Kafka authentication: SASL/PLAIN, SASL/PLAIN with LDAP, and mTLS.
SASL/PLAIN authentication¶
SASL/PLAIN is a simple username/password mechanism that is typically used with TLS network encryption to implement secure authentication.
The username is used as the authenticated principal, which can then be used in authorization.
Server-side SASL/PLAIN authentication for Kafka¶
Configure the server-side SASL/PLAIN authentication for Kafka.
You can use the JAAS and JAAS pass-through mechanisms to set up SASL/PLAIN credentials.
Create server-side SASL/PLAIN credentials using JAAS config¶
When you use jaasConfig to provide the required credentials for Kafka, CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config. This is the recommended way to configure SASL/PLAIN for Kafka.

The expected key for jaasConfig is plain-users.json.
Create a .json file and add the expected value, in the following format:

{
  "username1": "password1",
  "username2": "password2",
  ...
  "usernameN": "passwordN"
}
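For instance, a credentials file that defines two hypothetical users (the user names and passwords below are placeholders) could look like this:

{
  "kafka": "kafka-secret",
  "appclient": "appclient-secret"
}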
Create a Kubernetes secret using the expected key (plain-users.json) and the value file you created in the previous step.

The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.json file that contains the credentials:

kubectl create secret generic credential \
  --from-file=plain-users.json=./creds-kafka-sasl-users.json \
  --namespace confluent
Create server-side SASL/PLAIN credentials using JAAS config pass-through¶
If you have customizations, such as using a custom login handler, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.

The expected key for jaasConfigPassThrough is plain-jaas.conf.

The expected value for the key (the data in the file) is your JAAS config text. For background on JAAS configs, see the Confluent Platform documentation on JAAS.
Create a .conf file and add the expected value, in the following format:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<admin username>" \
  password="<admin user password>" \
  user_admin="<admin user password>" \
  user_<additional user1>="<additional user1 password>" \
  ...
  user_<additional userN>="<additional userN password>";
The following example uses the standard login module and specifies two additional users, user1 and user2:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret" \
  user_admin="admin-secret" \
  user_user1="user1-secret" \
  user_user2="user2-secret";
You can use a Kubernetes secret or a directory path in the container to store the credentials.
Create a Kubernetes secret using the expected key (plain-jaas.conf) and the value file you created in the previous step.

The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.conf file that contains the credentials:

kubectl create secret generic credential \
  --from-file=plain-jaas.conf=./creds-kafka-sasl-users.conf \
  --namespace confluent
Use a directory path in the container to provide the required credentials.

If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the Kafka CR, the expected file, plain-jaas.conf, must exist in that directory path.

See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.
See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.
Configure Kafka for SASL/PLAIN authentication¶
In the Kafka custom resource (CR), configure the Kafka listener to use SASL/PLAIN as the authentication mechanism:
kind: Kafka
spec:
  listeners:
    external:
      authentication:
        type: plain                  --- [1]
        jaasConfig:                  --- [2]
          secretRef:                 --- [3]
        jaasConfigPassThrough:       --- [4]
          secretRef:                 --- [5]
          directoryPathInContainer:  --- [6]
[1] Required. Set to plain.

[2] When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config. This is the recommended way to configure SASL/PLAIN for Kafka.

One of [3], [5], or [6] is required. Only specify one.

[3] Provide the name of the Kubernetes secret that you created in the previous section.

[4] If you have customizations, such as using a custom login handler, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.

[5] Provide a Kubernetes secret that you created in the previous section with the expected key and the value.

[6] Provide the directory path in the container that you set up for the credentials in the previous section.

See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.
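A filled-in example of this listener configuration, using the credential secret created earlier with the plain-users.json key, might look like the following sketch; adjust the listener name and TLS settings to match your deployment:

kind: Kafka
spec:
  listeners:
    external:
      authentication:
        type: plain
        jaasConfig:
          secretRef: credential   # Kubernetes secret that holds the plain-users.json key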
Client-side SASL/PLAIN authentication for Kafka¶
Configure client-side SASL/PLAIN authentication so that other Confluent Platform components can authenticate to Kafka.
You can use the JAAS and JAAS pass-through mechanisms to set up SASL/PLAIN credentials.
Create client-side SASL/PLAIN credentials using JAAS config¶
When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config.

The expected client-side key for jaasConfig is plain.txt.
Create a .txt file and add the expected value, in the following format:

username=<username>
password=<password>
Create a Kubernetes secret using the expected key (plain.txt) and the value file you created in the previous step.

The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.txt file that contains the credentials:

kubectl create secret generic credential \
  --from-file=plain.txt=./creds-kafka-sasl-users.txt \
  --namespace confluent
If the user name or password changes in the future, you need to update the credentials in the value file (for example, the ./creds-kafka-sasl-users.txt file), update the secret for the plain.txt key, and manually restart the Confluent Platform components that depend on the plain.txt key. For details, see Update client-side SASL/PLAIN users using JAAS config.
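As a minimal sketch of the update step, assuming the secret is named credential in the confluent namespace as in the earlier example, you can regenerate the secret manifest with --dry-run and apply it to update the existing secret in place:

kubectl create secret generic credential \
  --from-file=plain.txt=./creds-kafka-sasl-users.txt \
  --namespace confluent \
  --dry-run=client -o yaml | kubectl apply -f -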
Create client-side SASL/PLAIN credentials using JAAS config pass-through¶
If you have customizations, such as using a custom login handler, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.

The expected client-side key for jaasConfigPassThrough is plain-jaas.conf.
Create a .conf file and add the expected value, in the following format. For example:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="kafka" \
  password="kafka-secret";
You can use a Kubernetes secret or a directory path in the container to store the credentials.
Create a Kubernetes secret using the expected key (plain-jaas.conf) and the value file you created in the previous step.

The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.conf file that contains the credentials:

kubectl create secret generic credential \
  --from-file=plain-jaas.conf=./creds-kafka-sasl-users.conf \
  --namespace confluent
Use a directory path in the container to provide the required credentials.

If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the component CR, the expected file, plain-jaas.conf, must exist in that directory path.

See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.
Configure Confluent components for SASL/PLAIN authentication to Kafka¶
For each Confluent component that communicates with Kafka, configure SASL/PLAIN authentication in the component CR as below:
kind: <Confluent component>
spec:
  dependencies:
    kafka:
      authentication:
        type: plain                  --- [1]
        jaasConfig:                  --- [2]
          secretRef:                 --- [3]
        jaasConfigPassThrough:       --- [4]
          secretRef:                 --- [5]
          directoryPathInContainer:  --- [6]
- [1] Required. Set to plain.
- [2] When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config.
- One of [3], [5], or [6] is required. Specify only one.
- [3] Provide the name of the Kubernetes secret you created in the previous section for this Confluent component to authenticate to Kafka.
- [4] An alternate way to configure JAAS is to use jaasConfigPassThrough. If you have customizations, such as using custom login handlers, you can bypass the CFK automation and provide the configuration directly.
- [5] Provide the name of the Kubernetes secret that you created in the previous section.
- [6] Provide the directory path in the container that you set up in the previous section.
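For example, a Connect CR (the kind is illustrative; the same pattern applies to the other components) that authenticates to Kafka with SASL/PLAIN using the credential secret created in the client-side section might look like the following sketch. Other required dependency settings, such as the Kafka bootstrap endpoint, are omitted here:

kind: Connect
spec:
  dependencies:
    kafka:
      authentication:
        type: plain
        jaasConfig:
          secretRef: credential   # Kubernetes secret that holds the plain.txt key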
SASL/PLAIN with LDAP authentication¶
SASL/PLAIN with LDAP callback handler is a variation of SASL/PLAIN. When you use SASL/PLAIN with LDAP for authentication, the username principals and passwords are retrieved from an LDAP server.
Server-side SASL/PLAIN with LDAP for Kafka¶
You must set up an LDAP server, for example, Active Directory (AD), before configuring and starting up a Kafka cluster with the SASL/PLAIN with LDAP authentication. For more information, see Configuring Kafka Client Authentication with LDAP.
You can use the JAAS and JAAS pass-through mechanisms to set up the credentials.
Note
To implement both a SASL/PLAIN listener and a SASL/PLAIN with LDAP listener in your Kafka cluster, the SASL/PLAIN listener must be configured with authentication.jaasConfigPassThrough so that it materializes the PlainLoginModule. This is because the JVM only allows a single SASL LoginModule to be configured for the broker. Configuring authentication.jaasConfig for the SASL/PLAIN listener materializes the FileBasedLoginModule instead, which causes client connections to the SASL/PLAIN with LDAP listener to fail with a java.io.EOFException.
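A minimal sketch of such a dual-listener layout follows, assuming hypothetical secret names ldap-interbroker-credential and plain-jaas-credential; adapt the listener names and secrets to your deployment:

kind: Kafka
spec:
  listeners:
    internal:
      authentication:
        type: ldap
        jaasConfig:
          secretRef: ldap-interbroker-credential   # hypothetical secret with the plain-interbroker.txt key
    external:
      authentication:
        type: plain
        jaasConfigPassThrough:                     # pass-through so the PlainLoginModule is materialized
          secretRef: plain-jaas-credential         # hypothetical secret with the plain-jaas.conf key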
Create server-side SASL/PLAIN LDAP credentials using JAAS config¶
The expected server-side key for jaasConfig is plain-interbroker.txt.
Create a .txt file and add the expected value, in the following format:

username=<user>
password=<password>

The username and password must belong to a user that exists in LDAP. This is the user that each Kafka broker authenticates as when the cluster starts.
Create a Kubernetes Secret with the user name and password for inter-broker authentication.
The following example command creates a Kubernetes secret, using the ./creds-kafka-ldap-users.txt file that contains the credentials:

kubectl create secret generic credential \
  --from-file=plain-interbroker.txt=./creds-kafka-ldap-users.txt \
  --namespace confluent
Create server-side SASL/PLAIN LDAP credentials using JAAS config pass-through¶
The expected server-side key for jaasConfigPassThrough is plain-jaas.conf.
Create a .conf file and add the expected value. For example:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="kafka" \
  password="kafka-secret";
You can use a Kubernetes secret or a directory path in the container to store the credentials.
Create a Kubernetes secret using the expected key (plain-jaas.conf) and the value file you created in the previous step.

The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.conf file that contains the credentials:

kubectl create secret generic credential \
  --from-file=plain-jaas.conf=./creds-kafka-sasl-users.conf \
  --namespace confluent
Use a directory path in the container to provide the required credentials.

If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the Kafka CR, the expected file, plain-jaas.conf, must exist in that directory path.

See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.
Configure Kafka for server-side SASL/PLAIN with LDAP authentication¶
Configure the listeners in the Kafka custom resource (CR):
kind: Kafka
spec:
  listeners:
    internal:
      authentication:
        type: ldap                   --- [1]
        jaasConfig:                  --- [2]
          secretRef:                 --- [3]
        jaasConfigPassThrough:       --- [4]
          secretRef:                 --- [5]
          directoryPathInContainer:  --- [6]
    external:
      authentication:
        type: ldap                   --- [7]
    custom:
      authentication:
        type: ldap                   --- [8]
- [1] Required for the SASL/PLAIN with LDAP authentication for the internal Kafka listeners.
- [2] When you use jaasConfig to pass credentials, you provide the user name and password, and CFK automates configuration. When you add, remove, or update the user, CFK automatically updates the JAAS configuration. This is the recommended way to configure SASL/PLAIN LDAP for Kafka.
- [3] Provide the name of the Kubernetes secret that you created in the previous section for inter-broker authentication.
- [4] An alternate way to configure JAAS is to use jaasConfigPassThrough. If you have customizations, such as using a custom login handler, you can bypass the CFK automation and provide the configuration directly.
- [5] Provide the name of the Kubernetes secret that you created in the previous section for inter-broker authentication.
- [6] Provide the directory path in the container that you set up in the previous section.
- [7] Required for the SASL/PLAIN with LDAP authentication for the external Kafka listeners.
- [8] Required for the SASL/PLAIN with LDAP authentication for the custom Kafka listeners.
- [7] [8] To configure authentication type ldap on external or custom listeners, you do not need to specify jaasConfig or jaasConfigPassThrough.
Configure the identity provider in the Kafka CR:
kind: Kafka
spec:
  identityProvider:      --- [1]
    type: ldap           --- [2]
    ldap:                --- [3]
      address:           --- [4]
      authentication:    --- [5]
        type:            --- [6]
        simple:          --- [7]
      tls:
        enabled:         --- [8]
      configurations:    --- [9]
[1] Required for the Kafka authentication type ldap. Specifies the identity provider configuration.

When MDS is enabled, this property is ignored, and the LDAP configuration in spec.services.mds.provider is used.

[2] Required.

[3] This block includes the same properties used in the spec.services.mds.provider.ldap block in this Kafka CR.

[4] Required. The address of the LDAP server, for example, ldaps://ldap.confluent.svc.cluster.local:636.

[5] Required. The authentication method to access the LDAP server.

[6] Required. Specify simple or mtls.

[7] Required if the authentication type ([6]) is set to simple.

[8] Required if the authentication type ([6]) is set to mtls. Set to true.

[9] Required. The LDAP configuration settings.
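For reference, a filled-in sketch of this block using the example LDAP address above. The secret name ldap-credential and the entries under configurations are hypothetical placeholders; the exact sub-fields available under simple and configurations follow the spec.services.mds.provider.ldap schema of your CFK version:

kind: Kafka
spec:
  identityProvider:
    type: ldap
    ldap:
      address: ldaps://ldap.confluent.svc.cluster.local:636
      authentication:
        type: simple
        simple:
          secretRef: ldap-credential   # hypothetical secret holding the LDAP bind credentials
      tls:
        enabled: true
      configurations:
        # illustrative search settings; use the values for your directory
        userSearchBase: ou=users,dc=example,dc=com
        groupSearchBase: ou=groups,dc=example,dc=com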
Apply the configuration:
kubectl apply -f <Kafka CR>
Client-side SASL/PLAIN with LDAP for Kafka¶
When Kafka is configured with SASL/PLAIN with LDAP, Confluent components and clients authenticate to Kafka as SASL/PLAIN clients. The clients must authenticate as users in LDAP.
See Client-side SASL/PLAIN authentication for Kafka for configuration details.
mTLS authentication¶
Server-side mTLS authentication for Kafka¶
mTLS utilizes TLS certificates as an authentication mechanism. The certificate provides the identity.
The certificate Common Name (CN) is used as the authenticated principal, which can then be used in authorization.
Configure a Kafka listener as below, in the Kafka CR, to use mTLS as the authentication mechanism:
kind: Kafka
spec:
  listeners:
    external:
      authentication:
        type: mtls                                        --- [1]
        principalMappingRules:
          - RULE:.*CN[\\s]?=[\\s]?([a-zA-Z0-9.]*)?.*/$1/  --- [2]
      tls:
        enabled: true                                     --- [3]
[1] Required. Set to mtls.

[2] Optional. This specifies a mapping rule that extracts the principal name from the certificate Common Name (CN). For example, with the rule above, a certificate subject of CN=connect,OU=Platform resolves to the principal connect.

The regular expression (regex) used in the mapping rule follows the Java regular expression syntax.

Shorthand character classes need to be escaped with another backslash. For example, to use a whitespace (\s), specify \\s.

[3] Required for mTLS authentication. Set to true.
Client-side mTLS authentication for Kafka¶
For each Confluent component that communicates with Kafka, configure the mTLS authentication mechanism in the component CR as below:
kind: <Confluent component>
spec:
  dependencies:
    kafka:
      authentication:
        type: mtls      --- [1]
      tls:
        enabled: true   --- [2]
- [1] Required. Set to mtls.
- [2] Required for mTLS authentication. Set to true.
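For example, a Connect CR (the kind is illustrative; the same pattern applies to the other components) configured for mTLS to Kafka might look like the following sketch. Other required settings, such as the Kafka bootstrap endpoint and the certificates for the component itself, are omitted here:

kind: Connect
spec:
  dependencies:
    kafka:
      authentication:
        type: mtls      # authenticate to Kafka with the component's TLS certificate
      tls:
        enabled: true   # TLS must be enabled for mTLS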