Configure Authentication for Kafka in Confluent Platform Using Confluent for Kubernetes¶
This document presents the supported authentication methods and describes how to configure authentication for Kafka using Confluent for Kubernetes (CFK).
Any configuration differences between KRaft and ZooKeeper-based deployments are noted where applicable.
Kafka is configured without authentication by default.
For more details on security concepts in Confluent Platform, see Security in Confluent Platform.
For a comprehensive tutorial scenario for configuring authentication, see Deploy Secure Confluent Platform.
Configure authentication to access Kafka¶
This section describes the following methods for server-side and client-side Kafka authentication:
- SASL/PLAIN authentication
- SASL/PLAIN with LDAP authentication
- OAuth/OIDC authentication
- mTLS authentication
SASL/PLAIN authentication¶
SASL/PLAIN is a simple username/password mechanism that is typically used with TLS network encryption to implement secure authentication.
The username is used as the authenticated principal, which can then be used in authorization.
Note
The files that you use to create authentication secrets must use the Linux-style line ending that only uses a line feed (\n). The Windows-style line ending that uses a carriage return and line feed (\r\n) does not work in Kubernetes secret files.
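Before creating a secret, you can check a credentials file for stray carriage returns and strip them if needed. This is a minimal sketch; the file names and credentials below are placeholders:

```shell
# Write two sample files: one with Linux (LF) endings, one with Windows (CRLF) endings.
printf 'username=kafka\npassword=kafka-secret\n' > /tmp/creds-lf.txt
printf 'username=kafka\r\npassword=kafka-secret\r\n' > /tmp/creds-crlf.txt

# A file is safe for a Kubernetes secret only if it contains no \r characters.
if grep -q $'\r' /tmp/creds-crlf.txt; then echo "creds-crlf.txt: has CRLF endings"; fi
if ! grep -q $'\r' /tmp/creds-lf.txt; then echo "creds-lf.txt: LF only, OK"; fi

# Strip carriage returns before using the file to create the secret.
tr -d '\r' < /tmp/creds-crlf.txt > /tmp/creds-fixed.txt
grep -q $'\r' /tmp/creds-fixed.txt || echo "creds-fixed.txt: LF only, OK"
```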
Server-side SASL/PLAIN authentication for Kafka¶
Configure the server-side SASL/PLAIN authentication for Kafka.
You can use the JAAS and JAAS pass-through mechanisms to set up SASL/PLAIN credentials.
Create server-side SASL/PLAIN credentials using JAAS config¶
When you use jaasConfig to provide the required credentials for Kafka, CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config. This is the recommended way to configure SASL/PLAIN for Kafka.
The expected key for jaasConfig is plain-users.json.
Create a .json file and add the expected value in the following format:

   {
     "username1": "password1",
     "username2": "password2",
     ...
     "usernameN": "passwordN"
   }

Create a Kubernetes secret using the expected key (plain-users.json) and the value file you created in the previous step.
The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.json file that contains the credentials:

   kubectl create secret generic credential \
     --from-file=plain-users.json=./creds-kafka-sasl-users.json \
     --namespace confluent
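The steps above can be sketched end to end. The usernames and passwords here are placeholders, and the file uses LF line endings as required:

```shell
# Create the credentials value file in the expected JSON format (LF endings).
cat > ./creds-kafka-sasl-users.json <<'EOF'
{
  "kafka": "kafka-secret",
  "user1": "user1-secret"
}
EOF

# Spot-check that both users made it into the file.
grep -q '"kafka"' ./creds-kafka-sasl-users.json && \
grep -q '"user1"' ./creds-kafka-sasl-users.json && echo "credentials file ready"
```

With the file in place, the kubectl create secret command shown above packages it under the plain-users.json key.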
Create server-side SASL/PLAIN credentials using JAAS config pass-through¶
If you have customizations, such as a custom login handler, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.
The expected key for jaasConfigPassThrough is plain-jaas.conf.
The expected value for the key (the data in the file) is your JAAS config text. See the Confluent Platform documentation for details on JAAS configurations.
Create a .conf file and add the expected value in the following format:

   sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
     username="<admin username>" \
     password="<admin user password>" \
     user_admin="<admin user password>" \
     user_<additional user1>="<additional user1 password>" \
     ...
     user_<additional userN>="<additional userN password>";

- The username and password properties are used by the broker to initiate connections to other brokers.
- The user_<username_N> properties define the passwords for the users, <username_N>, that connect to the broker. The broker uses these properties to validate all client connections, including those from other brokers.

The following example uses the standard login module and specifies two additional users, user1 and user2:

   sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
     username="admin" \
     password="admin-secret" \
     user_admin="admin-secret" \
     user_user1="user1-secret" \
     user_user2="user2-secret";
You can use a Kubernetes secret or a directory path in the container to store the credentials.
Create a Kubernetes secret using the expected key (plain-jaas.conf) and the value file you created in the previous step.
The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.conf file that contains the credentials:

   kubectl create secret generic credential \
     --from-file=plain-jaas.conf=./creds-kafka-sasl-users.conf \
     --namespace confluent
Use a directory path in the container to provide the required credentials.
If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the Kafka CR, the expected file, plain-jaas.conf, must exist in that directory path.
See Provide secrets for Confluent Platform component CR for providing the credentials and required annotations when using Vault.
See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.
Configure Kafka for SASL/PLAIN authentication¶
In the Kafka custom resource (CR), configure the Kafka listener to use SASL/PLAIN as the authentication mechanism:
kind: Kafka
spec:
listeners:
external:
authentication:
type: plain --- [1]
jaasConfig: --- [2]
secretRef: --- [3]
jaasConfigPassThrough: --- [4]
secretRef: --- [5]
directoryPathInContainer: --- [6]
[1] Required. Set to plain.
[2] When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config. This is the recommended way to configure SASL/PLAIN for Kafka.
One of [3], [5], or [6] is required. Specify only one.
[3] Provide the name of the Kubernetes secret that you created in the previous section.
[4] If you have customizations, such as a custom login handler, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.
[5] Provide the name of a Kubernetes secret that you created in the previous section with the expected key and value.
[6] Provide the directory path in the container that you set up for the credentials in the previous section.
See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.
Client-side SASL/PLAIN authentication for Kafka¶
Configure the client-side SASL/PLAIN authentication for other Confluent components to authenticate to Kafka.
You can use the JAAS and JAAS pass-through mechanisms to set up SASL/PLAIN credentials.
Create client-side SASL/PLAIN credentials using JAAS config¶
When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config.
The expected client-side key for jaasConfig is plain.txt.
Create a .txt file and add the expected value in the following format:

   username=<username>
   password=<password>

You specify the name of this value file in the next step when you create a secret.
Create a Kubernetes secret using the expected key (plain.txt) and the value file you created in the previous step.
The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.txt file that contains the credentials:

   kubectl create secret generic credential \
     --from-file=plain.txt=./creds-kafka-sasl-users.txt \
     --namespace confluent
If the user name or password changes in the future, you need to update the credentials in the value file (for example, the ./creds-kafka-sasl-users.txt file), update the secret for the plain.txt key, and manually restart the Confluent Platform components that depend on the plain.txt key. For details, see Update client-side SASL/PLAIN users using JAAS config.
Create client-side SASL/PLAIN credentials using JAAS config pass-through¶
If you have customizations, such as a custom login handler, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.
The expected client-side key for jaasConfigPassThrough is plain-jaas.conf.
Create a .conf file and add the expected value in the following format. For example:

   sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
     username="kafka" \
     password="kafka-secret";
You specify the name of this value file in the next step when you create a secret.
You can use a Kubernetes secret or a directory path in the container to store the credentials.
Create a Kubernetes secret using the expected key (plain-jaas.conf) and the value file you created in the previous step.
The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.conf file that contains the credentials:

   kubectl create secret generic credential \
     --from-file=plain-jaas.conf=./creds-kafka-sasl-users.conf \
     --namespace confluent
Use a directory path in the container to provide the required credentials.
If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the component CR, the expected file, plain-jaas.conf, must exist in that directory path.
See Provide secrets for Confluent Platform component CR for providing the credentials and required annotations when using Vault.
Configure Confluent components for SASL/PLAIN authentication to Kafka¶
For each of the Confluent components that communicate with Kafka, configure SASL/PLAIN authentication in the component CR as follows:
kind: <Confluent component>
spec:
dependencies:
kafka:
authentication:
type: plain --- [1]
jaasConfig: --- [2]
secretRef: --- [3]
jaasConfigPassThrough: --- [4]
secretRef: --- [5]
directoryPathInContainer: --- [6]
- [1] Required. Set to plain.
- [2] When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config.
- One of [3], [5], or [6] is required. Specify only one.
- [3] Provide the name of the Kubernetes secret you created in the previous section for this Confluent component to authenticate to Kafka.
- [4] An alternate way to configure JAAS is to use jaasConfigPassThrough. If you have customizations, such as custom login handlers, you can bypass the CFK automation and provide the configuration directly.
- [5] Provide the name of a Kubernetes secret that you created in the previous section.
- [6] Provide the directory path in the container that you set up in the previous section.
SASL/PLAIN with LDAP authentication¶
SASL/PLAIN with LDAP callback handler is a variation of SASL/PLAIN. When you use SASL/PLAIN with LDAP for authentication, the username principals and passwords are retrieved from an LDAP server.
Server-side SASL/PLAIN with LDAP for Kafka¶
You must set up an LDAP server, for example, Active Directory (AD), before configuring and starting up a Kafka cluster with the SASL/PLAIN with LDAP authentication. For more information, see Configuring Kafka Client Authentication with LDAP.
You can use the JAAS and JAAS pass-through mechanisms to set up the credentials.
Note
To implement both a SASL/PLAIN listener and a SASL/PLAIN with LDAP listener in your Kafka cluster, the SASL/PLAIN listener must be configured with authentication.jaasConfigPassThrough. Configuring authentication.jaasConfig for the SASL/PLAIN listener causes client connections to the SASL/PLAIN with LDAP listener to fail with a java.io.EOFException.
Create server-side SASL/PLAIN LDAP credentials using JAAS config¶
The expected server-side key for jaasConfig is plain-interbroker.txt.
Create a .txt file and add the expected value in the following format. You specify the name of this value file in the next step when you create a secret.

   username=<user>
   password=<password>
The username and password must belong to a user that exists in LDAP. This is the user that each Kafka broker authenticates when the cluster starts.
Create a Kubernetes Secret with the user name and password for inter-broker authentication.
The following example command creates a Kubernetes secret, using the ./creds-kafka-ldap-users.txt file that contains the credentials:

   kubectl create secret generic credential \
     --from-file=plain-interbroker.txt=./creds-kafka-ldap-users.txt \
     --namespace confluent
Create server-side SASL/PLAIN LDAP credentials using JAAS config pass-through¶
The expected server-side key for jaasConfigPassThrough is plain-jaas.conf.
Create a .conf file and add the expected value. For example:

   sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
     username="kafka" \
     password="kafka-secret";
You specify the name of this value file in the next step when you create a secret.
You can use a Kubernetes secret or a directory path in the container to store the credentials.
Create a Kubernetes secret using the expected key (plain-jaas.conf) and the value file you created in the previous step.
The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.conf file that contains the credentials:

   kubectl create secret generic credential \
     --from-file=plain-jaas.conf=./creds-kafka-sasl-users.conf \
     --namespace confluent
Use a directory path in the container to provide the required credentials.
If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the Kafka CR, the expected file, plain-jaas.conf, must exist in that directory path.
See Provide secrets for Confluent Platform component CR for providing the credentials and required annotations when using Vault.
Configure Kafka for server-side SASL/PLAIN with LDAP authentication¶
Configure the listeners in the Kafka custom resource (CR):
kind: Kafka
spec:
  listeners:
    internal:
      authentication:
        type: ldap --- [1]
        jaasConfig: --- [2]
          secretRef: --- [3]
        jaasConfigPassThrough: --- [4]
          secretRef: --- [5]
          directoryPathInContainer: --- [6]
    external:
      authentication:
        type: ldap --- [7]
    custom:
      authentication:
        type: ldap --- [8]
- [1] Required for SASL/PLAIN with LDAP authentication on the internal Kafka listener.
- [2] When you use jaasConfig to pass credentials, you provide the user name and password, and CFK automates configuration. When you add, remove, or update the user, CFK automatically updates the JAAS configuration. This is the recommended way to configure SASL/PLAIN with LDAP for Kafka.
- [3] Provide the name of the Kubernetes secret that you created in the previous section for inter-broker authentication.
- [4] An alternate way to configure JAAS is to use jaasConfigPassThrough. If you have customizations, such as a custom login handler, you can bypass the CFK automation and provide the configuration directly.
- [5] Provide the name of the Kubernetes secret that you created in the previous section for inter-broker authentication.
- [6] Provide the directory path in the container that you set up in the previous section.
- [7] Required for SASL/PLAIN with LDAP authentication on the external Kafka listener.
- [8] Required for SASL/PLAIN with LDAP authentication on the custom Kafka listener.
- [7] [8] To configure authentication type ldap on external or custom listeners, you do not need to specify jaasConfig or jaasConfigPassThrough.
Configure the identity provider in the Kafka CR:
kind: Kafka
spec:
  identityProvider: --- [1]
    type: ldap --- [2]
    ldap: --- [3]
      address: --- [4]
      authentication: --- [5]
        type: --- [6]
        simple: --- [7]
      tls:
        enabled: --- [8]
      configurations: --- [9]
[1] Required for the Kafka authentication type ldap. Specifies the identity provider configuration. When MDS is enabled, this property is ignored, and the LDAP configuration in spec.services.mds.provider is used.
[2] Required.
[3] This block includes the same properties used in the spec.services.mds.provider.ldap block in this Kafka CR.
[4] Required. The address of the LDAP server, for example, ldaps://ldap.confluent.svc.cluster.local:636.
[5] Required. The authentication method to access the LDAP server.
[6] Required. Specify simple or mtls.
[7] Required if the authentication type ([6]) is set to simple.
[8] Required if the authentication type ([6]) is set to mtls. Set to true.
[9] Required. The LDAP configuration settings.
Apply the configuration:
kubectl apply -f <Kafka CR>
Configure Kafka for server-side SASL/PLAIN with LDAP authentication in KRaft mode¶
As mentioned in this note, mixing FileBasedLoginModule (used by SASL/PLAIN) and PlainLoginModule (used by SASL/PLAIN with LDAP) in the JAAS configs is not supported and can cause the Kafka broker to fail. Therefore, to use both a SASL/PLAIN listener and a SASL/PLAIN with LDAP listener in a Kafka cluster, you must configure the SASL/PLAIN listener with JAAS config pass-through, which enforces the use of the PlainLoginModule class.
To implement both a SASL/PLAIN listener and a SASL/PLAIN with LDAP listener for a Kafka cluster in KRaft mode, in addition to using JAAS config pass-through, you must also provide the config override setting, listener.name.controller.plain.sasl.jaas.config, for the controller listener in the Kafka CR.
An example Kafka CR snippet:
kind: Kafka
spec:
configOverrides:
server:
- listener.name.controller.plain.sasl.jaas.config=${file:/mnt/secrets/<internal_listener_SASLPLAIN_secret_name>/plain-jaas.conf:sasl.jaas.config}
See an end-to-end sample scenario in the CFK example repository in GitHub.
Client-side SASL/PLAIN with LDAP for Kafka¶
When Kafka is configured with SASL/PLAIN with LDAP, Confluent components and clients authenticate to Kafka as SASL/PLAIN clients. The clients must authenticate as users in LDAP.
See Client-side SASL/PLAIN authentication for Kafka for configuration details.
mTLS authentication¶
Server-side mTLS authentication for Kafka¶
mTLS utilizes TLS certificates as an authentication mechanism. The certificate provides the identity.
The certificate Common Name (CN) is used as the authenticated principal, which can then be used in authorization.
Starting in CFK 2.10, you can configure two authentication methods on a Kafka listener: mTLS plus another supported method, for example, mTLS and OAuth, or mTLS and LDAP.
Configure a Kafka listener as below in the Kafka CR to use mTLS as the authentication mechanism:
kind: Kafka
spec:
listeners:
external:
authentication:
type: --- [1]
principalMappingRules: --- [2]
- RULE:.*CN[\\s]?=[\\s]?([a-zA-Z0-9.]*)?.*/$1/
mtls: --- [3]
sslClientAuthentication: --- [4]
principalMappingRules: --- [5]
- RULE:.*CN[\\s]?=[\\s]?([a-zA-Z0-9.]*)?.*/$1/
tls:
enabled: true --- [6]
[1] Required.
- Set to mtls to set up mTLS as the only authentication method. If set to mtls, the listener is configured to use only mTLS authentication.
- To use mTLS together with another method, set to one of the other supported methods described in this topic.
[2] Optional. Specifies a mapping rule that extracts the principal name from the certificate Common Name (CN) when using mTLS-only authentication.
The regular expression (regex) used in the mapping rule follows the Java regex API. Shorthand character classes need to be escaped with an additional backslash. For example, to use a whitespace (\s), specify \\s.
[3] Required for mTLS authentication in the server-side mTLS authentication mode.
[4] Required. Set to required to enable mandatory mTLS authentication from the client side and to enforce certificate presentation. Valid values are requested and required. Use requested for optional mTLS.
[5] Optional. Specifies a list of principal mapping rules applied to the client certificate when using mTLS authentication. See [2] for details.
[6] Required for mTLS authentication. Set to true.
Client-side mTLS authentication for Kafka¶
For each of the Confluent components that communicate with Kafka, configure the mTLS authentication mechanism in the component CR as follows:
kind: <Confluent component>
spec:
dependencies:
kafka:
authentication:
type: mtls --- [1]
tls:
enabled: true --- [2]
- [1] Required. Set to mtls.
- [2] Required for mTLS authentication. Set to true.
OAuth/OIDC authentication¶
OAuth 2.0 is an open-standard authorization protocol that enables applications to obtain secure, delegated access. You can leverage your own identity provider and centralize identity management across Confluent Platform and your other service deployments, in the cloud and on-premises.
Starting with CFK 2.9 and Confluent Platform 7.7, you can configure Confluent components with OAuth/OIDC, an OAuth-based authentication mechanism.
For the OAuth overview in Confluent Platform, see OAuth 2.0 for Confluent Platform.
Server-side OAuth/OIDC authentication for Kafka and KRaft¶
Configure a Kafka listener as below in the Kafka CR to use OAuth/OIDC as the authentication mechanism. For the KRaftController CR, the authentication object is under spec.listeners.controller.
kind: Kafka
spec:
listeners:
internal / external:
authentication:
type: oauth --- [1]
jaasConfig/jaasConfigPassThrough:
secretRef: --- [2]
oauthSettings:
groupsClaimName: --- [3]
subClaimName: --- [4]
audience: --- [5]
expectedIssuer: --- [6]
jwksEndpointUri: --- [7]
tokenEndpointUri: --- [8]
scope: --- [9]
loginConnectTimeoutMs: --- [10]
loginReadTimeoutMs: --- [11]
loginRetryBackoffMs: --- [12]
loginRetryMaxBackoffMs: --- [13]
[1] Required. Set to oauth.
[2] The secret that contains an OIDC client ID and client secret for authorization and token requests to the identity provider (IdP).
Create the secret that contains two keys, clientId and clientSecret, with their respective values:

   clientId=<client-id>
   clientSecret=<client-secret>

[3] Required. The name of the claim in the token for identifying the groups of the subject in the JSON Web Token (JWT). The default value is groups.
[4] The subject name of the JWT (session token). The default value is sub. Used in SSO.
[5] Required. The intended consumer of the access token. You can specify a comma-delimited list for multiple audiences.
[6] The issuer URL, which is typically the authorization server's URL. This value is compared to the issuer claim in the JWT for verification.
[7] The JSON Web Key Set (JWKS) URI. It is used to verify any JWT issued by the IdP.
[8] The base endpoint URI for the authorization server that initiates an OAuth authorization request.
[9] Required only when your identity provider does not have a default scope or your groups claim is linked to a scope.
[10] Connect timeout with the IdP, in milliseconds.
[11] Read timeout with the IdP, in milliseconds.
[12] Retry backoff with the IdP, in milliseconds.
[13] Maximum retry backoff with the IdP, in milliseconds.
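The clientId/clientSecret file referenced in [2] can be prepared as follows. The values are placeholders, and the file must use LF line endings only:

```shell
# Write the OIDC credentials file with the two expected keys, LF endings only.
printf 'clientId=my-oauth-client\nclientSecret=my-oauth-secret\n' > ./oauth.txt

# Confirm both keys are present and there are no carriage returns.
grep -q '^clientId=' ./oauth.txt && grep -q '^clientSecret=' ./oauth.txt && echo "keys present"
grep -q $'\r' ./oauth.txt || echo "LF only"
```

A secret created from this file (for example, with kubectl create secret generic credential --from-file=oauth.txt=./oauth.txt) can then be referenced from secretRef; the secret and file names here are examples, not required names.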
Connect to IdP with self-signed certificates¶
The current set of SSL properties in Kafka listeners does not let you connect to your IdP with self-signed certificates. You will receive an error in the Kafka pod:
[2024-04-17 13:33:53,712] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException:
The OAuth validator configuration encountered an error when initializing the VerificationKeyResolver
Use one of the following workaround options:
Specify a custom trust store through JVM arguments in the spec.configOverrides.jvm section of the Kafka custom resource:

   kind: Kafka
   spec:
     configOverrides:
       jvm:
         - "-Djavax.net.ssl.trustStoreType=JKS"
         - "-Djavax.net.ssl.trustStore=/mnt/jvmtruststore/truststore.jks"
         - "-Djavax.net.ssl.trustStorePassword=mystorepassword"
Add the trust store to the Kafka listeners as JAAS config, using the parameter unsecuredLoginStringClaim_sub="thePrincipalName":

   kind: Kafka
   spec:
     configOverrides:
       server:
         - listener.name.controller.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required clientId="${file:/mnt/secrets/oauth-jass/oauth.txt:clientId}" clientSecret="${file:/mnt/secrets/oauth-jass/oauth.txt:clientSecret}" refresh_ms="3000" ssl.truststore.location="/mnt/sslcerts/truststore.jks" ssl.truststore.password="mystorepassword" unsecuredLoginStringClaim_sub="thePrincipalName";
Alternatively, you can use a secret to pass the JAAS config passthrough information as described in Create server-side SASL/PLAIN credentials using JAAS config pass-through.
Client-side OAuth/OIDC authentication for Kafka and KRaft¶
For each of the Confluent components that communicate with Kafka or KRaft, configure the OAuth/OIDC authentication mechanism in the component CR as below. For KRaft, the authentication object is under dependencies.kRaftController.controllerListener.authentication.
kind: <Confluent component>
spec:
dependencies:
kafka:
authentication:
type: oauth --- [1]
jaasConfig/jaasConfigPassThrough:
secretRef: --- [2]
oauthSettings:
tokenEndpointUri: --- [3]
[1] Required. Set to oauth.
[2] The secret that contains an OIDC client ID and client secret for authorization and token requests to the IdP.
Create the secret that contains two keys, clientId and clientSecret, with their respective values:

   clientId=<client-id>
   clientSecret=<client-secret>

[3] The base endpoint URI for the authorization server that initiates an OAuth authorization request.
For the full list of the OAuth settings, see OAuth configuration.
Configure authentication to access MDS¶
You can use the following authentication methods to communicate with MDS.
- Bearer authentication: You use the bearer token issued by MDS.
- OAuth/OIDC authentication: You use the token, provided by your own identity provider and accepted by MDS.
- mTLS authentication: You use the principal extracted from the certificate data that the MDS server receives when authenticating the client.
Bearer authentication¶
Create client-side Bearer credentials for MDS¶
Provide the required Bearer credentials. The expected key is bearer.txt.
Create a .txt file with the expected value in the following format:

   username=<username>
   password=<password>
You can use a Kubernetes secret or a directory path in the container to store the bearer credentials.
Create a Kubernetes secret using the expected key (bearer.txt) and the value.
For example, using the ./c3-mds-client.txt file that you create for Control Center credentials:

   kubectl create secret generic c3-mds-client \
     --from-file=bearer.txt=./c3-mds-client.txt \
     --namespace confluent
Use a directory path in the container to provide the required credentials.
See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.
See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.
Client-side Bearer authentication for MDS¶
For each of the Confluent components that communicate with MDS, configure the Bearer authentication mechanism in the component CR:
kind: <Confluent component>
spec:
dependencies:
mds:
authentication:
type: bearer --- [1]
bearer:
secretRef: --- [2]
directoryPathInContainer: --- [3]
- [1] Required. Set to bearer.
- [2] or [3] is required. Do not specify both.
- [2] To load the username and secret, set to the Kubernetes secret you created in the previous section.
- [3] Provide the directory path in the container that you set up in the previous section. The expected file, bearer.txt, must exist in the specified directory path.
OAuth/OIDC authentication¶
Server-side OAuth/OIDC authentication for MDS¶
Configure the MDS as below in the Kafka CR to use OAuth/OIDC as the authentication mechanism:
kind: Kafka
spec:
services:
mds:
provider:
oidc:
clientCredentials:
secretRef: --- [1]
oauth:
configurations:
groupsClaimName: --- [2]
subClaimName: --- [3]
expectedIssuer: --- [4]
jwksEndpointUri: --- [5]
[1] Required only when enabling SSO. The secret that contains an OIDC client ID and client secret for authorization and token requests to the IdP.
Create the secret that contains two keys, clientId and clientSecret, with their respective values:

   clientId=<client-id>
   clientSecret=<client-secret>

[2] Groups in the JSON Web Token (JWT). The default value is groups. The JWT is issued by Confluent (MDS) to maintain the session. It contains information passed by the IdP in the ID token, along with any custom claims added by MDS.
[3] The subject name of the JWT. The default value is sub.
[4] The issuer URL, which is typically the authorization server's URL. This value is compared to the issuer claim in the JWT for verification.
[5] The JSON Web Key Set (JWKS) URI. It is used to verify any JWT issued by the IdP.
For the full list of the OAuth settings, see OAuth configuration.
Client-side OAuth/OIDC authentication for MDS¶
For each of the Confluent components that communicate with MDS, configure the OAuth/OIDC authentication mechanism in the component CR:
kind: <Confluent component>
spec:
dependencies:
mds:
authentication:
type: oauth --- [1]
oauth:
secretRef: --- [2a]
directoryPathInContainer: --- [2b]
configuration:
tokenEndpointUri: --- [3]
[1] Required. Set to oauth.
[2a] or [2b] Specify only one setting.
[2a] The secret that contains the OIDC client ID and client secret for authorization and token requests to the identity provider.
Create the secret that contains two keys, clientId and clientSecret, with their respective values:

   clientId=<client-id>
   clientSecret=<client-secret>

[2b] The directory path in the container that contains the OIDC client ID and client secret for authorization and token requests to the identity provider (IdP). See Provide secrets for Confluent Platform component CR for providing the credentials and required annotations when using Vault.
[3] The base endpoint URI for the authorization server that initiates an OAuth authorization request.
For the full list of the OAuth settings, see OAuth configuration.
mTLS authentication¶
Server-side mTLS authentication for MDS¶
Starting in CFK 2.10 and Confluent Platform 7.8, MDS supports mTLS authentication in a single authentication mode or in a dual authentication mode.
Configure the MDS as below in the Kafka CR to use the mTLS authentication mechanism:
kind: Kafka
spec:
services:
mds:
provider:
mtls: --- [1]
sslClientAuthentication: --- [2]
principalMappingRules: --- [3]
- ^CN=([a-zA-Z0-9.]*).*$/$1/L
[1] Required.
[2] Required. Set to required to enable mandatory mTLS authentication from the client side and to enforce certificate presentation. Valid values are requested and required. Use requested for optional mTLS.
[3] Mapping rules for extracting the principal from the certificate.
Client-side mTLS authentication for MDS¶
For each of the Confluent components that communicate with MDS, configure the mTLS authentication mechanism in the component CR:
kind: <Confluent component>
spec:
dependencies:
mds:
authentication:
type: --- [1]
sslClientAuthentication: --- [2]
[1] If only using mTLS, set to mtls. If configuring two authentication methods for MDS, set to bearer or oauth.
[2] A boolean, true or false. When set to true, enables mTLS authentication from the client side.
File-based authentication¶
When you set up RBAC only with mTLS authentication, you need to use the file-based authentication provider for Control Center and Confluent CLI.
File-based provider is not supported when MDS uses LDAP or OAuth authentication.
Server-side file-based authentication for MDS¶
Create a file with user credentials in the following format:

   <username-1>:<password-1>
   <username-2>:<password-2>
Create a Kubernetes secret that contains the file-based username/password file you created in the previous step. The secret must have the key userstore.txt. For example:

   kubectl create secret generic file-secret \
     --from-file=userstore.txt=./fileUserPassword.txt \
     --namespace confluent
Configure the MDS as below in the Kafka CR to use the file-based authentication mechanism:
kind: Kafka
spec:
  services:
    mds:
      provider:
        file: --- [1]
          secretRef: --- [2]
- [1] Required.
- [2] Provide the Kubernetes secret that contains the credential file, which you created in the previous step.