Configure Authentication with Confluent for Kubernetes

Confluent Platform components are configured without authentication by default. This document presents the supported authentication concepts and describes how to configure authentication for Confluent Platform using Confluent for Kubernetes (CFK).

For more details on security concepts in Confluent Platform, see Security in Confluent Platform.

For a comprehensive tutorial scenario for configuring authentication, see Deploy Secure Confluent Platform.

Authentication overview

Authentication verifies the identity of users and applications connecting to Kafka and other Confluent components.

Authentication to access Kafka

CFK supports the following authentication mechanisms for client applications and Confluent Platform components to access Kafka:

  • SASL/PLAIN authentication: Clients use a username and password for authentication. The usernames and passwords are stored server-side, in a Kubernetes secret or in a directory in the container.

  • SASL/PLAIN with LDAP authentication: Clients use a username and password for authentication. The usernames and passwords are stored in an LDAP server.

  • mTLS authentication: Clients use TLS certificates for authentication.

    The client application principal name can be identified in the certificate as a Common Name (CN). Alternatively, principalMappingRules in the Kafka CR can be used to identify the principal name.

The Kerberos and SASL/SCRAM methods are currently not supported in CFK.

Authentication to access ZooKeeper

CFK supports the following authentication mechanisms for Kafka to access ZooKeeper:

  • SASL/DIGEST authentication
  • mTLS authentication

Authentication to access other Confluent Platform components

CFK supports the following authentication mechanisms for the rest of the Confluent components, specifically Connect, ksqlDB, Schema Registry, and Confluent Control Center:

  • Basic authentication
  • mTLS authentication
  • LDAP authentication (Confluent Control Center only)

Authentication to access MDS

CFK supports the following authentication mechanism for client applications and Confluent Platform components to access Metadata Service (MDS):

Configure authentication to access Kafka

This section describes the following methods for server-side and client-side Kafka authentication:

  • SASL/PLAIN authentication
  • SASL/PLAIN with LDAP authentication
  • mTLS authentication

SASL/PLAIN authentication

SASL/PLAIN is a simple username/password mechanism that is typically used with TLS network encryption to implement secure authentication.

The username is used as the authenticated principal, which can then be used in authorization.

Server-side SASL/PLAIN authentication for Kafka

Configure the server-side SASL/PLAIN authentication for Kafka.

You can use the JAAS and JAAS pass-through mechanisms to set up SASL/PLAIN credentials.

Create server-side SASL/PLAIN credentials using JAAS config

When you use jaasConfig to provide required credentials for Kafka, CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config. This is the recommended way to configure SASL/PLAIN for Kafka.

The expected key for jaasConfig is plain-users.json.

  1. Create a .json file and add the expected value, in the following format:

    {
    "username1": "password1",
    "username2": "password2",
    ...
    "usernameN": "passwordN"
    }
    
  2. Create a Kubernetes secret using the expected key (plain-users.json) and the value file you created in the previous step.

    The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.json file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=plain-users.json=./creds-kafka-sasl-users.json \
      --namespace confluent
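As a concrete sketch of step 1, the following writes a credentials file for two hypothetical users (kafka and app1) and sanity-checks that it is valid JSON before the kubectl command above packages it into the secret:

```shell
# Hypothetical users; substitute real, generated passwords.
cat > ./creds-kafka-sasl-users.json <<'EOF'
{
  "kafka": "kafka-secret",
  "app1": "app1-secret"
}
EOF

# Fail fast if the file is not valid JSON.
python3 -m json.tool ./creds-kafka-sasl-users.json > /dev/null && echo "valid JSON"
# prints: valid JSON
```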
    
Create server-side SASL/PLAIN credentials using JAAS config pass-through

If you have customizations, such as using a custom login handler, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.

The expected key for jaasConfigPassThrough is plain-jaas.conf.

The expected value for the key (the data in the file) is your JAAS config text. For an explanation of JAAS configs, see the Confluent Platform documentation on JAAS.

  1. Create a .conf file and add the expected value, in the following format.

    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="<admin username>" \
      password="<admin user password>" \
      user_admin="<admin user password>" \
      user_<additional user1>="<additional user1 password>" \
      ...
      user_<additional userN>="<additional userN password>";
    

    The following example uses the standard login module and specifies two additional users, user1 and user2.

    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="admin" \
      password="admin-secret" \
      user_admin="admin-secret" \
      user_user1="user1-secret" \
      user_user2="user2-secret";
    
  2. You can use a Kubernetes secret or a directory path in the container to store the credentials.

    • Create a Kubernetes secret using the expected key (plain-jaas.conf) and the value file you created in the previous step.

      The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.conf file that contains the credentials:

      kubectl create secret generic credential \
        --from-file=plain-jaas.conf=./creds-kafka-sasl-users.conf \
        --namespace confluent
      
    • Use a directory path in the container to provide the required credentials.

      If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the Kafka CR, the expected file, plain-jaas.conf, must exist in the directory path.

      See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.

      See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.

Configure Kafka for SASL/PLAIN authentication

In the Kafka custom resource (CR), configure the Kafka listener to use SASL/PLAIN as the authentication mechanism:

kind: Kafka
spec:
  listeners:
    external:
      authentication:
        type: plain                 --- [1]
        jaasConfig:                 --- [2]
          secretRef:                --- [3]
        jaasConfigPassThrough:      --- [4]
          secretRef:                --- [5]
          directoryPathInContainer: --- [6]
  • [1] Required. Set to plain.

  • [2] When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config. This is the recommended way to configure SASL/PLAIN for Kafka.

  • One of [3], [5], or [6] is required. Only specify one.

  • [3] Provide the name of the Kubernetes secret that you created in the previous section.

  • [4] If you have customizations, such as using a custom login handler, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.

  • [5] Provide a Kubernetes secret that you created in the previous section with the expected key and the value.

  • [6] Provide the directory path in the container that you set up for the credentials in the previous section.

    See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.
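Putting the pieces together, a minimal sketch of a Kafka listener that uses jaasConfig with the secret named credential from the earlier example (the secret name is an assumption from that example):

```yaml
kind: Kafka
spec:
  listeners:
    external:
      authentication:
        type: plain
        jaasConfig:
          secretRef: credential   # secret created earlier with key plain-users.json
```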

Client-side SASL/PLAIN authentication for Kafka

Configure the client-side SASL/PLAIN authentication for other Confluent components to authenticate to Kafka.

You can use the JAAS and JAAS pass-through mechanisms to set up SASL/PLAIN credentials.

Create client-side SASL/PLAIN credentials using JAAS config

When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates JAAS config.

The expected client-side key for jaasConfig is plain.txt.

  1. Create a .txt file and add the expected value, in the following format:

    username=<username>
    password=<password>
    
  2. Create a Kubernetes secret using the expected key (plain.txt) and the value file you created in the previous step.

    The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.txt file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=plain.txt=./creds-kafka-sasl-users.txt \
      --namespace confluent
    
Create client-side SASL/PLAIN credentials using JAAS config pass-through

If you have customizations, such as using a custom login handler, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.

The expected client-side key for jaasConfigPassThrough is plain-jaas.conf.

  1. Create a .conf file and add the expected value, in the following format.

    For example:

    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="kafka" \
      password="kafka-secret";
    
  2. You can use a Kubernetes secret or a directory path in the container to store the credentials.

    • Create a Kubernetes secret using the expected key (plain-jaas.conf) and the value file you created in the previous step.

      The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.conf file that contains the credentials:

      kubectl create secret generic credential \
        --from-file=plain-jaas.conf=./creds-kafka-sasl-users.conf \
        --namespace confluent
      
    • Use a directory path in the container to provide the required credentials.

      If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the component CR, the expected file, plain-jaas.conf, must exist in that directory path.

      See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.

Configure Confluent components for SASL/PLAIN authentication to Kafka

For each of the Confluent components that communicate with Kafka, configure SASL/PLAIN authentication in the component CR as below:

kind: <Confluent component>
spec:
  dependencies:
    kafka:
      authentication:
        type: plain                 --- [1]
        jaasConfig:                 --- [2]
          secretRef:                --- [3]
        jaasConfigPassThrough:      --- [4]
          secretRef:                --- [5]
          directoryPathInContainer: --- [6]
  • [1] Required. Set to plain.
  • [2] When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates JAAS config.
  • One of [3], [5], or [6] is required. Specify only one.
  • [3] Provide a Kubernetes secret you created in the previous section for this Confluent component to authenticate to Kafka.
  • [4] An alternate way to configure JAAS is to use jaasConfigPassThrough. If you have customizations, such as using custom login handlers, you can bypass the CFK automation and provide the configuration directly.
  • [5] Provide a Kubernetes secret that you created in the previous section.
  • [6] Provide the directory path in the container that you set up in the previous section.
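As a concrete sketch, a Connect CR that authenticates to Kafka using the example secret named credential (an assumed name from the earlier kubectl command):

```yaml
kind: Connect
spec:
  dependencies:
    kafka:
      authentication:
        type: plain
        jaasConfig:
          secretRef: credential   # secret created earlier with key plain.txt
```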

SASL/PLAIN with LDAP authentication

SASL/PLAIN with LDAP callback handler is a variation of SASL/PLAIN. When you use SASL/PLAIN with LDAP for authentication, the username principals and passwords are retrieved from an LDAP server.

Server-side SASL/PLAIN with LDAP for Kafka

You must set up an LDAP server, for example, Active Directory (AD), before configuring and starting up a Kafka cluster with the SASL/PLAIN with LDAP authentication. For more information, see Configuring Kafka Client Authentication with LDAP.

Note

If you configure any Kafka listeners with the SASL/PLAIN authentication mode along with a SASL/PLAIN LDAP listener, you must use authentication.jaasConfigPassThrough for the SASL/PLAIN listener with PlainLoginModule. You cannot use authentication.jaasConfig for the SASL/PLAIN listener, and you cannot use FileBasedLoginModule.

You can use the JAAS and JAAS pass-through mechanisms to set up the credentials.

Create server-side SASL/PLAIN LDAP credentials using JAAS config

The expected server-side key for jaasConfig is plain-interbroker.txt.

  1. Create a .txt file and add the expected value, in the following format:

    username=<user>
    password=<password>
    

    The username and password must belong to a user that exists in LDAP. This is the user that each Kafka broker authenticates when the cluster starts.

  2. Create a Kubernetes Secret with the user name and password for inter-broker authentication.

    The following example command creates a Kubernetes secret, using the ./creds-kafka-ldap-users.txt file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=plain-interbroker.txt=./creds-kafka-ldap-users.txt \
      --namespace confluent
    
Create server-side SASL/PLAIN LDAP credentials using JAAS config pass-through

The expected server-side key for jaasConfigPassThrough is plain-jaas.conf.

  1. Create a .conf file and add the expected value. For example:

    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
       username="kafka" \
       password="kafka-secret";
    
  2. You can use a Kubernetes secret or a directory path in the container to store the credentials.

    • Create a Kubernetes secret using the expected key (plain-jaas.conf) and the value file you created in the previous step.

      The following example command creates a Kubernetes secret, using the ./creds-kafka-sasl-users.conf file that contains the credentials:

      kubectl create secret generic credential \
        --from-file=plain-jaas.conf=./creds-kafka-sasl-users.conf \
        --namespace confluent
      
    • Use a directory path in the container to provide the required credentials.

      If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the Kafka CR, the expected file, plain-jaas.conf, must exist in the directory path.

      See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.

Configure Kafka for server-side SASL/PLAIN with LDAP authentication
  1. Configure the listeners in the Kafka custom resource (CR):

    kind: Kafka
    spec:
      listeners:
        internal:
          authentication:
            type: ldap                 --- [1]
            jaasConfig:                --- [2]
              secretRef:               --- [3]
            jaasConfigPassThrough:     --- [4]
              secretRef:               --- [5]
              directoryPathInContainer: --- [6]
        external:
          authentication:
            type: ldap                 --- [7]
        custom:
          authentication:
            type: ldap                 --- [8]
    
    • [1] Required for the SASL/PLAIN with LDAP authentication for the internal Kafka listeners.
    • [2] When you use jaasConfig to pass credentials, you provide the user name and password, and CFK automates configuration. When you add, remove, or update the user, CFK automatically updates the JAAS configuration. This is the recommended way to configure SASL/PLAIN LDAP for Kafka.
    • [3] Provide the name of the Kubernetes secret that you created in the previous section for inter-broker authentication.
    • [4] An alternate way to configure JAAS is to use jaasConfigPassThrough. If you have customizations, such as using a custom login handler, you can bypass the CFK automation and provide the configuration directly.
    • [5] Provide the name of the Kubernetes secret that you created in the previous section for inter-broker authentication.
    • [6] Provide the directory path in the container that you set up in the previous section.
    • [7] Required for the SASL/PLAIN with LDAP authentication for the external Kafka listeners.
    • [8] Required for the SASL/PLAIN with LDAP authentication for the custom Kafka listeners.
    • [7][8] To configure the authentication type ldap on external or custom listeners, you do not need to specify jaasConfig or jaasConfigPassThrough.
  2. Configure the identity provider in the Kafka CR:

    kind: Kafka
    spec:
      identityProvider:                --- [1]
        type: ldap                     --- [2]
        ldap:                          --- [3]
          address:                     --- [4]
          authentication:              --- [5]
            type:                      --- [6]
            simple:                    --- [7]
          tls:
            enabled:                   --- [8]
          configurations:              --- [9]
    
    • [1] Required for the Kafka authentication type ldap. Specifies the identity provider configuration.

      When the MDS is enabled, this property is ignored, and the LDAP configuration in spec.services.mds.provider is used.

    • [2] Required.

    • [3] This block includes the same properties used in the spec.services.mds.provider.ldap block in this Kafka CR.

    • [4] Required. The address of the LDAP server, for example, ldaps://ldap.confluent.svc.cluster.local:636.

    • [5] Required. The authentication method to access the LDAP server.

    • [6] Required. Specify simple or mtls.

    • [7] Required if the authentication type ([6]) is set to simple.

    • [8] Required if the authentication type ([6]) is set to mtls. Set to true.

    • [9] Required. The LDAP configuration settings.

  3. Apply the configuration:

    kubectl apply -f <Kafka CR>
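As one combined sketch of steps 1 and 2, an internal listener with SASL/PLAIN LDAP and a simple-bind identity provider might look as follows. The secret names, LDAP address, and search settings are illustrative assumptions; adjust them to your directory schema:

```yaml
kind: Kafka
spec:
  listeners:
    internal:
      authentication:
        type: ldap
        jaasConfig:
          secretRef: credential            # assumed secret with key plain-interbroker.txt
  identityProvider:
    type: ldap
    ldap:
      address: ldaps://ldap.confluent.svc.cluster.local:636
      authentication:
        type: simple
        simple:
          secretRef: ldap-credential       # assumed secret holding the LDAP bind credentials
      tls:
        enabled: true
      configurations:                      # example search settings; adjust to your schema
        userSearchBase: dc=example,dc=com
        userNameAttribute: cn
```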
    

Client-side SASL/PLAIN with LDAP for Kafka

When Kafka is configured with SASL/PLAIN with LDAP, Confluent components and clients authenticate to Kafka as SASL/PLAIN clients. The clients must authenticate as users in LDAP.

See Client-side SASL/PLAIN authentication for Kafka for configuration details.

mTLS authentication

Server-side mTLS authentication for Kafka

mTLS utilizes TLS certificates as an authentication mechanism. The certificate provides the identity.

The certificate Common Name (CN) is used as the authenticated principal, which can then be used in authorization.

Configure a Kafka listener as below, in the Kafka CR, to use mTLS as the authentication mechanism:

kind: Kafka
spec:
  listeners:
    external:
      authentication:
        type: mtls                                     --- [1]
        principalMappingRules:
        - RULE:.*CN[\s]?=[\s]?([a-zA-Z0-9.]*)?.*/$1/   --- [2]
      tls:
        enabled: true                                  --- [3]
  • [1] Required. Set to mtls.

  • [2] Optional. This specifies a mapping rule that extracts the principal name from the certificate Common Name (CN).

    The mapping rules use Java regular expression (regex) syntax. You can use a site, such as Regular Expression Test Drive, to validate the regex.

  • [3] Required for mTLS authentication. Set to true.
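To preview which principal a given certificate DN would yield, the mapping rule in the example above can be approximated outside Kubernetes with a POSIX regex. This is only an illustrative check; POSIX sed has no \s, so [[:space:]] stands in for it:

```shell
# Approximate the mapping rule RULE:.*CN[\s]?=[\s]?([a-zA-Z0-9.]*)?.*/$1/
# to preview the principal extracted from a hypothetical certificate DN.
dn='CN=kafka.client,OU=eng,O=example'
printf '%s\n' "$dn" | sed -E 's/.*CN[[:space:]]?=[[:space:]]?([a-zA-Z0-9.]*)?.*/\1/'
# prints: kafka.client
```

Note that the character class [a-zA-Z0-9.] does not include hyphens, so a CN such as kafka-client would be truncated at the hyphen; extend the class in your rule if your principal names need it.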

Client-side mTLS authentication for Kafka

For each of the Confluent components that communicates with Kafka, configure the mTLS authentication mechanism in the component CR as below:

kind: <Confluent component>
spec:
  dependencies:
    kafka:
      authentication:
        type: mtls               --- [1]
      tls:
        enabled: true            --- [2]
  • [1] Required. Set to mtls.
  • [2] Required for mTLS authentication. Set to true.

Configure authentication to access ZooKeeper

This section describes the following authentication mechanisms for Kafka to access ZooKeeper:

  • SASL/DIGEST authentication
  • mTLS authentication

SASL/DIGEST authentication

Server-side SASL/DIGEST authentication for ZooKeeper

ZooKeeper supports authentication using the SASL DIGEST-MD5 mechanism.

You can use the JAAS and JAAS pass-through mechanisms to set up the credentials.

Create server-side SASL/DIGEST credentials using JAAS config

When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates JAAS config. This is the recommended way to configure SASL/DIGEST for ZooKeeper.

The expected key for the server-side SASL/DIGEST credential is digest-users.json.

  1. Create a .json file and add the expected value, in the following format:

    {
    "username1": "password1",
    "username2": "password2"
    }
    
  2. Create a Kubernetes secret using the expected key (digest-users.json) and the value file you created in the previous step.

    The following example command creates a Kubernetes secret, using the ./digest-users.json file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=digest-users.json=./digest-users.json \
      --namespace confluent
    
Create server-side SASL/DIGEST credentials using JAAS config pass-through

An alternate way to configure JAAS is to use jaasConfigPassThrough. If you have customizations, such as using custom login handlers, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.

The expected server-side key for jaasConfigPassThrough is digest-jaas.conf.

  1. Create a .conf file and add the expected value. For example:

    Server {
      org.apache.zookeeper.server.auth.DigestLoginModule required
        user_super="adminsecret"
        user_user1="user1-secret";
     };
    
  2. You can use a Kubernetes secret or a directory path in the container to store the credentials.

    • Create a Kubernetes Secret with the expected key (digest-jaas.conf) and the value file you created in the previous step.

      The following example command creates a Kubernetes secret, using the ./digest-users.conf file that contains the credentials:

      kubectl create secret generic credential \
        --from-file=digest-jaas.conf=./digest-users.conf \
        --namespace confluent
      
    • Use a directory path in the container to provide the required credentials.

      If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the ZooKeeper CR, the expected file, digest-jaas.conf, must exist in that directory path.

      See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.

      See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.

Configure ZooKeeper for server-side SASL/DIGEST authentication

Enable the server-side SASL/DIGEST authentication in the ZooKeeper CR as below:

kind: Zookeeper
spec:
  authentication:
    type: digest                 --- [1]
    jaasConfig:                  --- [2]
      secretRef:                 --- [3]
    jaasConfigPassThrough:       --- [4]
      secretRef:                 --- [5]
      directoryPathInContainer:  --- [6]
  • [1] Required. Set to digest.
  • [2] When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates JAAS config. This is the recommended way to configure SASL/DIGEST for ZooKeeper.
  • One of [3], [5], or [6] is required. Only specify one.
  • [3] Provide the name of the Kubernetes secret that you created in the previous section.
  • [4] An alternate way to configure JAAS is to use jaasConfigPassThrough. If you have customizations, such as using custom login handlers, you can bypass the CFK automation and provide the configuration directly.
  • [5] Provide a Kubernetes secret that you created in the previous section.
  • [6] Provide the directory path in the container you set up in the previous section. The expected file, digest-jaas.conf, must exist in the directory path.
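For example, a ZooKeeper CR using jaasConfig with the secret named credential from the earlier example (an assumed name):

```yaml
kind: Zookeeper
spec:
  authentication:
    type: digest
    jaasConfig:
      secretRef: credential   # secret created earlier with key digest-users.json
```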

Client-side SASL/DIGEST authentication for ZooKeeper

Configure client-side Kafka to authenticate to ZooKeeper using SASL/DIGEST.

You can use the JAAS and JAAS pass-through mechanisms to set up the credentials.

Create client-side SASL/DIGEST credentials using JAAS config

When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config. This is the recommended way to configure SASL/DIGEST for ZooKeeper.

The expected client-side key for jaasConfig is digest.txt.

  1. Create a .txt file and add the expected value, in the following format:

    username=<user>
    password=<password>
    
  2. Create a Kubernetes Secret with the expected key (digest.txt) and the value file you created in the previous step.

    The following example command creates a Kubernetes secret, using the ./digest-users.txt file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=digest.txt=./digest-users.txt \
      --namespace confluent
    
Create client-side SASL/DIGEST credentials using JAAS config pass-through

If you have customizations, such as using custom login handlers, you can bypass the CFK automation and provide the configuration directly using jaasConfigPassThrough.

The expected client-side key for jaasConfigPassThrough is digest-jaas.conf.

  1. Create a .conf file and add the expected value to the file.

    The following is an example value (the data in the file) for digest-jaas.conf with a standard login module and a user, bob.

    Client { //zookeeper dependencies
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="bob"
        password="password";
    };
    
  2. You can use a Kubernetes secret or a directory path in the container to store the credentials.

    • Create a Kubernetes secret using the expected key (digest-jaas.conf) and the value file you created in the previous step.

      The following example command creates a Kubernetes secret, using the ./digest-jaas-users.conf file that contains the credentials:

      kubectl create secret generic credential \
        --from-file=digest-jaas.conf=./digest-jaas-users.conf \
        --namespace confluent
      
    • Use a directory path in the container to provide the required credentials.

      If jaasConfigPassThrough.directoryPathInContainer is configured as /vaults/secrets in the Kafka CR, the expected file, digest-jaas.conf, must exist in that directory path.

      See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.

      See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.

Configure Kafka for client-side SASL/DIGEST authentication

For Kafka to authenticate to ZooKeeper using SASL/DIGEST authentication, configure the Kafka CR as below:

kind: Kafka
spec:
  dependencies:
    zookeeper:
      authentication:
        type: digest                --- [1]
        jaasConfig:                 --- [2]
          secretRef:                --- [3]
        jaasConfigPassThrough:      --- [4]
          secretRef:                --- [5]
          directoryPathInContainer: --- [6]
  • [1] Required. Set to digest.

  • [2] When you use jaasConfig, you provide the user names and passwords, and CFK automates configuration. For example, when you add, remove, or update users, CFK automatically updates the JAAS config. This is the recommended way to configure SASL/DIGEST for ZooKeeper.

  • One of [3], [5], or [6] is required. Only specify one.

  • [3] Provide the name of the Kubernetes secret you set up in the previous section for Kafka to authenticate to ZooKeeper.

  • [4] An alternate way to configure JAAS is to use jaasConfigPassThrough. If you have customizations, such as using custom login handlers, you can bypass the CFK automation and provide the configuration directly.

  • [5] Provide the name of the Kubernetes secret you set up in the previous section for Kafka to authenticate to ZooKeeper.

  • [6] Provide the container directory path you set up in the previous section.

    See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.

    See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.
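A minimal sketch of the Kafka-to-ZooKeeper dependency using the example secret named credential (an assumed name):

```yaml
kind: Kafka
spec:
  dependencies:
    zookeeper:
      authentication:
        type: digest
        jaasConfig:
          secretRef: credential   # secret created earlier with key digest.txt
```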

mTLS authentication

Server-side mTLS authentication for ZooKeeper

Enable mTLS authentication in the ZooKeeper CR as below:

kind: Zookeeper
spec:
  authentication:
    type: mtls

Client-side mTLS authentication for ZooKeeper

For Kafka to authenticate to ZooKeeper using mTLS authentication, configure the Kafka CR as below:

kind: Kafka
spec:
  dependencies:
    zookeeper:
      authentication:
        type: mtls               --- [1]
      tls:
        enabled: true            --- [2]
  • [1] Required. Set to mtls.
  • [2] Required for mTLS authentication. Set to true.

Configure authentication to access other Confluent Platform components

This section describes the following authentication methods for the Confluent Platform components (other than Kafka and ZooKeeper):

  • Basic authentication
  • mTLS authentication
  • LDAP authentication (Confluent Control Center only)

Basic authentication

Server-side basic authentication for Confluent components

Create server-side basic credentials

The expected server-side key for basic authentication is basic.txt.

  1. Create a .txt file and add the expected value, in the following format:

    <username1>: <password1>, <role that username1 is assigned to>
    ...
    <usernameN>: <passwordN>, <role that usernameN is assigned to>
    

    The following default roles are supported:

    • For REST Proxy: The admin, developer, user, and krp-user roles are available.
    • For ksqlDB: The admin, developer, user, and ksql-user roles are available.
    • For Schema Registry: The admin, developer, user, and sr-user roles are available.
    • For Confluent Control Center: The Administrators and Restricted roles are available.

    Warning

    For Connect, there is no support for roles.

  2. You can use a Kubernetes secret or a directory path in the container to store the basic credentials.

    • Create a Kubernetes secret using the expected key (basic.txt) and the value file you created in the previous step.

      The following example command creates a Kubernetes secret, using the ./creds-basic.txt file that contains the credentials:

      kubectl create secret generic credential \
        --from-file=basic.txt=./creds-basic.txt \
        --namespace confluent
      
    • Use a directory path in the container to provide the required credentials.

      See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.

      See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.
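As a sketch of step 1, the following writes a basic.txt for two hypothetical Schema Registry users, one with the admin role and one with sr-user, and counts the entries:

```shell
# Hypothetical users and roles; each line is <username>: <password>, <role>
cat > ./creds-basic.txt <<'EOF'
admin: admin-secret, admin
client: client-secret, sr-user
EOF

# Count lines that match the expected <username>: <password>, <role> shape.
grep -Ec '^[^:]+: [^,]+, [A-Za-z-]+$' ./creds-basic.txt
# prints: 2
```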

Configure Confluent component for server-side basic authentication

Configure the server-side basic authentication in the component CR as below:

kind: <Confluent component>
spec:
  authentication:
    type: basic                  --- [1]
    basic:
      secretRef:                 --- [2]
      directoryPathInContainer:  --- [3]
      restrictedRoles:           --- [4]
      roles:                     --- [5]
  • [1] Required. Set to basic.
  • [2] or [3] Required. Do not specify both.
  • [2] Provide the name of the Kubernetes secret you created in the previous section.
  • [3] Provide the directory path in the container that you set up in the previous section. The expected file, basic.txt, must exist in the specified directory path.
  • [4] Optional. A list of restricted roles on the server in Confluent Control Center.
  • [5] Optional. A list of roles on the server side.
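For instance, a Schema Registry CR with server-side basic authentication backed by the example secret named credential (an assumed name):

```yaml
kind: SchemaRegistry
spec:
  authentication:
    type: basic
    basic:
      secretRef: credential   # secret created earlier with key basic.txt
```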

Client-side basic authentication for Confluent components

Configure client-side Confluent components to authenticate to other Confluent Platform components using basic authentication.

Create client-side basic credentials

The expected client-side key for basic authentication is basic.txt.

  1. Create a .txt file and add the expected value, in the following format:

    username=<username>
    password=<password>
    
  2. The username and password for basic authentication are loaded either through secretRef or through directoryPathInContainer.
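The two steps above can be sketched as a shell snippet (the user name, password, and secret name are illustrative):

```shell
# Step 1: write the client-side credentials file in the expected format.
cat > creds-client-basic.txt <<'EOF'
username=c3-user
password=c3-password
EOF

# Step 2 (secretRef option): load the file into a Kubernetes secret under
# the expected key, basic.txt. Requires cluster access, so shown commented out:
# kubectl create secret generic client-basic-credential \
#   --from-file=basic.txt=./creds-client-basic.txt \
#   --namespace confluent
```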

Configure Confluent components for client-side basic authentication

Enable the client-side basic authentication in the component CR as below. <component> is the Confluent Platform component that this component needs to authenticate to:

kind: <this Confluent component>
spec:
  dependencies:
    <component>:
      url:
      authentication:
        type: basic                --- [1]
        basic:
          secretRef:               --- [2]
          directoryPathInContainer: --- [3]
  • [1] Required. Set to basic.
  • [2] or [3] Required. Do not specify both.
  • [2] Provide the name of the Kubernetes secret that contains the client-side credentials.
  • [3] Provide the directory path in the container. The expected file, basic.txt, must exist in the specified directory path.
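For example, a Connect CR authenticating to Schema Registry with basic credentials might look like the following (the URL and secret name are illustrative):

```yaml
kind: Connect
spec:
  dependencies:
    schemaRegistry:
      url: https://schemaregistry.confluent.svc.cluster.local:8081
      authentication:
        type: basic
        basic:
          secretRef: client-basic-credential
```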

mTLS authentication

Server-side mTLS authentication for Confluent components

Configure mTLS authentication in the Confluent component CR as below:

kind: <Confluent component>
spec:
  authentication:
    type: mtls                --- [1]
  • [1] Required. Set to mtls.
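For example, a Schema Registry CR that requires clients to present certificates might look like the following; note that the component must also have TLS configured (the TLS secret name below is illustrative):

```yaml
kind: SchemaRegistry
spec:
  authentication:
    type: mtls
  tls:
    secretRef: tls-schemaregistry
```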

Client-side mTLS authentication for Confluent components

To configure Confluent components to authenticate to other Confluent Platform components using mTLS authentication, enable mTLS authentication in the component CR as below. <component> is the Confluent Platform component that this component needs to authenticate to:

kind: <this client Confluent component>
spec:
  dependencies:
    <component>:
      url:
      authentication:
        type: mtls               --- [1]
      tls:
        enabled: true            --- [2]
  • [1] Required. Set to mtls.
  • [2] Required for mTLS authentication. Set to true.
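For example, a ksqlDB CR authenticating to Schema Registry over mTLS might look like the following (the URL is illustrative):

```yaml
kind: KsqlDB
spec:
  dependencies:
    schemaRegistry:
      url: https://schemaregistry.confluent.svc.cluster.local:8081
      authentication:
        type: mtls
      tls:
        enabled: true
```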

LDAP authentication for Confluent Control Center

In addition to the basic and mTLS authentication methods, Confluent Control Center supports LDAP as an authentication method.

Create LDAP credentials

The expected server-side key for LDAP authentication is ldap.txt.

  1. Create a .txt file and add the expected value, in the following format:

    username=<bindDn_value>
    password=<bindPassword_value>
    

    In the password for bindDn, escape any restricted LDAP characters. For best results, avoid characters that require escaping. Follow Best Practices for LDAP Naming Attributes.

  2. Create a Kubernetes secret using the expected key (ldap.txt) and the value file you created in the previous step.

    The following example command creates a Kubernetes secret, using the ./creds-ldap.txt file that contains the credentials:

    kubectl create secret generic credential \
      --from-file=ldap.txt=./creds-ldap.txt \
      --namespace confluent
    

Configure Confluent Control Center for server-side LDAP authentication

Configure the Control Center CR to pull users and groups from an LDAP server:

kind: ControlCenter
spec:
  authentication:
    type: ldap                                      --- [1]
    ldap:
      secretRef:                                    --- [2]
      roles:                                        --- [3]
      restrictedRoles:                              --- [4]
      property:                                     --- [5]
        #  useLdaps: "true"
        #  contextFactory: "com.sun.jndi.ldap.LdapCtxFactory"
        #  hostname: ""
        #  port: "389"
        #  bindDn: ""                               --- [6]
        #  bindPassword: ""                         --- [7]
        #  authenticationMethod: ""
        #  forceBindingLogin: "true"
        #  userBaseDn: ""
        #  userRdnAttribute: "sAMAccountName"
        #  userIdAttribute: "sAMAccountName"
        #  userPasswordAttribute: "userPassword"
        #  userObjectClass: "user"
        #  roleBaseDn: ""
        #  roleNameAttribute: "cn"
        #  roleMemberAttribute: "member"
        #  roleObjectClass: "group"
  • [1] Required. Set to ldap.
  • [2] Provide the Kubernetes secret you created in the previous section for LDAP credentials.
  • [3] Optional. By default, it’s set to ["Administrators", "Restricted"].
  • [4] Optional. List of roles with limited read-only access. No editing or creating allowed in Control Center.
  • [5] Required. See Configure LdapLoginModule for details.
  • [6] [7] If you have security concerns, pass empty values for bindDn and bindPassword in the CR. CFK will replace and add them appropriately.
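A filled-in sketch for an Active Directory-style LDAP server follows (the hostname, base DNs, and attribute values are illustrative and depend on your directory layout; bindDn and bindPassword are left empty so CFK can populate them from the credentials secret):

```yaml
kind: ControlCenter
spec:
  authentication:
    type: ldap
    ldap:
      secretRef: credential
      property:
        hostname: ldap.confluent.svc.cluster.local
        port: "389"
        bindDn: ""
        bindPassword: ""
        forceBindingLogin: "true"
        userBaseDn: ou=users,dc=example,dc=com
        userRdnAttribute: sAMAccountName
        userIdAttribute: sAMAccountName
        userObjectClass: user
        roleBaseDn: ou=groups,dc=example,dc=com
        roleNameAttribute: cn
        roleMemberAttribute: member
        roleObjectClass: group
```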

LDAP over SSL authentication for Confluent Control Center

When configuring Control Center for encryption and authentication using LDAP over SSL (LDAPS), the LDAPS trust store must be exported to the JVM using the configOverrides feature.

This requirement does not apply to RBAC-enabled environments where Control Center relies on the external MDS for authentication.

Add the LDAP SSL certificate to the JVM trust store in the Confluent Control Center CR as shown below:

kind: ControlCenter
spec:
  configOverrides:
    jvm:
      - -Djavax.net.ssl.trustStore=<path to truststore.jks>
      - -Djavax.net.ssl.trustStorePassword=<password for the truststore>

Configure authentication to access MDS

Bearer authentication

When RBAC is enabled (spec.authorization.type: rbac), CFK always uses Bearer authentication for Confluent components, ignoring the spec.authentication setting. It is not possible to set the component authentication type to mTLS when RBAC is enabled.

Create client-side Bearer credentials for MDS

Provide the required Bearer credentials.

The expected key is bearer.txt.

  1. Create a .txt file with the expected value in the following format:

    username=<username>
    password=<password>
    
  2. You can use a Kubernetes secret or a directory path in the container to store the bearer credentials.

    • Create a Kubernetes secret using the expected key (bearer.txt) and the value.

      For example, using the ./c3-mds-client.txt file that you created for the Control Center credentials:

      kubectl create secret generic c3-mds-client \
        --from-file=bearer.txt=./c3-mds-client.txt \
        --namespace confluent
      
    • Use a directory path in the container to provide the required credentials.

      See Provide secrets for Confluent Platform component CR for providing the credential and required annotations when using Vault.

      See CFK GitHub examples for more information on using the directoryPathInContainer property with Vault.

Client-side Bearer authentication for MDS

For each of the Confluent components that communicate with MDS, configure the bearer authentication mechanism in the component CR:

kind: <Confluent component>
spec:
  dependencies:
    mds:
      authentication:
        type: bearer               --- [1]
        bearer:
          secretRef:               --- [2]
          directoryPathInContainer: --- [3]
  • [1] Required. Set to bearer.
  • [2] or [3] Required. Do not specify both.
  • [2] To load the username and password, set to the Kubernetes secret you created in the previous section.
  • [3] Provide the directory path in the container, as set up in the previous section. The expected file, bearer.txt, must exist in the specified directory path.
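For example, a Control Center CR using the c3-mds-client secret created earlier might look like the following (the MDS endpoint is illustrative):

```yaml
kind: ControlCenter
spec:
  dependencies:
    mds:
      endpoint: https://kafka.confluent.svc.cluster.local:8090
      authentication:
        type: bearer
        bearer:
          secretRef: c3-mds-client
```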