Configure Authentication for Confluent Platform with Ansible Playbooks

This topic describes the authentication features supported in Confluent Platform with Ansible Playbooks for Confluent Platform (Confluent Ansible) and explains how to configure those features.

Kafka authentication

Confluent Ansible supports the following authentication modes for Kafka:

  • SASL/PLAIN: Uses a simple username and password for authentication.
  • SASL/SCRAM: Uses salted and hashed passwords for authentication. Credentials get created during installation.
  • SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
  • mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
  • OAuth/OIDC: Uses your own identity provider to manage authentication and authorization across your Confluent Platform deployments, both in the cloud and on premises.

By default, Kafka is installed with no authentication.

Configure SASL/PLAIN authentication

To configure SASL/PLAIN authentication, set the following in the hosts.yml inventory file.

  • The default keys for sasl_plain_users are required for Confluent Platform components: admin for the Kafka brokers, client for use by external components, and schema_registry, kafka_connect, ksql, kafka_rest, control_center_next_gen, and kafka_connect_replicator for the corresponding components.
  • In addition to the default users, the following code snippet adds three example users: user1, user2, and user3.
all:
  vars:
    sasl_protocol: plain
    sasl_plain_users:
      admin:
        principal: 'admin'
        password: 'admin-secret'
      schema_registry:
        principal: 'schema_registry'
        password: 'schema_registry-secret'
      kafka_connect:
        principal: 'kafka_connect'
        password: 'kafka_connect-secret'
      ksql:
        principal: 'ksql'
        password: 'ksql-secret'
      kafka_rest:
        principal: 'kafka_rest'
        password: 'kafka_rest-secret'
      control_center_next_gen:
        principal: 'control_center'
        password: 'control_center-secret'
      kafka_connect_replicator:
        principal: 'kafka_connect_replicator'
        password: 'kafka_connect_replicator-secret'
      client:
        principal: 'client'
        password: 'client-secret'
      user1:
        principal: 'user1'
        password: my-secret
      user2:
        principal: 'user2'
        password: my-secret
      user3:
        principal: 'user3'
        password: my-secret
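
A Kafka client then presents one of these credentials at connection time. The following is a hypothetical client.properties sketch for user1 (use SASL_PLAINTEXT instead of SASL_SSL if TLS encryption is not enabled):

```
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="user1" \
  password="my-secret";
```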

Configure SASL/SCRAM (SHA-512) authentication

To configure SASL/SCRAM authentication with SHA-512, set the following option in the hosts.yml inventory file:

all:
  vars:
    sasl_protocol: scram

During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.

To configure additional users, add the following section in the hosts.yml inventory file:

all:
  vars:
    sasl_scram_users:
      user1:
        principal: user1
        password: my-secret

When configuring SASL/SCRAM on Kafka in KRaft mode, you must configure the value of kafka_controller_sasl_protocol as described in the Configure SASL/SCRAM authentication section.

Configure SASL/SCRAM (SHA-256) authentication

To configure SASL/SCRAM authentication with SHA-256, set the following option in the hosts.yml inventory file:

all:
  vars:
    sasl_protocol: scram256

During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.

To configure additional users, add the following section in the hosts.yml inventory file:

all:
  vars:
    sasl_scram256_users:
      user1:
        principal: user1
        password: my-secret

When configuring SASL/SCRAM on Kafka in KRaft mode, you must configure the value of kafka_controller_sasl_protocol as described in the Configure SASL/SCRAM authentication section.

Configure SASL/GSSAPI (Kerberos) authentication

The Ansible playbook does not currently configure a Kerberos Key Distribution Center (KDC) or an Active Directory KDC. You must set up your own KDC independently of the playbook and provide your own keytabs to configure SASL/GSSAPI (SASL with Kerberos):

  • Create principals within your organization’s Kerberos KDC server for each component and for each host in each component.
  • Generate keytabs for these principals. The keytab files must be present on the Ansible control node.
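
On an MIT Kerberos KDC, the two steps above might look like the following sketch (the principal and file paths are placeholders):

```
kadmin.local -q "addprinc -randkey kafka/ip-192-24-34-224.us-west.compute.internal@EXAMPLE.COM"
kadmin.local -q "ktadd -k /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab kafka/ip-192-24-34-224.us-west.compute.internal@EXAMPLE.COM"
```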

To install Kerberos packages and configure the client configuration file on each host, add the following configuration parameters in the hosts.yml file.

  • Specify whether to install Kerberos packages and to configure the client configuration file. The default value is true.

    If the hosts already have the client configuration file configured, set kerberos_configure to false.

    all:
      vars:
        kerberos_configure: <true-or-false>
    
  • Specify the client configuration file. The default value is /etc/krb5.conf.

    Use this variable only when you want to specify a custom location of the client configuration file.

    all:
      vars:
        kerberos_client_config_file_dest: <path-to-client-config-file>
    

    If kerberos_configure is set to true, Confluent Ansible will generate the client config file at this location on the host nodes.

    If kerberos_configure is set to false, Confluent Ansible will expect the client configuration file to be present at this location on the host nodes.

  • Specify the realm part of the Kafka broker Kerberos principal and the hostname of the machine running the KDC.

    all:
      vars:
        kerberos:
          realm: <kafka-principal-realm>
          kdc_hostname: <kdc-hostname>
          admin_hostname: <kdc-hostname>
    

The example below shows the Kerberos configuration settings for the Kerberos principal, kafka/kafka1.hostname.com@EXAMPLE.COM.

all:
  vars:
    kerberos_configure: true
    kerberos:
      realm: EXAMPLE.COM
      kdc_hostname: ip-192-24-45-82.us-west.compute.internal
      admin_hostname: ip-192-24-45-82.us-west.compute.internal

Each host in the inventory file also needs variables that define its Kerberos principal and the location of its keytab on the Ansible control node.

The hosts.yml inventory file should look like:

kafka_controller:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_broker:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
schema_registry:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      schema_registry_kerberos_keytab_path: /tmp/keytabs/schemaregistry-ip-192-24-34-224.us-west.compute.internal.keytab
      schema_registry_kerberos_principal: schemaregistry/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_connect:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_connect_kerberos_keytab_path: /tmp/keytabs/connect-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_connect_kerberos_principal: connect/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_rest:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_rest_kerberos_keytab_path: /tmp/keytabs/restproxy-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_rest_kerberos_principal: restproxy/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
ksql:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      ksql_kerberos_keytab_path: /tmp/keytabs/ksql-ip-192-24-34-224.us-west.compute.internal.keytab
      ksql_kerberos_principal: ksql/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
control_center_next_gen:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      control_center_next_gen_kerberos_keytab_path: /tmp/keytabs/controlcenter-ip-192-24-34-224.us-west.compute.internal.keytab
      control_center_next_gen_kerberos_principal: controlcenter/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM

Note

To better support Active Directory, Confluent Ansible enables canonicalization by default. If canonicalization was not enabled when the Confluent Platform cluster was created, explicitly disable it in the hosts.yml inventory file.

kerberos:
  canonicalize: false

Configure mTLS authentication

To configure mutual TLS (mTLS) authentication, you must enable TLS encryption as described in Configure Encryption for Confluent Platform with Ansible Playbooks.

Set the following parameters in the hosts.yml inventory file:

all:
  vars:
    ssl_enabled: true
    ssl_mutual_auth_enabled: true
    ssl_client_authentication: required
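
With mTLS required, Kafka clients must present their own certificate in addition to trusting the broker's. The following is a hypothetical client.properties sketch; the keystore and truststore paths and passwords are placeholders:

```
security.protocol=SSL
ssl.truststore.location=/var/ssl/private/client.truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/var/ssl/private/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```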

KRaft authentication

By default, KRaft controllers inherit the authentication configuration of the Kafka cluster. A specific authentication configuration just for KRaft is not required.

Confluent Ansible supports the following authentication modes for Kafka brokers and KRaft controllers in KRaft mode:

  • SASL/PLAIN: Uses a simple username and password for authentication.

  • SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.

    You can override the global Kafka authentication and configure KRaft with Kerberos.

  • SASL/SCRAM: Uses salted and hashed passwords for authentication.

    SCRAM is only supported for controller-to-broker communications and is not supported for controller-to-controller communications.

  • mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.

    You can override the global Kafka authentication and configure KRaft with mTLS.

  • OAuth/OIDC: Uses your own identity provider to manage authentication and authorization across your Confluent Platform deployments, both in the cloud and on premises.

Configure SASL/GSSAPI (Kerberos) authentication

By default, KRaft controllers inherit the Kafka Kerberos settings.

To enable SASL/GSSAPI (Kerberos) authentication specifically for KRaft, set the following variables in hosts.yml:

all:
  vars:
    kafka_controller_sasl_protocol: kerberos

Each host also needs these variables set. The KRaft controller and the Kafka brokers must have the same primary names (set in the Kerberos principal).

kafka_controller:
  vars:
    kafka_controller_kerberos_keytab_path: "/tmp/keytabs/kafka-{{inventory_hostname}}.keytab"
    kafka_controller_kerberos_principal: "kafka/{{inventory_hostname}}@confluent.example.com"

For example:

kafka_controller:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM

For additionally required Kerberos settings, see Kafka Kerberos settings.

Configure mTLS authentication

To configure mutual TLS (mTLS) authentication, you must enable TLS encryption as described in Configure Encryption for Confluent Platform with Ansible Playbooks.

By default, KRaft controllers inherit the global TLS and mTLS settings.

If you want to enable or disable mTLS specifically for KRaft, set boolean values to enable or disable mTLS authentication on the KRaft controllers (server-to-server and client-to-server) in the hosts.yml inventory file:

all:
  vars:
    kafka_controller_ssl_enabled: <true-or-false>
    kafka_controller_ssl_mutual_auth_enabled: <true-or-false>
    kafka_controller_ssl_client_authentication: <required-requested-or-none>

Configure SASL/SCRAM authentication

You can configure KRaft controllers with SASL/SCRAM authentication for communicating with Kafka brokers.

SASL/SCRAM is not supported for KRaft controller-to-controller communication.

To configure KRaft controllers with SASL/SCRAM authentication, provide two values in a comma-separated string for kafka_controller_sasl_protocol: in the kafka_controller: group in your inventory file:

kafka_controller:
  vars:
    kafka_controller_sasl_protocol: <value-1>,<value-2>

  • <value-1> specifies the authentication method for controller-to-controller communication. Specify kerberos or plain; scram and scram256 are NOT allowed.
  • <value-2> specifies the authentication method for controller-to-broker communication. Specify scram or scram256 to configure KRaft with SASL/SCRAM.

The following example configures SASL/SCRAM for KRaft:

all:
  vars:
    ansible_connection: ssh
    ansible_user: ec2-user
    ansible_become: true
    ansible_ssh_private_key_file: /home/ec2-user/guest.pem
    ansible_python_interpreter: /usr/bin/python3
    ssl_enabled: true
    sasl_protocol: scram
kafka_controller:
  vars:
    kafka_controller_sasl_protocol: plain,scram
  hosts:
    ec2-35-160-193-90.us-west-2.compute.amazonaws.com:

  • Controller-to-controller authentication: SASL/PLAIN

    kafka_controller:
      vars:
        kafka_controller_sasl_protocol: plain,scram
    
  • Controller-to-broker authentication: SASL/SCRAM

    kafka_controller:
      vars:
        kafka_controller_sasl_protocol: plain,scram
    
  • Inter-broker authentication and other inter-component authentication: SASL/SCRAM

    all:
      vars:
        sasl_protocol: scram
    

The following example configures SASL/SCRAM for Kafka brokers and SASL/PLAIN for KRaft:

all:
  vars:
    ansible_connection: ssh
    ansible_user: ec2-user
    ansible_become: true
    ansible_ssh_private_key_file: /home/ec2-user/guest.pem
    ansible_python_interpreter: /usr/bin/python3
    sasl_protocol: scram
    kafka_controller_sasl_protocol: plain

kafka_controller:
  hosts:
    ec2-35-85-153-223.us-west-2.compute.amazonaws.com:

  • Controller-to-controller and controller-to-broker authentication: SASL/PLAIN

    When you set a single value for kafka_controller_sasl_protocol: in the all: section of the inventory file, the same authentication method is used for controller-to-controller and controller-to-broker communications.

    You can specify plain or kerberos because SCRAM is not supported for controller-to-controller authentication.

    all:
      vars:
        kafka_controller_sasl_protocol: plain
    
  • Inter-broker authentication and other inter-component authentication: SASL/SCRAM

    all:
      vars:
        sasl_protocol: scram
    

REST-based Confluent components authentication

Confluent Ansible supports the following authentication modes for all REST-based Confluent Platform components other than Kafka:

  • HTTP Basic: Authenticates with a username and password.
  • mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
  • OAuth/OIDC: Uses your own identity provider to manage authentication and authorization across your Confluent Platform and deployments on cloud and on-premises.
  • OIDC SSO: Supported for Control Center and Confluent CLI.

By default, Confluent Platform components are installed with no authentication.

Configure mTLS authentication

To enable mTLS for all components, set the following parameters in the hosts.yml inventory file:

all:
  vars:
    ssl_enabled: true
    kafka_broker_rest_proxy_authentication_type: mtls
    schema_registry_authentication_type: mtls
    kafka_connect_authentication_type: mtls
    kafka_rest_authentication_type: mtls
    ksql_authentication_type: mtls
    control_center_next_gen_authentication_type: mtls
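
Once mTLS is enabled, REST calls must present a client certificate. For example, a hypothetical check against the Schema Registry subjects endpoint, where the host, port, and certificate paths are placeholders:

```
curl --cacert /var/ssl/private/ca.pem \
     --cert /var/ssl/private/client.pem \
     --key /var/ssl/private/client.key \
     https://schema-registry-host:8081/subjects
```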

Configure basic authentication

To enable basic authentication for Confluent Platform components, set the corresponding variables in the hosts.yml inventory file.

For example:

all:
  vars:
    kafka_broker_rest_proxy_authentication_type: basic
    schema_registry_authentication_type: basic
    kafka_connect_authentication_type: basic
    kafka_rest_authentication_type: basic
    ksql_authentication_type: basic
    control_center_next_gen_authentication_type: basic

    kafka_broker_rest_proxy_basic_users:
      client:
        principal: client
        password: client-secret
        roles: client,admin

    schema_registry_basic_users:
      client:
        principal: client
        password: client-secret
        roles: client,developer,admin

    kafka_connect_basic_users:
      admin:
        principal: user1
        password: password

    ksql_basic_users:
      admin:
        principal: user1
        password: user1-secret
        roles: user1
      client:
        principal: client
        password: client-secret
        roles: client

    kafka_rest_basic_users:
      client:
        principal: client
        password: client-secret
        roles: client

    control_center_next_gen_basic_users:
      client:
        principal: client
        password: client-secret
        roles: client

In Control Center with basic authentication, users with the Restricted role have read-only access.

For example, the following variables restrict the client user to read-only access and ensure that only the admin user has administrator rights:

control_center_next_gen_authentication_type: basic
control_center_next_gen_basic_users:
  client:
    principal: client
    password: client-secret
    roles: Restricted        --- [1]
  admin:
    principal: user1
    password: user1-secret
    roles: Administrator     --- [2]
  • [1] Set to Restricted for the users you want read-only access for.
  • [2] Set to Administrator for the users you want administrator access for.

Configure single sign-on authentication for Confluent Control Center and Confluent CLI

In Confluent Ansible, you can configure single sign-on (SSO) authentication for Control Center using OpenID Connect (OIDC).

As a prerequisite for SSO, you need to configure RBAC with the Metadata Service (MDS) and an OIDC identity provider.

To use SSO in Control Center or Confluent CLI, specify the following variables in your inventory file. For details on these variables, refer to Configure SSO for Confluent Control Center using OIDC.

  • sso_mode

    To enable SSO, set to oidc.

  • sso_groups_claim

    The groups claim in the JSON Web Token (JWT).

    Default: groups

  • sso_sub_claim

    The sub (subject) claim in the JWT.

    Default: sub

  • sso_issuer_url

    The issuer URL, which is typically the authorization server’s URL. This value is compared to the issuer claim in the JWT for verification.

  • sso_jwks_uri

    The JSON Web Key Set (JWKS) URI. It is used to verify any JSON Web Token (JWT) issued by the IdP.

  • sso_authorize_uri

    The base URI for the authorization endpoint, which initiates an OAuth authorization request.

  • sso_token_uri

    The IdP token endpoint, from which MDS requests a token.

  • sso_client_id

    The client ID for authorization and token requests to the IdP.

  • sso_client_password

    The client password for authorization and token requests to the IdP.

  • sso_groups_scope

    Optional. The name of the custom groups scope. Use this setting when the groups field is not present in tokens by default and you have configured a custom scope for issuing groups. The scope name can be anything, such as groups, allow_groups, or offline_access.

    offline_access is a well-defined scope used to request a refresh token. This scope can be requested when the sso_refresh_token setting is set to true. The scope is defined in the OIDC specification and is not specific to any IdP.

    Possible values: groups, openid, offline_access, etc.

    Default: groups

  • sso_refresh_token

    Configures whether the offline_access scope can be requested in the authorization URI. Set this to false if offline tokens are not allowed for the user or client in the IdP.

    As described in SSO Session management, for RBAC to work as expected, the default value of true should not be changed to false.

    Default: true

  • sso_cli

    To enable SSO in Confluent CLI, set it to true. When enabling SSO in the CLI, you must also provide sso_device_authorization_uri.

    Default: false

  • sso_device_authorization_uri

    The device authorization endpoint of the IdP. Required to enable SSO in Confluent CLI.

  • sso_idp_cert_path

    The TLS certificate (full path of the file on the control node) of the IdP domain for OIDC SSO in Control Center or Confluent CLI. Required when the IdP server has TLS enabled with a custom certificate.

The following is an example snippet of an inventory file for setting up Confluent Platform with RBAC, SASL/PLAIN protocol, and Control Center SSO:

all:
  vars:
    ansible_connection: ssh
    ansible_user: ec2-user
    ansible_become: true
    ansible_ssh_private_key_file: /home/ec2-user/guest.pem

    ## TLS Configuration - Custom Certificates
    ssl_enabled: true

    #### SASL Authentication Configuration ####
    sasl_protocol: plain

    ## RBAC Configuration
    rbac_enabled: true

    ## LDAP CONFIGURATION
    kafka_broker_custom_properties:
      ldap.java.naming.factory.initial: com.sun.jndi.ldap.LdapCtxFactory
      ldap.com.sun.jndi.ldap.read.timeout: 3000
      ldap.java.naming.provider.url: ldaps://ldap1:636
      ldap.java.naming.security.principal: uid=mds,OU=rbac,DC=example,DC=com
      ldap.java.naming.security.credentials: password
      ldap.java.naming.security.authentication: simple
      ldap.user.search.base: OU=rbac,DC=example,DC=com
      ldap.group.search.base: OU=rbac,DC=example,DC=com
      ldap.user.name.attribute: uid
      ldap.user.memberof.attribute.pattern: CN=(.*),OU=rbac,DC=example,DC=com
      ldap.group.name.attribute: cn
      ldap.group.member.attribute.pattern: CN=(.*),OU=rbac,DC=example,DC=com
      ldap.user.object.class: account

    ## LDAP USERS
    mds_super_user: mds
    mds_super_user_password: password
    kafka_broker_ldap_user: kafka_broker
    kafka_broker_ldap_password: password
    schema_registry_ldap_user: schema_registry
    schema_registry_ldap_password: password
    kafka_connect_ldap_user: connect_worker
    kafka_connect_ldap_password: password
    ksql_ldap_user: ksql
    ksql_ldap_password: password
    kafka_rest_ldap_user: rest_proxy
    kafka_rest_ldap_password: password
    control_center_next_gen_ldap_user: control_center
    control_center_next_gen_ldap_password: password

    ## Variables to enable SSO in Control Center
    sso_mode: oidc

    # necessary configs in MDS server for sso in C3
    sso_groups_claim: groups
    sso_sub_claim: sub
    sso_groups_scope: groups
    sso_issuer_url: <issuer url>
    sso_jwks_uri: <jwks uri>
    sso_authorize_uri: <OAuth authorization endpoint>
    sso_token_uri: <IdP token endpoint>
    sso_client_id: <client id>
    sso_client_password: <client password>
    sso_refresh_token: true

kafka_controller:
  hosts:
    demo-controller-0:
    demo-controller-1:
    demo-controller-2:

kafka_broker:
  hosts:
    demo-broker-0:
    demo-broker-1:
    demo-broker-2:

schema_registry:
  hosts:
    demo-sr-0:

kafka_connect:
  hosts:
    demo-connect-0:

kafka_rest:
  hosts:
    demo-rest-0:

ksql:
  hosts:
    demo-ksql-0:

control_center_next_gen:
  hosts:
    demo-c3-0:

OAuth/OIDC authentication for Kafka and other Confluent components

In Confluent Ansible, OAuth authentication can be configured to use client credentials or client assertions.

With credential-based OAuth, you authenticate with a client ID and password.

With assertion-based, passwordless OAuth, you authenticate with a client assertion: a JSON Web Token (JWT) that carries identity and security information and is presented as proof of the client’s identity.
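
The client assertion variables described later in this topic map onto JWT claims. The following Python sketch illustrates that mapping; it builds the claim set only and, for illustration, an unsigned token. It is not Confluent code, and a real assertion is always signed with the component's private key:

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_assertion_claims(issuer, subject, audience,
                           lifetime_s=300, include_jti=True, include_nbf=True):
    # Claim set that variables like *_client_assertion_issuer, *_sub,
    # *_audience, *_jti_include, and *_nbf_include correspond to.
    now = int(time.time())
    claims = {"iss": issuer, "sub": subject, "aud": audience,
              "iat": now, "exp": now + lifetime_s}
    if include_jti:
        claims["jti"] = str(uuid.uuid4())  # unique token ID, prevents replay
    if include_nbf:
        claims["nbf"] = now                # token not valid before this time
    return claims

def encode_unsigned(claims):
    # Illustration only: a real client assertion is signed with the
    # component's private key (for example RS256), never alg "none".
    header = {"alg": "none", "typ": "JWT"}
    return (b64url(json.dumps(header).encode()) + "." +
            b64url(json.dumps(claims).encode()) + ".")

claims = build_assertion_claims("ksql", "ksql", "https://idp.example.com/token")
```

The issuer, subject, and audience values here are hypothetical placeholders; your IdP defines what it accepts.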

Configuration variables for OAuth

To configure OAuth/OIDC authentication, set the required and optional variables in the hosts.yml inventory file.

Kafka broker (kafka_broker_) and KRaft controller (kafka_controller_) inherit the superuser properties (oauth_superuser_).

The following are the most commonly used variables to enable OAuth:

  • auth_mode

    Authentication mode for all Confluent Platform components. Possible values are ldap, oauth, ldap_with_oauth, mtls, and none.

    Set to oauth for an OAuth cluster and ldap_with_oauth for a cluster with both LDAP and OAuth support. When set to oauth or ldap_with_oauth, you must set oauth_jwks_uri, oauth_token_uri, oauth_issuer_url, oauth_superuser_client_id, and oauth_superuser_client_password.

  • oauth_superuser_client_id

    The client ID for authorization and token requests to the identity provider (IdP). This is the superuser for all MDS API requests.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none

  • oauth_superuser_client_password

    The password for oauth_superuser_client_id.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none

  • oauth_token_uri

    The IdP token endpoint, from which MDS requests a token when OAuth is enabled.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none

  • oauth_issuer_url

    The issuer URL, which is typically the authorization server’s URL. This value is compared to the issuer claim in the JSON Web Token (JWT) for verification.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none

  • oauth_jwks_uri

    The OAuth/OIDC provider URL from which the provider’s JWKS (JSON Web Key Set) can be retrieved.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none

  • <component_prefix>_oauth_user

    The OAuth client ID for the component to authenticate as.

  • <component_prefix>_oauth_client_assertion_issuer

    The issuer for the client assertion.

  • <component_prefix>_oauth_client_assertion_sub

    The subject for the client assertion.

  • <component_prefix>_oauth_client_assertion_audience

    The audience for the client assertion.

  • <component_prefix>_oauth_client_assertion_private_key_file

    Path to the file containing the private key for the client assertion.

  • <component_prefix>_oauth_client_assertion_template_file

    Path to the file containing the template for the client assertion.

  • <component_prefix>_client_assertion_private_key_passphrase

    Passphrase for the private key for the client assertion.

  • <component_prefix>_oauth_client_assertion_jti_include

    Whether to include a JSON Web Token ID (JTI) claim in the client assertion.

  • <component_prefix>_oauth_client_assertion_nbf_include

    Whether to include the not-before time (nbf) claim in the client assertion.

For a full list of variables related to OAuth, see the Confluent Ansible variables file at:

https://github.com/confluentinc/cp-ansible/blob/8.0.0-post/docs/VARIABLES.md

Configure OAuth authentication using client credentials

To enable credential-based OAuth on all Confluent Platform components, where clients authenticate with server using a client ID and a password, set the following variables:

all:
  vars:
    auth_mode: oauth
    oauth_superuser_client_id: <superuser_client_id>
    oauth_superuser_client_password: <superuser_client_secret>
    oauth_sub_claim: client_id
    oauth_groups_claim: groups
    oauth_token_uri: <idp_token_uri>
    oauth_issuer_url: <idp_issuer_url>
    oauth_jwks_uri: <idp_jwks_uri>
    oauth_expected_audience: Confluent,account,api://default
    schema_registry_oauth_user: <sr_client_id>
    schema_registry_oauth_password: <sr_client_secret>
    kafka_rest_oauth_user: <rp_client_id>
    kafka_rest_oauth_password: <rp_client_secret>
    kafka_connect_oauth_user: <connect_client_id>
    kafka_connect_oauth_password: <connect_client_secret>
    ksql_oauth_user: <ksql_client_id>
    ksql_oauth_password: <ksql_client_secret>
    control_center_next_gen_oauth_user: <c3_client_id>
    control_center_next_gen_oauth_password: <c3_client_secret>
    # Only needed when OAuth IdP server has TLS enabled with custom certificate.
    oauth_idp_cert_path: <cert_path>

For an example inventory file for a greenfield credential-based OAuth configuration, see the sample inventory file at:

https://github.com/confluentinc/cp-ansible/blob/8.0.0-post/docs/sample_inventories/oauth_greenfield.yml

Configure passwordless OAuth authentication

Starting with version 8.0, Confluent Ansible supports client assertion for Confluent Platform: secure credential management with passwordless authentication. It uses asymmetric key-based authentication, extending Confluent Platform OAuth, and allows you to:

  • Avoid deploying usernames and passwords while securing Confluent Platform.
  • Streamline and automate periodic client credential rotation for client applications without manual intervention.

In Confluent Ansible 8.0, OAuth client assertion is not supported for Confluent Control Center.

To configure client assertion on Confluent Platform components:

  1. Enable client assertion for Confluent Platform components using the following variables:

    Kafka broker (kafka_broker_) and KRaft controller (kafka_controller_) inherit the superuser properties (oauth_superuser_) if not set.

    oauth_superuser_oauth_client_assertion_enabled: true
    kafka_broker_oauth_client_assertion_enabled: true
    kafka_controller_oauth_client_assertion_enabled: true
    schema_registry_oauth_client_assertion_enabled: true
    kafka_connect_oauth_client_assertion_enabled: true
    ksql_oauth_client_assertion_enabled: true
    kafka_rest_oauth_client_assertion_enabled: true
    kafka_connect_replicator_oauth_client_assertion_enabled: true
    kafka_connect_replicator_producer_oauth_client_assertion_enabled: true
    kafka_connect_replicator_erp_oauth_client_assertion_enabled: true
    kafka_connect_replicator_consumer_erp_oauth_client_assertion_enabled: true
    
  2. Set the other dependent variables listed below. Refer to the previous step for the <component_prefix> values.

    <component_prefix>_oauth_user:  # client ID, currently in use
    <component_prefix>_oauth_client_assertion_issuer:
    <component_prefix>_oauth_client_assertion_sub:
    <component_prefix>_oauth_client_assertion_audience:
    <component_prefix>_oauth_client_assertion_private_key_file:
    <component_prefix>_oauth_client_assertion_template_file:  # optional
    <component_prefix>_client_assertion_private_key_passphrase:  # optional
    <component_prefix>_oauth_client_assertion_jti_include:  # optional
    <component_prefix>_oauth_client_assertion_nbf_include:  # optional
    

    Example configurations:

    ksql_oauth_client_assertion_enabled: true
    ksql_oauth_client_assertion_issuer: ksql
    ksql_oauth_client_assertion_audience: https://oauth1:8443/realms/cp-ansible-realm
    ksql_oauth_client_assertion_private_key_file: "my-tokenKeypair.pem"
    

    Currently, there is no first-class support for the properties listed below, which are optional fields in OAuth and in client assertion. You can set them using custom properties, <component_prefix>_custom_properties.

    kafka_broker_custom_properties:
      *.login.connect.timeout.ms
      *.login.read.timeout.ms
      *.login.retry.backoff.max.ms
      *.login.retry.backoff.ms
    

JWT assertion retrieval from file flow

In the JWT assertion retrieval from file flow, the JSON Web Token (JWT) is retrieved from a file instead of being generated locally by the component.

Note

JWT assertion retrieval from file flow is not recommended for production environments. Use local client assertion flow instead.

To configure JWT assertion retrieval from file flow:

  1. Set the OAuth client assertion variables.

  2. Enable the JWT assertion retrieval from file flow using the following variables for Confluent Platform components. Set each variable to the directory where the client assertion files exist.

    oauth_superuser_oauth_client_assertion_file_base_path:
    kafka_broker_oauth_client_assertion_file_base_path:
    kafka_controller_oauth_client_assertion_file_base_path:
    schema_registry_oauth_client_assertion_file_base_path:
    kafka_connect_oauth_client_assertion_file_base_path:
    ksql_oauth_client_assertion_file_base_path:
    kafka_rest_oauth_client_assertion_file_base_path:
    kafka_connect_replicator_oauth_client_assertion_file_base_path:
    kafka_connect_replicator_producer_oauth_client_assertion_file_base_path:
    kafka_connect_replicator_erp_oauth_client_assertion_file_base_path:
    kafka_connect_replicator_consumer_erp_oauth_client_assertion_file_base_path:
    
  3. Each component acting as a client to a server component must have an individual assertion file under the base path you set (<server component>_oauth_client_assertion_file_base_path:) to prevent token reuse issues.

    The following is an example ksqlDB directory structure for JWT assertion retrieval from file flow:

    ksql_oauth_client_assertion_file_base_path/kafka_client.jwt
    ksql_oauth_client_assertion_file_base_path/schema_registry_client.jwt
    ksql_oauth_client_assertion_file_base_path/mds_client.jwt
    ksql_oauth_client_assertion_file_base_path/ksql_client.jwt
    

    For a full list of client assertion files, see the Confluent Ansible variables file at:

    https://github.com/confluentinc/cp-ansible/blob/8.0.0-post/roles/variables/vars/main.yml