Configure Authorization for Confluent Platform with Ansible Playbooks

Role-based access control

You can use Ansible Playbooks for Confluent Platform (Confluent Ansible) to configure role-based access control (RBAC). The Kafka broker hosts will be configured as the Metadata Service (MDS) hosts.

By default, there are two Kafka listeners: one for inter-broker communication and one for the Confluent Platform components. However, the component listener uses an authentication mode unsupported for external clients, and it is recommended that you configure at least one additional listener for your Kafka clients. See Configure listeners for details.

Do NOT customize the listener named internal when configuring RBAC.

Role-based access control using OAuth/OIDC

Open Authorization (OAuth) 2.0 is an open-standard authorization protocol that allows applications to obtain secure, delegated access. You can leverage your own identity provider and centralize identity management across your Confluent Platform cluster and other service deployments in the cloud and on premises.

Starting with Confluent Ansible 7.7, you can configure Confluent components with OAuth and OIDC, an OAuth-based authentication mechanism.

For the OAuth overview in Confluent Platform, see OAuth 2.0 for Confluent Platform.

To enable OAuth/OIDC on Confluent components using Confluent Ansible, Confluent Platform version 7.7 or later is required.

The auth_mode: oauth setting enables OAuth authorization and authentication for Kafka, MDS, and all Confluent Platform components.

In the LDAP and OAuth dual authentication mode, you have the option to disable the server-level OAuth in specific Confluent Platform components. For example, if you want to use the LDAP-based basic authentication on Schema Registry, you set schema_registry_auth_mode: ldap. All Schema Registry clients in this case need to have basic authentication credentials, and all Kafka and MDS clients need OAuth credentials.
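For example, a dual-mode inventory that keeps OAuth everywhere except Schema Registry might look like the following sketch (values are placeholders):

```yaml
all:
  vars:
    # Dual authentication mode: LDAP and OAuth
    auth_mode: ldap_with_oauth
    # Schema Registry accepts LDAP-based basic authentication instead of OAuth
    schema_registry_auth_mode: ldap
```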

Requirements

  • An OIDC-compliant identity provider (IdP).
  • Port 8090 must be opened on the Kafka brokers and accessible by all hosts.
  • Set up one principal in OIDC for the MDS admin user to bootstrap roles and permissions for the Confluent Platform component principals. It is recommended that you create a user named superuser.
  • Set up one principal per Confluent Platform component in your OIDC server. These users are used by the Confluent Platform components to authenticate to MDS and access their respective resources. In the examples below, the following component users are used:
    • Confluent Server: kafka_broker
    • Schema Registry: schema_registry
    • Connect: connect_worker
    • ksqlDB: ksql
    • REST Proxy: rest_proxy
    • Confluent Server REST API: kafka_broker
    • Control Center: control_center
  • Set up Confluent Platform with OAuth/OIDC authentication.

Settings for OAuth

To enable OAuth, set the required and optional variables in the hosts.yml inventory file.

The following are the most commonly used variables to enable OAuth:

  • auth_mode

    Authorization mode on all Confluent Platform components. Possible values are ldap, oauth, ldap_with_oauth, mtls, and none.

    Set to oauth for OAuth cluster and ldap_with_oauth for cluster with both LDAP and OAuth support. When set to oauth or ldap_with_oauth, you must set oauth_jwks_uri, oauth_token_uri, oauth_issuer_url, oauth_superuser_client_id, oauth_superuser_client_password.

  • oauth_superuser_client_id

    Client ID for authorization and token requests to an identity provider (IdP). This is the super user for all MDS API requests.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none

  • oauth_superuser_client_password

    The password for oauth_superuser_client_id.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none

  • oauth_token_uri

    The IdP token endpoint, from where a token is requested by MDS when OAuth is enabled.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none

  • oauth_issuer_url

    The issuer URL, which is typically the authorization server’s URL. This value is compared against the issuer claim in the JSON Web Token (JWT) for verification.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none

  • oauth_jwks_uri

    The OAuth/OIDC provider URL from which the provider’s JWKS (JSON Web Key Set) can be retrieved.

    Required when auth_mode is set to oauth or ldap_with_oauth.

    Default: none
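Taken together, a minimal OAuth section of hosts.yml looks like the following sketch. The endpoint URLs are placeholders; a Keycloak-style URL layout is shown here purely as an assumption, so substitute your IdP's actual endpoints:

```yaml
all:
  vars:
    rbac_enabled: true
    auth_mode: oauth
    # Placeholder IdP endpoints; replace with your provider's real URLs
    oauth_token_uri: https://idp.example.com/realms/cp/protocol/openid-connect/token
    oauth_issuer_url: https://idp.example.com/realms/cp
    oauth_jwks_uri: https://idp.example.com/realms/cp/protocol/openid-connect/certs
    oauth_superuser_client_id: superuser
    oauth_superuser_client_password: password
```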

For a full list of variables related to OAuth, see the Confluent Ansible variables file at:

https://github.com/confluentinc/cp-ansible/blob/8.0.0-post/docs/VARIABLES.md

For an example inventory file for a greenfield OAuth configuration, see the sample inventory file at:

https://github.com/confluentinc/cp-ansible/blob/8.0.0-post/docs/sample_inventories/oauth_greenfield.yml

Required settings for RBAC

Sample inventory files for RBAC configuration are provided in the sample_inventories directory under the Confluent Ansible home directory:

https://github.com/confluentinc/cp-ansible/blob/8.0.0-post/docs/sample_inventories/

Add the required variables in your hosts.yml to enable and configure RBAC.

Enable RBAC with Ansible

all:
  vars:
    rbac_enabled: true

Provide OAuth details for RBAC to look up and validate users

all:
  vars:
    auth_mode: oauth
    oauth_token_uri: <idp_token_endpoint>
    oauth_issuer_url: <idp_issuer_url>
    oauth_jwks_uri: <idp_jwks_uri>

Provide the super user credentials for bootstrapping RBAC within Confluent Platform

The superuser user is used in the inventory file in the sample_inventories directory mentioned above.

all:
  vars:
    oauth_superuser_client_id: superuser
    oauth_superuser_client_password: password

Provide OAuth users for Confluent Platform components

The following users are configured in the inventory file in the sample_inventories directory mentioned above.

all:
  vars:
    kafka_broker_oauth_user: kafka_broker
    kafka_broker_oauth_password: password
    schema_registry_oauth_user: schema_registry
    schema_registry_oauth_password: password
    kafka_connect_oauth_user: connect_worker
    kafka_connect_oauth_password: password
    ksql_oauth_user: ksql
    ksql_oauth_password: password
    kafka_rest_oauth_user: rest_proxy
    kafka_rest_oauth_password: password
    control_center_next_gen_oauth_user: control_center
    control_center_next_gen_oauth_password: password

Migrate from LDAP to OAuth

You can migrate a Confluent Platform deployment of version 7.7.0 or higher from LDAP to OAuth. Upgrading the Confluent Platform version and migrating to OAuth simultaneously is not supported.

To migrate an existing Confluent Platform deployment from LDAP to OAuth:

  1. All the required LDAP variables must be set in the hosts.yml inventory file.

  2. Set the required OAuth variables listed above.

  3. Set auth_mode: ldap_with_oauth in the hosts.yml inventory file.

  4. Run the following command:

    ansible-playbook -i <hosts.yml> confluent.platform.all --skip-tags package
    
  5. While in the dual mode (LDAP with OAuth), migrate the clients to OAuth.
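During the migration phase, the inventory therefore carries both sets of variables at once. A condensed sketch (placeholder values) might look like:

```yaml
all:
  vars:
    rbac_enabled: true
    # Dual mode: existing LDAP clients keep working while OAuth is rolled out
    auth_mode: ldap_with_oauth
    # Existing LDAP settings remain in place (see the LDAP section)
    mds_super_user: mds
    mds_super_user_password: password
    # New OAuth settings (placeholder endpoints)
    oauth_token_uri: https://idp.example.com/token
    oauth_issuer_url: https://idp.example.com
    oauth_jwks_uri: https://idp.example.com/jwks
    oauth_superuser_client_id: superuser
    oauth_superuser_client_password: password
```

Once all clients present OAuth credentials, the LDAP variables can be removed and auth_mode set to oauth.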

Role-based access control using LDAP

Requirements

  • An LDAP server reachable from your Kafka broker hosts.
  • Port 8090 must be opened on the Kafka brokers and accessible by all hosts.
  • Set up one principal in LDAP for the MDS admin user to bootstrap roles and permissions for the Confluent Platform component principals. This principal must have the search privilege to read and query users and groups in the LDAP server. It is recommended that you create a user named mds.
  • Set up one principal per Confluent Platform component in your LDAP server. These users are used by the Confluent Platform components to authenticate to MDS and access their respective resources. In the examples below, the following component users are used:
    • Confluent Server: kafka_broker
    • Schema Registry: schema_registry
    • Connect: connect_worker
    • ksqlDB: ksql
    • REST Proxy: rest_proxy
    • Confluent Server REST API: kafka_broker
    • Control Center: control_center
  • (Optional) Generate a key pair to be used by the OAuth-enabled listener as described in Create a PEM key pair.

Note

If using mTLS or Kerberos for inter-broker authentication, you don’t need to set up an LDAP user for Kafka brokers.

Required settings for RBAC

Sample inventory files for RBAC configuration are provided in the sample_inventories directory under the Confluent Ansible home directory:

https://github.com/confluentinc/cp-ansible/blob/8.0.0-post/docs/sample_inventories/

Add the required variables in your hosts.yml to enable and configure RBAC.

Enable RBAC with Ansible

all:
  vars:
    rbac_enabled: true

Provide LDAP server details for RBAC to look up and validate users

Consult your LDAP admins and see Configure LDAP Group-Based Authorization for MDS and Configure LDAP Authentication for the custom properties you need to set.

The following is an example ldap_config with sample LDAP properties.

kafka_broker:
  vars:
    kafka_broker_custom_properties:
      ldap.java.naming.factory.initial: com.sun.jndi.ldap.LdapCtxFactory
      ldap.com.sun.jndi.ldap.read.timeout: 3000
      ldap.java.naming.provider.url: ldap://ldap1:389
      ldap.java.naming.security.principal: uid=mds,OU=rbac,DC=example,DC=com
      ldap.java.naming.security.credentials: password
      ldap.java.naming.security.authentication: simple
      ldap.user.search.base: OU=rbac,DC=example,DC=com
      ldap.group.search.base: OU=rbac,DC=example,DC=com
      ldap.user.name.attribute: uid
      ldap.user.memberof.attribute.pattern: CN=(.*),OU=rbac,DC=example,DC=com
      ldap.group.name.attribute: cn
      ldap.group.member.attribute.pattern: CN=(.*),OU=rbac,DC=example,DC=com
      ldap.user.object.class: account

If using LDAPS and if the certificate authority is in Kafka’s truststore, include the following properties in the kafka_broker_custom_properties dictionary:

kafka_broker:
  vars:
  kafka_broker_custom_properties:
    ldap.java.naming.provider.url: ldaps://<ldap-host>:636
    ldap.java.naming.security.protocol: SSL
    ldap.ssl.truststore.location: "{{kafka_broker_truststore_path}}"
    ldap.ssl.truststore.password: "{{kafka_broker_truststore_storepass}}"

If using LDAPS and if the CA is not in Kafka’s truststore, you can use the copy file feature:

kafka_broker:
  vars:
    kafka_broker_copy_files:
      - source_path: /path/to/truststore.jks
        destination_path: /var/ssl/private/ldaps.truststore.jks
    kafka_broker_custom_properties:
      ldap.java.naming.provider.url: ldaps://<ldap-host>:636
      ldap.java.naming.security.protocol: SSL
      ldap.ssl.truststore.location: /var/ssl/private/ldaps.truststore.jks
      ldap.ssl.truststore.password: <password>

Provide the super user credentials for bootstrapping RBAC within Confluent Platform

The mds user is used in the inventory file in the sample_inventories directory mentioned above.

all:
  vars:
    mds_super_user: mds
    mds_super_user_password: password

Provide LDAP users for Confluent Platform components

The following users are configured in the inventory file in the sample_inventories directory mentioned above.

all:
  vars:
    kafka_broker_ldap_user: kafka_broker
    kafka_broker_ldap_password: password
    schema_registry_ldap_user: schema_registry
    schema_registry_ldap_password: password
    kafka_connect_ldap_user: connect_worker
    kafka_connect_ldap_password: password
    ksql_ldap_user: ksql
    ksql_ldap_password: password
    kafka_rest_ldap_user: rest_proxy
    kafka_rest_ldap_password: password
    control_center_next_gen_ldap_user: control_center
    control_center_next_gen_ldap_password: password

Note

When installing one or more individual Confluent Platform components, use the certificate_authority tag. The tag creates the pem files that MDS requires and copies the files to the managed servers.

ansible-playbook -i hosts.yml confluent.platform.all --tags=certificate_authority

For complete information on installing individual components, see Install individual Confluent Platform components.

Role-based access control using mTLS

Settings for RBAC with mTLS

Sample inventory files for RBAC configurations are provided in the sample_inventories directory under the Confluent Ansible home directory:

https://github.com/confluentinc/cp-ansible/blob/8.0.0-post/docs/sample_inventories/

Add the required variables in your inventory file to enable and configure RBAC with mTLS.

The following are the most commonly used variables to enable RBAC with mTLS:

  • rbac_enabled

    Set to true for RBAC.

  • auth_mode

    Authorization mode on all Confluent Platform components.

    Set to mtls for RBAC with mTLS only.

  • mds_ssl_client_authentication

    The configuration of the MDS server to enforce SSL client authentication on MDS.

    The MDS server will use mTLS certificates for authentication and the principal extracted from certificates for authorization.

    Options are:

    • none: The client does not need to send a certificate. If the client sends one, it is ignored.
    • requested: Clients may or may not send certificates. A client that does not send a certificate must supply LDAP or OAuth credentials or a token to provide the principal. This option is used during upgrades.
    • required: The client must send certificates to the server.

    Default: none

  • ssl_client_authentication

    Kafka broker listeners configuration to enforce SSL client authentication.

    Options are:

    • none: The client does not need to send a certificate. If the client sends one, it is ignored.
    • requested: Clients may or may not send certificates. A client that does not send a certificate must supply LDAP or OAuth credentials or a token to provide the principal. This option is used during upgrades.
    • required: The client must send certificates to the server.

    Default: none

  • <component>_ssl_client_authentication

    The component-level setting for Schema Registry, Connect, and REST Proxy to enforce SSL client authentication.

    Options are:

    • none: The client does not need to send a certificate. If the client sends one, it is ignored.
    • requested: Clients may or may not send certificates. A client that does not send a certificate must supply LDAP or OAuth credentials or a token to provide the principal. This option is used during upgrades.
    • required: The client must send certificates to the server.

    Default: The value of ssl_client_authentication

  • erp_ssl_client_authentication

    Embedded REST Proxy server’s configuration to enforce SSL client authentication on Embedded REST Proxy.

    Options are:

    • none: The client does not need to send a certificate. If the client sends one, it is ignored.
    • requested: Clients may or may not send certificates. A client that does not send a certificate must supply LDAP or OAuth credentials or a token to provide the principal. This option is used during upgrades.
    • required: The client must send certificates to the server.

    Default: mds_ssl_client_authentication value

  • impersonation_super_users

    Required for auth_mode: mtls.

    A list of principals allowed to get an impersonation token for other users except the impersonation-protected users (impersonation_protected_users).

    For more information, see Enable Token-based Authentication for RBAC.

    For example:

    impersonation_super_users:
      - 'kafka_broker'
      - 'kafka_rest'
      - 'schema_registry'
      - 'kafka_connect'
    

    Default: None

  • impersonation_protected_users

    Required for RBAC with mTLS only.

    A list of principals who cannot be impersonated by REST Proxy. Super users should be added here to disallow them from being impersonated.

    For example:

    impersonation_protected_users:
      - 'super_user'
    
  • principal_mapping_rules

    The rules to map a distinguished name from the certificates to a short principal name.

    Default: DEFAULT

    For example:

    principal_mapping_rules:
       - "RULE:.*CN=([a-zA-Z0-9.-_]*).*$/$1/"
       - "DEFAULT"
    

    For details about principal mapping rules, see Principal Mapping Rules for SSL Listeners.
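To check what a mapping rule yields before rolling it out, you can extract a certificate's subject with openssl and apply the regex portion of the rule by hand. The sketch below generates a throwaway self-signed certificate with a hypothetical DN and mimics the CN extraction with sed (the sed expression is a hand translation of the rule's regex, not the exact MDS implementation):

```shell
# Create a throwaway self-signed certificate with a hypothetical DN
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/C=US/O=CONFLUENT/CN=kafka_broker"

# Show the subject in RFC 2253 form, as MDS sees it before mapping
openssl x509 -noout -subject -nameopt RFC2253 -in /tmp/demo-cert.pem

# Apply the regex from RULE:.*CN=([a-zA-Z0-9.-_]*).*$/$1/ by hand
openssl x509 -noout -subject -nameopt RFC2253 -in /tmp/demo-cert.pem \
  | sed -E 's/.*CN=([a-zA-Z0-9._-]*).*$/\1/'
# prints: kafka_broker
```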

Note

In any communication, if LDAP credentials or an IdP token are present, the principal comes from the credentials or token. If only a certificate is present, the principal is extracted from the certificate.

File-based authentication for Confluent Control Center

When RBAC is configured with mTLS authentication only, set up file-based authentication for Control Center using the custom properties in the inventory file. It is required for Control Center UI login. For example:

all:
  vars:
    mds_file_based_user_store_enabled: true --- [1]
    mds_file_based_user_store_src_path:     --- [2]
    mds_file_based_user_store_dest_path:    --- [3]
    mds_file_based_user_store_remote_src:   --- [4]
  • [1] Set true for file-based authentication.

  • [2] The credential file path. Each line in the file has <username>:<password>. For example:

    user1:password1
    user2:password2
    

    Confluent Ansible does not validate the correctness and syntax of the credentials file.

  • [3] The destination path. Confluent Ansible copies the credential file to ALL Kafka brokers running MDS and puts it in mds_file_based_user_store_dest_path.

  • [4] Set to true if mds_file_based_user_store_src_path is on the target node, or to false if it is on the control node.

    • If set to false, the credential file will be copied from the control node to the target host.
    • If set to true, the credential file will be moved from the source path to the destination path on the target host.
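A filled-in version of the snippet above might look like the following sketch (the paths and credentials are placeholders, not defaults):

```yaml
all:
  vars:
    mds_file_based_user_store_enabled: true
    # Credential file prepared on the Ansible control node (placeholder path)
    mds_file_based_user_store_src_path: /home/ansible/credentials.txt
    # Destination on every Kafka broker running MDS
    mds_file_based_user_store_dest_path: /etc/kafka/credentials.txt
    # false: copy the file from the control node to the target hosts
    mds_file_based_user_store_remote_src: false
```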

Migrate to mTLS

Using Confluent Ansible, you can migrate an RBAC-enabled Confluent Platform deployment to use mTLS. The feature extends the support for mTLS RBAC from greenfield (new) deployments to brownfield (existing) deployments, enhancing security for your Confluent Platform environments.

Note that dual authentication is supported for OAuth with mTLS or LDAP with mTLS.

This section describes the steps to migrate from an OAuth-based or LDAP-based RBAC deployment to an mTLS-based RBAC deployment.

Note

Migration from a dual OAuth and LDAP-based RBAC deployment to an mTLS-based RBAC can be achieved in two steps:

  1. Migrate LDAP and OAuth to OAuth only
  2. Migrate OAuth to mTLS

For comprehensive sample scenarios with example inventory files, see the Confluent Ansible GitHub repository.

Migrate from OAuth or LDAP to mTLS

This section describes a workflow to migrate from OAuth or LDAP-based RBAC to mTLS-based RBAC.

  1. Enable RBAC with mTLS as described in Role-based access control using mTLS.

  2. Update the SSL client authentication and other required changes in the inventory file and apply the ansible-playbook -i hosts.yml confluent.platform.all command.

    The updates mentioned in this step have to be applied in a single ansible-playbook run.

    1. Update the SSL client authentication (ssl_client_authentication) to requested for MDS, Confluent Platform components, and Kafka listeners:

      The following is an example snippet of an inventory file with additional variables and comments:

      # Do a rolling upgrade to avoid downtime
      deployment_strategy: rolling
      
      # Enable mTLS on MDS
      mds_ssl_client_authentication: requested
      
      # Enable mTLS on all other components, except for Zookeeper
      <component>_ssl_client_authentication: requested
      <component>_ssl_mutual_auth_enabled: true
      
      # Zookeeper to Kafka (for Confluent Platform 7.8 or 7.9 only) should remain
      # as it is.
      # When enabling mTLS, do NOT enable zookeeper_ssl_client_authentication
      # for Zookeeper to Kafka communication.
      # Instead, use the following variable.
      # If Zookeeper to Kafka communication has mTLS set up, set the below to true.
      # If Zookeeper to Kafka communication does not have mTLS, set it to false.
      zookeeper_ssl_mutual_auth_enabled: false
      
      kafka_broker_custom_listeners:
        # This is an existing listener having clients
        # thus enabling in ssl_client_authentication: requested mode
         external:
           name: external1
           port: <port>
           sasl_protocol: plain
           ssl_enabled: true
           ssl_mutual_auth_enabled: true
           ssl_client_authentication: requested
      
    2. Add a new listener with mTLS authentication in the inventory file. Since it is a new listener, it has no clients when created and thus can have ssl_client_authentication: required.

      kafka_broker_custom_listeners:
         puremtls:
           name: puremtls
           port: <port>
           sasl_protocol: none
           ssl_enabled: true
           ssl_client_authentication: required
      
      # Adding this changes Confluent Platform component to MDS communication
      # from LDAP/OAuth+mTLS to mTLS
      <component>_mds_cert_auth_only: true
      
      # Change the internal listener to the new custom listener for Confluent
      # components to communicate with Kafka using mTLS
      <component>_kafka_listener_name: puremtls
      
      
    3. Add Confluent Platform component principals used for communicating with MDS.

      Since the communication is happening over mTLS, you need to add certificate principals. Certificate principals can be extracted by applying the principal mapping rules on the Common Name (CN).

      1. Extract the subject:

        openssl x509 -noout -subject -nameopt RFC2253 -in <certificate>
        

        If you do not have any principal mapping rules, the subject is the principal.

        If you have principal mapping rules, apply the rules to the subject. For example, if the subject is C=US,ST=Ca,L=PaloAlto,O=CONFLUENT,OU=TEST,CN=kafka_broker and the principal mapping rule is RULE:.*CN=([a-zA-Z0-9.-_]*).*$/$1/,DEFAULT, the principal is kafka_broker.

      2. Add the Kafka broker, REST Proxy, Schema Registry, and Connect principals in the inventory file:

        impersonation_super_users:
         - 'kafka_broker'
         - 'kafka_rest'
         - 'schema_registry'
         - 'kafka_connect'
        
    4. For Control Center, you can use SSO or MDS file-based authentication when migrating from LDAP. But do not configure both SSO and file-based authentication.

  3. Upgrade the clients.

    1. Add the proper role bindings for all the client certificates that will be used in the following steps. You can use the confluent iam rbac role-binding create CLI command.
    2. Update all the clients (Kafka clients and other Confluent Platform component clients) to send certificates when talking to the server.

    When all the clients start sending certificates, proceed to the next step.

  4. Move to the mTLS required mode for MDS, Confluent Platform components, and Kafka listeners.

    auth_mode: mtls
    
    mds_ssl_client_authentication: required
    
    <component>_ssl_client_authentication: required
    <component>_ssl_mutual_auth_enabled: true
    
    kafka_broker_custom_listeners:
       external:
         name: external1
         port: <port>
         sasl_protocol: plain
         ssl_enabled: true
         ssl_mutual_auth_enabled: true
         ssl_client_authentication: required
    
  5. Clean up the LDAP and OAuth settings.

    For the dual-mode RBAC (LDAP or OAuth with mTLS) where the inter-broker communication happens over LDAP or OAuth, keep the LDAP or OAuth configuration variables.

    For the mTLS-only RBAC where the inter-broker and Confluent Platform component to Kafka communicate over mTLS, remove the LDAP and OAuth configuration variables.

Note

Even when the server is in the client authentication requested mode, if a client only sends a certificate, then the principal is extracted from the certificate.

When a client sends both a certificate and a token/username-password, then the principal is extracted from the token/username, and the certificate is only used for authentication and not authorization.

Optional settings for RBAC

Add the optional settings in your hosts.yml to enable and configure RBAC.

Provide your own MDS server certificates and key pair for OAuth

all:
  vars:
    create_mds_certs: false

    token_services_public_pem_file: # Path to public.pem
    token_services_private_pem_file: # Path to tokenKeypair.pem

Disable MDS-based ACLs

all:
  vars:
    mds_acls_enabled: false

By default, MDS-based ACLs are enabled when RBAC is enabled.

Configure additional principals as system admins

user1 and group1 are used in this example.

all:
  vars:
    rbac_component_additional_system_admins:
      - User:user1
      - Group:group1

Other considerations

Kerberos

When setting sasl_protocol: kerberos, you need keytabs/principals for KRaft, Kafka, and external clients. See Configure SASL/GSSAPI (Kerberos) authentication.

However, because the rest of Confluent Platform components use their own listener, there is no need to create Kerberos principals and keytabs for those components. They will authenticate to Kafka using their LDAP or OAuth user/password.
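A minimal Kerberos sketch for the broker listener might look like the following. The variable names are taken from the Confluent Ansible variables documentation and the realm, hostnames, and keytab paths are placeholders; verify the exact names against VARIABLES.md for your Confluent Ansible version:

```yaml
all:
  vars:
    sasl_protocol: kerberos
    # Let Confluent Ansible lay down krb5 client configuration (assumption:
    # you are not managing /etc/krb5.conf yourself)
    kerberos_configure: true
    kerberos:
      realm: EXAMPLE.COM
      kdc_hostname: kdc.example.com
      admin_hostname: kdc.example.com

kafka_broker:
  hosts:
    broker1.example.com:
      # Per-host keytab and principal (placeholder values)
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-broker1.keytab
      kafka_broker_kerberos_principal: kafka/broker1.example.com@EXAMPLE.COM
```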

Role-based access control with centralized MDS

Starting in Confluent Platform 6.0.0, you can use Ansible Playbooks for Confluent Platform to configure role-based access control (RBAC) with a centralized Metadata Service (MDS) on a remote Confluent Platform cluster.

This section provides the additional requirements and settings for the configuration.

The requirements and configuration details in Role-based access control also apply for this configuration.

Requirements

  • The public key used by the MDS for its OAuth-enabled listener must be provided to the current cluster that you are setting RBAC on.
  • The principal used for authenticating to the remote MDS cluster must be a super user on the remote MDS cluster.

Required settings for RBAC with centralized MDS

To enable and configure RBAC with the centralized MDS, add the following mandatory variables in your inventory file.

Enable RBAC centralized MDS with Ansible

all:
  vars:
    external_mds_enabled: true

Provide the centralized MDS bootstrap URLs

Specify the URL for the MDS REST API on the Kafka cluster hosting MDS:

all:
  vars:
    mds_bootstrap_server_urls:

For example:

all:
  vars:
    mds_bootstrap_server_urls: https://ip-172-31-34-246.us-east-1.compute.internal:8090,https://ip-172-31-34-246.us-east-2.compute.internal:8090

Provide the centralized MDS bootstrap servers

Specify a list of the hostnames and ports for the listeners hosting the MDS that you wish to connect to: <mds-broker-hostname1>:<port>,<mds-broker-hostname2>:<port>

all:
  vars:
    mds_broker_bootstrap_servers:

For example:

all:
  vars:
    mds_broker_bootstrap_servers: ip-172-31-43-14.us-west-1.compute.internal:9093,ip-172-31-43-14.us-west-2.compute.internal:9093

Provide the centralized MDS broker listener security configuration

Specify the security settings of the remote Kafka brokers (mds_broker_bootstrap_servers) that the centralized MDS runs on:

all:
  vars:
    mds_broker_listener:
      ssl_enabled:               --- [1]
      ssl_client_authentication: --- [2]
      ssl_mutual_auth_enabled:   --- [3]
      sasl_protocol:             --- [4]
  • [1] Set ssl_enabled to true if the remote MDS uses TLS.

  • [2] Set ssl_client_authentication to required if the remote MDS uses mTLS.

  • [3] Set ssl_mutual_auth_enabled to true if the remote MDS uses mTLS.

  • [4] Set sasl_protocol to the SASL protocol for the remote MDS. Options are: none, kerberos, sasl_plain, and sasl_scram.

    The MDS listener must have an authentication mode, mTLS, Kerberos, SASL/PLAIN, or SASL/SCRAM.

    You can set sasl_protocol to none only if ssl_enabled ([1]) is set to true and ssl_client_authentication ([2]) is set to required, thereby specifying the mTLS authentication mode for the listener.

The following example is for mTLS on the centralized MDS brokers:

all:
  vars:
    mds_broker_listener:
      ssl_enabled: true
      ssl_mutual_auth_enabled: true
      sasl_protocol: none

Provide the paths to the centralized MDS server certificates and key pair for OAuth

all:
  vars:
    create_mds_certs: false
    token_services_public_pem_file:
    token_services_private_pem_file:

Cluster registry

You can use Ansible Playbooks for Confluent Platform to name your clusters within the cluster registries in Confluent Platform.

Cluster registry provides a way to centrally register and identify Kafka clusters in the metadata service (MDS) to simplify the RBAC role binding process and to enable centralized audit logging.

Register the Kafka clusters in the MDS cluster registry using the following variables in the inventory file of the cluster.

  • To register a Kafka cluster in the MDS:

    kafka_broker_cluster_name:
    
  • To register a Schema Registry cluster in the MDS:

    schema_registry_cluster_name:
    
  • To register a Kafka Connect cluster in the MDS:

    kafka_connect_cluster_name:
    
  • To register a ksqlDB cluster in the MDS:

    ksql_cluster_name:
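For example, to register all four cluster types under human-readable names (the names themselves are placeholders you choose):

```yaml
all:
  vars:
    kafka_broker_cluster_name: prod-kafka
    schema_registry_cluster_name: prod-schema-registry
    kafka_connect_cluster_name: prod-connect
    ksql_cluster_name: prod-ksql
```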