Configure Authentication for Confluent Platform with Ansible Playbooks

Kafka authentication

Ansible Playbooks for Confluent Platform supports the following authentication modes for Kafka:

  • SASL PLAIN: Uses a simple username and password for authentication.
  • SASL SCRAM: Uses usernames and passwords stored in ZooKeeper. Credentials are created during installation.
  • SASL GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
  • mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.

By default, Kafka is installed with no authentication.

Configure SASL PLAIN authentication

To configure SASL PLAIN authentication, set the following option in the hosts.yml file.

all:
  vars:
    sasl_protocol: plain

During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.

To configure additional users, add the following section in the hosts.yml file. The following example adds three additional users.

all:
  vars:
    sasl_plain_users:
      user1:
        principal: user1
        password: my-secret
      user2:
        principal: user2
        password: my-secret
      user3:
        principal: user3
        password: my-secret
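
Clients authenticate with one of these credentials. As an illustration only (this file is not generated by the playbook), a Kafka client properties file using the user1 credentials from the example above could look like the following. Use SASL_SSL instead of SASL_PLAINTEXT if TLS encryption is enabled.

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="user1" \
  password="my-secret";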

Configure SASL SCRAM authentication

To configure SASL SCRAM authentication, set the following option in the hosts.yml file:

all:
  vars:
    sasl_protocol: scram

During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.

To configure additional users, add the following section in the hosts.yml file:

all:
  vars:
    sasl_scram_users:
      user1:
        principal: user1
        password: my-secret
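
Because SCRAM credentials are stored in ZooKeeper, you can check that a user was created with the kafka-configs tool after installation. The command below is an illustration; depending on your Confluent Platform version, you may need to use --bootstrap-server instead of --zookeeper.

kafka-configs --zookeeper localhost:2181 \
  --describe --entity-type users --entity-name user1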

Configure SASL GSSAPI (Kerberos) authentication

The Ansible playbook does not currently configure a Key Distribution Center (KDC) or an Active Directory KDC. You must set up your own KDC independently of the playbook and provide your own keytabs to configure SASL GSSAPI (SASL with Kerberos):

  • Create principals within your organization’s Kerberos KDC server for each component and for each host in each component.
  • Generate keytabs for these principals. The keytab files must be present on the Ansible control node.
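
As an illustration, with an MIT Kerberos KDC you can create a principal and export its keytab with kadmin. The principal and keytab names below are examples only, following the naming used later in this section:

kadmin -q "addprinc -randkey kafka/kafka1.hostname.com@EXAMPLE.COM"
kadmin -q "ktadd -k /tmp/keytabs/kafka-kafka1.hostname.com.keytab kafka/kafka1.hostname.com@EXAMPLE.COM"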

To install Kerberos packages and configure the /etc/krb5.conf file on each host, add the following configuration parameters in the hosts.yml file.

  • Specify the primary part of the Kafka broker principal. For example, for the principal kafka/kafka1.hostname.com@EXAMPLE.COM, the primary is kafka.

    all:
      vars:
        kerberos_kafka_broker_primary: <kafka-principal-primary>
    
  • Specify whether to install Kerberos packages and to configure the /etc/krb5.conf file. The default is true.

    If the hosts already have the /etc/krb5.conf file configured, set kerberos_configure to false.

    all:
      vars:
        kerberos_configure: <true-or-false>
    
  • Specify the realm part of the Kafka broker Kerberos principal.

    all:
      vars:
        kerberos:
          realm: <kafka-principal-realm>
    
  • Specify the hostname of the machine running the KDC and the hostname of the Kerberos admin server (often the same machine).

    all:
      vars:
        kerberos:
          kdc_hostname: <kdc-hostname>
          admin_hostname: <kdc-hostname>
    

The example below shows the Kerberos configuration settings for the Kerberos principal, kafka/kafka1.hostname.com@EXAMPLE.COM.

all:
  vars:
    kerberos_kafka_broker_primary: kafka
    kerberos_configure: true
    kerberos:
      realm: example.com
      kdc_hostname: ip-192-24-45-82.us-west.compute.internal
      admin_hostname: ip-192-24-45-82.us-west.compute.internal
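
When kerberos_configure is true, the playbook writes an /etc/krb5.conf on each host from these values. Conceptually, the settings above correspond to a file like the following sketch; the generated file may contain additional entries:

[libdefaults]
  default_realm = example.com

[realms]
  example.com = {
    kdc = ip-192-24-45-82.us-west.compute.internal
    admin_server = ip-192-24-45-82.us-west.compute.internal
  }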

The following examples show the variables that need to be added for each component to use Kerberos for authentication. Each host must have these variables set in the hosts.yml file.

ZooKeeper section

Use the following example to enter the zookeeper_kerberos_keytab_path and zookeeper_kerberos_principal variables. Each host must include these two variables, set to that host's own keytab and principal. Three hosts are shown in the example.

zookeeper:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-34-224.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-37-15.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-38-46.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-38-46.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-38-46.us-west.compute.internal@REALM.EXAMPLE.COM

Kafka broker section

Use the following example to enter the kafka_broker_kerberos_keytab_path and kafka_broker_kerberos_principal variables. Each host must include these two variables, set to that host's own keytab and principal. The example shows three hosts.

kafka_broker:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-38-46.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-38-46.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-38-46.us-west.compute.internal@REALM.EXAMPLE.COM

Schema Registry section

Use the following example to enter the schema_registry_kerberos_keytab_path and schema_registry_kerberos_principal variables. Each host must include these two variables. The example shows one host.

schema_registry:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      schema_registry_kerberos_keytab_path: /tmp/keytabs/schemaregistry-ip-192-24-34-224.us-west.compute.internal.keytab
      schema_registry_kerberos_principal: schemaregistry/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM

Kafka Connect section

Use the following example to enter the kafka_connect_kerberos_keytab_path and kafka_connect_kerberos_principal variables. Each host must include these two variables. The example shows one host.

kafka_connect:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_connect_kerberos_keytab_path: /tmp/keytabs/connect-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_connect_kerberos_principal: connect/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM

REST Proxy section

Use the following example to enter the kafka_rest_kerberos_keytab_path and kafka_rest_kerberos_principal variables. Each host must include these two variables. The example shows one host.

kafka_rest:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_rest_kerberos_keytab_path: /tmp/keytabs/restproxy-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_rest_kerberos_principal: restproxy/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM

ksqlDB section

Use the following example to enter the ksql_kerberos_keytab_path and ksql_kerberos_principal variables. Each host must include these two variables. The example shows one host.

ksql:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      ksql_kerberos_keytab_path: /tmp/keytabs/ksql-ip-192-24-34-224.us-west.compute.internal.keytab
      ksql_kerberos_principal: ksql/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM

Control Center section

Use the following example to enter the control_center_kerberos_keytab_path and control_center_kerberos_principal variables. Each host must include these two variables. The example shows one host.

control_center:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      control_center_kerberos_keytab_path: /tmp/keytabs/controlcenter-ip-192-24-34-224.us-west.compute.internal.keytab
      control_center_kerberos_principal: controlcenter/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
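
Before running the playbook, it can be useful to confirm that each keytab and principal pair is valid from the Ansible control node, assuming the Kerberos client tools (kinit, klist) are installed. For example, for one of the broker keytabs above:

kinit -kt /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab \
  kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
klist
kdestroy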

Configure mTLS authentication

To configure mutual TLS (mTLS) authentication, you must also enable TLS encryption.

Set the following parameters in the hosts.yml file:

all:
  vars:
    ssl_enabled: true
    ssl_mutual_auth_enabled: true
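
With mTLS, clients authenticate with a certificate instead of a username and password. The client properties below are an illustration only; the keystore and truststore paths and passwords are placeholders, not values created by the playbook.

security.protocol=SSL
ssl.keystore.location=/var/ssl/private/client.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
ssl.truststore.location=/var/ssl/private/client.truststore.jks
ssl.truststore.password=<truststore-password>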

ZooKeeper authentication

Ansible Playbooks for Confluent Platform supports the following authentication modes for ZooKeeper:

  • SASL with DIGEST-MD5: Uses hashed values of the user’s password for authentication.
  • SASL GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
  • mTLS: Ensures that traffic is secure and trusted in both directions between ZooKeeper and clients.

By default, ZooKeeper is installed with no authentication.

Configure SASL DIGEST-MD5

To enable SASL with DIGEST-MD5 authentication for ZooKeeper, set the following variable in hosts.yml:

all:
  vars:
    zookeeper_sasl_protocol: digest
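
DIGEST-MD5 authentication in ZooKeeper is driven by a JAAS configuration that the playbook generates. Conceptually, the server-side entry looks like the following sketch; the user name and password shown are placeholders, not the credentials the playbook creates. Each user_<name>="<password>" line defines one allowed client credential.

Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="admin-secret";
};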

Configure SASL GSSAPI (Kerberos)

To enable SASL GSSAPI (Kerberos) authentication for ZooKeeper, set the following variable in hosts.yml:

all:
  vars:
    zookeeper_sasl_protocol: kerberos

Configure mTLS

To enable mTLS authentication for ZooKeeper, set the following variables in hosts.yml:

all:
  vars:
    zookeeper_ssl_enabled: true
    zookeeper_ssl_mutual_auth_enabled: true

Components authentication

Ansible Playbooks for Confluent Platform supports mTLS authentication for all other Confluent Platform components.

By default, Confluent Platform components are installed with no authentication.

To enable mTLS for all components, set the following parameters in the hosts.yml file:

all:
  vars:
    ssl_enabled: true
    ssl_mutual_auth_enabled: true