Configure Authentication for Confluent Platform with Ansible Playbooks¶
Kafka authentication¶
Ansible Playbooks for Confluent Platform (Confluent Ansible) supports the following authentication modes for Kafka in ZooKeeper mode:
- SASL/PLAIN: Uses a simple username and password for authentication.
- SASL/SCRAM: Uses usernames and passwords stored in ZooKeeper. Credentials are created during installation.
- SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
Confluent Ansible supports the following authentication modes for Kafka brokers and Kafka controllers in KRaft mode:
- SASL/PLAIN: Uses a simple username and password for authentication.
- SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
By default, Kafka is installed with no authentication.
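Authentication is configured independently of TLS encryption; when a SASL protocol and TLS are both enabled, the Kafka listeners use SASL_SSL. A minimal sketch combining the two (each variable is covered in the sections below and in Configure Encryption for Confluent Platform with Ansible Playbooks):
all:
  vars:
    ssl_enabled: true
    sasl_protocol: plain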
Configure SASL/PLAIN authentication¶
To configure SASL/PLAIN authentication, set the following in the hosts.yml file. In addition to the default users, the code snippet adds three users, user1, user2, and user3, as an example.
The default keys for sasl_plain_users are required for Confluent Platform components: admin for the Kafka brokers, the client user for use by external components, and schema_registry, kafka_connect, ksql, control_center, kafka_rest, and kafka_connect_replicator.
all:
vars:
sasl_protocol: plain
sasl_plain_users:
admin:
principal: 'admin'
password: 'admin-secret'
schema_registry:
principal: 'schema-registry'
password: 'schema_registry-secret'
kafka_connect:
principal: 'kafka-connect'
password: 'kafka_connect-secret'
ksql:
principal: 'ksqldb'
password: 'ksql-secret'
kafka_rest:
principal: 'kafka_rest'
password: 'kafka_rest-secret'
control_center:
principal: 'control-center'
password: 'control_center-secret'
kafka_connect_replicator:
principal: 'kafka_connect_replicator'
password: 'kafka_connect_replicator-secret'
client:
principal: 'client'
password: 'client-secret'
user1:
principal: 'user1'
password: my-secret
user2:
principal: 'user2'
password: my-secret
user3:
principal: 'user3'
password: my-secret
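The sasl_protocol value above applies to the default Kafka listeners. Confluent Ansible can also attach a SASL protocol to an individual listener through kafka_broker_custom_listeners; the sketch below adds a dedicated client listener (the listener name and port are illustrative):
all:
  vars:
    kafka_broker_custom_listeners:
      client_listener:
        name: CLIENT
        port: 9093
        sasl_protocol: plain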
Configure SASL/SCRAM (SHA-512) authentication¶
To configure SASL/SCRAM authentication with SHA-512, set the following option in
the hosts.yml
file:
all:
vars:
sasl_protocol: scram
During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.
To configure additional users, add the following section in the hosts.yml
file:
all:
vars:
sasl_scram_users:
user1:
principal: user1
password: my-secret
Configure SASL/SCRAM (SHA-256) authentication¶
To configure SASL/SCRAM authentication with SHA-256, set the following option in
the hosts.yml
file:
all:
vars:
sasl_protocol: scram256
During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.
To configure additional users, add the following section in the hosts.yml
file:
all:
vars:
sasl_scram256_users:
user1:
principal: user1
password: my-secret
Configure SASL/GSSAPI (Kerberos) authentication¶
Confluent Ansible does not currently configure a Kerberos Key Distribution Center (KDC) or Active Directory KDC. You must set up your own KDC independently of the playbooks and provide your own keytabs to configure SASL/GSSAPI (SASL with Kerberos):
- Create principals within your organization’s Kerberos KDC server for each component and for each host in each component.
- Generate keytabs for these principals. The keytab files must be present on the Ansible control node.
To install Kerberos packages and configure the client configuration file on each host, add the following configuration parameters in the hosts.yml file.
Specify whether to install Kerberos packages and to configure the client configuration file. The default value is true. If the hosts already have the client configuration file configured, set kerberos_configure to false.
all:
  vars:
    kerberos_configure: <true-or-false>
Specify the client configuration file. The default value is /etc/krb5.conf. Use this variable only when you want to specify a custom location for the client configuration file.
all:
  vars:
    kerberos_client_config_file_dest: <client-config-file-path>
If kerberos_configure is set to true, Confluent Ansible generates the client configuration file at this location on the host nodes. If kerberos_configure is set to false, Confluent Ansible expects the client configuration file to be present at this location on the host nodes.
Specify the realm part of the Kafka broker Kerberos principal and the hostname of the machine running the KDC.
all:
  vars:
    kerberos:
      realm: <kafka-principal-realm>
      kdc_hostname: <kdc-hostname>
      admin_hostname: <kdc-hostname>
The example below shows the Kerberos configuration settings for the Kerberos
principal, kafka/kafka1.hostname.com@EXAMPLE.COM
.
all:
vars:
kerberos_configure: true
kerberos:
realm: example.com
kdc_hostname: ip-192-24-45-82.us-west.compute.internal
admin_hostname: ip-192-24-45-82.us-west.compute.internal
Each host in the inventory also needs variables that define its Kerberos principal and the location of its keytab on the Ansible control node.
The hosts.yml
should look like:
zookeeper:
hosts:
ip-192-24-34-224.us-west.compute.internal:
zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-34-224.us-west.compute.internal.keytab
zookeeper_kerberos_principal: zookeeper/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
ip-192-24-37-15.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-37-15.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_controller:
hosts:
ip-192-24-34-224.us-west.compute.internal:
kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
kafka_controller_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
ip-192-24-37-15.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_broker:
hosts:
ip-192-24-34-224.us-west.compute.internal:
kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
kafka_broker_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
ip-192-24-37-15.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
schema_registry:
hosts:
ip-192-24-34-224.us-west.compute.internal:
schema_registry_kerberos_keytab_path: /tmp/keytabs/schemaregistry-ip-192-24-34-224.us-west.compute.internal.keytab
schema_registry_kerberos_principal: schemaregistry/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_connect:
hosts:
ip-192-24-34-224.us-west.compute.internal:
kafka_connect_kerberos_keytab_path: /tmp/keytabs/connect-ip-192-24-34-224.us-west.compute.internal.keytab
kafka_connect_kerberos_principal: connect/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_rest:
hosts:
ip-192-24-34-224.us-west.compute.internal:
kafka_rest_kerberos_keytab_path: /tmp/keytabs/restproxy-ip-192-24-34-224.us-west.compute.internal.keytab
kafka_rest_kerberos_principal: restproxy/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
ksql:
hosts:
ip-192-24-34-224.us-west.compute.internal:
ksql_kerberos_keytab_path: /tmp/keytabs/ksql-ip-192-24-34-224.us-west.compute.internal.keytab
ksql_kerberos_principal: ksql/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
control_center:
hosts:
ip-192-24-34-224.us-west.compute.internal:
control_center_kerberos_keytab_path: /tmp/keytabs/controlcenter-ip-192-24-34-224.us-west.compute.internal.keytab
control_center_kerberos_principal: controlcenter/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
Note
To better support Active Directory, Confluent Ansible enables canonicalization
by default. If canonicalization has not been enabled during the Confluent Platform cluster
creation, such as when you upgrade a Confluent Platform cluster that uses Kerberos to
authenticate Kafka brokers to ZooKeeper, explicitly set the following property in
the hosts.yml
inventory file.
kerberos:
canonicalize: false
Configure mTLS authentication¶
To configure mutual TLS (mTLS) authentication, you must enable TLS encryption as described in Configure Encryption for Confluent Platform with Ansible Playbooks.
Set the following parameters in the hosts.yml file:
all:
vars:
ssl_enabled: true
ssl_mutual_auth_enabled: true
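If you bring your own certificates instead of letting the playbooks generate self-signed ones, point Confluent Ansible at them with the custom-certificate variables; a minimal sketch (the file paths are illustrative):
all:
  vars:
    ssl_enabled: true
    ssl_mutual_auth_enabled: true
    ssl_custom_certs: true
    ssl_ca_cert_filepath: "/tmp/certs/ca.crt"
    ssl_signed_cert_filepath: "/tmp/certs/{{inventory_hostname}}-signed.crt"
    ssl_key_filepath: "/tmp/certs/{{inventory_hostname}}-key.pem"
    ssl_key_password: <key-password>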
Principal mapping rules for Kafka¶
By default, the user name associated with an SSL connection is of the form,
"CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
. You can
customize the SSL principal name by extracting one of the fields from
the long distinguished name, for example, CN
, as the principal name.
In the Kafka broker configuration, use the ssl.principal.mapping.rules
property in the Kafka custom properties
to specify a list of rules for mapping a distinguished name to a short
principal name.
For example:
kafka_broker:
hosts:
ip-192-24-10-207.us-west.compute.internal:
kafka_broker_custom_properties:
        ssl.principal.mapping.rules: "RULE:.*O=(.*?),OU=TEST.*$$/$$1/,RULE:^cn=(.*?),ou=(.*?),dc=(.*?),dc=(.*?)$$/$$1@$$2/L"
The $ characters used in the rule need to be escaped with another $, as shown in the example above.
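As a simpler illustration of the same escaping, the following sketch maps every certificate whose distinguished name starts with a common name to just that common name, and leaves all other principals unchanged via DEFAULT (this rule is an example, not a required setting):
all:
  vars:
    kafka_broker_custom_properties:
      ssl.principal.mapping.rules: "RULE:^CN=(.*?),.*$$/$$1/,DEFAULT"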
For details about principal mapping rules, see Principal Mapping Rules for SSL Listeners.
KRaft authentication¶
By default, KRaft controllers inherit the authentication configuration of the Kafka cluster and do not require specific authentication configuration just for KRaft.
Confluent Ansible supports the following authentication modes for Kafka brokers and Kafka controllers in KRaft mode:
- SASL/PLAIN: Uses a simple username and password for authentication. This authentication method is set at the Kafka level and cannot be overridden just for KRaft.
- SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication. You can override the global Kafka authentication and configure KRaft with Kerberos.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients. You can override the global Kafka authentication and configure KRaft with mTLS.
Configure SASL/GSSAPI (Kerberos) authentication¶
By default, KRaft controllers inherit the Kafka Kerberos settings.
To enable SASL/GSSAPI (Kerberos) authentication specifically for KRaft, set
the following variables in hosts.yml
:
all:
vars:
kafka_controller_sasl_protocol: kerberos
Each host also needs these variables set. The Kafka controllers and the brokers must have the same primary name, which is set in the Kerberos principal.
kafka_controller:
vars:
kafka_controller_kerberos_keytab_path: "/tmp/keytabs/kafka-{{inventory_hostname}}.keytab"
kafka_controller_kerberos_principal: "kafka/{{inventory_hostname}}@confluent.example.com"
For example:
kafka_controller:
hosts:
ip-192-24-34-224.us-west.compute.internal:
kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
kafka_controller_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
ip-192-24-37-15.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
For additionally required Kerberos settings, see Kafka Kerberos settings.
Configure mTLS authentication¶
To configure mutual TLS (mTLS) authentication, you must enable TLS encryption as described in Configure Encryption for Confluent Platform with Ansible Playbooks.
By default, KRaft controllers inherit the global TLS and mTLS settings.
If you want to enable or disable mTLS specifically for KRaft, specify boolean values to enable or disable mTLS authentication on the KRaft controllers (server to server and client to server) in the hosts.yml file:
all:
vars:
    kafka_controller_ssl_enabled: <true-or-false>
    kafka_controller_ssl_mutual_auth_enabled: <true-or-false>
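For example, to require mTLS on the KRaft controllers regardless of the broker listener settings, set both values to true:
all:
  vars:
    kafka_controller_ssl_enabled: true
    kafka_controller_ssl_mutual_auth_enabled: true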
ZooKeeper authentication¶
Ansible Playbooks for Confluent Platform supports configuring ZooKeeper Server to Server and Client to Server authentication modes. Kafka acts as a ZooKeeper client.
By default, ZooKeeper is installed with no authentication.
Server to Server authentication¶
Ansible Playbooks for Confluent Platform supports the following ZooKeeper server to server authentication modes:
- SASL with DIGEST-MD5: Uses hashed values of the user’s password for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
Configure SASL with DIGEST-MD5 authentication¶
To enable SASL with DIGEST-MD5 authentication for ZooKeeper, set the following
variable in hosts.yml
:
all:
vars:
zookeeper_quorum_authentication_type: digest
Configure mTLS authentication¶
To enable mTLS authentication for ZooKeeper, set the following variable in
hosts.yml
:
all:
vars:
zookeeper_quorum_authentication_type: mtls
Client to Server authentication¶
Ansible Playbooks for Confluent Platform supports the following ZooKeeper client to server authentication modes:
- SASL with DIGEST-MD5: Uses hashed values of the user’s password for authentication.
- SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
Configure SASL with DIGEST-MD5 authentication¶
To enable SASL with DIGEST-MD5 authentication for ZooKeeper, set the following
variable in hosts.yml
:
all:
vars:
zookeeper_client_authentication_type: digest
Configure SASL/GSSAPI (Kerberos) authentication¶
To enable SASL/GSSAPI (Kerberos) authentication for ZooKeeper, set the following
variable in hosts.yml
:
all:
vars:
zookeeper_client_authentication_type: kerberos
Each host will also need these variables set:
zookeeper_kerberos_principal: "zk/{{inventory_hostname}}.confluent@{{kerberos.realm | upper}}"
zookeeper_kerberos_keytab_path: "roles/confluent.test/molecule/{{scenario_name}}/keytabs/zookeeper-{{inventory_hostname}}.keytab"
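For example, with the variables expanded for one ZooKeeper host (the keytab path and realm are illustrative):
zookeeper:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-34-224.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zk/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM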
Note
To better support Active Directory, Confluent Ansible enables canonicalization
by default. If canonicalization has not been enabled during the Confluent Platform cluster
creation, such as when you upgrade a Confluent Platform cluster that uses Kerberos to
authenticate Kafka brokers to ZooKeeper, explicitly set the following property in
the hosts.yml
inventory file.
kerberos:
canonicalize: false
Configure mTLS authentication¶
To enable mTLS authentication for ZooKeeper, set the following variable in
hosts.yml
:
all:
vars:
zookeeper_client_authentication_type: mtls
Components authentication¶
Ansible Playbooks for Confluent Platform supports mTLS and basic authentication modes for all Confluent Platform components other than Kafka and ZooKeeper.
By default, Confluent Platform components are installed with no authentication.
Configure mTLS authentication¶
To enable mTLS for all components, set the following parameters in the hosts.yml file:
all:
vars:
ssl_enabled: true
kafka_broker_rest_proxy_authentication_type: mtls
schema_registry_authentication_type: mtls
kafka_connect_authentication_type: mtls
kafka_rest_authentication_type: mtls
ksql_authentication_type: mtls
control_center_authentication_type: mtls
Configure basic authentication¶
To enable basic authentication for a component, set the following parameters in
hosts.yml
:
all:
vars:
kafka_broker_rest_proxy_authentication_type: basic
schema_registry_authentication_type: basic
kafka_connect_authentication_type: basic
kafka_rest_authentication_type: basic
ksql_authentication_type: basic
control_center_authentication_type: basic
kafka_broker_rest_proxy_basic_users:
client:
principal: client
password: client-secret
roles: client,admin
schema_registry_basic_users:
client:
principal: client
password: client-secret
roles: client,developer,admin
kafka_connect_basic_users:
admin:
principal: user1
password: password
ksql_basic_users:
admin:
principal: user1
password: user1-secret
roles: user1
client:
principal: client
password: client-secret
roles: client
kafka_rest_basic_users:
client:
principal: client
password: client-secret
roles: client
control_center_basic_users:
client:
principal: client
password: client-secret
roles: client