Configure Authentication for Confluent Platform with Ansible Playbooks¶
Kafka authentication¶
Ansible Playbooks for Confluent Platform supports the following authentication modes for Kafka:
- SASL PLAIN: Uses a simple username and password for authentication.
- SASL SCRAM: Uses usernames and passwords stored in ZooKeeper. Credentials get created during installation.
- SASL GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
By default, Kafka is installed with no authentication.
Configure SASL PLAIN authentication¶
To configure SASL PLAIN authentication, set the following in the hosts.yml file. In addition to the default users, the code snippet adds three example users: user1, user2, and user3.

The default keys for sasl_plain_users are required for Confluent Platform components: admin for the Kafka brokers, the client user for use by external components, and schema_registry, kafka_connect, ksql, control_center, kafka_rest, and kafka_connect_replicator.
```yaml
all:
  vars:
    sasl_protocol: plain
    sasl_plain_users:
      admin:
        principal: 'kafka'
        password: 'admin-secret'
      schema_registry:
        principal: 'schema-registry'
        password: 'schema_registry-secret'
      kafka_connect:
        principal: 'kafka-connect'
        password: 'kafka_connect-secret'
      ksql:
        principal: 'ksqldb'
        password: 'ksql-secret'
      kafka_rest:
        principal: 'kafka_rest'
        password: 'kafka_rest-secret'
      control_center:
        principal: 'control-center'
        password: 'control_center-secret'
      kafka_connect_replicator:
        principal: 'kafka_connect_replicator'
        password: 'kafka_connect_replicator-secret'
      client:
        principal: 'client'
        password: 'client-secret'
      user1:
        principal: 'user1'
        password: my-secret
      user2:
        principal: 'user2'
        password: my-secret
      user3:
        principal: 'user3'
        password: my-secret
```
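To connect with one of the users above, a Kafka client needs matching SASL settings. The following is a minimal sketch of a client properties file for the client user; the bootstrap server address and the SASL_PLAINTEXT protocol are assumptions to adjust for your deployment (use SASL_SSL when TLS encryption is enabled):

```shell
# Write a client configuration for the 'client' user defined in hosts.yml.
# kafka1.hostname.com:9092 is a placeholder bootstrap server.
cat > client.properties <<'EOF'
bootstrap.servers=kafka1.hostname.com:9092
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="client" \
  password="client-secret";
EOF
```

You can then pass this file to a client, for example with kafka-console-producer --producer.config client.properties.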
Configure SASL SCRAM (SHA-512) authentication¶
To configure SASL SCRAM authentication with SHA-512, set the following option in the hosts.yml file:

```yaml
all:
  vars:
    sasl_protocol: scram
```
During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.
To configure additional users, add the following section in the hosts.yml file:

```yaml
all:
  vars:
    sasl_scram_users:
      user1:
        principal: user1
        password: my-secret
```
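On the client side, the only differences from SASL PLAIN are the mechanism name and the login module. A minimal sketch of a client properties file for the user1 account above, again assuming a placeholder bootstrap server and SASL_PLAINTEXT:

```shell
# Client configuration for the SCRAM user defined above.
cat > client-scram.properties <<'EOF'
bootstrap.servers=kafka1.hostname.com:9092
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="user1" \
  password="my-secret";
EOF
```

For the SHA-256 variant described below, set sasl.mechanism to SCRAM-SHA-256 instead.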
Configure SASL SCRAM (SHA-256) authentication¶
To configure SASL SCRAM authentication with SHA-256, set the following option in the hosts.yml file:

```yaml
all:
  vars:
    sasl_protocol: scram256
```
During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.
To configure additional users, add the following section in the hosts.yml file:

```yaml
all:
  vars:
    sasl_scram256_users:
      user1:
        principal: user1
        password: my-secret
```
Configure SASL GSSAPI (Kerberos) authentication¶
The Ansible playbook does not currently set up a Key Distribution Center (KDC) or Active Directory KDC. You must set up your own KDC independently of the playbook and provide your own keytabs to configure SASL GSSAPI (SASL with Kerberos):
- Create principals within your organization’s Kerberos KDC server for each component and for each host in each component.
- Generate keytabs for these principals. The keytab files must be present on the Ansible control node.
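The two steps above can be sketched as a loop that generates the kadmin.local commands for each component principal. The realm, host names, and keytab directory are placeholders for your environment; run the generated commands on your KDC, then copy the resulting keytabs to the Ansible control node:

```shell
# Placeholders: substitute your Kerberos realm, hosts, and keytab directory.
REALM=REALM.EXAMPLE.COM
KEYTAB_DIR=/tmp/keytabs
mkdir -p "$KEYTAB_DIR"

# One principal and keytab per component, per host the component runs on.
{
  for entry in \
      "zookeeper ip-192-24-34-224.us-west.compute.internal" \
      "kafka ip-192-24-34-224.us-west.compute.internal" \
      "schemaregistry ip-192-24-34-224.us-west.compute.internal"; do
    set -- $entry
    component=$1
    host=$2
    echo "addprinc -randkey ${component}/${host}@${REALM}"
    echo "ktadd -k ${KEYTAB_DIR}/${component}-${host}.keytab ${component}/${host}@${REALM}"
  done
} > "$KEYTAB_DIR/kdc-commands.txt"
```

Feeding the resulting file to kadmin.local on the KDC creates the principals and writes one keytab per component/host pair, matching the keytab paths used in the inventory below.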
To install Kerberos packages and configure the /etc/krb5.conf file on each host, add the following configuration parameters in the hosts.yml file.

- Specify whether to install Kerberos packages and to configure the /etc/krb5.conf file. The default is true. If the hosts already have the /etc/krb5.conf file configured, set kerberos_configure to false.

  ```yaml
  all:
    vars:
      kerberos_configure: <true-or-false>
  ```

- Specify the realm part of the Kafka broker Kerberos principal and the hostname of the machine running the KDC.

  ```yaml
  all:
    vars:
      kerberos:
        realm: <kafka-principal-realm>
        kdc_hostname: <kdc-hostname>
        admin_hostname: <kdc-hostname>
  ```
The example below shows the Kerberos configuration settings for the Kerberos principal, kafka/kafka1.hostname.com@EXAMPLE.COM.

```yaml
all:
  vars:
    kerberos_configure: true
    kerberos:
      realm: example.com
      kdc_hostname: ip-192-24-45-82.us-west.compute.internal
      admin_hostname: ip-192-24-45-82.us-west.compute.internal
```
Each host in the inventory also needs variables that define its Kerberos principal and the location of the keytab on the Ansible control node. The hosts.yml file should look like:

```yaml
zookeeper:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-34-224.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-37-15.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_broker:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
schema_registry:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      schema_registry_kerberos_keytab_path: /tmp/keytabs/schemaregistry-ip-192-24-34-224.us-west.compute.internal.keytab
      schema_registry_kerberos_principal: schemaregistry/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_connect:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_connect_kerberos_keytab_path: /tmp/keytabs/connect-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_connect_kerberos_principal: connect/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_rest:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_rest_kerberos_keytab_path: /tmp/keytabs/restproxy-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_rest_kerberos_principal: restproxy/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
ksql:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      ksql_kerberos_keytab_path: /tmp/keytabs/ksql-ip-192-24-34-224.us-west.compute.internal.keytab
      ksql_kerberos_principal: ksql/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
control_center:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      control_center_kerberos_keytab_path: /tmp/keytabs/controlcenter-ip-192-24-34-224.us-west.compute.internal.keytab
      control_center_kerberos_principal: controlcenter/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
```
Configure mTLS authentication¶
To configure mutual TLS (mTLS) authentication, you must enable TLS encryption. Set the following parameters in the hosts.yml file:

```yaml
all:
  vars:
    ssl_enabled: true
    ssl_mutual_auth_enabled: true
```
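With mTLS, each client must present a certificate the brokers trust. As an illustration of the moving parts only (not the playbook's own certificate handling), the following openssl sketch creates a throwaway CA and a client certificate signed by it; the subject names and validity period are arbitrary, and in practice you would use your organization's CA:

```shell
# Create a self-signed CA (illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=example-ca" \
  -keyout ca.key -out ca.crt

# Create a client key and a certificate signing request.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=client" \
  -keyout client.key -out client.csr

# Sign the client certificate with the CA.
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out client.crt
```

The broker truststore must contain ca.crt, and the client presents client.crt with client.key (typically packaged into a keystore for Java clients).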
ZooKeeper authentication¶
Ansible Playbooks for Confluent Platform supports configuring ZooKeeper server-to-server and client-to-server authentication. Kafka acts as a ZooKeeper client.
By default, ZooKeeper is installed with no authentication.
Server to Server authentication¶
Ansible Playbooks for Confluent Platform supports the following ZooKeeper server to server authentication modes:
- SASL with DIGEST-MD5: Uses hashed values of the user’s password for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between ZooKeeper servers.
Configure SASL DIGEST-MD5 authentication¶
To enable SASL with DIGEST-MD5 authentication for ZooKeeper, set the following variable in hosts.yml:

```yaml
all:
  vars:
    zookeeper_quorum_authentication_type: digest
```
Configure mTLS authentication¶
To enable mTLS authentication for ZooKeeper, set the following variable in hosts.yml:

```yaml
all:
  vars:
    zookeeper_quorum_authentication_type: mtls
```
Client to Server authentication¶
Ansible Playbooks for Confluent Platform supports the following ZooKeeper client to server authentication modes:
- SASL with DIGEST-MD5: Uses hashed values of the user’s password for authentication.
- SASL GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between ZooKeeper and its clients.
Configure SASL DIGEST-MD5 authentication¶
To enable SASL with DIGEST-MD5 authentication for ZooKeeper, set the following variable in hosts.yml:

```yaml
all:
  vars:
    zookeeper_client_authentication_type: digest
```
Configure SASL GSSAPI (Kerberos) authentication¶
To enable SASL with Kerberos (GSSAPI) authentication for ZooKeeper, set the following variable in hosts.yml:

```yaml
all:
  vars:
    zookeeper_client_authentication_type: kerberos
```
Each host will also need these variables set:

```yaml
zookeeper_kerberos_principal: "zk/{{inventory_hostname}}.confluent@{{kerberos.realm | upper}}"
zookeeper_kerberos_keytab_path: "roles/confluent.test/molecule/{{scenario_name}}/keytabs/zookeeper-{{inventory_hostname}}.keytab"
```
Configure mTLS authentication¶
To enable mTLS authentication for ZooKeeper, set the following variable in hosts.yml:

```yaml
all:
  vars:
    zookeeper_client_authentication_type: mtls
```
Components authentication¶
Ansible Playbooks for Confluent Platform supports mTLS and basic authentication for all other Confluent Platform components.
By default, Confluent Platform components are installed with no authentication.
Configure mTLS authentication¶
To enable mTLS for all components, set the following parameters in the hosts.yml file:

```yaml
all:
  vars:
    ssl_enabled: true
    kafka_broker_rest_proxy_authentication_type: mtls
    schema_registry_authentication_type: mtls
    kafka_connect_authentication_type: mtls
    kafka_rest_authentication_type: mtls
    ksql_authentication_type: mtls
    control_center_authentication_type: mtls
```
Configure basic authentication¶
To enable basic authentication for the Confluent Platform components, set the following parameters in hosts.yml:

```yaml
all:
  vars:
    kafka_broker_rest_proxy_authentication_type: basic
    schema_registry_authentication_type: basic
    kafka_connect_authentication_type: basic
    kafka_rest_authentication_type: basic
    ksql_authentication_type: basic
    control_center_authentication_type: basic
```