Configure Authentication for Confluent Platform with Ansible Playbooks¶
This topic describes the authentication features supported in Confluent Platform with Ansible Playbooks for Confluent Platform (Confluent Ansible) and explains how to configure those features.
Kafka authentication¶
Confluent Ansible supports the following authentication modes for Kafka in ZooKeeper mode:
- SASL/PLAIN: Uses a simple username and password for authentication.
- SASL/SCRAM: Uses usernames and passwords stored in ZooKeeper. Credentials are created during installation.
- SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
- OAuth/OIDC: Uses your own identity provider to manage authentication and authorization across your Confluent Platform deployments in the cloud and on-premises.
Confluent Ansible supports the following authentication modes for Kafka brokers and Kafka controllers in KRaft mode:
- SASL/PLAIN: Uses a simple username and password for authentication.
- SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
- OAuth/OIDC: Uses your own identity provider to manage authentication and authorization across your Confluent Platform deployments in the cloud and on-premises.
By default, Kafka is installed with no authentication.
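All of the settings described in this topic are plain variables in the Ansible inventory file, set either globally under all: vars: or on individual host groups and hosts. As a point of reference for the snippets that follow, a minimal inventory sketch (host names are placeholders) looks like this:

all:
  vars:
    # Global authentication settings, such as sasl_protocol, go here.
    ssl_enabled: false
kafka_broker:
  hosts:
    kafka-broker-1.example.com:
    kafka-broker-2.example.com: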
Configure SASL/PLAIN authentication¶
To configure SASL/PLAIN authentication, set the following in the hosts.yml inventory file. In addition to the default users, the code snippet adds three example users, user1, user2, and user3.

The default keys for sasl_plain_users are required for Confluent Platform components: admin for the Kafka brokers, client for use by external components, and schema_registry, kafka_connect, ksql, control_center, kafka_rest, and kafka_connect_replicator for the respective components.
all:
  vars:
    sasl_protocol: plain
    sasl_plain_users:
      admin:
        principal: 'admin'
        password: 'admin-secret'
      schema_registry:
        principal: 'schema-registry'
        password: 'schema_registry-secret'
      kafka_connect:
        principal: 'kafka-connect'
        password: 'kafka_connect-secret'
      ksql:
        principal: 'ksqldb'
        password: 'ksql-secret'
      kafka_rest:
        principal: 'kafka_rest'
        password: 'kafka_rest-secret'
      control_center:
        principal: 'control-center'
        password: 'control_center-secret'
      kafka_connect_replicator:
        principal: 'kafka_connect_replicator'
        password: 'kafka_connect_replicator-secret'
      client:
        principal: 'client'
        password: 'client-secret'
      user1:
        principal: 'user1'
        password: my-secret
      user2:
        principal: 'user2'
        password: my-secret
      user3:
        principal: 'user3'
        password: my-secret
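Because SASL/PLAIN sends the password itself over the connection, it is typically combined with TLS encryption so that credentials are never transmitted in cleartext. A minimal sketch pairing the two settings used in this topic (ssl_enabled is described in the mTLS section below):

all:
  vars:
    ssl_enabled: true   # encrypt listeners so PLAIN credentials are protected in transit
    sasl_protocol: plain
    sasl_plain_users:
      client:
        principal: 'client'
        password: 'client-secret'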
Configure SASL/SCRAM (SHA-512) authentication¶
To configure SASL/SCRAM authentication with SHA-512, set the following option in
the hosts.yml
inventory file:
all:
  vars:
    sasl_protocol: scram
During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.
To configure additional users, add the following section in the hosts.yml
inventory file:
all:
  vars:
    sasl_scram_users:
      user1:
        principal: user1
        password: my-secret
Configure SASL/SCRAM (SHA-256) authentication¶
To configure SASL/SCRAM authentication with SHA-256, set the following option in
the hosts.yml
inventory file:
all:
  vars:
    sasl_protocol: scram256
During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components.
To configure additional users, add the following section in the hosts.yml
inventory file:
all:
  vars:
    sasl_scram256_users:
      user1:
        principal: user1
        password: my-secret
Configure SASL/GSSAPI (Kerberos) authentication¶
The Ansible playbook does not currently configure a Key Distribution Center (KDC) or Active Directory KDC. You must set up your own KDC independently of the playbook and provide your own keytabs to configure SASL/GSSAPI (SASL with Kerberos):
- Create principals within your organization’s Kerberos KDC server for each component and for each host in each component.
- Generate keytabs for these principals. The keytab files must be present on the Ansible control node.
To install Kerberos packages and configure the client configuration file on each host, add the following configuration parameters in the hosts.yml file.

- kerberos_configure: Specifies whether to install Kerberos packages and to configure the client configuration file. The default value is true. If the hosts already have the client configuration file configured, set kerberos_configure to false.

  all:
    vars:
      kerberos_configure: <true-or-false>

- kerberos_client_config_file_dest: Specifies the location of the client configuration file. The default value is /etc/krb5.conf. Use this variable only when you want to specify a custom location for the client configuration file.

  all:
    vars:
      kerberos_client_config_file_dest: <path-to-client-config-file>

  If kerberos_configure is set to true, Confluent Ansible generates the client configuration file at this location on the host nodes. If kerberos_configure is set to false, Confluent Ansible expects the client configuration file to be present at this location on the host nodes.

- kerberos: Specifies the realm part of the Kafka broker Kerberos principal and the hostname of the machine running the KDC.

  all:
    vars:
      kerberos:
        realm: <kafka-principal-realm>
        kdc_hostname: <kdc-hostname>
        admin_hostname: <kdc-hostname>
The example below shows the Kerberos configuration settings for the Kerberos principal kafka/kafka1.hostname.com@EXAMPLE.COM.
all:
  vars:
    kerberos_configure: true
    kerberos:
      realm: example.com
      kdc_hostname: ip-192-24-45-82.us-west.compute.internal
      admin_hostname: ip-192-24-45-82.us-west.compute.internal
Each host in the inventory file also needs variables set that define its Kerberos principal and the location of its keytab on the Ansible control node.
The hosts.yml
inventory file should look like:
zookeeper:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-34-224.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-37-15.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_controller:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_broker:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
schema_registry:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      schema_registry_kerberos_keytab_path: /tmp/keytabs/schemaregistry-ip-192-24-34-224.us-west.compute.internal.keytab
      schema_registry_kerberos_principal: schemaregistry/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_connect:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_connect_kerberos_keytab_path: /tmp/keytabs/connect-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_connect_kerberos_principal: connect/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
kafka_rest:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_rest_kerberos_keytab_path: /tmp/keytabs/restproxy-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_rest_kerberos_principal: restproxy/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
ksql:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      ksql_kerberos_keytab_path: /tmp/keytabs/ksql-ip-192-24-34-224.us-west.compute.internal.keytab
      ksql_kerberos_principal: ksql/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
control_center:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      control_center_kerberos_keytab_path: /tmp/keytabs/controlcenter-ip-192-24-34-224.us-west.compute.internal.keytab
      control_center_kerberos_principal: controlcenter/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
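If your keytab files follow a consistent naming convention, you can avoid repeating near-identical per-host values by templating the variables with {{inventory_hostname}} at the group level, the same pattern the KRaft section below uses. A sketch for the kafka_broker group, assuming keytabs are named after each host:

kafka_broker:
  vars:
    # Resolves to a different keytab and principal for each inventory host.
    kafka_broker_kerberos_keytab_path: "/tmp/keytabs/kafka-{{inventory_hostname}}.keytab"
    kafka_broker_kerberos_principal: "kafka/{{inventory_hostname}}@REALM.EXAMPLE.COM"
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
    ip-192-24-37-15.us-west.compute.internal: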
Note
To better support Active Directory, Confluent Ansible enables canonicalization by default. If canonicalization was not enabled during Confluent Platform cluster creation, such as when you upgrade a Confluent Platform cluster that uses Kerberos to authenticate Kafka brokers to ZooKeeper, explicitly set the following property in the hosts.yml inventory file:
kerberos:
  canonicalize: false
Configure mTLS authentication¶
To configure mutual TLS (mTLS) authentication, you must enable TLS encryption as described in Configure Encryption for Confluent Platform with Ansible Playbooks.
Set the following parameters in the hosts.yml
inventory file:
all:
  vars:
    ssl_enabled: true
    ssl_mutual_auth_enabled: true
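With mTLS, every component and client authenticates with its own certificate, so the brokers must trust the issuing CA. If you bring your own certificates instead of letting the playbooks generate them, the inventory also points at the certificate files. The following is a sketch only; it assumes the custom-certificate variables (ssl_custom_certs, ssl_ca_cert_filepath, ssl_signed_cert_filepath, ssl_key_filepath) documented in the encryption topic apply to your cp-ansible version:

all:
  vars:
    ssl_enabled: true
    ssl_mutual_auth_enabled: true
    # Custom-certificate settings; verify the variable names against your version.
    ssl_custom_certs: true
    ssl_ca_cert_filepath: "/tmp/certs/ca.crt"
    ssl_signed_cert_filepath: "/tmp/certs/{{inventory_hostname}}-signed.crt"
    ssl_key_filepath: "/tmp/certs/{{inventory_hostname}}-key.pem"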
Principal mapping rules for Kafka¶
By default, the user name associated with an SSL connection is of the form "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown". You can customize the SSL principal name by extracting one of the fields from the long distinguished name, for example, CN, as the principal name.

In the Kafka broker configuration, use the ssl.principal.mapping.rules property in the Kafka custom properties to specify a list of rules for mapping a distinguished name to a short principal name.
Notes:
- The $ characters used in the rule need to be escaped with another $ as shown in the example below.
- Shorthand character classes need to be escaped with another backslash. For example, to use a whitespace (\s), specify \\s.
For example:
kafka_broker:
  hosts:
    ip-192-24-10-207.us-west.compute.internal:
      kafka_broker_custom_properties:
        ssl.principal.mapping.rules: "RULE:^CN=(.*?),OU=TEST.*$$/$$1/,RULE:^cn=(.*?),ou=(.*?),dc=(.*?),dc=(.*?)$$/$$1@$$2/L"
For details about principal mapping rules, see Principal Mapping Rules for SSL Listeners.
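As a worked example of the first rule above: a client certificate with the distinguished name CN=kafka-client,OU=TEST,O=Example,C=US matches ^CN=(.*?),OU=TEST.*$, so the broker maps the connection to the short principal kafka-client. The following sketch applies one rule for the whole kafka_broker group instead of a single host (group-level placement is an assumption, not taken from the example above); the trailing DEFAULT keeps the full DN for certificates that match no rule:

kafka_broker:
  vars:
    kafka_broker_custom_properties:
      # Extract the CN as the principal; fall back to the full DN otherwise.
      ssl.principal.mapping.rules: "RULE:^CN=(.*?),OU=TEST.*$$/$$1/,DEFAULT"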
KRaft authentication¶
By default, KRaft controllers inherit the authentication configuration of the Kafka cluster and do not require authentication configuration specific to KRaft.
Confluent Ansible supports the following authentication modes for Kafka brokers and Kafka controllers in KRaft mode:
- SASL/PLAIN: Uses a simple username and password for authentication. This authentication method is set at the Kafka level and cannot be overridden just for KRaft.
- SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication. You can override the global Kafka authentication and configure KRaft with Kerberos.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients. You can override the global Kafka authentication and configure KRaft with mTLS.
- OAuth/OIDC: Uses your own identity provider to manage authentication and authorization across your Confluent Platform deployments in the cloud and on-premises.
Configure SASL/GSSAPI (Kerberos) authentication¶
By default, KRaft controllers inherit the Kafka Kerberos settings.
To enable SASL/GSSAPI (Kerberos) authentication specifically for KRaft, set the following variable in hosts.yml:
all:
  vars:
    kafka_controller_sasl_protocol: kerberos
Each host also needs these variables set. The Kafka controllers and brokers must have the same primary name (the first part of the Kerberos principal).
kafka_controller:
  vars:
    kafka_controller_kerberos_keytab_path: "/tmp/keytabs/kafka-{{inventory_hostname}}.keytab"
    kafka_controller_kerberos_principal: "kafka/{{inventory_hostname}}@confluent.example.com"
For example:
kafka_controller:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      kafka_controller_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_controller_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
For the additional required Kerberos settings, see Kafka Kerberos settings.
Configure mTLS authentication¶
To configure mutual TLS (mTLS) authentication, you must enable TLS encryption as described in Configure Encryption for Confluent Platform with Ansible Playbooks.
By default, KRaft controllers inherit the global TLS and mTLS settings.
If you want to enable or disable mTLS specifically for KRaft, specify boolean values to enable or disable mTLS authentication on the KRaft controllers (server to server and client to server) in the hosts.yml inventory file:
all:
  vars:
    kafka_controller_ssl_enabled: <true-or-false>
    kafka_controller_ssl_mutual_auth_enabled: <true-or-false>
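For example, to require mTLS on the KRaft controller listeners while leaving the global settings unchanged, a minimal sketch looks like this:

all:
  vars:
    ssl_enabled: true
    kafka_controller_ssl_enabled: true
    kafka_controller_ssl_mutual_auth_enabled: true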
ZooKeeper authentication¶
Ansible Playbooks for Confluent Platform supports configuring ZooKeeper Server to Server and Client to Server authentication modes. Kafka acts as a ZooKeeper client.
By default, ZooKeeper is installed with no authentication.
Server to Server authentication¶
Ansible Playbooks for Confluent Platform supports the following ZooKeeper server to server authentication modes:
- SASL with DIGEST-MD5: Uses hashed values of the user’s password for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
Configure SASL with DIGEST-MD5 authentication¶
To enable SASL with DIGEST-MD5 authentication for ZooKeeper, set the following
variable in the hosts.yml
inventory file:
all:
  vars:
    zookeeper_quorum_authentication_type: digest
Configure mTLS authentication¶
To enable mTLS authentication for ZooKeeper, set the following variable in the
hosts.yml
inventory file:
all:
  vars:
    zookeeper_quorum_authentication_type: mtls
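The quorum (server-to-server) and client-to-server settings are independent and can be combined. For example, a sketch securing both paths with mTLS (zookeeper_client_authentication_type is described in the next section, and TLS itself must be enabled as described in the encryption topic):

all:
  vars:
    ssl_enabled: true
    zookeeper_quorum_authentication_type: mtls
    zookeeper_client_authentication_type: mtls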
Client to Server authentication¶
Ansible Playbooks for Confluent Platform supports the following ZooKeeper client to server authentication modes:
- SASL with DIGEST-MD5: Uses hashed values of the user’s password for authentication.
- SASL/GSSAPI (Kerberos): Uses your Kerberos or Active Directory server for authentication.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
Configure SASL with DIGEST-MD5 authentication¶
To enable SASL with DIGEST-MD5 authentication for ZooKeeper, set the following
variable in the hosts.yml
inventory file:
all:
  vars:
    zookeeper_client_authentication_type: digest
Configure SASL/GSSAPI (Kerberos) authentication¶
To enable SASL/GSSAPI (Kerberos) authentication for ZooKeeper, set the following variable in the hosts.yml file:
all:
  vars:
    zookeeper_client_authentication_type: kerberos
Each host will also need these variables set:
zookeeper_kerberos_principal: "zk/{{inventory_hostname}}.confluent@{{kerberos.realm | upper}}"
zookeeper_kerberos_keytab_path: "roles/confluent.test/molecule/{{scenario_name}}/keytabs/zookeeper-{{inventory_hostname}}.keytab"
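Substituting concrete values, a zookeeper group in hosts.yml might look like the following sketch (hostname, realm, and keytab location are placeholders, following the same pattern as the Kafka Kerberos example earlier in this topic):

zookeeper:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-34-224.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM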
Note
To better support Active Directory, Confluent Ansible enables canonicalization by default. If canonicalization was not enabled during Confluent Platform cluster creation, such as when you upgrade a Confluent Platform cluster that uses Kerberos to authenticate Kafka brokers to ZooKeeper, explicitly set the following property in the hosts.yml inventory file:
kerberos:
  canonicalize: false
Configure mTLS authentication¶
To enable mTLS authentication for ZooKeeper, set the following variable in the
hosts.yml
inventory:
all:
  vars:
    zookeeper_client_authentication_type: mtls
Components authentication¶
Confluent Ansible supports the following authentication modes for all REST-based Confluent Platform components other than Kafka and ZooKeeper:
- HTTP Basic: Authenticates with a username and password.
- mTLS: Ensures that traffic is secure and trusted in both directions between Kafka and clients.
- OAuth/OIDC: Uses your own identity provider to manage authentication and authorization across your Confluent Platform deployments in the cloud and on-premises. For Control Center and Confluent CLI, OIDC-based SSO is supported.
By default, Confluent Platform components are installed with no authentication.
Configure mTLS authentication¶
To enable mTLS for all components, set the following parameters in the
hosts.yml
inventory file:
all:
  vars:
    ssl_enabled: true
    kafka_broker_rest_proxy_authentication_type: mtls
    schema_registry_authentication_type: mtls
    kafka_connect_authentication_type: mtls
    kafka_rest_authentication_type: mtls
    ksql_authentication_type: mtls
    control_center_authentication_type: mtls
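The *_authentication_type variables are set per component, so you can also enable mTLS selectively. For example, to require mTLS only on the Schema Registry REST endpoint and leave the other components at their defaults:

all:
  vars:
    ssl_enabled: true
    schema_registry_authentication_type: mtls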
Configure basic authentication¶
To enable basic authentication for a component, set the following parameters in
the hosts.yml
inventory:
all:
  vars:
    kafka_broker_rest_proxy_authentication_type: basic
    schema_registry_authentication_type: basic
    kafka_connect_authentication_type: basic
    kafka_rest_authentication_type: basic
    ksql_authentication_type: basic
    control_center_authentication_type: basic
    kafka_broker_rest_proxy_basic_users:
      client:
        principal: client
        password: client-secret
        roles: client,admin
    schema_registry_basic_users:
      client:
        principal: client
        password: client-secret
        roles: client,developer,admin
    kafka_connect_basic_users:
      admin:
        principal: user1
        password: password
    ksql_basic_users:
      admin:
        principal: user1
        password: user1-secret
        roles: user1
      client:
        principal: client
        password: client-secret
        roles: client
    kafka_rest_basic_users:
      client:
        principal: client
        password: client-secret
        roles: client
    control_center_basic_users:
      client:
        principal: client
        password: client-secret
        roles: client
Configure single sign-on authentication for Control Center or Confluent CLI¶
In Confluent Ansible, you can configure single sign-on (SSO) authentication for Control Center or Confluent CLI using OpenID Connect (OIDC).
As a prerequisite for SSO, RBAC needs to be enabled, and RBAC requires the Metadata Service (MDS).
To use SSO in Control Center or Confluent CLI, specify the following variables in your inventory file. For details on these variables, refer to Configure SSO for Confluent Control Center using OIDC.
sso_mode
To enable SSO, set to oidc.

sso_groups_claim
The groups claim in JSON Web Tokens (JWT). Default: groups

sso_sub_claim
The sub claim in JWT. Default: sub

sso_issuer_url
The issuer URL, which is typically the authorization server’s URL. This value is compared to the issuer claim in the JWT token for verification.

sso_jwks_uri
The JSON Web Key Set (JWKS) URI. It is used to verify any JSON Web Token (JWT) issued by the IdP.

sso_authorize_uri
The base URI for the authorize endpoint, which initiates an OAuth authorization request.

sso_token_uri
The IdP token endpoint, from which MDS requests a token.

sso_client_id
The client ID for authorization and token requests to the IdP.

sso_client_password
The client password for authorization and token requests to the IdP.

sso_groups_scope
Optional. The name of the custom groups scope. Use this setting to handle a case where the groups field is not present in tokens by default and you have configured a custom scope for issuing groups. The name of the scope could be anything, such as groups, allow_groups, or offline_access. offline_access is a well-defined scope used to request refresh tokens; it can be requested when the sso_refresh_token setting is set to true. The scope is defined in the OIDC RFC and is not specific to any IdP. Possible values: groups, openid, offline_access, etc. Default: groups

sso_refresh_token
Configures whether the offline_access scope can be requested in the authorization URI. Set this to false if offline tokens are not allowed for the user or client in the IdP. As described in SSO Session management, for RBAC to work as expected, the default value of true should not be changed to false. Default: true

sso_cli
To enable SSO in Confluent CLI, set it to true. When enabling SSO in the CLI, you must also provide sso_device_authorization_uri. Default: false

sso_device_authorization_uri
The device authorization endpoint of the IdP. Required to enable SSO in Confluent CLI.

sso_idp_cert_path
The TLS certificate (full path of the file on the control node) of the IdP domain for OIDC SSO in Control Center or Confluent CLI. Required when the IdP server has TLS enabled with a custom certificate.
The following is an example snippet of an inventory file for setting up Confluent Platform with RBAC, SASL/PLAIN protocol, and Control Center SSO:
all:
  vars:
    ansible_connection: ssh
    ansible_user: ec2-user
    ansible_become: true
    ansible_ssh_private_key_file: /home/ec2-user/guest.pem

    ## TLS Configuration - Custom Certificates
    ssl_enabled: true

    #### SASL Authentication Configuration ####
    sasl_protocol: plain

    ## RBAC Configuration
    rbac_enabled: true

    ## LDAP CONFIGURATION
    kafka_broker_custom_properties:
      ldap.java.naming.factory.initial: com.sun.jndi.ldap.LdapCtxFactory
      ldap.com.sun.jndi.ldap.read.timeout: 3000
      ldap.java.naming.provider.url: ldaps://ldap1:636
      ldap.java.naming.security.principal: uid=mds,OU=rbac,DC=example,DC=com
      ldap.java.naming.security.credentials: password
      ldap.java.naming.security.authentication: simple
      ldap.user.search.base: OU=rbac,DC=example,DC=com
      ldap.group.search.base: OU=rbac,DC=example,DC=com
      ldap.user.name.attribute: uid
      ldap.user.memberof.attribute.pattern: CN=(.*),OU=rbac,DC=example,DC=com
      ldap.group.name.attribute: cn
      ldap.group.member.attribute.pattern: CN=(.*),OU=rbac,DC=example,DC=com
      ldap.user.object.class: account

    ## LDAP USERS
    mds_super_user: mds
    mds_super_user_password: password
    kafka_broker_ldap_user: kafka_broker
    kafka_broker_ldap_password: password
    schema_registry_ldap_user: schema_registry
    schema_registry_ldap_password: password
    kafka_connect_ldap_user: connect_worker
    kafka_connect_ldap_password: password
    ksql_ldap_user: ksql
    ksql_ldap_password: password
    kafka_rest_ldap_user: rest_proxy
    kafka_rest_ldap_password: password
    control_center_ldap_user: control_center
    control_center_ldap_password: password

    ## Variables to enable SSO in Control Center
    sso_mode: oidc
    # necessary configs in MDS server for SSO in C3
    sso_groups_claim: groups
    sso_sub_claim: sub
    sso_groups_scope: groups
    sso_issuer_url: <issuer url>
    sso_jwks_uri: <jwks uri>
    sso_authorize_uri: <OAuth authorization endpoint>
    sso_token_uri: <IdP token endpoint>
    sso_client_id: <client id>
    sso_client_password: <client password>
    sso_refresh_token: true

zookeeper:
  hosts:
    demo-zk-0:
    demo-zk-1:
    demo-zk-2:
kafka_broker:
  hosts:
    demo-broker-0:
    demo-broker-1:
    demo-broker-2:
schema_registry:
  hosts:
    demo-sr-0:
kafka_connect:
  hosts:
    demo-connect-0:
kafka_rest:
  hosts:
    demo-rest-0:
ksql:
  hosts:
    demo-ksql-0:
control_center:
  hosts:
    demo-c3-0: