Configure Security with Ansible Playbooks for Confluent Platform¶
The topics in this section describe how to configure security for Ansible Playbooks for Confluent Platform.
Encryption¶
By default, the Confluent Platform components are installed using the PLAINTEXT protocol with no data encryption.
TLS encryption¶
You can enable TLS encryption with one of the following options:
- Self-Signed Certs: A Certificate Authority is generated by the playbooks and used to sign the certs for each host.
- Custom Certs: You provide signed certs and keys for each host as well as the Certificate Authority Cert used to sign the certs.
- Custom Keystores and Truststores: You provide keystores and truststores for each host.
Note
By default, the TLS connection is one-way TLS. Mutual TLS is also configurable.
Configure TLS for all components¶
To enable TLS encryption for all components, add the following in the hosts.yml file.

ssl_enabled: true

TLS encryption variables should be added under the all group in the hosts.yml file.
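In context, the global flag belongs under the all group's vars in hosts.yml. A minimal sketch (the kafka1.example.com hostname is a placeholder, not a value from this guide):

```yaml
all:
  vars:
    ssl_enabled: true

kafka_broker:
  hosts:
    kafka1.example.com:
```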
Configure TLS for individual components¶
To selectively enable or disable TLS encryption for specific components, set the following settings to true or false in addition to the global ssl_enabled setting.
kafka_connect_ssl_enabled
kafka_rest_ssl_enabled
schema_registry_ssl_enabled
control_center_ssl_enabled
ksql_ssl_enabled
For example, if you want TLS enabled for all components except for Schema Registry, set:
ssl_enabled: true
schema_registry_ssl_enabled: false
By default, components are configured with one-way TLS. To turn on TLS mutual authentication, set the following parameter to true:
ssl_mutual_auth_enabled: true
By default, the certs for this configuration are self-signed. To deploy custom certificates, there are two options available: Custom Certificates and Custom Keystore and Truststores.
Custom certificates¶
To provide custom certificates for each host, you need the Certificate Authority certificate, the signed certificates, and keys for each host on the Ansible host. Complete the following steps to use custom certificates.
Set ssl_custom_certs to true.

ssl_custom_certs: true
Enter the path to the Certificate Authority Cert used to sign each host certificate.
ssl_ca_cert_filepath: "/tmp/certs/ca.crt"
Set the following variables for each host.
ssl_signed_cert_filepath: "/tmp/certs/{{inventory_hostname}}-signed.crt"
ssl_key_filepath: "/tmp/certs/{{inventory_hostname}}-key.pem"
The variable {{inventory_hostname}} in the example shows that Ansible can read the hostnames set in the inventory file. For this reason, you can keep the inventory file shorter if you put the hostname in the filename for each signed certificate and key file. As an alternative, you can set the variables directly under a host:

schema_registry:
  hosts:
    ip-192-24-10-207.us-west.compute.internal:
      ssl_signed_cert_filepath: "/tmp/certs/192-24-10-207-signed.crt"
      ssl_key_filepath: "/tmp/certs/192-24-10-207-key.pem"
Custom Keystores and Truststores¶
To provide Custom Keystores and Truststores for each host, you will need to have keystores and truststores for each host on the Ansible host, as well as their passwords. Complete the following steps to use Custom Keystores and Truststores.
Set this variable to true.
ssl_provided_keystore_and_truststore: true
Set the following variables for each host.
ssl_keystore_filepath: "/tmp/certs/{{inventory_hostname}}-keystore.jks"
ssl_keystore_key_password: mystorepassword
ssl_keystore_store_password: mystorepassword
ssl_truststore_filepath: "/tmp/certs/truststore.jks"
ssl_truststore_password: truststorepass
Using the {{inventory_hostname}} variable and setting the same password for each host, you can set these variables once under the all group in the hosts.yml file. As an alternative, you can set these variables under each host as shown in the alternative Custom Certificates example.
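Putting the pieces together, a sketch of the all group with provided keystores and truststores (the paths and passwords are illustrative values, not defaults):

```yaml
all:
  vars:
    ssl_enabled: true
    ssl_provided_keystore_and_truststore: true
    ssl_keystore_filepath: "/tmp/certs/{{inventory_hostname}}-keystore.jks"
    ssl_keystore_key_password: mystorepassword
    ssl_keystore_store_password: mystorepassword
    ssl_truststore_filepath: "/tmp/certs/truststore.jks"
    ssl_truststore_password: truststorepass
```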
Authentication¶
SASL authentication¶
Confluent Ansible can configure SASL/PLAIN, SASL/SCRAM, or SASL/GSSAPI (Kerberos). By default, the Confluent Platform components are installed without any SASL authentication.
SASL authentication options¶
You can enable the following SASL options:
- PLAIN: SASL/PLAIN uses a simple username and password for authentication.
- SCRAM: SASL/SCRAM uses usernames and passwords stored in ZooKeeper. Credentials are created during installation.
- Kerberos: SASL with Kerberos provides authentication using your Kerberos or Active Directory server.
Note
Kerberos Key Distribution Center (KDC) and Active Directory KDC configurations are not currently configured by the Ansible playbook.
Configure SASL/PLAIN¶
To configure SASL/PLAIN security, set the following option in the all group in the hosts.yml file.
sasl_protocol: plain
During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components. To configure additional users, add the following section to the all group in the hosts.yml file. The following example shows three additional users.
sasl_plain_users:
  user1:
    principal: user1
    password: my-secret
  user2:
    principal: user2
    password: my-secret
  user3:
    principal: user3
    password: my-secret
Configure SASL/SCRAM¶
To configure SASL/SCRAM security, set the following option in the all group in the hosts.yml file.
sasl_protocol: scram
During installation, users are created for each component. This includes an admin user for the Kafka brokers and a client user for use by external components. To configure additional users, add the following section to the all group in the hosts.yml file.
sasl_scram_users:
  user1:
    principal: user1
    password: my-secret
Configure SASL with Kerberos¶
To configure SASL with Kerberos, you need to create principals within your organization’s KDC server for each component and for each host in each component. You also need to generate keytabs for these principals. The keytab files must be present on the Ansible host.
Note
You need to set up your own Key Distribution Center (KDC) independently of the playbook and provide your own keytabs.
Add the following configuration parameters in the all section of the hosts.yml file. This installs Kerberos packages and configures the /etc/krb5.conf file on each host.
- kerberos_kafka_broker_primary: The primary part of the Kafka broker principal. For example, for the principal kafka/kafka1.hostname.com@EXAMPLE.COM, the primary is kafka.
- kerberos_configure: Boolean for Ansible to install Kerberos packages and to configure the /etc/krb5.conf file. The default is true.
- realm: The realm part of the Kafka broker Kerberos principal.
- kdc_hostname: Hostname of the machine running the KDC.
- admin_hostname: Hostname of the machine running the Kerberos admin server (often the same host as the KDC).
The following example shows the Kerberos configuration settings for the Kerberos principal kafka/kafka1.hostname.com@EXAMPLE.COM.
all:
  vars:
    ...omitted
    #### Kerberos Configuration ####
    kerberos_kafka_broker_primary: kafka
    kerberos_configure: true
    kerberos:
      realm: example.com
      kdc_hostname: ip-192-24-45-82.us-west.compute.internal
      admin_hostname: ip-192-24-45-82.us-west.compute.internal
Note
If the hosts already have the /etc/krb5.conf file configured, make sure to set kerberos_configure to false.
The following examples show the variables that need to be added to the hosts.yml file for each component to use Kerberos for authentication. Each host must have these variables set in the hosts.yml file.
ZooKeeper section
Use the following example to enter the zookeeper_kerberos_keytab_path and zookeeper_kerberos_principal variables. Each host needs to have these two variables. The example shows three hosts, each with a keytab and principal matching its own hostname.

zookeeper:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-34-224.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-37-15.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-10-207.us-west.compute.internal:
      zookeeper_kerberos_keytab_path: /tmp/keytabs/zookeeper-ip-192-24-10-207.us-west.compute.internal.keytab
      zookeeper_kerberos_principal: zookeeper/ip-192-24-10-207.us-west.compute.internal@REALM.EXAMPLE.COM
Kafka broker section
Use the following example to enter the kafka_broker_kerberos_keytab_path and kafka_broker_kerberos_principal variables. Each host needs to have these two variables. The example shows three hosts, each with a keytab and principal matching its own hostname.

kafka_broker:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-37-15.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-37-15.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-37-15.us-west.compute.internal@REALM.EXAMPLE.COM
    ip-192-24-10-207.us-west.compute.internal:
      kafka_broker_kerberos_keytab_path: /tmp/keytabs/kafka-ip-192-24-10-207.us-west.compute.internal.keytab
      kafka_broker_kerberos_principal: kafka/ip-192-24-10-207.us-west.compute.internal@REALM.EXAMPLE.COM
Schema Registry section
Use the following example to enter the schema_registry_kerberos_keytab_path and schema_registry_kerberos_principal variables. Each host needs to have these two variables. The example shows one host.
schema_registry:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      schema_registry_kerberos_keytab_path: /tmp/keytabs/schemaregistry-ip-192-24-34-224.us-west.compute.internal.keytab
      schema_registry_kerberos_principal: schemaregistry/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
Kafka Connect section
Use the following example to enter the kafka_connect_kerberos_keytab_path and kafka_connect_kerberos_principal variables. Each host needs to have these two variables. The example shows one host.
kafka_connect:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_connect_kerberos_keytab_path: /tmp/keytabs/connect-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_connect_kerberos_principal: connect/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
REST proxy section
Use the following example to enter the kafka_rest_kerberos_keytab_path and kafka_rest_kerberos_principal variables. Each host needs to have these two variables. The example shows one host.
kafka_rest:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      kafka_rest_kerberos_keytab_path: /tmp/keytabs/restproxy-ip-192-24-34-224.us-west.compute.internal.keytab
      kafka_rest_kerberos_principal: restproxy/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
ksqlDB section
Use the following example to enter the ksql_kerberos_keytab_path and ksql_kerberos_principal variables. Each host needs to have these two variables. The example shows one host.
ksql:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      ksql_kerberos_keytab_path: /tmp/keytabs/ksql-ip-192-24-34-224.us-west.compute.internal.keytab
      ksql_kerberos_principal: ksql/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
Control Center section
Use the following example to enter the control_center_kerberos_keytab_path and control_center_kerberos_principal variables. Each host needs to have these two variables. The example shows one host.
control_center:
  hosts:
    ip-192-24-34-224.us-west.compute.internal:
      control_center_kerberos_keytab_path: /tmp/keytabs/controlcenter-ip-192-24-34-224.us-west.compute.internal.keytab
      control_center_kerberos_principal: controlcenter/ip-192-24-34-224.us-west.compute.internal@REALM.EXAMPLE.COM
Authorization¶
Role-based Access Control¶
Starting in the 5.5.0 release of Confluent Platform, you can use Ansible Playbooks for Confluent Platform to configure Role-based Access Control (RBAC). The Kafka broker hosts will be configured as the Metadata Service (MDS) hosts. Currently, we do not support configuring other Kafka clusters or components outside of those in the inventory to connect to this MDS cluster.
When enabling RBAC for authorization, the following Kafka listeners can be configured with the authentication options listed:
- Inter-broker listener: SASL PLAIN, SASL SCRAM, SASL GSSAPI, and mTLS
- Confluent Platform component listener: SASL OAUTHBEARER
- External client listeners: SASL PLAIN, SASL SCRAM, SASL GSSAPI, and mTLS
By default, there are two Kafka listeners: one for inter-broker communication and one for the Confluent Platform components. However, the component listener uses an authentication mode unsupported for external clients, and it is recommended that you configure at least one additional listener for your Kafka clients. See Security for internal and external listeners for details.
Do not customize the listener named internal when configuring RBAC.
Requirements¶
- An LDAP server reachable from your Kafka broker hosts.
- Port 8090 must be opened on the Kafka brokers and accessible by all hosts.
- Set up an MDS super user in LDAP for bootstrapping roles and permissions for the Confluent Platform component principals. We generally recommend creating a user named mds.
- Set up one principal per Confluent Platform component in your LDAP server. These users are used by the Confluent Platform components to authenticate to MDS and access their respective resources. In the examples below, the following component users are used:
  - Schema Registry: schema_registry
  - Connect: connect_worker
  - ksqlDB: ksql
  - REST Proxy: rest_proxy
  - Control Center: control_center
- (Optional) Generate a key pair to be used by the OAuth-enabled listener as described in Create a PEM key pair.
- If using LDAPS, the Certificate Authority used to sign the LDAP server certificates must be the same one used to sign Confluent Platform host certificates.
Note
If using mTLS or Kerberos for inter-broker authentication, you don’t need to set up an LDAP user for Kafka brokers.
Required settings for RBAC¶
Sample inventory files for RBAC configuration are provided in the directory
sample_inventories
under the Ansible Playbooks for Confluent Platform home directory.
Add the required variables in your hosts.yml file to enable and configure RBAC.
Enable RBAC with Ansible
rbac_enabled: true
Provide LDAP server details for RBAC to look up and validate users
Consult your LDAP admins and see Configure LDAP Group-Based Authorization for MDS and Configure LDAP Authentication for the properties you need to set in ldap_config. The following is an example ldap_config with sample LDAP properties.
ldap_config: |
  ldap.java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory
  ldap.com.sun.jndi.ldap.read.timeout=3000
  ldap.java.naming.provider.url=ldap://ldap1:389
  ldap.java.naming.security.principal=uid=mds,OU=rbac,DC=example,DC=com
  ldap.java.naming.security.credentials=password
  ldap.java.naming.security.authentication=simple
  ldap.user.search.base=OU=rbac,DC=example,DC=com
  ldap.group.search.base=OU=rbac,DC=example,DC=com
  ldap.user.name.attribute=uid
  ldap.user.memberof.attribute.pattern=CN=(.*),OU=rbac,DC=example,DC=com
  ldap.group.name.attribute=cn
  ldap.group.member.attribute.pattern=CN=(.*),OU=rbac,DC=example,DC=com
  ldap.user.object.class=account
Provide the super user credentials for bootstrapping RBAC within Confluent Platform
The mds user is used in the sample_inventories mentioned above.
mds_super_user: mds
mds_super_user_password: password
Provide LDAP users for Confluent Platform components
The following users are configured in the sample_inventories mentioned above.
schema_registry_ldap_user: schema_registry
schema_registry_ldap_password: password
kafka_connect_ldap_user: connect_worker
kafka_connect_ldap_password: password
ksql_ldap_user: ksql
ksql_ldap_password: password
kafka_rest_ldap_user: rest_proxy
kafka_rest_ldap_password: password
control_center_ldap_user: control_center
control_center_ldap_password: password
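Taken together, the required RBAC variables form a single block under the all group. A sketch using the sample users above (the ldap_config body is shortened to one property for brevity; use the full property list from the earlier example):

```yaml
all:
  vars:
    rbac_enabled: true
    mds_super_user: mds
    mds_super_user_password: password
    schema_registry_ldap_user: schema_registry
    schema_registry_ldap_password: password
    kafka_connect_ldap_user: connect_worker
    kafka_connect_ldap_password: password
    ksql_ldap_user: ksql
    ksql_ldap_password: password
    kafka_rest_ldap_user: rest_proxy
    kafka_rest_ldap_password: password
    control_center_ldap_user: control_center
    control_center_ldap_password: password
    ldap_config: |
      ldap.java.naming.provider.url=ldap://ldap1:389
```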
Optional settings for RBAC¶
Add the following optional settings in your hosts.yml file to further configure RBAC.
Provide your own MDS server certificates and key pair for OAuth
create_mds_certs: false
token_services_public_pem_file: # Path to public.pem
token_services_private_pem_file: # Path to tokenKeypair.pem
Disable MDS-based ACLs
mds_acls_enabled: false
By default, MDS-based ACLs are enabled when RBAC is enabled.
Enable TLS encryption for MDS
mds_ssl_enabled: true
Defaults to the ssl_enabled setting.
Enable mutual authentication for RBAC to work with TLS
mds_ssl_mutual_auth_enabled: true
Defaults to the ssl_mutual_auth_enabled setting.
Configure additional RBAC super users
user1 is used in this example.
rbac_component_additional_super_users:
- user1
Other Considerations¶
Kerberos¶
When setting sasl_protocol: kerberos, you need keytabs and principals for ZooKeeper, Kafka, and external clients. See Configure SASL with Kerberos.
However, because the rest of Confluent Platform components use their own listener, there is no need to create Kerberos principals and keytabs for those components. They will authenticate to Kafka using their LDAP user/password.
Security configuration tasks¶
Security for internal and external listeners¶
Ansible Playbooks for Confluent Platform configures two listeners on the broker:
- An inter-broker listener on port 9091
- A listener for the other Confluent Platform components and external clients on 9092
By default, both of these listeners inherit the security settings you configure for ssl_enabled and sasl_protocol.
If you only need a single listener, add the following variable to the all group in the hosts.yml file.
kafka_broker_configure_additional_brokers: false
You can customize the out-of-the-box listeners by adding the kafka_broker_custom_listeners variable in the all group in the hosts.yml file.
In the example below, the broker, internal, and client listeners all have unique security settings. You can configure multiple additional client listeners, but do not change the dictionary keys for the broker and internal listeners, broker and internal.
kafka_broker_custom_listeners:
  broker:
    name: BROKER
    port: 9091
    ssl_enabled: false
    ssl_mutual_auth_enabled: false
    sasl_protocol: none
  internal:
    name: INTERNAL
    port: 9092
    ssl_enabled: true
    ssl_mutual_auth_enabled: false
    sasl_protocol: scram
  client_listener:
    name: CLIENT
    port: 9093
    ssl_enabled: true
    ssl_mutual_auth_enabled: true
    sasl_protocol: plain