Configure Ansible Playbooks for Confluent Platform
This topic describes commonly used settings to configure Ansible Playbooks for Confluent Platform (Confluent Ansible).
Set Ansible host variables
Once you configure the hosts in your inventory file and verify the connections, you can set variables in the inventory which describe your desired Confluent Platform configuration.
Review the commented-out variables in the example inventory file at:
https://github.com/confluentinc/cp-ansible/blob/8.1.0-post/docs/hosts_example.yml
For a full list of supported variables, see the Ansible variable file at:
https://github.com/confluentinc/cp-ansible/blob/8.1.0-post/docs/VARIABLES.md
You can apply variables to all hosts or to specific hosts.
In the example below, the ssl_enabled: true variable is set for all hosts:
all:
vars:
ssl_enabled: true
We generally recommend applying variables in the all group so that each host
is aware of how Confluent Platform is configured as a whole.
You can also make use of group_vars and host_vars directories that are
located next to the inventory file to pass variables. See Ansible Directory
Layout.
Additionally, consider saving sensitive variables in their own variables file in the above structure and use Ansible Vault to encrypt the variable files.
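For example, a minimal layout next to the inventory file might look like the following (the file names are illustrative):

inventory/
├── hosts.yml
├── group_vars/
│   ├── all.yml              # variables applied to every host
│   └── kafka_broker.yml     # variables applied only to the kafka_broker group
└── host_vars/
    └── broker-1.yml         # variables applied to a single host

You can then encrypt a sensitive variables file in place with ansible-vault encrypt group_vars/all.yml and supply the vault password at runtime, for example with the --ask-vault-pass option.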
The remainder of this document describes how to configure Confluent Platform using Ansible variables.
Set Confluent Platform software installation method
Ansible Playbooks for Confluent Platform supports the following methods for installing Confluent Platform software onto host machines.
- Package installation, using the packages hosted on packages.confluent.io
This is the default option. It requires internet connectivity from all hosts to packages.confluent.io. No inventory variables are required to use this method.
- Package installation, using the packages hosted on your own RPM or DEB package repository
This option works for hosts that do not have outside internet connectivity. It requires you to pull the Confluent Platform packages and put them on your repository.
Set the following in your inventory file to use this method.
For packages on an RHEL/CentOS host:
all:
  vars:
    repository_configuration: custom
    custom_yum_repofile_filepath: /tmp/my-repo.repo

For packages on a Debian host:

all:
  vars:
    repository_configuration: custom
    custom_apt_repo_filepath: /tmp/my-source.list

For the end-to-end workflow of deploying Ansible Playbooks for Confluent Platform in an air-gapped environment, see Deploy Confluent Platform in Air-Gapped Environment Using Ansible Playbooks.
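For reference, a minimal sketch of what the referenced .repo file might contain; the repository name, URLs, and key location are placeholders for your internal mirror, not values provided by Confluent:

[confluent-mirror]
name=Internal Confluent Platform mirror
baseurl=https://repo.example.internal/confluent/rpm
gpgcheck=1
gpgkey=https://repo.example.internal/confluent/rpm/archive.key
enabled=1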
- Tar installation, using the tarball hosted on packages.confluent.io
It requires internet connectivity from all hosts to packages.confluent.io. Set the following in your inventory file to use this method:

all:
  vars:
    installation_method: archive

- Tar installation, using the tarball hosted on your own web server
This does not require outside internet connectivity, but does require you to pull the tarball and host it on a web server.
Set the following in your inventory file to use this method:
all:
  vars:
    installation_method: archive
    confluent_archive_file_source: <web server url>/path/confluent-8.1.0.tar.gz
For the end-to-end workflow of deploying Ansible Playbooks for Confluent Platform in an air-gapped environment, see Deploy Confluent Platform in Air-Gapped Environment Using Ansible Playbooks.
- Tar installation, using the tarball placed on the Ansible control node
This does not require outside internet connectivity, but requires you to pull and copy the tarball to the control node.
Set the following in your inventory file to use this method:
all:
  vars:
    installation_method: archive
    confluent_archive_file_source: /path/to/confluent-8.1.0.tar.gz
    confluent_archive_file_remote: false
For the end-to-end workflow of deploying Ansible Playbooks for Confluent Platform in an air-gapped environment, see Deploy Confluent Platform in Air-Gapped Environment Using Ansible Playbooks.
Set custom component properties
When a configuration setting is not directly supported by Ansible Playbooks for Confluent Platform, you can use the custom property feature to configure Confluent Platform components.
Before you set a custom property variable, first check the Ansible variable file at the following location for an existing variable:
https://github.com/confluentinc/cp-ansible/blob/8.1.0-post/docs/VARIABLES.md
If you find an existing variable that directly supports the setting, use the variable in the inventory file instead of using a config override.
Configure the custom properties in the Ansible inventory file, hosts.yml,
using the following dictionaries:
- kafka_controller_custom_properties
- kafka_broker_custom_properties
- schema_registry_custom_properties
- kafka_rest_custom_properties
- kafka_connect_custom_properties
- ksql_custom_properties
- control_center_next_gen_custom_properties
- kafka_connect_replicator_custom_properties
- kafka_connect_replicator_consumer_custom_properties
- kafka_connect_replicator_producer_custom_properties
- kafka_connect_replicator_monitoring_interceptor_custom_properties
In the example below:

- The num.io.threads property gets set in the Kafka properties file.
- The confluent.controlcenter.ksql.default.advertised.url property gets set in the Control Center properties file.

Note that default in the confluent.controlcenter.ksql.default.advertised.url property name is the name Control Center should use to identify the ksqlDB cluster.
all:
vars:
kafka_broker_custom_properties:
num.io.threads: 15
control_center_next_gen_custom_properties:
confluent.controlcenter.ksql.default.advertised.url: http://ksql-external-dns:1234,http://ksql-external-dns:2345
Set custom properties on a specific host
You can configure a specific host with unique properties. Put the component properties block directly under the host.
In the example below, the broker.rack property is set to us-west-2a for
the host, ip-192-24-10-207.us-west.compute.internal.
kafka_broker:
hosts:
ip-192-24-10-207.us-west.compute.internal:
kafka_broker_custom_properties:
broker.rack: us-west-2a
Add Confluent license
To add a Confluent license key for Confluent Platform components, use a custom property for
each Confluent Platform component in the hosts.yml file as follows:
all:
vars:
kafka_broker_custom_properties:
confluent.license: <license-key>
kafka.rest.confluent.license.topic: "_confluent-command"
schema_registry_custom_properties:
confluent.license: <license-key>
kafka_connect_custom_properties:
confluent.license: <license-key>
control_center_next_gen_custom_properties:
confluent.license: <license-key>
kafka_rest_custom_properties:
confluent.license: <license-key>
ksql_custom_properties:
confluent.license: <license-key>
Note that Confluent Server (Kafka broker) contains Kafka REST Server, and this
component also requires a valid license configuration. Set the
kafka.rest.confluent.license.topic property to the _confluent-command
topic that stores the Confluent license.
To add a license to a connector, use the following configuration in the hosts.yml
file:
all:
vars:
kafka_connect_connectors:
- name: sample-connector
config:
confluent.license: <license-key>
The following example adds a license key for Kafka and Schema Registry. The example creates a variable for the license key and uses the variable in the custom properties.
vars:
confluent_license: asdfkjkadslkfjaslkdf
kafka_broker_custom_properties:
confluent.license: "{{ confluent_license }}"
kafka.rest.confluent.license.topic: "_confluent-command"
schema_registry_custom_properties:
confluent.license: "{{ confluent_license }}"
For additional license configuration parameters you can set with the above custom properties, see License Configurations for Confluent Platform.
Configure Control Center
To learn about Control Center, see Confluent Control Center Overview.
The Control Center variables are defined in the following file in GitHub, prefixed with
confluent_control_center_next_gen:
https://github.com/confluentinc/cp-ansible/blob/8.1.0-post/docs/VARIABLES.md
Override the default values for any of the variables you want to customize in the Control Center role section in the inventory file as below:
confluent_control_center_next_gen:
vars:
The following are a few of the commonly customized variables. For the variables related to authentication, see Configure Control Center.
- confluent_control_center_next_gen_package_version: Control Center does not follow the same versions as Confluent Platform. Check and update the confluent_control_center_next_gen_package_version variable in hosts.yml for the Control Center version you want to install.
- control_center_next_gen_dependency_config_path: Prometheus and Alert Manager configuration file path. The default path is /opt/confluent-control-center/dependencies. Control Center ships with a Prometheus version that has specific configurations to work with Control Center. Currently, you cannot use any other Prometheus version besides that particular version. The Prometheus bundled with Control Center cannot be used to ingest data into Grafana or any other tool in your environment.
- control_center_next_gen_user: Set this variable to customize the Linux user that the Control Center service runs as. The default user is cp-control-center.
- control_center_next_gen_custom_properties: Set custom Control Center properties.
- control_center_next_gen_dependency_prometheus_service_environment_overrides: Overrides the default environment variables for the Prometheus service. For example:

control_center_next_gen_dependency_prometheus_service_environment_overrides:
  CONFIG_PATH: "{{ control_center_next_gen_dep_prometheus.config_path }}"
  # The data path environment variable differs between Prometheus and Alert Manager
  TSDB_PATH: "{{ control_center_next_gen_dep_prometheus.data_path }}"
  LOG_PATH: "{{ control_center_next_gen_dep_prometheus.log_path }}"
  METRICS_RETENTION_DAYS: "30d"

- control_center_next_gen_dependency_alertmanager_service_environment_overrides: Overrides the default environment variables for the Alert Manager service.
After configuring Control Center, install Control Center as described in Install Confluent Control Center.
Configure Kafka and KRaft to start sending data to Control Center as described in Configure Kafka and KRaft for Control Center.
For a sample inventory file for Control Center, see Confluent Ansible GitHub repo.
Control Center authentication
Confluent Ansible supports the following authentication methods for Control Center:
Plain
Basic
Basic with TLS
mTLS
Control Center can authenticate to the Prometheus and Alert Manager servers with the
Basic, Basic with TLS, or mTLS authentication. Use the following variables to
enable Control Center authentication to Prometheus and Alert Manager. Set the variable
to true to enable a specific setting.
- control_center_next_gen_dependency_prometheus_ssl_enabled
- control_center_next_gen_dependency_prometheus_basic_auth_enabled
- control_center_next_gen_dependency_prometheus_mtls_enabled
- control_center_next_gen_dependency_alertmanager_ssl_enabled
- control_center_next_gen_dependency_alertmanager_basic_auth_enabled
- control_center_next_gen_dependency_alertmanager_mtls_enabled
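For example, the following sketch enables Basic authentication over TLS for both dependencies; the credential and certificate variables themselves are configured separately, as described in the authentication guide linked below:

all:
  vars:
    control_center_next_gen_dependency_prometheus_ssl_enabled: true
    control_center_next_gen_dependency_prometheus_basic_auth_enabled: true
    control_center_next_gen_dependency_alertmanager_ssl_enabled: true
    control_center_next_gen_dependency_alertmanager_basic_auth_enabled: true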
For authentication configuration details, see Configure Authentication for Confluent Platform with Ansible Playbooks.
Configure Kafka and KRaft for Control Center
Configure Kafka and KRaft to send data to the Prometheus service in Control Center.
The following are a few of the notable client-side Kafka and KRaft variables for Prometheus:
- kafka_broker_telemetry_control_center_next_gen_user
- kafka_broker_telemetry_control_center_next_gen_password
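For example, a sketch assuming Basic authentication is enabled on the Prometheus endpoint; the values shown are placeholders:

all:
  vars:
    kafka_broker_telemetry_control_center_next_gen_user: <prometheus-basic-auth-user>
    kafka_broker_telemetry_control_center_next_gen_password: <prometheus-basic-auth-password>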
TLS-enabled Control Center
With Confluent Platform 7.9.0, when SSL is enabled on the Prometheus endpoint of Control Center, you might get the following error log:
ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
java.lang.NoSuchMethodError:
io.confluent.shaded.org.asynchttpclient.DefaultAsyncHttpClientConfig$Builder.setSslContext(Lio/confluent/shaded/io/netty/handler/ssl/SslContext;)Lio/confluent/shaded/org/asynchttpclient/DefaultAsyncHttpClientConfig$Builder;
To solve the issue, use the following JVM-level overrides for Kafka and KRaft to send metrics to Control Center:
- kafka_broker_custom_java_args
- kafka_controller_custom_java_args
For example:
all:
vars:
kafka_broker_custom_java_args: "-Djavax.net.ssl.trustStore=/var/ssl/private/kafka_broker.truststore.jks -Djavax.net.ssl.trustStorePassword=confluenttruststorepass -Djavax.net.ssl.keyStore=/var/ssl/private/kafka_broker.keystore.jks -Djavax.net.ssl.keyStorePassword=confluentkeystorestorepass"
kafka_controller_custom_java_args: "-Djavax.net.ssl.trustStore=/var/ssl/private/kafka_controller.truststore.jks -Djavax.net.ssl.trustStorePassword=confluenttruststorepass -Djavax.net.ssl.keyStore=/var/ssl/private/kafka_controller.keystore.jks -Djavax.net.ssl.keyStorePassword=confluentkeystorestorepass"
Enable JMX Exporter
JMX Exporter is disabled by default. When enabled, the JMX Exporter jar is pulled and enabled on all Confluent Platform components except Confluent Control Center.
Enable JMX Exporter in hosts.yml as below:
all:
vars:
jmxexporter_enabled: true
For more information on how the JMX exporter works and how to monitor a Kafka cluster with the JMX data using Prometheus and Grafana, see Monitoring Your Event Streams: Integrating Confluent with Prometheus and Grafana.
Enable Jolokia
Jolokia monitoring is disabled by default for Confluent Platform components installed by Confluent Ansible. When enabled, the Jolokia jar is pulled and enabled on all Confluent Platform components.
Enable Jolokia in hosts.yml as shown below:
all:
vars:
jolokia_enabled: true
For more information, see Jolokia.
Jolokia access control
This section explains how to configure Jolokia access control variables for secure JMX monitoring in Confluent Platform deployments.
Jolokia access control variables
These variables control the security of the Jolokia JMX agent endpoint. Jolokia access control restricts JMX operations via an XML policy file, providing fine-grained control over which MBeans and operations are accessible.
The following are the key variables for Jolokia access control:
jolokia_access_control_enabled
Enables Jolokia's access control feature. When enabled, restricts JMX operations via an XML policy file. By default, this is enabled if Jolokia itself is enabled.
Type: boolean
Default: false

jolokia_access_control_custom_file_enabled
Must be set to true or false when access control is enabled. Controls whether to use a custom XML policy file or the built-in secure default.
- true: You must provide your own secure XML file via jolokia_access_control_file_src_path
- false: The playbooks use a secure default XML file provided by the role
Type: boolean
Default: false
Component-specific variables
Each component has its own set of Jolokia access control variables:
- zookeeper_jolokia_access_control_enabled
- zookeeper_jolokia_access_control_custom_file_enabled
- zookeeper_jolokia_access_control_file_src_path
- kafka_controller_jolokia_access_control_enabled
- kafka_controller_jolokia_access_control_custom_file_enabled
- kafka_controller_jolokia_access_control_file_src_path
- kafka_broker_jolokia_access_control_enabled
- kafka_broker_jolokia_access_control_custom_file_enabled
- kafka_broker_jolokia_access_control_file_src_path
- kafka_connect_jolokia_access_control_enabled
- kafka_connect_jolokia_access_control_custom_file_enabled
- kafka_connect_jolokia_access_control_file_src_path
- kafka_rest_jolokia_access_control_enabled
- kafka_rest_jolokia_access_control_custom_file_enabled
- kafka_rest_jolokia_access_control_file_src_path
- ksql_jolokia_access_control_enabled
- ksql_jolokia_access_control_custom_file_enabled
- ksql_access_control_file_src_path
- schema_registry_jolokia_access_control_enabled
- schema_registry_jolokia_access_control_custom_file_enabled
- schema_registry_jolokia_access_control_file_src_path
- kafka_connect_replicator_jolokia_access_control_enabled
- kafka_connect_replicator_jolokia_access_control_custom_file_enabled
- kafka_connect_replicator_jolokia_access_control_file_src_path
Usage examples
Example 1: Enable access control with default secure configuration
all:
vars:
jolokia_enabled: true
jolokia_access_control_enabled: true
jolokia_access_control_custom_file_enabled: false
Example 2: Enable access control with custom XML policy file
all:
vars:
jolokia_enabled: true
jolokia_access_control_enabled: true
jolokia_access_control_custom_file_enabled: true
kafka_broker_jolokia_access_control_file_src_path: "/path/to/your/jolokia-access.xml"
kafka_connect_replicator_jolokia_access_control_file_src_path: "/path/to/your/jolokia-access.xml"
schema_registry_jolokia_access_control_file_src_path: "/path/to/your/jolokia-access.xml"
ksql_access_control_file_src_path: "/path/to/your/jolokia-access.xml"
kafka_rest_jolokia_access_control_file_src_path: "/path/to/your/jolokia-access.xml"
kafka_connect_jolokia_access_control_file_src_path: "/path/to/your/jolokia-access.xml"
kafka_controller_jolokia_access_control_file_src_path: "/path/to/your/jolokia-access.xml"
zookeeper_jolokia_access_control_file_src_path: "/path/to/your/jolokia-access.xml"
Example 3: Component-specific configuration
all:
vars:
jolokia_enabled: true
kafka_controller_jolokia_access_control_custom_file_enabled: false  # Uses the default policy when custom_file_enabled is set to false for a component
kafka_broker:
vars:
# Override for brokers only - use custom policy
kafka_broker_jolokia_access_control_custom_file_enabled: true
kafka_broker_jolokia_access_control_file_src_path: "/path/to/broker-jolokia-access.xml"
Example 4: Production-ready configuration with explicit choices
all:
vars:
jolokia_enabled: true
jolokia_access_control_enabled: true
# Force explicit choice in production
jolokia_access_control_custom_file_enabled: false
# This will require each component to explicitly set their custom_file_enabled value
Security considerations
Always enable access control in production environments.
Use custom XML policy files for fine-grained control.
Regularly review and update access control policies.
Test access control policies in development before production deployment.
Monitor Jolokia logs for unauthorized access attempts.
Sample custom access control XML
Create a file, for example, /path/to/jolokia-access.xml, with content similar to the following:
<?xml version="1.0" encoding="utf-8"?>
<restrict>
<!-- Allow access to specific MBeans only -->
<allow>
<mbean>kafka.controller:type=KafkaController,name=*</mbean>
<operation>read</operation>
</allow>
<!-- Deny all other access -->
<deny>
<mbean>*:*</mbean>
</deny>
</restrict>
For more information on Jolokia access control XML format, see the Jolokia Security documentation.
You can refer to the policy files in roles/{component_name}/templates/jolokia_access_control_default.xml.
Access control policy requirement for KRaft controllers
When creating custom access control policies for KRaft, ensure that your policy explicitly allows access to the required MBean. This is necessary because Ansible relies on this MBean during specific operational tasks. Without explicit permission in your policy, validation will fail.
Include the following allow block in your custom policy to grant the necessary access:
<!-- ALLOW ONLY - Specific MBean for ZK to KRaft Migration -->
<allow>
<mbean>kafka.controller:name=ZkMigrationState,type=KafkaController</mbean>
<attribute mode="read">*</attribute>
</allow>
Failure to include this allowance will result in unsuccessful Ansible validations and may disrupt automation workflows.
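For reference, a minimal sketch of a complete controller policy that combines this allow block with the deny-all pattern from the sample above:

<?xml version="1.0" encoding="utf-8"?>
<restrict>
  <!-- ALLOW ONLY - Specific MBean for ZK to KRaft Migration -->
  <allow>
    <mbean>kafka.controller:name=ZkMigrationState,type=KafkaController</mbean>
    <attribute mode="read">*</attribute>
  </allow>
  <!-- Deny all other access -->
  <deny>
    <mbean>*:*</mbean>
  </deny>
</restrict>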
Configure Log4j 2
Starting in Confluent Ansible 8.0 and Confluent Platform 8.0, Log4j 2 is supported and enabled by default.
Setting custom Log4j 2 configurations is enabled by default. To disable custom
Log4j 2 configurations, set custom_log4j2: false in the inventory file.
To enable or disable custom Log4j 2 configuration at the component level, use
the variable <component>_custom_log4j2.
- When set to false, Confluent Ansible uses the default Log4j 2 configurations for that specific component.
- When set to true, you can set the component-level <component>_log4j2_root_logger_level and <component>_log4j2_root_appenders.
To see the default variable settings for Kafka, see Kafka broker variables.
Log redactor is supported with Log4j 2.
By default, Confluent Ansible sets up Log4j 2 with the default values mentioned in the VARIABLES.md.
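For example, a minimal sketch that raises the root logger level for the Kafka broker, following the variable pattern above; DEBUG is only an illustrative level:

all:
  vars:
    custom_log4j2: true
    kafka_broker_custom_log4j2: true
    kafka_broker_log4j2_root_logger_level: DEBUG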
Deploy Confluent Server or Kafka
Confluent Server is the default version deployed with Confluent Platform. To install Kafka instead, set the
following property in the hosts.yml file.
all:
vars:
confluent_server_enabled: false
Configure Schema Validation
You can configure Schema ID Validation in your Kafka brokers when running Confluent Server. Set
the following properties in the hosts.yml file.
all:
vars:
confluent_server_enabled: true
kafka_broker_schema_validation_enabled: true
Copy files to hosts
To have Ansible copy files to your hosts, place the files on the Ansible control node and set the following variables:
all:
vars:
kafka_controller_copy_files:
- source_path: /path/to/file.txt
destination_path: /tmp/file.txt
kafka_broker_copy_files:
- source_path: /path/to/file.txt
destination_path: /tmp/file.txt
kafka_rest_copy_files:
- source_path: /path/to/file.txt
destination_path: /tmp/file.txt
kafka_connect_copy_files:
- source_path: /path/to/file.txt
destination_path: /tmp/file.txt
schema_registry_copy_files:
- source_path: /path/to/file.txt
destination_path: /tmp/file.txt
ksql_copy_files:
- source_path: /path/to/file.txt
destination_path: /tmp/file.txt
control_center_next_gen_copy_files:
- source_path: /path/to/file.txt
destination_path: /tmp/file.txt
The files in each list are copied to every host in the corresponding group; for example, a file listed under kafka_broker_copy_files is distributed to all Kafka broker hosts.
Specify the Java package version
Confluent Ansible provides an option for you to use pre-installed Java or to instruct Confluent Ansible which Java package to install.
To use pre-existing Java, add the following in the inventory file:

custom_java_path
A full pre-existing Java path on the nodes. Confluent Ansible will use the provided path and will skip installing Java as part of the execution.
Default: None
To specify a Java package to install, add one of the following in the inventory file:
redhat_java_package_name
Java package to install on RHEL/CentOS hosts.
Possible values: java-17-openjdk, java-21-openjdk
Default: java-21-openjdk

debian_java_package_name
Java package to install on Debian hosts.
Possible values: openjdk-11-jdk, openjdk-8-jdk, openjdk-17-jdk
Default: openjdk-17-jdk

ubuntu_java_package_name
Java package to install on Ubuntu hosts.
Possible values: openjdk-8-jdk, openjdk-11-jdk, openjdk-17-jdk
Default: openjdk-17-jdk
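For example, to install Java 17 on RHEL hosts instead of the default:

all:
  vars:
    redhat_java_package_name: java-17-openjdk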
Add custom Java arguments
To have Ansible add custom Java arguments to each component’s Java process, use the following variables in the inventory file:
all:
vars:
kafka_controller_custom_java_args:
kafka_broker_custom_java_args:
kafka_rest_custom_java_args:
kafka_connect_custom_java_args:
schema_registry_custom_java_args:
ksql_custom_java_args:
control_center_next_gen_custom_java_args:
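For example, the following sketch sets standard JVM heap and garbage collection flags on the broker; the values are illustrative, not recommendations:

all:
  vars:
    kafka_broker_custom_java_args: "-Xms6g -Xmx6g -XX:+UseG1GC"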
Set environment variables
To have Ansible set the required environment variables for Confluent Platform component
processes, for example, KAFKA_OPTS, use the following dictionary variables
in the inventory file. Refer to the specific component documentation for the
required environment variables.
all:
vars:
kafka_controller_service_environment_overrides:
kafka_broker_service_environment_overrides:
kafka_rest_service_environment_overrides:
kafka_connect_service_environment_overrides:
kafka_connect_replicator_service_environment_overrides:
schema_registry_service_environment_overrides:
ksql_service_environment_overrides:
control_center_next_gen_service_environment_overrides:
For example, the following snippet sets the KAFKA_JMX_OPTS environment
variable in the Kafka broker service:
all:
vars:
kafka_broker_service_environment_overrides:
KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false"
Configure listeners
Ansible Playbooks for Confluent Platform configures the following listeners on the broker:
An inter-broker listener on port 9091
A listener for the other Confluent Platform components and external clients on 9092
By default, both of these listeners inherit the security settings you configure
for ssl_enabled (encryption) and sasl_protocol
(authentication).
If you only need a single listener, add the following variable in the
hosts.yml file.
all:
vars:
kafka_broker_configure_multiple_listeners: false
You can customize the out-of-the-box listeners by adding the
kafka_broker_custom_listeners variable in the hosts.yml file.
In the example below, the broker, internal, and client listeners all have unique
security settings. You can configure multiple additional client listeners, but
do not change the dictionary key for the broker and internal listeners,
broker and internal.
all:
vars:
kafka_broker_custom_listeners:
broker:
name: BROKER
port: 9091
ssl_enabled: false
sasl_protocol: none
internal:
name: INTERNAL
port: 9092
ssl_enabled: true
sasl_protocol: scram
client_listener:
name: CLIENT
port: 9093
ssl_enabled: true
sasl_protocol: plain
Add advertised listener hostnames
When you have a complex networking setup with multiple network interfaces, you need to set up advertised listeners with the external address (host/IP) that clients can correctly connect to.
To configure advertised listener hostnames on a specific listener, create an advertised listener ([1]) and set the variables on specific hosts ([2] and [3]) as shown in the following example:
all:
vars:
kafka_broker_custom_listeners:
client_listener: -------------------------- [1]
name: CLIENT
port: 9093
kafka_broker:
hosts:
ip-172-31-43-14.us-west-2.compute.internal:
kafka_broker_custom_listeners:
client_listener: -------------------------- [1]
hostname: ec2-34-209-19-18.us-west-2.compute.amazonaws.com --- [2]
ip-172-31-43-15.us-west-2.compute.internal:
kafka_broker_custom_listeners: -------------------------- [1]
client_listener:
hostname: ec2-34-209-19-19.us-west-2.compute.amazonaws.com --- [3]
The above example sets the AWS external DNS hostnames ([2] and [3]) on the advertised listener ([1]) so that clients connect over the external interface.
Configure secrets protection
Confluent Platform secrets allow you to securely store and manage sensitive information.
Secrets protection works on Confluent Platform components, namely Confluent Server, Schema Registry, Connect, ksqlDB, REST Proxy, and Control Center.
Secrets protection is not supported for the community version of Kafka.
To use secrets protection on your component property files with Ansible Playbooks for Confluent Platform, set the following variable in your inventory file.
all:
vars:
secrets_protection_enabled: true
When secrets_protection_enabled is set to true, Ansible generates your
master key and encrypts all properties containing password across all Confluent Platform
components.
To have Ansible use your own master key and base secrets file that you generated ahead of time, add:
all:
  vars:
    secrets_protection_enabled: true
    secrets_protection_masterkey: <masterkey>
    secrets_protection_security_file: <base secret file path>

For example:

all:
  vars:
    secrets_protection_enabled: true
    secrets_protection_masterkey: "UWQYODNQVqwbQeFgytYYoMr+FjK9Q6I0F6r16u6Y0EI="
    secrets_protection_security_file: "/tmp/security.properties"

To have more granular control over which properties get masked, use the <component>_secrets_protection_encrypt_passwords and <component>_secrets_protection_encrypt_properties variables.

- If <component>_secrets_protection_encrypt_passwords is set to false, properties containing password no longer get masked.
- Set <component>_secrets_protection_encrypt_properties to a list of properties to encrypt.

For example, to mask only the Kafka properties advertised.listeners and broker.id, set:

all:
  vars:
    secrets_protection_enabled: true
    kafka_broker_secrets_protection_encrypt_passwords: false
    kafka_broker_secrets_protection_encrypt_properties: [advertised.listeners, broker.id]