Advanced Confluent Platform Configurations with Ansible Playbooks
This section provides information about various deployment configurations for Confluent Platform using Ansible.
Configure Tiered Storage with GCS buckets
Use the Custom Properties and the Copy Files features of Ansible Playbooks for Confluent Platform to configure Tiered Storage with GCS.
Place the GCP credentials JSON file on the Ansible control node.
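The credentials file is a standard service account key in JSON format. A redacted sketch of its shape, with hypothetical values and most fields omitted (a real key also contains fields such as client_id and token_uri); the service account needs permission to read and write objects in the target bucket:

{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "a5f9c87c81ae...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "tiered-storage@my-gcp-project.iam.gserviceaccount.com"
}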
Set the following variables in the hosts.yml file:

all:
  vars:
    kafka_broker_copy_files:
      - source_path: /tmp/gcloud-a5f9c87c81ae.json
        destination_path: /etc/security/google/creds.json
    kafka_broker_custom_properties:
      confluent.tier.feature: "true"
      confluent.tier.enable: "true"
      confluent.tier.backend: GCS
      confluent.tier.gcs.bucket: bucket-name
      confluent.tier.gcs.region: us-west2
      confluent.tier.gcs.cred.file.path: /etc/security/google/creds.json

The credential file destination path must match the confluent.tier.gcs.cred.file.path custom property.
Configure Tiered Storage with S3 buckets
Use the Custom Properties and the Copy Files features of Ansible Playbooks for Confluent Platform to configure Tiered Storage with S3.
Place the AWS credentials file on the Ansible control node.
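A minimal sketch of the credential file contents, assuming the AWS SDK properties-file format (accessKey/secretKey) and using AWS's canonical placeholder values; verify the exact format your release expects against the Tiered Storage documentation:

accessKey=AKIAIOSFODNN7EXAMPLE
secretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY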
Set the following variables in the hosts.yml file:

all:
  vars:
    kafka_broker_copy_files:
      - source_path: /tmp/credentials
        destination_path: /etc/security/aws/credentials
    kafka_broker_custom_properties:
      confluent.tier.feature: "true"
      confluent.tier.enable: "true"
      confluent.tier.backend: S3
      confluent.tier.s3.bucket: bucket-name
      confluent.tier.s3.region: us-west-2
      confluent.tier.s3.cred.file.path: /etc/security/aws/credentials

The credential file destination path must match the confluent.tier.s3.cred.file.path custom property.
Deploy Confluent Platform across multiple regions
To configure multi-region clusters, set the following properties in the hosts.yml inventory file:

- replica.selector.class: set on all brokers
- broker.rack: set uniquely on each host
For example:
kafka_broker:
  vars:
    kafka_broker_custom_properties:
      replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector
  hosts:
    ip-192-24-10-207.us-west.compute.internal:
      broker_id: 1
      kafka_broker_custom_properties:
        broker.rack: us-west-2a
    ip-192-24-5-30.us-west.compute.internal:
      broker_id: 2
      kafka_broker_custom_properties:
        broker.rack: us-west-2b
    ip-192-24-10-0.us-west.compute.internal:
      broker_id: 3
      kafka_broker_custom_properties:
        broker.rack: us-west-2a
You can set kafka_broker_custom_properties at the kafka_broker group level as well as on individual hosts: group-level entries, such as replica.selector.class above, apply to every broker, while host-level entries, such as broker.rack, apply only to that host.
Configure multiple ksqlDB clusters
To configure multiple ksqlDB clusters, create a new group for each cluster and set the groups as children of the ksql group.
These child groups cannot themselves be named ksql.
The names of these groups determine how each cluster is named in Control Center (Legacy).
Each ksqlDB cluster needs a unique value for the ksql_service_id property. By
convention, the service ID should end with an underscore; ksqlDB uses the
service ID as a prefix for the cluster's internal topic names.
For example:
ksql:
  children:
    ksql1:
    ksql2:

ksql1:
  vars:
    ksql_service_id: ksql1_
  hosts:
    ip-172-31-34-15.us-east-2.compute.internal:
    ip-172-31-37-16.us-east-2.compute.internal:

ksql2:
  vars:
    ksql_service_id: ksql2_
  hosts:
    ip-172-31-34-17.us-east-2.compute.internal:
    ip-172-31-37-18.us-east-2.compute.internal:
To configure Control Center (Legacy) for multiple ksqlDB clusters, set the
ksql_cluster_ansible_group_names property to a list of all ksqlDB child
groups.
For example:
control_center:
  vars:
    ksql_cluster_ansible_group_names:
      - ksql1
      - ksql2
  hosts:
    ip-172-31-37-15.us-east-2.compute.internal:
Configure multiple Connect clusters
To configure multiple Connect clusters, create a new group for each cluster
and set the groups as children of the kafka_connect group.
These child groups cannot themselves be named kafka_connect.
Each Connect cluster needs a unique value for the kafka_connect_group_id
property. The value of kafka_connect_group_id becomes the name of the
Connect cluster within Control Center (Legacy).
For example:
kafka_connect:
  children:
    syslog:
    elastic:

syslog:
  vars:
    kafka_connect_group_id: connect_syslog
  hosts:
    ip-172-31-34-246.us-east-2.compute.internal:

elastic:
  vars:
    kafka_connect_group_id: connect-elastic
  hosts:
    ip-172-31-34-247.us-east-2.compute.internal:
To configure Control Center (Legacy) for multiple Connect clusters, set the
kafka_connect_cluster_ansible_group_names property to a list of all
kafka_connect child groups.
For example:
control_center:
  vars:
    kafka_connect_cluster_ansible_group_names:
      - syslog
      - elastic
  hosts:
    ip-172-31-37-15.us-east-2.compute.internal:
Connect to Confluent Cloud
You can use Ansible Playbooks for Confluent Platform to configure and deploy on-premises Confluent Platform to connect to Kafka and Schema Registry running in Confluent Cloud.
See the sample inventory file at the following location for the required configuration settings:
https://github.com/confluentinc/cp-ansible/blob/6.0.10-post/sample_inventories/ccloud.yml
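For orientation, a minimal sketch of the pattern that sample follows, with placeholder endpoints and credentials; the variable names (such as ccloud_kafka_enabled) come from that sample and may differ between cp-ansible versions, so treat the linked file as authoritative:

all:
  vars:
    ccloud_kafka_enabled: true
    ccloud_kafka_bootstrap_servers: pkc-xxxxx.us-west-2.aws.confluent.cloud:9092
    ccloud_kafka_key: <kafka-api-key>
    ccloud_kafka_secret: <kafka-api-secret>
    ccloud_schema_registry_enabled: true
    ccloud_schema_registry_url: https://psrc-xxxxx.us-east-2.aws.confluent.cloud
    ccloud_schema_registry_key: <sr-api-key>
    ccloud_schema_registry_secret: <sr-api-secret>

kafka_connect:
  hosts:
    ip-172-31-34-246.us-east-2.compute.internal:

With these variables set, the on-premises components you deploy (such as Connect) use the Confluent Cloud cluster as their Kafka and Schema Registry backends instead of locally deployed brokers.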