Configure Centralized Audit Logs for Confluent Platform with Ansible Playbooks

You can use Ansible Playbooks for Confluent Platform to configure centralized audit logs in Confluent Platform.

Requirements

  • Deploy Confluent Platform in a centralized RBAC architecture and designate one Kafka cluster as the audit logs destination cluster.
  • Register all Kafka clusters in the cluster registry.
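Cluster registry registration can be done with the Confluent CLI. The following is a minimal sketch; the MDS URL, cluster name, cluster ID, hosts, and protocol are placeholder values that you must replace with your own:

    # Log in to MDS as a super user (placeholder URL)
    confluent login --url https://mds-host:8090

    # Register the cluster under a human-readable name (all values are examples)
    confluent cluster register \
      --cluster-name mycluster \
      --kafka-cluster-id <kafka-cluster-id> \
      --hosts kafka-broker1:9093 \
      --protocol SASL_SSL

Alternatively, setting kafka_broker_cluster_name in a cluster's inventory file registers that cluster under the given name when the playbooks run.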

The following is the general workflow for configuring centralized audit logs.

Configure Kafka clusters to send audit logs

For each cluster that sends audit logs to the destination Kafka cluster, add the following variables to the inventory file:

audit_logs_destination_enabled: true
audit_logs_destination_bootstrap_servers:
audit_logs_destination_listener:
  ssl_enabled:
  ssl_mutual_auth_enabled:
  sasl_protocol:

kafka_broker_cluster_name:

For example:

audit_logs_destination_enabled: true
audit_logs_destination_bootstrap_servers: kafka-broker1:9093,kafka-broker2:9093,kafka-broker3:9093
audit_logs_destination_listener:
  ssl_enabled: false
  ssl_mutual_auth_enabled: false
  sasl_protocol: kerberos

kafka_broker_cluster_name: mycluster

You can set the above variables and install a cluster before the destination cluster is provisioned, for example, when installing MDS.
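For reference, these variables belong at the all group level of the playbook inventory. The following is a minimal sketch of a source cluster's inventory; the host names, ports, and cluster name are illustrative:

    all:
      vars:
        audit_logs_destination_enabled: true
        audit_logs_destination_bootstrap_servers: kafka-broker1:9093,kafka-broker2:9093,kafka-broker3:9093
        audit_logs_destination_listener:
          ssl_enabled: false
          ssl_mutual_auth_enabled: false
          sasl_protocol: kerberos
        kafka_broker_cluster_name: mycluster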

Register destination Kafka cluster

No special variables are required to make a Kafka cluster the audit logs destination cluster. However, it is recommended that you register the destination cluster in the MDS cluster registry by setting the following variable in the destination cluster's inventory file.

confluent-audit-logs is used as an example cluster name in this section.

kafka_broker_cluster_name: confluent-audit-logs

Create role bindings for audit logs

Add the role bindings required to authorize audit log messages on the destination cluster.

  1. Run the following Ansible ad-hoc command to find the principals used for the connection:

    ansible -i <audit-logs-source-cluster-inventory> kafka_broker \
      -m import_role -a "name=confluent.kafka_broker tasks_from=set_principal.yml" \
      -e listener="{{audit_logs_destination_listener}}"
    

    An example output is:

    kafka-broker2 | SUCCESS => {
        "kafka_broker_principal": "User:CN=kafka_broker,OU=QE IT,O=CONFLUENT,L=PaloAlto,ST=Ca,C=US"
    }
    
  2. Log in to MDS and create the following role bindings for the principal returned in the output of the previous step.

    Note that principal IDs and MDS user IDs are case-sensitive; you must match the case exactly when specifying these values.

    # Log in as MDS super user
    confluent login --url <mds-kafka>:8090
    
    # For all Kafka clusters, run:
    confluent iam rbac role-binding create \
       --principal User:CN=kafka_broker,OU=QE IT,O=CONFLUENT,L=PaloAlto,ST=Ca,C=US \
       --role DeveloperWrite \
       --resource Topic:confluent-audit-log-events \
       --prefix --cluster-name confluent-audit-logs
    
    # For MDS, run:
    confluent iam rbac role-binding create \
       --principal User:CN=kafka_broker,OU=QE IT,O=CONFLUENT,L=PaloAlto,ST=Ca,C=US  \
       --role ResourceOwner \
       --resource Topic:confluent-audit-log-events \
       --prefix --cluster-name confluent-audit-logs
    

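After the role bindings are created and the source clusters are provisioned, audit events should begin arriving on the destination cluster. As a quick sanity check (the broker host and client configuration file below are illustrative, and the client principal must be authorized to read the topic), you can consume from the audit log topic on the destination cluster:

    kafka-console-consumer --bootstrap-server kafka-broker1:9093 \
      --consumer.config client.properties \
      --topic confluent-audit-log-events \
      --from-beginning --max-messages 1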
Configure audit logs

To complete the audit log setup, refer to Configuring Audit Logs Using the CLI.