Configure MDS to Manage Centralized Audit Logs¶
You can use centralized audit logging to dynamically update an audit log configuration. Changes made through the Confluent CLI are pushed from the Metadata Service (MDS) out to all registered clusters, enabling centralized management of the audit log configuration and ensuring that all registered clusters publish their audit log events to the same destination Kafka cluster.
The MDS uses an admin client to connect to the destination Kafka cluster and inspect, create, and alter destination topics in response to certain API requests.
Before you can use centralized audit logging, you must configure one of your Kafka clusters to run the metadata service (MDS), which provides API endpoints to register a list of the Kafka clusters in your organization and to centrally manage the audit log configurations of those clusters. This audit log configuration API pushes out to all registered clusters the rules governing which events are captured and where they are sent. It also creates missing destination topics, and keeps the retention time policies of the destination topics in sync with the audit log configuration policy.
Until configured otherwise, the MDS assumes that it is a lone cluster: it uses its own internal admin client to configure destination topics on itself, and leaves the bootstrap servers unspecified so that audit log destination topics are created on the same cluster. Because this default behavior requires less configuration, it is useful for initial setup in a development environment.
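For illustration, this standalone default corresponds roughly to the following minimal sketch of the MDS cluster's server.properties. This is an assumption-laden sketch, not a complete configuration; it only makes the default behavior described above concrete:

```properties
# Development sketch: audit logging enabled, no destination bootstrap servers,
# so the MDS writes audit log events to topics on this same cluster.
confluent.security.event.logger.enable=true
# Leaving confluent.security.event.logger.exporter.kafka.bootstrap.servers
# unset keeps the destination topics local to this cluster.
```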
In a production setting, all of your Kafka clusters should publish their audit
logs to a single, central destination cluster. The configuration file for
each managed cluster must include the destination cluster’s connection and
credential information, but should disable auto-creation of destination
topics and leave confluent.security.event.router.config
unspecified.
The following sections explain how to configure Kafka clusters and the MDS to manage centralized audit logs.
Prerequisites¶
Migrate legacy audit log configurations from all of your Kafka clusters into a combined JSON policy.
Important
You must satisfy this prerequisite before registering your Kafka clusters.
Register all of your Kafka clusters, including the MDS cluster, in the Cluster Registry in Confluent Platform.
Note
MDS cluster registration does not occur by default. You must explicitly register the MDS cluster in the cluster registry before registering other clusters.
Configure all of your registered clusters to use the same MDS for RBAC.
The MDS cluster uses the admin client to communicate with registered clusters (managed clusters). Ensure that the MDS admin client can connect to all of your registered clusters by having them expose an authentication token listener (for example, listener.name.external.sasl.enabled.mechanisms=OAUTHBEARER), and registering that listener’s port in the cluster registry. When using SASL_SSL, only use TLS keys that are verifiable by certificates in your client trust stores.
Configure a cluster to receive the audit logs. Set up an audit log writer user (with a name like auditlogwriter) on that cluster with the ability to write to the destination topics. For example, grant the DeveloperWrite role on the topic prefix confluent-audit-log-events. For details, refer to Configure the audit log writer to the destination cluster.
Grant the AuditAdmin role on all of your Kafka clusters to users or groups who will be managing the audit log configuration.
Note
The recommended way to determine which permissions to grant is for the audit log administrator to run any of the Confluent CLI confluent audit-log commands. If permissions are missing, the error message returns a list of recommended role bindings to grant to the user:
confluent login --url "http://mds.example.com:8090"   # authenticate as user "alice"
confluent audit-log config describe
Error: 403 Forbidden
User:alice not permitted to DescribeConfigs on one or more clusters.
Fix it:
confluent iam rbac role-binding create --role AuditAdmin --principal User:alice --kafka-cluster DBS26_qTQ-mT23p5opUK_g
confluent iam rbac role-binding create --role AuditAdmin --principal User:alice --kafka-cluster prz9a_-xqqlRgmekDoLw4U
Required configuration updates¶
To use centralized audit logging, you must:
- Configure the MDS cluster
- Configure each registered cluster
- Configure the audit log destination cluster
Configure the MDS cluster¶
You must make the following configuration updates for the MDS cluster:
- Propagate the audit log configuration to the managed clusters
- Configure audit log topic management on the destination cluster
- Configure the audit log writer to the destination cluster
Propagate the audit log configuration to the managed clusters¶
MDS propagates audit log configuration updates to the managed clusters. For MDS to do this, every managed cluster must have its OAUTHBEARER endpoint registered with cluster registry (refer to Configure listener connections from the MDS cluster).
Assuming the managed clusters’ listeners are configured for SASL_SSL, you must
configure TLS client settings in the MDS cluster’s /etc/kafka/server.properties
file. For example:
# Configuration used when propagating changes to managed clusters with SASL_SSL endpoints
ssl.confluent.metadata.server.truststore.location=<path-to-truststore.jks>
ssl.confluent.metadata.server.truststore.password=<trust-store-password>
If you haven’t already, you must grant the AuditAdmin
role on all of your Kafka clusters to
users or groups who will be managing the audit log configuration.
# Repeat for each Kafka cluster
confluent iam rbac role-binding create --role AuditAdmin --principal User:<audit-admin-user> --kafka-cluster <managed-kafka-cluster-id>
# Alternatively, repeat for each Kafka cluster, but specify the role binding for the group
confluent iam rbac role-binding create --role AuditAdmin --principal Group:<audit-admin-group> --kafka-cluster <managed-kafka-cluster-id>
Configure audit log topic management on the destination cluster¶
MDS manages the audit log topics on the destination cluster, creating missing topics, and keeping the retention time policies of those topics in sync with the audit log configuration policy. For MDS to do this, you must configure the admin client used by MDS to connect to the destination cluster.
Use the confluent.security.event.logger.destination.admin.
prefix when configuring the admin client in the MDS cluster’s server.properties
file. Other than the prefix requirement, this configuration is similar to other
admin client configurations. This connection must be consistent with the
producer configuration on this and all of the managed clusters. For details
about the properties specified here, refer to Kafka AdminClient Configurations for Confluent Platform and
Kafka AdminClient.
SASL_SSL Configuration
confluent.security.event.logger.destination.admin.bootstrap.servers=<logs1.example.com:9092,logs2.example.com:9092>
confluent.security.event.logger.destination.admin.security.protocol=SASL_SSL
confluent.security.event.logger.destination.admin.sasl.mechanism=PLAIN
confluent.security.event.logger.destination.admin.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="<audit-log-admin-username>" \
password="<audit-log-admin-password>";
confluent.security.event.logger.destination.admin.ssl.truststore.location=<path-to-truststore.jks>
confluent.security.event.logger.destination.admin.ssl.truststore.password=<truststore-password>
Note
The recommended security protocol for the admin client configuration is
SASL_SSL. If the destination cluster uses a different security protocol
(for example, SASL_PLAINTEXT), then the value of
confluent.security.event.logger.destination.admin.security.protocol
should reflect that. In the case of SASL_PLAINTEXT, you would not need to
specify the TLS trust store properties shown above.
The servers you specify in confluent.security.event.logger.destination.admin.bootstrap.servers
are used only as a fallback; once the audit log configuration JSON object stored in MDS
specifies its destination bootstrap_servers, that value overrides this setting.
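For example, once a stored policy like the following trimmed sketch (the same structure shown in the troubleshooting section of this page; host names are placeholders) specifies bootstrap_servers, that list takes precedence over the admin client property:

```json
{
  "destinations": {
    "bootstrap_servers": [
      "logs1.example.com:9092",
      "logs2.example.com:9092"
    ],
    "topics": {
      "confluent-audit-log-events": { "retention_ms": 7776000000 }
    }
  }
}
```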
Configure the audit log writer to the destination cluster¶
Because MDS also generates audit log events, you must configure the producer to connect to the audit log destination cluster.
# Configure the producer from the MDS cluster to export its own audit log events
confluent.security.event.logger.enable=true
confluent.security.event.logger.exporter.kafka.topic.create=false
confluent.security.event.logger.exporter.kafka.bootstrap.servers=<logs1.example.com:9092,logs2.example.com:9092>
confluent.security.event.logger.exporter.kafka.security.protocol=SASL_SSL
confluent.security.event.logger.exporter.kafka.sasl.mechanism=PLAIN
confluent.security.event.logger.exporter.kafka.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="<audit-log-writer-username>" \
password="<audit-log-writer-password>";
confluent.security.event.logger.exporter.kafka.ssl.truststore.location=<path-to-truststore.jks>
confluent.security.event.logger.exporter.kafka.ssl.truststore.password=<truststore-password>
Note
The user specified in confluent.security.event.logger.exporter.kafka.sasl.jaas.config
must be a principal who has the DeveloperWrite
role on all topics with the prefix
confluent-audit-log-events
on the destination cluster. However, it is not necessary
for all of the registered clusters to use the same principal specified here.
If required, they can each use a different principal, but you must
grant all of those principals the appropriate permissions.
The centralized configuration feature is responsible for creating and updating
the destination topics, so we recommend that you turn off the auto-create
mechanism by setting confluent.security.event.logger.exporter.kafka.topic.create=false
on all Kafka clusters (including the MDS cluster).
The servers you specify in confluent.security.event.logger.exporter.kafka.bootstrap.servers
are used only as a fallback; once the audit log configuration JSON object stored in MDS
specifies its destination bootstrap_servers, that value overrides this setting.
Configure each registered cluster¶
You must make the following configuration updates for each registered cluster:
- Ensure that each managed cluster is set up in Cluster Registry in Confluent Platform.
- Configure listener connections from the MDS cluster
- Configure the producer to publish to the audit log destination cluster
Configure listener connections from the MDS cluster¶
You must configure listeners on each managed registered cluster to use the SASL_SSL (or SASL_PLAINTEXT)
authentication protocol with OAUTHBEARER specified in sasl.enabled.mechanisms.
For details about this configuration, refer to Configure Confluent Server brokers.
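Under those assumptions, a managed cluster’s server.properties might expose the token listener as in the following sketch. The listener names, ports, and file paths are illustrative only, and the OAUTHBEARER mechanism typically requires additional token-validation settings covered in Configure Confluent Server brokers:

```properties
# Hypothetical listener layout on a managed cluster (names/ports are examples)
listeners=INTERNAL://:9092,EXTERNAL://:9093
listener.security.protocol.map=INTERNAL:SASL_SSL,EXTERNAL:SASL_SSL
# Token listener that the MDS admin client connects to; register this
# listener's port in the cluster registry.
listener.name.external.sasl.enabled.mechanisms=OAUTHBEARER
# TLS keys must be verifiable by certificates in the MDS trust store.
listener.name.external.ssl.keystore.location=<path-to-keystore.jks>
listener.name.external.ssl.keystore.password=<key-store-password>
```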
Configure the producer to publish to the audit log destination cluster¶
You must configure the producer for each registered cluster to connect and publish
to the audit log destination cluster for centralized audit logging. Specify the
following SASL_SSL configuration in the server.properties
file:
# Publish to the audit log destination cluster
confluent.security.event.logger.enable=true
confluent.security.event.logger.exporter.kafka.topic.create=false
confluent.security.event.logger.exporter.kafka.bootstrap.servers=<logs1.example.com:9092,logs2.example.com:9092>
confluent.security.event.logger.exporter.kafka.security.protocol=SASL_SSL
confluent.security.event.logger.exporter.kafka.sasl.mechanism=PLAIN
confluent.security.event.logger.exporter.kafka.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="<audit-log-writer-username>" \
password="<audit-log-writer-password>";
confluent.security.event.logger.exporter.kafka.ssl.truststore.location=<path-to-truststore.jks>
confluent.security.event.logger.exporter.kafka.ssl.truststore.password=<truststore-password>
Configure the audit log destination cluster¶
You must make the following configuration updates for the audit log destination cluster:
- Grant user permissions for audit log topics
- Configure listener connections from the MDS and managed registered clusters
- Configure the default properties for topics on the audit log destination cluster
Grant user permissions for audit log topics¶
To configure the audit log destination cluster for centralized audit logging you must grant the following user permissions:
Specify a principal or principals who can write logs to the destination cluster. The principal should be the same one that appears in the confluent.security.event.logger.exporter.kafka.sasl.jaas.config property of the server.properties file of each Kafka cluster.
confluent iam rbac role-binding create --principal User:<audit-log-writer> --role DeveloperWrite --resource Topic:confluent-audit-log-events --prefix --kafka-cluster <kafka-destination-cluster-id>
Specify a principal who can administer audit log topics on the destination cluster. The principal should be the same one that appears in the confluent.security.event.logger.destination.admin.sasl.jaas.config property of the server.properties file of the MDS cluster.
confluent iam rbac role-binding create --principal User:<audit-log-admin> --role ResourceOwner --resource Topic:confluent-audit-log-events --prefix --kafka-cluster <kafka-destination-cluster-id>
Configure listener connections from the MDS and managed registered clusters¶
The destination server must have a listener that allows connections from all of the clients:
listeners=INTERNAL://:9092,BROKER://:9091,EXTERNAL://:9093
listener.security.protocol.map=INTERNAL:SASL_SSL,BROKER:SASL_SSL,EXTERNAL:SASL_SSL
...
listener.name.external.sasl.enabled.mechanisms=PLAIN
listener.name.external.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" password="admin-secret" \
user_audit-log-admin-username="<audit-log-admin-password>" \
user_audit-log-writer-username="<audit-log-writer-password>";
listener.name.external.ssl.key.password=<key-password>
listener.name.external.ssl.keystore.location=<path-to-keystore.jks>
listener.name.external.ssl.keystore.password=<key-store-password>
listener.name.external.ssl.truststore.location=<path-to-truststore.jks>
listener.name.external.ssl.truststore.password=<trust-store-password>
For details about this configuration, refer to Configure Confluent Server brokers.
Configure the default properties for topics on the audit log destination cluster¶
Any topics created on the destination cluster will use the default properties as specified in the destination cluster’s server.properties file (for example: num.partitions, min.insync.replicas, default.replication.factor).
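As an illustration, the defaults might look like the following fragment of the destination cluster’s server.properties. The values here are assumptions chosen to match the topic description shown in the troubleshooting section (6 partitions, replication factor 3, min.insync.replicas=2), not recommendations:

```properties
# Illustrative defaults applied to newly created audit log topics
num.partitions=6
default.replication.factor=3
min.insync.replicas=2
```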
Next steps¶
At this point you have completed the required configuration updates, and can now
use the Confluent Platform CLI confluent audit-log config
commands or the
Confluent MDS API
to configure audit logs for all of the registered
clusters. For additional command examples and guidance on troubleshooting, see
Troubleshoot the MDS configuration for centralized audit logging.
Troubleshoot the MDS configuration for centralized audit logging¶
This section includes suggestions for troubleshooting the MDS cluster configuration used for managing centralized audit logs.
Verify the audit log retention setting¶
These procedures affect only your retention policy; it is recommended that you make only minor changes.
Use the Confluent CLI to modify the audit log configuration and update the retention_ms of one or more destination topics:
# Capture the current configuration from MDS
confluent audit-log config describe > /tmp/audit-log-config.json

# View what was captured
cat /tmp/audit-log-config.json
{
  "destinations": {
    "bootstrap_servers": [
      "logs1.example.com:9092",
      "logs2.example.com:9092"
    ],
    "topics": {
      "confluent-audit-log-events": {
        "retention_ms": 7776000000
      }
    }
  },
  "default_topics": {
    "allowed": "confluent-audit-log-events",
    "denied": "confluent-audit-log-events"
  }
}

# Make a small change
vim /tmp/audit-log-config.json   # e.g. - change 7776000000 to 7776000001

# Post the change back to MDS
confluent audit-log config update < /tmp/audit-log-config.json
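The manual edit can also be scripted. The following is a minimal Python sketch of the same small retention_ms change; the in-memory policy and the one-millisecond increment are illustrative, mirroring the captured JSON structure shown above:

```python
import json

# The policy captured from `confluent audit-log config describe`
# (structure and values match the example above)
captured = {
    "destinations": {
        "bootstrap_servers": ["logs1.example.com:9092", "logs2.example.com:9092"],
        "topics": {"confluent-audit-log-events": {"retention_ms": 7776000000}},
    },
    "default_topics": {
        "allowed": "confluent-audit-log-events",
        "denied": "confluent-audit-log-events",
    },
}

def bump_retention(config: dict, topic: str, delta_ms: int = 1) -> dict:
    """Increase retention_ms for one destination topic and return the config."""
    config["destinations"]["topics"][topic]["retention_ms"] += delta_ms
    return config

updated = bump_retention(captured, "confluent-audit-log-events")
# Write the result to a file, then post it back to MDS with:
#   confluent audit-log config update < /tmp/audit-log-config.json
print(json.dumps(updated["destinations"]["topics"], indent=2))
```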
Verify that the topic’s retention.ms setting reflects the new value on the destination cluster:
cat /tmp/destination-cluster-admin-client.properties
bootstrap.servers=<logs1.example.com:9092>
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<destination-cluster-admin-client-username>" \
  password="<destination-cluster-admin-client-password>";
ssl.endpoint.identification.algorithm=https
sasl.mechanism=PLAIN
truststore.location=<path-to-truststore.jks>
truststore.password=<trust-store-password>

kafka-topics --bootstrap-server <logs1.example.com:9092> \
  --command-config /tmp/destination-cluster-admin-client.properties \
  --describe --topic <confluent-audit-log-events>
Topic: confluent-audit-log-events PartitionCount: 6 ReplicationFactor: 3 Configs: min.insync.replicas=2,cleanup.policy=delete,retention.ms=7776000001
  Topic: confluent-audit-log-events Partition: 0 Leader: 2 Replicas: 2,1,0 Isr: 2,0,1
  Topic: confluent-audit-log-events Partition: 1 Leader: 1 Replicas: 1,0,2 Isr: 2,0,1
  Topic: confluent-audit-log-events Partition: 2 Leader: 0 Replicas: 0,2,1 Isr: 2,0,1
  Topic: confluent-audit-log-events Partition: 3 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
  Topic: confluent-audit-log-events Partition: 4 Leader: 1 Replicas: 1,2,0 Isr: 2,0,1
  Topic: confluent-audit-log-events Partition: 5 Leader: 0 Replicas: 0,1,2 Isr: 2,0,1
Alter the retention.ms value of one of the destination topics directly on the destination cluster:
kafka-topics --bootstrap-server <destination-cluster>:9092 \
  --command-config /tmp/destination-cluster-admin-client.properties \
  --alter --topic confluent-audit-log-events \
  --config retention.ms=7776000002
Verify that the audit log configuration shows the new retention_ms setting:
confluent audit-log config describe
{
  "destinations": {
    "bootstrap_servers": [
      "logs1.example.com:9092",
      "logs2.example.com:9092"
    ],
    "topics": {
      "confluent-audit-log-events": {
        "retention_ms": 7776000002
      }
    }
  },
  "default_topics": {
    "allowed": "confluent-audit-log-events",
    "denied": "confluent-audit-log-events"
  }
}
If this troubleshooting procedure doesn’t work (for example, if audit logging is
not configured properly, the describe command returns an error), verify
that the connection and credentials in your MDS broker properties (the properties
prefixed by confluent.security.event.logger.destination.admin.) are working.
Also verify that you’ve granted sufficient permissions to the admin client
principal on the destination cluster; at a minimum, the role binding should grant
the ResourceOwner role on topics with the prefix confluent-audit-log-events
on the destination cluster. Finally, confirm that the destination cluster is
reachable and listening for connections from the MDS cluster’s network address.
Verify the audit log configuration is synchronized to registered clusters¶
Use the following command to verify that the audit log configuration is synchronized to registered clusters:
kafka-configs --bootstrap-server <mds-managed-cluster>:9092 \
--command-config /tmp/managed-cluster-admin-client.properties \
--entity-type brokers \
--entity-default \
--describe \
| grep confluent.security.event.router.config
You should see the same JSON audit log configuration you get when you run
confluent audit-log config describe. The retention_ms
values may differ if the audit topics have been altered directly on the
destination cluster, in which case the metadata in the JSON may also differ;
everything else should be the same. If this verification fails, check the following
for the MDS cluster registry:
- The clusters expose an authentication token listener
(listener.name.<example>.sasl.enabled.mechanisms=OAUTHBEARER)
- The clusters’ TLS keys are verifiable by certificates in the MDS server’s trust store.
Also look for error status messages when making an audit log API update request to MDS.