Configure Audit Logs using the Confluent CLI

Starting in Confluent Platform 6.0, you can use Confluent CLI commands to dynamically update your audit log configurations. The Confluent CLI is the recommended tool for scripted or command line interactions with the MDS API Audit Log Configuration endpoints. Changes made using the CLI are pushed from the MDS (metadata service) out to all registered clusters, allowing for centralized management of the audit log configuration.
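
The Confluent CLI audit log commands run against MDS, so you must be logged in to the MDS endpoint first. The following is a minimal sketch, assuming an MDS URL of https://mds.example.com:8090 (a placeholder) and a user with permission to manage the audit log configuration; the exact login options can vary by CLI version:

# Log in to the metadata service (MDS) before running audit-log commands
confluent login --url https://mds.example.com:8090

# Confirm the CLI can reach the centralized audit log configuration
confluent audit-log config describe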

[Figure: Centralized audit logging workflow, showing the configuration pushed from MDS to registered clusters]

Prerequisites

Migrate individual Kafka cluster audit log configurations

Prior to Confluent Platform 6.0, to set up and use audit logging you configured individual clusters to use the JSON-value property confluent.security.event.router.config in the server.properties file. As mentioned above, starting in Confluent Platform 6.0, you can expand audit logging capabilities to use a centralized (MDS) configuration that spans across multiple registered Kafka clusters.

To preserve an existing audit log configuration and keep the routing rules already specified for each Kafka cluster, you must migrate the existing configuration files and combine them into a single file for use in the new centralized configuration.

Note

If you use the Confluent CLI or the MDS API without registering any clusters in the cluster registry, then any changes you make to the audit log configuration through the API using the Confluent CLI will only affect the audit log configuration of the Kafka cluster where MDS is running, and will not impact any other Kafka clusters.

How the audit log migration tool works

The audit log migration tool performs the following tasks:

  • Sets the output bootstrap servers to the values specified, if any. The output bootstrap servers are empty by default.

  • Combines the input audit log destination topics. For topics that appear in more than one Kafka cluster configuration, the migration tool uses the maximum retention time specified in the configuration.

  • Sets the default audit log topic to confluent-audit-log-events. If necessary, the migration tool adds this topic to the set of destination topics (in which case, it specifies a retention period of 7776000000 milliseconds, or 90 days).

  • Combines the set of all excluded principals.

  • Replaces the /kafka=*/ part of each Confluent Resource Name (CRN) pattern with the cluster ID of the contributing Kafka cluster. For example, a route in the configuration from cluster1 with a CRN like crn:///kafka=*/topic=accounting-* is transformed to crn:///kafka=cluster1/topic=accounting-*.

    For routes that have a CRN that uses something other than /kafka=*/, the migration tool will not replace the Kafka cluster ID. For example, if a route specifies kafka=pkc-123 and the cluster ID is pkc-abc then the tool will leave it untouched and return the warning:

    Mismatched Kafka Cluster Warning: Routes from one Kafka cluster ID on a
    completely different cluster ID are unexpected, but not necessarily wrong.
    For example, this message might be returned if you attempt to reuse the
    same routing configuration on multiple clusters.
    
  • For any incoming audit log router configurations that have default topics other than confluent-audit-log-events, the script will add extra routes for the following CRN patterns (if they do not already exist):

    Topic Route                                                                  Event Category Type
    crn://<authority>/kafka=<cluster-id>                                         AUTHORIZE, MANAGEMENT
    crn://<authority>/kafka=<cluster-id>/topic=*                                 AUTHORIZE, MANAGEMENT
    crn://<authority>/kafka=<cluster-id>/control-center-broker-metrics=*         AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/control-center-alerts=*                 AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/delegation-token=*                      AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/cluster-registry=*                      AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/security-metadata=*                     AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/all=*                                   AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/connect=<connect-id>                    AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/connect=<connect-id>/connector=*        AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/connect=<connect-id>/secret=*           AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/connect=<connect-id>/all=*              AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/schema-registry=<sr-id>                 AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/schema-registry=<sr-id>/subject=*       AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/schema-registry=<sr-id>/all=*           AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/ksql=<id>                               AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/ksql=<id>/ksql-cluster=*                AUTHORIZE
    crn://<authority>/kafka=<cluster-id>/ksql=<id>/all=*                         AUTHORIZE

    Note

    If you do not want the routes listed above added in your newly-migrated audit log configuration, then edit your input server.properties files to only use confluent-audit-log-events in the default_topics before migrating.
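
If you are not sure which default topics an existing configuration uses, you can inspect the router property in each input file before migrating. The following is a minimal sketch using grep (the file paths match the example later in this section; adjust the -A value to the length of your configuration):

# Show the router configuration, including its default_topics, for each input cluster
grep -A 60 'confluent.security.event.router.config' /tmp/cluster123/server.properties
grep -A 60 'confluent.security.event.router.config' /tmp/clusterABC/server.properties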

Default audit log configuration

If your existing 5.4 or 5.5 audit log configuration only uses the default settings, and you have not altered it in any way (confluent.security.event.router.config is empty or missing), then you do not need to migrate the audit log configuration. The default centralized audit log configuration will continue to write to a topic (confluent-audit-log-events) on the cluster.

After upgrading to Confluent Platform 6.0, and satisfying the Prerequisites, you can proceed to use centralized audit logging.

Note

If you are migrating an existing audit log configuration to Confluent Platform 6.0 and intend to use centralized audit logging, but want continued access to the old audit logs, then do one of the following: consume from both the original clusters and the destination cluster for as long as you need access, or copy the old logs from the original topics into the destination cluster topics.
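
One way to copy old audit log events is to re-produce them to the destination cluster with the standard Kafka console tools, as in the following sketch (the cluster addresses are placeholders, only record values are preserved, and a tool such as Confluent Replicator is another option):

# Re-produce existing audit log events from the original cluster to the
# destination cluster (press Ctrl-C once the consumer has caught up).
kafka-console-consumer --bootstrap-server old-cluster.example.com:9092 \
  --topic confluent-audit-log-events --from-beginning \
  | kafka-console-producer --bootstrap-server new-destination.example.com:9092 \
    --topic confluent-audit-log-events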

Previously-configured audit log configurations

In cases where you have configured each Confluent Platform (5.4 or 5.5) Kafka cluster independently to produce audit logs to a single destination cluster, possibly with complex rules, you must migrate the audit log configurations from all of your Kafka clusters and combine them into a single, unified JSON. After combining these configurations, use the confluent audit-log config update command to post the JSON to the MDS audit log configuration API.

Note

You must use a single destination cluster for all centralized audit logs.

For example, you have two audit log configurations for Kafka clusters cluster123 and clusterABC. The configuration for each cluster is stored in the server.properties files in /tmp/cluster123/server.properties and /tmp/clusterABC/server.properties, respectively.

The audit log router configuration for cluster123 is:

cluster123/server.properties:
...
confluent.security.event.router.config= \
{ \
  "destinations": { \
    "bootstrap_servers":["audit.example.com:9092"], \
    "topics":{ \
      "confluent-audit-log-events_payroll":{ \
        "retention_ms":50 \
      }, \
      "confluent-audit-log-events":{ \
        "retention_ms":500 \
      } \
    } \
  }, \
  "default_topics":{ \
    "allowed":"confluent-audit-log-events", \
    "denied":"confluent-audit-log-events" \
  }, \
  "routes":{ \
    "crn://mds1.example.com/kafka=*/topic=payroll-*": { \
      "produce":{ \
        "allowed":"confluent-audit-log-events_payroll", \
        "denied":"confluent-audit-log-events_payroll" \
      }, \
      "consume":{ \
        "allowed":"confluent-audit-log-events_payroll", \
        "denied":"confluent-audit-log-events_payroll" \
      } \
    }, \
    "crn://some-authority/kafka=clusterX":{ \
      "management":{ \
        "allowed":"confluent-audit-log-events_payroll", \
        "denied":"confluent-audit-log-events_payroll" \
      } \
    } \
  }, \
  "excluded_principals":["User:Alice"] \
}

The audit log router configuration for clusterABC is:

clusterABC/server.properties:
...
confluent.security.event.router.config={ \
  "destinations": { \
    "bootstrap_servers": [ \
      "OLD_ID.us-central1.gcp.cloud:9092" \
    ], \
    "topics": { \
      "confluent-audit-log-events_payroll": { \
        "retention_ms": 2592000000 \
      }, \
      "confluent-audit-log-events_billing": { \
        "retention_ms": 2592000000 \
      }, \
      "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC": { \
        "retention_ms": 100 \
      } \
    } \
  }, \
  "default_topics": { \
    "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC", \
    "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC" \
  }, \
  "routes": { \
    "crn://mds1.example.com/kafka=*/topic=billing-*": { \
      "produce": { \
        "allowed": "confluent-audit-log-events_billing", \
        "denied": "confluent-audit-log-events_billing" \
      }, \
      "consume": { \
        "allowed": "confluent-audit-log-events_billing", \
        "denied": "confluent-audit-log-events_billing" \
      }, \
      "management": { \
        "allowed": "confluent-audit-log-events_billing", \
        "denied": "confluent-audit-log-events_billing" \
      } \
    }, \
    "crn://diff-authority/kafka=different-cluster-id/topic=payroll-*": { \
      "produce": { \
        "allowed": "confluent-audit-log-events_payroll", \
        "denied": "confluent-audit-log-events_payroll" \
      }, \
      "consume": { \
        "allowed": "confluent-audit-log-events_payroll", \
        "denied": "confluent-audit-log-events_payroll" \
      } \
    }, \
    "crn://some-authority/kafka=clusterX": { \
      "management": { \
        "allowed": "confluent-audit-log-events_payroll", \
        "denied": "confluent-audit-log-events_payroll" \
      } \
    } \
  }, \
  "excluded_principals": [ \
    "User:Bob" \
  ] \
}

To combine these configurations using designated bootstrap servers (NEW_ID_2.us.gcp.cloud:9092 and NEW_ID_1.us.gcp.cloud:9092), run the migration tool as follows:

# Migrate the audit log configurations and combine into a single JSON blob
confluent audit-log migrate config \
--combine cluster123=/tmp/cluster123/server.properties,\
clusterABC=/tmp/clusterABC/server.properties \
--bootstrap-servers NEW_ID_2.us.gcp.cloud:9092 \
--bootstrap-servers NEW_ID_1.us.gcp.cloud:9092 \
> /tmp/audit-log-config.json

Mismatched Kafka Cluster Warning: Cluster "cluster123" has a route for a different cluster, route: "crn://some-authority/kafka=clusterX". Routes from one Kafka cluster ID on a completely different cluster ID are unexpected, but not necessarily wrong. For example, this message might be returned if you reuse the same routing configuration on multiple clusters.
Mismatched Kafka Cluster Warning: Cluster "clusterABC" has a route for a different cluster, route: "crn://diff-authority/kafka=different-cluster-id/topic=payroll-*". Routes from one Kafka cluster ID on a completely different cluster ID are unexpected, but not necessarily wrong. For example, this message might be returned if you reuse the same routing configuration on multiple clusters.
Mismatched Kafka Cluster Warning: Cluster "clusterABC" has a route for a different cluster, route: "crn://some-authority/kafka=clusterX". Routes from one Kafka cluster ID on a completely different cluster ID are unexpected, but not necessarily wrong. For example, this message might be returned if you reuse the same routing configuration on multiple clusters.
Multiple CRN Authorities Warning: Cluster "cluster123" had multiple CRN authorities in its routes: [crn://mds1.example.com/ crn://some-authority/]. Multiple, different CRN authorities exist in routes from a single cluster. This is unexpected in a configuration targeting a single cluster, but makes sense if you are reusing the same routing rules on multiple clusters. If this is the case you can ignore this warning or consider using CRN patterns with wildcard (empty) authority values in your audit log routes.
Multiple CRN Authorities Warning: Cluster "clusterABC" had multiple CRN authorities in its routes: [crn://diff-authority/ crn://mds1.example.com/ crn://some-authority/]. Multiple, different CRN authorities exist in routes from a single cluster. This is unexpected in a configuration targeting a single cluster, but makes sense if you are reusing the same routing rules on multiple clusters. If this is the case you can ignore this warning or consider using CRN patterns with wildcard (empty) authority values in your audit log routes.
New Bootstrap Servers Warning: Cluster "cluster123" currently has bootstrap servers = [audit.example.com:9092]. Replacing with [NEW_ID_1.us.gcp.cloud:9092 NEW_ID_2.us.gcp.cloud:9092]. Migrated clusters will use the specified bootstrap servers.
New Bootstrap Servers Warning: Cluster "clusterABC" currently has bootstrap servers = [OLD_ID.us-central1.gcp.cloud:9092]. Replacing with [NEW_ID_1.us.gcp.cloud:9092 NEW_ID_2.us.gcp.cloud:9092]. Migrated clusters will use the specified bootstrap servers.
New Excluded Principals Warning: Due to combining the excluded principals from every input cluster, cluster "cluster123" will now also exclude the following principals: [User:Bob]
New Excluded Principals Warning: Due to combining the excluded principals from every input cluster, cluster "clusterABC" will now also exclude the following principals: [User:Alice]
Repeated Route Warning: Route Name : "crn://some-authority/kafka=clusterX". There are duplicate routes specified between different router configurations. Duplicate routes will be dropped.
Retention Time Discrepancy Warning: Topic "confluent-audit-log-events_payroll" had discrepancies in retention time. Using max: 2592000000. Discrepancies in retention time occur when two cluster configurations have the same topic in a router configuration, but different retention times. The maximum specified retention time will be used.

# If desired, make adjustments
vim /tmp/audit-log-config.json

# Post the JSON to the MDS audit log configuration API
confluent audit-log config update --force < /tmp/audit-log-config.json

The combined results appear in standard output (stdout), which in this example is redirected to /tmp/audit-log-config.json. Warnings appear in standard error (stderr). The output bootstrap servers are empty by default, but here they reflect the values specified when running the migration tool:

{
  "destinations": {
    "bootstrap_servers": [
      "NEW_ID_1.us.gcp.cloud:9092",
      "NEW_ID_2.us.gcp.cloud:9092"
    ],
    "topics": {
      "confluent-audit-log-events": {
        "retention_ms": 500
      },
      "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC": {
        "retention_ms": 100
      },
      "confluent-audit-log-events_billing": {
        "retention_ms": 2592000000
      },
      "confluent-audit-log-events_payroll": {
        "retention_ms": 2592000000
      }
    }
  },
  "excluded_principals": [
    "User:Alice",
    "User:Bob"
  ],
  "default_topics": {
    "allowed": "confluent-audit-log-events",
    "denied": "confluent-audit-log-events"
  },
  "routes": {
    "crn:///kafka=cluster123/topic=payroll-*": {
      "produce": {
        "allowed": "confluent-audit-log-events_payroll",
        "denied": "confluent-audit-log-events_payroll"
      },
      "consume": {
        "allowed": "confluent-audit-log-events_payroll",
        "denied": "confluent-audit-log-events_payroll"
      }
    },
    "crn:///kafka=clusterABC": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      },
      "management": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/all=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/cluster-registry=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/connect=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/connect=*/all=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/connect=*/connector=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/connect=*/secret=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/control-center-alerts=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/control-center-broker-metrics=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/delegation-token=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/group=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      },
      "management": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/ksql=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/ksql=*/all=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/ksql=*/ksql-cluster=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/schema-registry=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/schema-registry=*/all=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/schema-registry=*/subject=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/security-metadata=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/topic=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      },
      "management": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterABC/topic=billing-*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      },
      "management": {
        "allowed": "confluent-audit-log-events_billing",
        "denied": "confluent-audit-log-events_billing"
      },
      "produce": {
        "allowed": "confluent-audit-log-events_billing",
        "denied": "confluent-audit-log-events_billing"
      },
      "consume": {
        "allowed": "confluent-audit-log-events_billing",
        "denied": "confluent-audit-log-events_billing"
      }
    },
    "crn:///kafka=clusterABC/transaction-id=*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      }
    },
    "crn:///kafka=clusterX": {
      "management": {
        "allowed": "confluent-audit-log-events_payroll",
        "denied": "confluent-audit-log-events_payroll"
      }
    },
    "crn:///kafka=different-cluster-id/topic=payroll-*": {
      "authorize": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      },
      "management": {
        "allowed": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC",
        "denied": "confluent-audit-log-events_DIFFERENT-DEFAULT-TOPIC"
      },
      "produce": {
        "allowed": "confluent-audit-log-events_payroll",
        "denied": "confluent-audit-log-events_payroll"
      },
      "consume": {
        "allowed": "confluent-audit-log-events_payroll",
        "denied": "confluent-audit-log-events_payroll"
      }
    }
  },
  "metadata": null
}
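
Before posting the combined configuration, you may want to confirm that it is still well-formed JSON after any manual edits. The following is a minimal check, assuming the jq utility is installed:

# jq exits non-zero if the file is not valid JSON
jq empty /tmp/audit-log-config.json && echo "audit-log-config.json is well-formed"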

View the audit log configuration

You may want to view the current audit log configuration in a variety of scenarios. For example, to identify the cluster currently targeted for audit logging, or to see the destination topic for specific audit log operations, use the confluent audit-log config describe command. This command returns the entire JSON configuration to standard output. If you want to update the audit log configuration rules, use this command to capture the current configuration before opening it in your text editor; this is easier than starting with an empty file.

confluent audit-log config describe > /tmp/audit-log-config.json
cat /tmp/audit-log-config.json
{
  "destinations": {
    "bootstrap_servers": [
      "localhost:9091",
      "localhost:9093"
    ],
    "topics": {
      "confluent-audit-log-events": {
        "retention_ms": 259200000
      }
    }
  },
  "excluded_principals": [
    "User:Alice",
    "User:service_account_id"
  ],
  "default_topics": {
    "allowed": "confluent-audit-log-events",
    "denied": "confluent-audit-log-events"
  },
  "metadata": {
    "resource_version": "FElOq8fl5Mp4imIaRbX4iA",
    "updated_at": "2020-08-06T18:50:01Z"
  }
}

Update the audit log configuration

You can use the confluent audit-log config update command to dynamically replace the existing audit log configuration. The update option pushes the updated JSON configuration to the MDS API Audit Log Configuration.

The input to the confluent audit-log config update command is read from standard input by default. You can use the --file argument to specify the path to an input file instead. For example, in most shells, all of the following approaches are equivalent:

cat /path/to/my/file.json | confluent audit-log config update

confluent audit-log config update < /path/to/my/file.json

confluent audit-log config update --file /path/to/my/file.json

Following is a typical workflow for dynamically updating the audit log configuration:

  1. Run confluent audit-log config describe > tempfile to capture the existing configuration.
  2. Edit the tempfile.
  3. Run confluent audit-log config update < tempfile to write changes back to the MDS API Audit Log Configuration.
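
The same workflow as a short shell sketch (the temporary file path and editor fallback are assumptions for this example):

# 1. Capture the existing configuration
confluent audit-log config describe > /tmp/audit-log-config.json

# 2. Edit the captured configuration
${EDITOR:-vi} /tmp/audit-log-config.json

# 3. Write the changes back to the MDS API Audit Log Configuration
confluent audit-log config update < /tmp/audit-log-config.json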

Alternatively, your organization may store all configurations in a source control system, and push out configuration updates automatically using hooks or triggers as source code is updated. In this use case, your “golden” JSON configuration is stored in source control, so your triggers should always overwrite the JSON configuration stored in the MDS API Audit Log Configuration.

Following is a typical “infrastructure as code” workflow:

  1. Check out the version-controlled configuration file. Edit the file and create a merge request.
  2. Your peers review and approve, and you merge the changes back into the source control system.
  3. Your control system’s hooks call your organization’s automated scripts, which check out the “golden” JSON configuration to a file at <path-to-audit-log-config.json> and call confluent audit-log config update --force --file <path-to-audit-log-config.json>, overwriting the previous audit log configuration known to the MDS API with the newly-updated configuration.
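
For example, the automated script called by your hooks might look like the following sketch (the repository path and file name are assumptions, and authentication to MDS is assumed to be handled elsewhere):

#!/usr/bin/env bash
set -euo pipefail

# Path to the "golden" configuration checked out from source control (placeholder path)
CONFIG_FILE=audit-logs/audit-log-config.json

# Overwrite whatever audit log configuration MDS currently holds with the approved version
confluent audit-log config update --force --file "${CONFIG_FILE}"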

Overwrite concurrent modifications using the --force option

Use the --force option when you want to overwrite any concurrent modifications. For example, suppose administrator A starts editing the configuration and administrator B makes a concurrent edit. When administrator A completes her edit and attempts to upload the new configuration, she gets an error, because her update is based on a version of the configuration that is no longer current. In this case, after inspecting the latest version of the configuration, administrator A may decide that it makes sense to use the --force option to overwrite administrator B’s changes.

Note that the MDS API checks the resource_version in the JSON updates. If it does not match the latest version, the API returns a concurrent update error:

Error: Metadata Service backend error: 409 Conflict: {"destinations":{"topics":{"confluent-audit-log-events":{"retention_ms":1000000000}}},"default_topics":{"allowed":"confluent-audit-log-events","denied":"confluent-audit-log-events"},"metadata":{"resource_version":"4Ecnf-3erIWXjqgbjLXauw","updated_at":"2020-07-15T22:30:12Z"}}

Using the --force option in this scenario allows the resource_version in your updated JSON to deviate from the current one.
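
If you would rather retry against the latest version than force the update, you can read the current resource_version from the describe output and copy it into the metadata section of your JSON before resubmitting. A minimal sketch, assuming jq is installed:

# Print the resource_version that MDS currently holds
confluent audit-log config describe | jq -r '.metadata.resource_version'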

In the following configuration update example, the original configuration is being replaced by the configuration in the file acme-audit-log-config-hr. Note the use of the --force option, which is required because the resource_version here differs from the one in the existing configuration. Upon the successful completion of the update, the new configuration is returned:

confluent audit-log config update --force --file acme-audit-log-config-hr
{
  "destinations": {
    "bootstrap_servers": [
      "localhost:9091",
      "localhost:9093"
    ],
    "topics": {
      "confluent-audit-log-events": {
        "retention_ms": 259200000
      },
      "confluent-audit-log-events_hr": {
        "retention_ms": 604800000
      }
    }
  },
  "excluded_principals": [
    "User:Alice",
    "User:service_account_id"
  ],
  "default_topics": {
    "allowed": "confluent-audit-log-events",
    "denied": "confluent-audit-log-events"
  },
  "routes": {
    "crn:///kafka=*/topic=hr-*": {
      "management": {
        "allowed": null,
        "denied": null
      },
      "authorize": {
        "allowed": null,
        "denied": null
      },
      "produce": {
        "allowed": "confluent-audit-log-events_hr",
        "denied": "confluent-audit-log-events_hr"
      },
      "consume": {
        "allowed": "confluent-audit-log-events_hr",
        "denied": "confluent-audit-log-events_hr"
      },
      "describe": {
        "allowed": "",
        "denied": ""
      }
    }
  },
  "metadata": {
    "resource_version": "3N-cNxcdIRtVr-4Jvkbg5w",
    "updated_at": "2020-08-06T19:01:32Z"
  }
}

Edit the audit log configuration

You can modify the existing audit log configuration without creating an intermediate file (as you must do when using confluent audit-log config update).

When you run the confluent audit-log config edit command, your default editor ($EDITOR) opens a temporary file with the contents of the latest audit log configuration known to the MDS server. Enter your configuration updates in the editor. After you finish your configuration updates and close the editor, the Confluent CLI completes the updates by sending the temporary file to the MDS server.
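
For example, to use a specific editor for a single edit session (vim here is only an example):

# Open the current configuration in vim; saving and closing pushes the result to MDS
EDITOR=vim confluent audit-log config edit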

Troubleshoot audit log routes

The routing rules for audit logs can be complex. The CLI provides two commands to help clarify and troubleshoot the audit log routing rules, given a specific resource CRN:

  • Use confluent audit-log route list to list all of the audit log routes that match the given resource or any of its sub-resources.
  • Use confluent audit-log route lookup to determine which of those routes is selected by the “longest common prefix” precedence rule for the given resource.

Note

The MDS API Audit Log Configuration route lookup requires a CRN for a single resource, not a pattern using the wildcard character (*). If you include a wildcard in your search, you will get an error.

To view all of the audit log routes related to the Kafka cluster identified by the CRN crn://mds1.example.com/kafka=test-cluster-1:

confluent audit-log route list -r "crn://mds1.example.com/kafka=test-cluster-1"
 {
   "default_topics": {
     "allowed": "confluent-audit-log-events",
     "denied": "confluent-audit-log-events"
   },
   "routes": {
     "crn://mds1.example.com/kafka=*/topic=billing-*": {
       "management": {
         "allowed": "confluent-audit-log-events_billing",
         "denied": "confluent-audit-log-events_other"
       },
       "authorize": {
         "allowed": null,
         "denied": null
       },
       "produce": {
         "allowed": "confluent-audit-log-events_billing",
         "denied": "confluent-audit-log-events_other"
       },
       "consume": {
         "allowed": "confluent-audit-log-events_billing",
         "denied": "confluent-audit-log-events_other"
       }
     },
     "crn://mds1.example.com/kafka=*/topic=payroll-*": {
       "management": {
         "allowed": "confluent-audit-log-events_payroll",
         "denied": "confluent-audit-log-events_other"
       },
       "authorize": {
         "allowed": null,
         "denied": null
       },
       "produce": {
         "allowed": "confluent-audit-log-events_payroll",
         "denied": "confluent-audit-log-events_other"
       },
       "consume": {
         "allowed": "confluent-audit-log-events_payroll",
         "denied": "confluent-audit-log-events_other"
       }
     },
     "crn://mds1.example.com/kafka=test-cluster-1": {
       "management": {
         "allowed": "confluent-audit-log-events_other",
         "denied": "confluent-audit-log-events_other"
       },
       "authorize": {
         "allowed": "confluent-audit-log-events",
         "denied": "confluent-audit-log-events"
       }
     }
   }
 }

To find out where the audit log messages for the resource "crn://mds1.example.com/kafka=test-cluster-1" are being routed:

confluent audit-log route lookup "crn://mds1.example.com/kafka=test-cluster-1"

{
  "route": "crn://mds1.example.com/kafka=test-cluster-1",
  "categories": {
    "management": {
      "allowed": "confluent-audit-log-events_other",
      "denied": "confluent-audit-log-events_other"
    },
    "authorize": {
      "allowed": "confluent-audit-log-events",
      "denied": "confluent-audit-log-events"
    },
    "produce": {
      "allowed": "",
      "denied": ""
    },
    "consume": {
      "allowed": "",
      "denied": ""
    },
    "interbroker": {
      "allowed": "",
      "denied": ""
    },
    "heartbeat": {
      "allowed": "",
      "denied": ""
    },
    "describe": {
      "allowed": "",
      "denied": ""
    }
  }
}

Only the single route that determines where audit log messages for this CRN are routed is returned, with defaults populated for all event categories.