Manage Access Control Lists (ACLs) for Authorization in Confluent Platform

Use ACLs

The examples in the following sections use kafka-acls (the Kafka Authorization management CLI) to add, remove, or list ACLs. For details on the supported options, run kafka-acls --help. Note that because ACLs are stored in ZooKeeper and they are propagated to the brokers asynchronously, there may be a delay before the change takes effect, even after the command returns.

You can also use the Kafka AdminClient API to manage ACLs.

Common ACL use cases include:

  • To create a topic, the principal of the client requires the CREATE and DESCRIBE operations on the Topic or Cluster resource.
  • To produce to a topic, the principal of the producer requires the WRITE operation on the Topic resource.
  • To consume from a topic, the principal of the consumer requires the READ operation on the Topic and Group resources.
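
For example, a minimal command that allows a hypothetical principal User:Alice to create a topic named my-topic (both the principal and the topic name are placeholders) might look like the following:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:Alice \
  --operation Create \
  --operation Describe \
  --topic my-topic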

Note that for clients to create, produce, and consume, the brokers themselves must also be configured with appropriate ACLs: brokers need authorization to update metadata (CLUSTER_ACTION) and to read from topics (READ) for replication purposes.

ZooKeeper-based ACLs do not support use of the confluent iam acl commands, which are only used with centralized ACLs.

ACL format

Kafka ACLs are defined in the general format of “Principal P is [Allowed/Denied] Operation O From Host H On Resources matching ResourcePattern RP”.

  • A wildcard resource name ('*') matches resources of any name for the given resource type.

  • You can give topic and group wildcard access to users who have permission to access all topics and groups (for example, admin users). If you use this method, you don’t need to create a separate rule for each topic and group for the user. For example, you can use this command to grant wildcard access to Alice:

    kafka-acls --bootstrap-server localhost:9092 \
      --command-config adminclient-configs.conf \
      --add \
      --allow-principal User:Alice \
      --operation All \
      --topic '*' \
      --group '*'
    

Use the configuration properties file

If you have used the producer API, consumer API, or Streams API with Kafka clusters before, you might already know that connectivity details for a cluster are specified using configuration properties. What is less widely known is that the administration tools that ship with Kafka work the same way: after you have defined the configuration properties (often in the form of a config.properties file), both applications and tools can use them to connect to clusters.

The first thing you must do to interact with your Kafka clusters using the native Kafka tools is to create a configuration properties file. After you create the file (for example, in your home directory), any subsequent command you issue reads that file, provided you include its path, and uses it to establish connectivity to the Kafka cluster.

The --command-config argument supplies the command line tools with the configuration properties, in .properties file format, that they require to connect to the Kafka cluster. Typically, this includes the security.protocol that the cluster uses and any information necessary to authenticate to the cluster. For example:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="alice" \
  password="s3cr3t";

Add ACLs

Suppose you want to add an ACL where: principals User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown and User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown are allowed to perform read and write operations on the topic test-topic from IP addresses 198.51.100.0 and 198.51.100.1. You can do that by executing the following:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf --add \
  --allow-principal "User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown" \
  --allow-principal "User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \
  --allow-host 198.51.100.0 \
  --allow-host 198.51.100.1 \
  --operation Read \
  --operation Write \
  --topic test-topic

Note that --allow-host and --deny-host only support IP addresses (hostnames are not supported). Both IPv4 and IPv6 addresses can be used in ACLs.
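
For example, a sketch of an ACL that allows a hypothetical principal User:Alice to read test-topic from a single IPv6 address (2001:db8::1 is a documentation placeholder address) might look like the following:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:Alice \
  --allow-host 2001:db8::1 \
  --operation Read \
  --topic test-topic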

By default, all principals without an explicit ACL allowing an operation to access a resource are denied. In rare instances, where an ACL that allows access to all but some principal is desired, you can use the --deny-principal and --deny-host options. For example, use the following command to allow all users to read from test-topic but deny only User:kafka/kafka6.host-1.com@bigdata.com from IP address 198.51.100.3:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:'*' \
  --allow-host '*' \
  --deny-principal User:kafka/kafka6.host-1.com@bigdata.com \
  --deny-host 198.51.100.3 \
  --operation Read \
  --topic test-topic

Kafka does not support certificate revocation lists (CRLs), so you cannot revoke a client’s certificate. The only alternative is to disable the user’s access using an ACL:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --deny-principal "User:CN=Bob,O=Sales" \
  --cluster \
  --topic '*'

The examples above add ACLs to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly, you can add ACLs to a cluster by specifying --cluster and to a group by specifying --group [group-name]. If you need to grant permission to all groups, specify --group '*', as shown in the following command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:'*' \
  --operation Read \
  --topic test \
  --group '*'

You can add ACLs on prefixed resource patterns. For example, you can add an ACL that enables users in the org unit (OU) ServiceUsers (this organization is using TLS authentication) to produce to any topic whose name starts with Test-. You can do that by running the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown \
  --producer \
  --topic Test- \
  --resource-pattern-type prefixed

Note that --resource-pattern-type defaults to literal, which affects only resources with exactly the same name or, in the case of the wildcard resource name '*', a resource with any name.

Caution

The --link-id option for kafka-acls, available starting with Confluent Platform 7.1.0, is experimental and should not be used in production deployments. In particular, do not use --link-id to create ACLs. If an ACL with --link-id is created on the source cluster, it is marked for management by the link ID, and is not synced to the destination, regardless of acl.sync.filters. Currently, Confluent Platform does not validate link IDs created with kafka-acls. For details, see Migrating ACLs from Source to Destination Cluster.

Remove ACLs

Removing ACLs is similar to adding them, except the --remove option should be specified instead of --add. To remove the ACLs added in the first example above, you can execute the following:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf --remove \
  --allow-principal "User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown" \
  --allow-principal "User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \
  --allow-host 198.51.100.0 \
  --allow-host 198.51.100.1 \
  --operation Read \
  --operation Write \
  --topic test-topic

If you want to remove the ACL added to the prefixed resource pattern in the example, run the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --remove \
  --allow-principal User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown \
  --producer \
  --topic Test- \
  --resource-pattern-type prefixed

List ACLs

You can list the ACLs for a given resource by specifying the --list option and the resource. For example, to list all ACLs for test-topic, run the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --list \
  --topic test-topic

However, this only returns the ACLs that have been added to this exact resource pattern. Other ACLs can exist that affect access to the topic, for example, any ACLs on the topic wildcard '*', or any ACLs on prefixed resource patterns. You can explicitly query ACLs on the wildcard resource pattern by running the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --list \
  --topic '*'

You might not be able to explicitly query for ACLs on prefixed resource patterns that match Test-topic because the name of such patterns may not be known, but you can list all ACLs affecting Test-topic by using --resource-pattern-type match. For example:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --list \
  --topic Test-topic \
  --resource-pattern-type match

This command lists ACLs on all matching literal, prefixed, and wildcard resource patterns.

To view an ACL for an internal topic, run the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --list \
  --topic __consumer_offsets

Add or remove a principal as a producer or consumer

The most common use cases for ACL management are adding or removing a principal as a producer or consumer. To add user “Jane Doe” (Kerberos principal User:janedoe@bigdata.com) as a producer of test-topic, you can run the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add --allow-principal User:janedoe@bigdata.com \
  --producer --topic test-topic

To add User:janedoe@bigdata.com as a consumer of test-topic with group Group-1, specify the --consumer and --group options:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:janedoe@bigdata.com \
  --consumer \
  --topic test-topic \
  --group Group-1

To remove a principal from a producer or consumer role, specify the --remove option.
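
For example, to remove User:janedoe@bigdata.com as a producer of test-topic (mirroring the add command above), you can run a command like the following:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --remove \
  --allow-principal User:janedoe@bigdata.com \
  --producer \
  --topic test-topic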

Enable authorization for idempotent and transactional APIs

To ensure that exactly one copy of each message is written to the stream, use enable.idempotence=true. The principal used by idempotent producers must be authorized to perform Write on the cluster.
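
As a client-side sketch, the producer configuration that pairs with this ACL might look like the following (bootstrap.servers is a placeholder and security settings are omitted):

# Producer properties for idempotent delivery (sketch)
bootstrap.servers=localhost:9092
enable.idempotence=true
acks=all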

To enable Bob to produce messages using an idempotent producer, you can execute the command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add --allow-principal User:Bob \
  --producer \
  --topic test-topic \
  --idempotent

To enable transactional delivery with reliability semantics that span multiple producer sessions, configure a producer with a non-empty transactional.id. The principal used by transactional producers must be authorized for Describe and Write operations on the configured transactional.id.
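
As a client-side sketch, a transactional producer configuration using the transactional.id from the example below might look like the following (bootstrap.servers is a placeholder and security settings are omitted):

# Producer properties for transactional delivery (sketch)
bootstrap.servers=localhost:9092
enable.idempotence=true
transactional.id=test-txn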

To enable Alice to produce messages using a transactional producer with transactional.id=test-txn, run the command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:Alice \
  --producer \
  --topic test-topic \
  --transactional-id test-txn

Create non-super user ACL administrators

If you need a non-super user to create or delete ACLs, but do not want to grant them the super user role, an existing super user can grant another user (referred to here as the ACL administrator) the ALTER --cluster access control entry (ACE), which binds an operation (in the following example, ALTER) to the Cluster resource. After being granted ALTER --cluster, the ACL administrator can create and delete ACLs for a given resource in the cluster.

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:notSuper \
  --operation ALTER --cluster

Note

  • If you wish to assign ALTER --cluster to a group, then Group:groupName is also valid; however, the Authorizer you are using must support group principals.
  • Exercise caution when assigning ALTER --cluster to users or groups because such users will be able to create and delete ACLs to control their own access to resources as well.

Authorization in the REST Proxy and Schema Registry

You can use Kafka ACLs to enforce authorization in the REST Proxy and Schema Registry by using the REST Proxy Security Plugin and Schema Registry Security Plugin for Confluent Platform.

Debug using authorizer logs

To help debug authorization issues, you can run clusters with the authorizer log set to DEBUG in log4j.properties. If you’re using the default log4j.properties file, change the log level in the following line from WARN to DEBUG:

log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender

The log4j.properties file is located in the Kafka configuration directory at /etc/kafka/log4j.properties. If you’re using an earlier version of Confluent Platform, or if you’re using your own log4j.properties file, you’ll need to add the following lines to the configuration:

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false

You must restart the broker for the change to take effect. This logs every request being authorized and its associated user name. The log is written to $kafka_logs_dir/kafka-authorizer.log. The location of the logs depends on the packaging format: kafka_logs_dir is /var/log/kafka for the RPM and Debian packages and $base_dir/logs for the archive format.