Authorization using Access Control Lists (ACLs) in Confluent Platform

Important

As of Confluent Platform 7.5, ZooKeeper is deprecated for new deployments. Confluent recommends KRaft mode for new deployments. For more information, see KRaft Overview for Confluent Platform.

Apache Kafka® includes a pluggable authorization framework (Authorizer), configured using the authorizer.class.name configuration property in the Kafka broker configuration file. Two authorizers are available: AclAuthorizer (for ZooKeeper-based clusters) and StandardAuthorizer (for KRaft-based clusters). For ZooKeeper-based clusters, Authorizer stores access control lists (ACLs) in ZooKeeper; for KRaft-based clusters, ACLs are stored in the KRaft-based Kafka cluster metadata. Kafka brokers use the authorizer to determine whether or not to authorize an operation based on the principal and the resource being accessed.

Setting ACLs is important: if a resource does not have associated ACLs, only super users can access the resource.

The following sections describe ACL concepts and how to create and use ACLs.

ACL concepts

Access control lists (ACLs) provide important authorization controls for your organization’s Apache Kafka® cluster data. Before creating and using ACLs, familiarize yourself with the concepts described in this section; your understanding is key to your success when creating and using ACLs to manage access to components and cluster data.

Authorizer

Note

While this topic covers Apache Kafka® authorizers only, Confluent also provides Confluent Server Authorizer (see Configure Confluent Server Authorizer in Confluent Platform), which supports proprietary LDAP group-based and role-based access control (RBAC), as well as centralized ACLs. By default, Confluent Server Authorizer supports ZooKeeper-based ACLs.

An authorizer is a server plugin used by Apache Kafka® to authorize operations. More specifically, an authorizer controls whether or not to authorize an operation based on the principal and the resource being accessed. The default Kafka authorizer implementation, for ZooKeeper-based clusters, is AclAuthorizer (kafka.security.authorizer.AclAuthorizer), which was introduced in Apache Kafka® 2.4/Confluent Platform 5.4.0. Prior to that, the authorizer was named SimpleAclAuthorizer (kafka.security.auth.SimpleAclAuthorizer). For KRaft-based Kafka clusters, the authorizer is StandardAuthorizer (org.apache.kafka.metadata.authorizer.StandardAuthorizer).

ZooKeeper-based Kafka clusters

Important

As of Confluent Platform 7.5, ZooKeeper is deprecated for new deployments. Confluent recommends KRaft mode for new deployments. For more information, see KRaft Overview for Confluent Platform.

To enable and use the AclAuthorizer on a ZooKeeper-based Kafka cluster, set its full class name for your broker configuration in server.properties:

authorizer.class.name=kafka.security.authorizer.AclAuthorizer

AclAuthorizer stores Kafka ACL information in ZooKeeper. However, it does not control access to ZooKeeper nodes. Rather, ZooKeeper has its own ACL security to control access to ZooKeeper nodes. ZooKeeper ACLs control which principal (for example, the broker principal) can update ZooKeeper nodes containing Kafka cluster metadata (such as in-sync replicas, topic configuration, and Kafka ACLs) and nodes used in interbroker coordination (such as controller election, broker joining, and topic deletion).

Kafka ACLs control which principals can perform operations on Kafka resources. Kafka brokers can use ZooKeeper ACLs by enabling Secure ZooKeeper in Confluent Platform (zookeeper.set.acl=true) for the broker configuration.

KRaft-based Kafka clusters

To enable and use the StandardAuthorizer on a KRaft-based Kafka cluster, set its full class name in the configuration file of every node (brokers, controllers, or combined broker-controllers):

authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer

KRaft Principal Forwarding

In KRaft clusters, administrator requests, such as CreateTopics and DeleteTopics, are sent to the broker listeners by the client. The broker then forwards the request to the active controller through the first listener configured in controller.listener.names. Authorization of these requests is done on the controller node. This is achieved by way of an Envelope request which packages both the underlying request from the client as well as the client principal. When the controller receives the forwarded Envelope request from the broker, it first authorizes the Envelope request using the authenticated broker principal. Then it authorizes the underlying request using the forwarded principal.

All of this implies that Kafka must understand how to serialize and deserialize the client principal. The authentication framework allows for customized principals by overriding the principal.builder.class configuration. In order for customized principals to work with KRaft, the configured class must implement org.apache.kafka.common.security.auth.KafkaPrincipalSerde so that Kafka knows how to serialize and deserialize the principals. The default implementation org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder uses the Kafka RPC format defined in the source code: clients/src/main/resources/common/message/DefaultPrincipalData.json. For details about request forwarding in KRaft, see KIP-590.

Principal

A principal is an entity that can be authenticated by the authorizer. Clients of a Kafka broker identify themselves as a particular principal using various security protocols. The way a principal is identified depends upon which security protocol it uses to connect to the Kafka broker (for example: mTLS, SASL/GSSAPI, or SASL/PLAIN). Authentication depends on the security protocol in place (such as SASL or TLS) to recognize a principal within a Kafka broker.

The following examples show the principal name format based on the security protocol being used:

  • When a client connects to a Kafka broker using the TLS security protocol, the principal name will be in the form of the TLS certificate subject name: CN=quickstart.confluent.io,OU=TEST,O=Sales,L=PaloAlto,ST=Ca,C=US. Note that there are no spaces after the comma between subject parts.
  • When a client connects to a Kafka broker using the SASL security protocol with GSSAPI (Kerberos) mechanism, the principal will be in the Kerberos principal format: kafka-client@hostname.com. For more detail, refer to Kerberos Principal Names.
  • When a client connects to a Kafka broker using the SASL security protocol with a PLAIN or SCRAM mechanism, the principal is a simple text string, such as alice, admin, or billing_etl_job_03.

The AclAuthorizer only supports individual users and always interprets the principal as the user name. However, other authorizers support groups. Therefore, when specifying the principal, you must include the type using the prefix User: or Group: (case-sensitive). Here are some examples: User:admin, Group:developers, or User:CN=quickstart.confluent.io,OU=TEST,O=Sales,L=PaloAlto,ST=Ca,C=US.

In the following ACL, the plain text principals (User:alice, User:fred) are identified as Kafka users who are allowed to run specific operations (read and write) from either of the specified hosts (host-1, host-2) on a specific resource (topic):

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:alice \
  --allow-principal User:fred \
  --allow-host host-1 \
  --allow-host host-2 \
  --operation read \
  --operation write \
  --topic finance-topic

To follow best practices, create one principal per application and give each principal only the ACLs required and no more. For example, if Alice is writing three programs that access different topics to automate a billing workflow, she could create three principals: billing_etl_job_01, billing_etl_job_02, and billing_etl_job_03. She would then grant each principal permissions on only the required topics and run each program with its specific principal.

Alternatively, she could take a middle-ground approach and create a single billing_etl_jobs principal with access to all topics that the billing programs require and run all three with that principal.

Alice should not run these programs as her own principal because she would presumably have broader permissions than the jobs actually need. Running with one principal per application also helps significantly with debugging and auditing because it’s clearer which application is performing each operation.

Wildcard principals

You can create ACLs for all principals by using a wildcard in the principal User:*. ACLs that include a wildcard for the user principal apply to all users. For example, the following command grants everyone access to the topic testTopic:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:* \
  --operation All \
  --topic testTopic

If you use an authorizer that supports group principals, such as Confluent Server Authorizer, you can also create ACLs for all group principals using the principal Group:*. ACLs that include the wildcard for the principal apply to all users belonging to at least one group.

Wildcards are not supported for super users. For example, specifying the wildcard User:* in the super.users property does not make every user a super user because no wildcard match is performed.

super.users=User:*

Note

If you use Confluent Server Authorizer, role bindings do not support wildcard matching. Assigning a role to User:* does not grant the role to every user. For details about RBAC principals, see Authorize using Role-Based Access Control (RBAC) in Confluent Platform.

SASL/Kerberos principals

If you use Kerberos, your Kafka principal is based on your Kerberos principal (for example, kafka/kafka1.hostname.com@EXAMPLE.COM). By default, Kafka uses only the primary name of the Kerberos principal, which is the name that appears before the slash (/). If the broker Kerberos principal is kafka/broker1.example.com@EXAMPLE.COM, then the principal used by the Kafka authorizer is kafka. The hostname is different for every broker. This parsing is implemented by the default value of sasl.kerberos.principal.to.local.rules.

For details about Kerberos principal names and configurations, see Kerberos Principals.
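The default primary-name extraction described above can be sketched as follows. This is an illustrative Python approximation, not Kafka's actual implementation; the function name kerberos_primary is hypothetical:

```python
# Hypothetical sketch (not Kafka code): the default mapping keeps only
# the Kerberos primary, the part before the "/" instance separator
# (or before "@" when the principal has no instance component).
def kerberos_primary(principal):
    name = principal.split("@", 1)[0]   # drop the realm
    return name.split("/", 1)[0]        # drop the instance (hostname)

print(kerberos_primary("kafka/broker1.example.com@EXAMPLE.COM"))  # kafka
print(kerberos_primary("kafka-client@hostname.com"))              # kafka-client
```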

Note

If your organization uses a Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your Kafka cluster and for every operating system user that will access the cluster with Kerberos authentication (using clients and tools). Server principals are of type NT_HOSTBASED_SERVICE.

Each Kafka broker must be able to communicate with all of the other brokers for replication and when it acts as the controller. You must add the broker principal as a super user, otherwise Kafka will not work.

Configuration options for customizing SASL/Kerberos user name

By default, the Kafka principal is the primary part of the Kerberos principal. You can change this behavior by specifying a customized rule for sasl.kerberos.principal.to.local.rules in server.properties. The value of sasl.kerberos.principal.to.local.rules is a list of rules, where each rule works the same way as auth_to_local in the Kerberos configuration file (krb5.conf). Each rule starts with RULE: and contains an expression; appending /L or /U to the end of a rule forces the translated result to all lowercase or all uppercase, respectively. The following examples show the format and syntax:

RULE:[n:string](regexp)s/pattern/replacement/
RULE:[n:string](regexp)s/pattern/replacement/g
RULE:[n:string](regexp)s/pattern/replacement//L
RULE:[n:string](regexp)s/pattern/replacement/g/L
RULE:[n:string](regexp)s/pattern/replacement//U
RULE:[n:string](regexp)s/pattern/replacement/g/U

This rule translates user@MYDOMAIN.COM to user while keeping the default rule in place:

sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT
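To make the rule's effect concrete, the following Python sketch approximates how this single rule rewrites a matching principal. It is an illustration only, not Kafka's rule engine, and the function name apply_rule is hypothetical:

```python
import re

# Illustrative approximation of the rule
#   RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//
# applied to a single-component Kerberos principal (primary@REALM).
def apply_rule(principal):
    primary, realm = principal.split("@", 1)
    candidate = f"{primary}@{realm}"                # [1:$1@$0] builds this string
    if re.fullmatch(r".*@MYDOMAIN\.COM", candidate):
        return re.sub(r"@.*", "", candidate)        # s/@.*// strips the realm
    return principal  # no match: fall through (handled by DEFAULT in the real config)

print(apply_rule("user@MYDOMAIN.COM"))  # user
print(apply_rule("bob@OTHER.COM"))      # bob@OTHER.COM
```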

TLS principal user names

As mentioned earlier, principals are recognized based on how users authenticate to the Kafka broker, which in turn depends upon the security protocol used. To use TLS principals, you must understand how to accurately represent user names.

By default, the name of the principal identified by a TLS certificate is the DN (X.500 Distinguished Name) of that certificate (also known as the Subject), which uses the form CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown. You can use ssl.principal.mapping.rules to translate the DN to a more manageable principal name. Refer to Configuration options for customizing TLS user name for details.

If TLS is enabled but client authentication is not configured, clients connect anonymously over the TLS port and appear to the server as the user ANONYMOUS. Such a configuration provides encryption and server authentication, but no client identity. The server also sees the ANONYMOUS user when the PLAINTEXT security protocol is used. If you grant read and write permission to the ANONYMOUS user, anyone can access the brokers without authentication.

Configuration options for customizing TLS user name

By default, the TLS/SSL user name is in the form CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown. The ssl.principal.mapping.rules configuration accepts a list of rules for mapping the X.500 distinguished name (DN) to a short name. The rules are evaluated in order, and the first rule that matches a DN is used to map it to a short name; any later rules in the list are ignored.

Each rule starts with RULE: and contains an expression in the formats shown below. The default rule returns the string representation of the X.500 certificate DN. If the DN matches the pattern, the replacement command is run over the name.

  • Shorthand character classes are supported using double backslashes, for example, \\d for digits, \\w for word characters, \\s for whitespace, and \\p{L} for Unicode letters.
  • You can force the translated result to be all lowercase or all uppercase by adding /L or /U to the end of the rule:
RULE:pattern/replacement/
RULE:pattern/replacement/[LU]

Example ssl.principal.mapping.rules values are:

RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,
RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1@$2/L,
RULE:^.*[Cc][Nn]=([a-zA-Z0-9.]*).*$/$1/L,
DEFAULT

These rules translate the DN as follows: CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown to serviceuser and CN=adminUser,OU=Admin,O=Unknown,L=Unknown,ST=Unknown,C=Unknown to adminuser@admin.
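The following Python sketch approximates the first-match-wins evaluation of the three rules above, reproducing the translations just described. It is an illustration only, not Kafka's implementation; the names RULES and map_dn are hypothetical:

```python
import re

# Illustrative first-match-wins evaluation of the example rules;
# the third tuple element marks rules carrying the /L (lowercase) flag.
RULES = [
    (r"^CN=(.*?),OU=ServiceUsers.*$", r"\1", False),
    (r"^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$", r"\1@\2", True),
    (r"^.*[Cc][Nn]=([a-zA-Z0-9.]*).*$", r"\1", True),
]

def map_dn(dn):
    for pattern, replacement, lowercase in RULES:
        if re.fullmatch(pattern, dn):
            short = re.sub(pattern, replacement, dn)
            return short.lower() if lowercase else short
    return dn  # DEFAULT: the unmodified DN string

print(map_dn("CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"))
# serviceuser
print(map_dn("CN=adminUser,OU=Admin,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"))
# adminuser@admin
```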

Operations

An operation is an action performed on a resource. In addition to identifying the resources to which users or groups have access, ACLs identify the operations those users or groups are authorized to perform. For each resource, an operation is mapped to one or more Kafka APIs or request types for that resource. For example, a READ operation for the Topic resource is mapped to Fetch, OffsetCommit, and TxnOffsetCommit. Or, a WRITE operation for the Topic resource is mapped to Produce and AddPartitionsToTxn.

The following tables list the operations available for each resource type in Confluent Platform and describe the relationship between operations, resources, and APIs. This list is not comprehensive. To learn more about additional cluster resource operations, see Authorization (ACLs) in the Cluster Linking documentation.

Cluster resource operations

Operation Resource APIs allowed
Alter Cluster AlterReplicaLogDirs, CreateAcls, DeleteAcls
AlterConfigs Cluster AlterConfigs
ClusterAction Cluster Fetch (for replication only), LeaderAndIsr, OffsetForLeaderEpoch, StopReplica, UpdateMetadata, ControlledShutdown, WriteTxnMarkers
Create Cluster CreateTopics, Metadata
Describe Cluster DescribeAcls, DescribeLogDirs, ListGroups
DescribeConfigs Cluster DescribeConfigs

Topic resource type operations

Operation Resource APIs allowed
Alter Topic CreatePartitions
AlterConfigs Topic AlterConfigs
Create Topic CreateTopics, Metadata
Delete Topic DeleteRecords, DeleteTopics
Describe Topic ListOffsets, Metadata, OffsetFetch, OffsetForLeaderEpoch
DescribeConfigs Topic DescribeConfigs
Read Topic Fetch, OffsetCommit, TxnOffsetCommit
Write Topic Produce, AddPartitionsToTxn

Group resource type operations

Operation Resource APIs allowed
Delete Group DeleteGroups
Describe Group DescribeGroup, FindCoordinator, ListGroups
Read Group AddOffsetsToTxn, Heartbeat, JoinGroup, LeaveGroup, OffsetCommit, OffsetFetch, SyncGroup, TxnOffsetCommit

Token resource type operations

Operation Resource API allowed
Describe DelegationToken DescribeTokens

Transactional ID resource type operations

Operation Resource APIs allowed
Describe TransactionalId FindCoordinator
Write TransactionalId Produce, AddPartitionsToTxn, AddOffsetsToTxn, EndTxn, InitProducerId, TxnOffsetCommit

The operations in the tables above are for clients (producers, consumers, admin) and interbroker operations of a cluster. In a secure cluster, client requests and interbroker operations require authorization. The interbroker operations are split into two classes: cluster and topic. Cluster operations refer to operations necessary for the management of the cluster, like updating broker and partition metadata, changing the leader and the set of in-sync replicas of a partition, and triggering a controlled shutdown.

Because of how replication of topic partitions works internally, the broker principal must be a super user so that the broker can replicate topics properly from leader to follower.

Producers and consumers need to be authorized to perform operations on topics, but they should be configured with principals different from those of the brokers. The main operations that producers require authorization to execute are WRITE and READ. Admin users run command line tools and also require authorization. Operations that an admin user might need authorization for are DELETE, CREATE, and ALTER. You can use wildcards (*) for producers and consumers so that you only need to set the ACL once.

Implicitly-derived operations

Certain operations provide additional implicit operation access to users.

When granted READ, WRITE, or DELETE, users implicitly derive the DESCRIBE operation.

When granted ALTER_CONFIGS, users implicitly derive the DESCRIBE_CONFIGS operation.
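The derivation above can be sketched as a simple lookup. This is an illustrative Python sketch of the stated rule, not Kafka code; the names IMPLIES and effective_operations are hypothetical:

```python
# Implicit-operation rule from the text: READ, WRITE, or DELETE implies
# DESCRIBE; ALTER_CONFIGS implies DESCRIBE_CONFIGS.
IMPLIES = {
    "READ": {"DESCRIBE"},
    "WRITE": {"DESCRIBE"},
    "DELETE": {"DESCRIBE"},
    "ALTER_CONFIGS": {"DESCRIBE_CONFIGS"},
}

def effective_operations(granted):
    ops = set(granted)
    for op in granted:
        ops |= IMPLIES.get(op, set())   # add any implicitly derived operations
    return ops

print(sorted(effective_operations({"READ", "WRITE"})))
# ['DESCRIBE', 'READ', 'WRITE']
```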

ACL order of precedence

In contexts where you have both allow and deny ACLs, deny ACLs take precedence over allow ACLs.
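The precedence rule can be illustrated with a small sketch: a matching deny wins over any matching allow, and with no match access is denied (unless the principal is a super user). This is a simplified model for illustration, not Kafka's authorizer; the function is_authorized and the dict-based ACL shape are hypothetical:

```python
# Simplified ACL evaluation: DENY beats ALLOW; no match means denied.
def is_authorized(principal, operation, resource, acls):
    matches = [a for a in acls
               if a["principal"] in (principal, "User:*")
               and a["operation"] == operation
               and a["resource"] == resource]
    if any(a["permission"] == "DENY" for a in matches):
        return False
    return any(a["permission"] == "ALLOW" for a in matches)

acls = [
    {"principal": "User:*", "operation": "Read", "resource": "test-topic",
     "permission": "ALLOW"},
    {"principal": "User:mallory", "operation": "Read", "resource": "test-topic",
     "permission": "DENY"},
]
print(is_authorized("User:alice", "Read", "test-topic", acls))    # True
print(is_authorized("User:mallory", "Read", "test-topic", acls))  # False (deny wins)
```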

Resources

Users access and perform operations on specific Kafka and Confluent Platform resources. A resource can be a cluster, group, Kafka topic, transactional ID, or Delegation token. ACLs specify which users can access a specified resource and the operations they can perform on that resource. Within Kafka, resources include:

Cluster
The Kafka cluster. To run operations that impact the entire Kafka cluster, such as a controlled shutdown or creating new topics, a principal must be assigned privileges on the cluster resource.
Delegation Token
Delegation tokens are shared secrets between Apache Kafka® brokers and clients. Authentication based on delegation tokens is a lightweight authentication mechanism that you can use to complement existing SASL/SSL methods. Refer to Authentication using Delegation Tokens in Confluent Platform for more details.
Group
Groups in the brokers. All protocol calls that work with groups, such as joining a group, must have corresponding privileges with the group in the subject. Group (group.id) includes Consumer Group, Stream Group (application.id), Connect Worker Group, or any other group that uses the Consumer Group protocol, such as a Schema Registry cluster.
Topic
All Kafka messages are organized into topics (and partitions). To access a topic, you must have a corresponding operation (such as READ or WRITE) defined in an ACL.
Transactional ID
A transactional ID (transactional.id) identifies a single producer instance across application restarts and provides a way to ensure a single writer; this is necessary for exactly-once semantics (EOS). Only one producer can be active for each transactional.id. When a producer starts, it first checks whether there is a pending transaction by a producer with its own transactional.id. If there is, it waits until that transaction has finished (abort or commit). This guarantees that the producer always starts from a consistent state.

When transactions are used, a producer must be authorized to perform operations on its transactional ID. For example, the following ACL allows all users in the system access to an EOS producer:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --transactional-id '*' \
  --allow-principal User:* \
  --operation write

In cases where you need to create ACLs for a Kafka cluster to allow Streams exactly-once (EOS) processing:

# Allow Streams EOS:
kafka-acls ... \
  --add \
  --allow-principal User:team1 \
  --operation WRITE \
  --operation DESCRIBE \
  --transactional-id team1-streams-app1 \
  --resource-pattern-type prefixed

For additional information about the role of transactional IDs, refer to Transactions in Apache Kafka.

The operations available to a user depend on the resources to which the user has been granted access. All resources have a unique resource identifier. For example, for the topic resource type, the resource identity is the topic name, and for the group resource type, the resource identity is the group name.

You can view the ACLs for a specific resource using the --list option. For example, to view all ACLs for the topic test-topic run the following command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --list \
  --topic test-topic

Use prefixed ACLs

You can specify ACL resources using either a LITERAL pattern type (the default) or a PREFIXED pattern type. The literal resource name * is a wildcard that matches resources of any name.

If you identify the resource as LITERAL, Kafka attempts to match the full resource name (for example, a topic or consumer group name) with the resource specified in the ACL. In some cases, you might want to use an asterisk (*) to specify all resources.

If you identify the resource as PREFIXED, Kafka attempts to match the prefix of the resource name with the resource specified in the ACL.

For example, you can add an ACL for the principal User:kafka/kafka1.host-1.com@bigdata.com to produce to any topic with a name that uses the prefix Test-. You can do this by running the following command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:kafka/kafka1.host-1.com@bigdata.com \
  --producer \
  --topic Test- \
  --resource-pattern-type prefixed

In the following example, a program called “BillingPublisher”, built with the Kafka Java client library, requires an ACL that allows it to write only to topics that use the prefix billing-:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:BillingPublisher \
  --allow-host 198.51.100.0 \
  --producer \
  --topic billing- \
  --resource-pattern-type prefixed

Be aware that you cannot use the PREFIXED resource pattern type for a topic while granting access to all groups * (wildcard) within a single command. Instead, split permissions across different commands. For example, grant READ and DESCRIBE access to the user for the prefixed topics:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:username \
  --operation Read \
  --operation Describe \
  --topic topicprefix \
  --resource-pattern-type prefixed

Then grant user READ access to all groups:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:username \
  --operation Read \
  --group '*'

Super users

By default, if a resource has no associated ACLs, then only super users can access that resource. If you want to change that behavior, you can include the following in server.properties: allow.everyone.if.no.acl.found=true.

Note

Use of the allow.everyone.if.no.acl.found configuration option in production environments is strongly discouraged.

  • If you specify this option based on the assumption that you have ACLs, but then your last ACL is deleted, you essentially open up your Kafka clusters to all users.
  • If you’re using this option to disable ACLs, exercise caution: if someone adds an ACL, all the users who previously had access will lose that access.

You can add super users in server.properties (note that the delimiter is a semicolon because TLS/SSL user names might contain a comma) as shown here:

super.users=User:Bob;User:Alice
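The choice of delimiter matters because TLS DNs themselves contain commas, so splitting super.users on a comma would break a DN apart. The following Python sketch illustrates the semicolon-based parse (the DN value shown is a made-up example):

```python
# Illustrative parse of a super.users value: semicolons separate
# principals; commas inside a TLS DN stay intact.
value = "User:CN=kafka,OU=Ops,O=Example,C=US;User:Alice"
supers = [p.strip() for p in value.split(";") if p.strip()]
print(supers)
# ['User:CN=kafka,OU=Ops,O=Example,C=US', 'User:Alice']
```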

ACLs and monitoring interceptors

Confluent Monitoring Interceptors produce to the _confluent-monitoring topic by default. You can configure the topic name using the confluent.monitoring.interceptor.topic property. Applications and programs that use interceptors must have WRITE and DESCRIBE access to the _confluent-monitoring topic. If the topic does not exist, creating it requires cluster-level CREATE and DESCRIBE access, as well as topic-level CREATE, DESCRIBE, READ, and WRITE access on _confluent-monitoring.

You can provide access either individually for each client principal that will use interceptors, or using a wildcard entry for all clients. The following example shows an ACL that grants a principal access to the _confluent-monitoring topic:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --topic _confluent-monitoring \
  --allow-principal User:username \
  --operation Write \
  --operation Describe

The Confluent Control Center principal requires READ, DESCRIBE, and CREATE access to the _confluent-monitoring topic. Use the control-center-set-acls script to set the appropriate permissions for the Confluent Control Center principal to access this topic. For details, see Configure HTTP Basic Authentication with Control Center on Confluent Platform.

Use ACLs

The examples in the following sections use kafka-acls (the Kafka authorization management CLI) to add, remove, or list ACLs. For details on the supported options, run kafka-acls --help. In ZooKeeper-based clusters, ACLs are stored in ZooKeeper and propagated to the brokers asynchronously, so there may be a delay before a change takes effect, even after the command returns.

You can also use the Kafka AdminClient API to manage ACLs.

Common ACL use cases include:

  • To create a topic, the principal of the client requires the CREATE and DESCRIBE operations on the Topic or Cluster resource.
  • To produce to a topic, the principal of the producer requires the WRITE operation on the Topic resource.
  • To consume from a topic, the principal of the consumer requires the READ operation on the Topic and Group resources.

Note that for clients to create, produce, and consume, the brokers themselves must also be configured with the appropriate ACLs: brokers require authorization to update metadata (CLUSTER_ACTION) and to read from a topic (READ) for replication purposes.

ZooKeeper-based ACLs do not support use of the confluent iam acl commands, which are only used with centralized ACLs.

ACL format

Kafka ACLs are defined in the general format of “Principal P is [Allowed/Denied] Operation O From Host H On Resources matching ResourcePattern RP”.

  • A wildcard resource name (*) matches resources of any name.

  • You can give topic and group wildcard access to users who have permission to access all topics and groups (for example, admin users). If you use this method, you don’t need to create a separate rule for each topic and group for the user. For example, you can use this command to grant wildcard access to Alice:

    kafka-acls --bootstrap-server localhost:9092 \
      --command-config adminclient-configs.conf \
      --add \
      --allow-principal User:Alice \
      --operation All \
      --topic '*' \
      --group '*'
    

Use the configuration properties file

If you have used the producer API, consumer API, or Streams API with Kafka clusters before, you know that connectivity details for the cluster are specified using configuration properties. The administration tools that ship with Kafka work the same way: after you define the configuration properties (often in a config.properties file), both applications and tools can use them to connect to clusters.

After you create a configuration properties file, pass its path to each command you issue; the command reads the file and uses it to establish connectivity to the Kafka cluster. Generating this file is the first step in interacting with your Kafka clusters using the native Kafka tools.

The --command-config argument supplies the Confluent CLI tools with the configuration properties that they require to connect to the Kafka cluster, in the .properties file format. Typically, this includes the security.protocol that the cluster uses to connect and any information necessary to authenticate to the cluster. For example:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="alice" \
  password="s3cr3t";

Add ACLs

Suppose you want to add an ACL where: principals User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown and User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown are allowed to perform read and write operations on the topic test-topic from IP addresses 198.51.100.0 and 198.51.100.1. You can do that by executing the following:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf --add \
  --allow-principal "User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown" \
  --allow-principal "User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \
  --allow-host 198.51.100.0 \
  --allow-host 198.51.100.1 \
  --operation Read \
  --operation Write \
  --topic test-topic

Note that --allow-host and --deny-host only support IP addresses (hostnames are not supported). IPv6 addresses are also supported and can be used in ACLs.

By default, all principals without an explicit ACL that allows an operation on a resource are denied. In the rare case where you want an ACL that allows access to everyone except certain principals, you can use the --deny-principal and --deny-host options. For example, use the following command to allow all users to read from test-topic but deny User:kafka/kafka6.host-1.com@bigdata.com from IP address 198.51.100.3:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:'*' \
  --allow-host '*' \
  --deny-principal User:kafka/kafka6.host-1.com@bigdata.com \
  --deny-host 198.51.100.3 \
  --operation Read \
  --topic test-topic

Kafka does not support certificate revocation lists (CRLs), so you cannot revoke a client’s certificate. The only alternative is to disable the user’s access using an ACL:

kafka-acls --bootstrap-server localhost:9092 \
  --add \
  --deny-principal "User:CN=Bob,O=Sales" \
  --cluster \
  --topic '*'

The examples above add ACLs to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly, one can add ACLs to a cluster by specifying --cluster and to a group by specifying --group [group-name]. If you need to grant permission to all groups, you can specify --group='*', as shown in the following command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:'*' \
  --operation Read \
  --topic test \
  --group '*'

You can add ACLs on prefixed resource patterns. For example, you can add an ACL that enables users in the org unit (OU) ServiceUsers (this organization is using TLS authentication) to produce to any topic whose name starts with Test-. You can do that by running the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown \
  --producer \
  --topic Test- \
  --resource-pattern-type prefixed

Note that --resource-pattern-type defaults to literal. A literal pattern affects only the resource with the identical name or, in the case of the wildcard resource name '*', a resource of any name.
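The difference between the pattern types can be sketched as a small helper function (an illustrative model, not the broker's implementation):

```python
# Illustrative sketch of how the resource pattern types match a resource
# name. LITERAL matches an identical name (or everything when the pattern
# name is the wildcard '*'); PREFIXED matches any name that starts with
# the pattern name.

def pattern_matches(pattern_type, pattern_name, resource_name):
    if pattern_type == "LITERAL":
        return pattern_name == "*" or pattern_name == resource_name
    if pattern_type == "PREFIXED":
        return resource_name.startswith(pattern_name)
    raise ValueError(f"unknown pattern type: {pattern_type}")
```

For example, a literal pattern named Test- matches only a topic literally named "Test-", while a prefixed pattern named Test- matches Test-topic, Test-1, and so on.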

Caution

The --link-id option for kafka-acls, available starting with Confluent Platform 7.1.0, is experimental and should not be used in production deployments. In particular, do not use --link-id to create ACLs. If an ACL with --link-id is created on the source cluster, it is marked for management by the link ID and is not synced to the destination, regardless of acl.sync.filters. Currently, Confluent Platform does not validate link IDs created with kafka-acls. For details, see Migrating ACLs from Source to Destination Cluster.

Remove ACLs

Removing ACLs is similar to adding them, except that you specify the --remove option instead of --add. To remove the ACLs added in the first example above, run the following command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf --remove \
  --allow-principal "User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown" \
  --allow-principal "User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \
  --allow-host 198.51.100.0 \
  --allow-host 198.51.100.1 \
  --operation Read \
  --operation Write \
  --topic test-topic

If you want to remove the ACL added to the prefixed resource pattern in the example, run the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --remove \
  --allow-principal User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown \
  --producer \
  --topic Test- \
  --resource-pattern-type prefixed

List ACLs

You can list the ACLs for a given resource by specifying the --list option and the resource. For example, to list all ACLs for test-topic, run the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --list \
  --topic test-topic

However, this only returns the ACLs that have been added to this exact resource pattern. Other ACLs can exist that affect access to the topic; for example, any ACLs on the topic wildcard '*', or any ACLs on prefixed resource patterns. You can explicitly query ACLs on the wildcard resource pattern by running the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --list \
  --topic '*'

You might not be able to explicitly query for ACLs on prefixed resource patterns that match Test-topic because the name of such patterns may not be known, but you can list all ACLs affecting Test-topic by using --resource-pattern-type match. For example:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --list \
  --topic Test-topic \
  --resource-pattern-type match

This command lists ACLs on all matching literal, prefixed, and wildcard resource patterns.
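The behavior of --resource-pattern-type match can be sketched as a filter over stored resource patterns (a simplified model; the pattern names here are invented for illustration):

```python
# Illustrative sketch of --resource-pattern-type match: return every stored
# pattern (literal, wildcard, or prefixed) that applies to the named topic.
# Simplified model, not the broker's actual implementation.

def matching_patterns(stored, topic):
    out = []
    for ptype, name in stored:
        if ptype == "LITERAL" and name in (topic, "*"):
            out.append((ptype, name))
        elif ptype == "PREFIXED" and topic.startswith(name):
            out.append((ptype, name))
    return out

# Hypothetical set of stored resource patterns:
stored = [("LITERAL", "Test-topic"), ("LITERAL", "*"),
          ("PREFIXED", "Test-"), ("LITERAL", "other-topic")]
```

A MATCH query for Test-topic would return the literal pattern of the same name, the wildcard pattern, and the Test- prefixed pattern, but not patterns for unrelated topics.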

To view an ACL for an internal topic, run the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --list \
  --topic __consumer_offsets

Add or remove a principal as a producer or consumer

The most common use cases for ACL management are adding or removing a principal as a producer or consumer. To add the user Jane Doe (Kerberos principal User:janedoe@bigdata.com) as a producer of test-topic, run the following CLI command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add --allow-principal User:janedoe@bigdata.com \
  --producer --topic test-topic

To add User:janedoe@bigdata.com as a consumer of test-topic with group Group-1, specify the --consumer and --group options:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:janedoe@bigdata.com \
  --consumer \
  --topic test-topic \
  --group Group-1

To remove a principal from a producer or consumer role, specify the --remove option.
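As a hedged summary (approximate, for recent Kafka versions; confirm with kafka-acls --list on your own cluster), the --producer and --consumer convenience options expand to roughly these ACLs:

```python
# Approximate expansion of the kafka-acls convenience options into the
# (resource type, operation) pairs they grant. This mapping is an
# assumption based on recent Kafka versions; verify against the output
# of kafka-acls --list on your cluster.

CONVENIENCE_ACLS = {
    "--producer": [
        ("Topic", "Write"),
        ("Topic", "Describe"),
        ("Topic", "Create"),   # lets the producer create the topic if needed
    ],
    "--consumer": [
        ("Topic", "Read"),
        ("Topic", "Describe"),
        ("Group", "Read"),     # requires the --group option
    ],
}
```

This is why --consumer requires the --group option: the consumer needs Read on the consumer group in addition to the topic ACLs.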

Enable authorization for idempotent and transactional APIs

To ensure that exactly one copy of each message is written to the stream, configure the producer with enable.idempotence=true. The principal used by idempotent producers must be authorized to perform IdempotentWrite on the cluster.

To enable Bob to produce messages using an idempotent producer, you can execute the command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:Bob \
  --producer \
  --topic test-topic \
  --idempotent

To enable transactional delivery with reliability semantics that span multiple producer sessions, configure a producer with a non-empty transactional.id. The principal used by transactional producers must be authorized for Describe and Write operations on the configured transactional.id.

To enable Alice to produce messages using a transactional producer with transactional.id=test-txn, run the command:

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:Alice \
  --producer \
  --topic test-topic \
  --transactional-id test-txn
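On the client side, the configuration that pairs with this ACL looks roughly like the following confluent-kafka-style config dict; the transactional.id value must match the one granted in the ACL (the broker address and client library are assumptions for illustration):

```python
# Hedged sketch of a transactional producer configuration, in the
# config-dict style used by the confluent-kafka Python client.
# "localhost:9092" is a placeholder broker address.

producer_config = {
    "bootstrap.servers": "localhost:9092",
    "transactional.id": "test-txn",  # must match the ID granted in the ACL
    "enable.idempotence": True,      # implied when transactions are enabled
}
```

With this configuration, the principal Alice needs Describe and Write on transactional.id test-txn, plus the usual producer ACLs on test-topic.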

Create non-super user ACL administrators

If you need a non-super user to create or delete ACLs, but do not want to grant them the super user role, an existing super user can grant that user (referred to here as the ACL administrator) an access control entry (ACE) that binds the Alter operation to the Cluster resource. After being granted ALTER --cluster, the ACL administrator can create and delete ACLs for any resource in the cluster.

kafka-acls --bootstrap-server localhost:9092 \
  --command-config adminclient-configs.conf \
  --add \
  --allow-principal User:notSuper \
  --operation ALTER --cluster

Note

  • If you wish to assign ALTER --cluster to a group, then Group:groupName is also valid; however, the authorizer you are using must support group principals.
  • Exercise caution when assigning ALTER --cluster to users or groups, because such users can create and delete ACLs to control their own access to resources as well.

Authorization in the REST Proxy and Schema Registry

You can use Kafka ACLs to enforce authorization in the REST Proxy and Schema Registry by using the REST Proxy Security Plugin and Schema Registry Security Plugin for Confluent Platform.

Debug using authorizer logs

To help debug authorization issues, you can run clusters with the authorizer log set to DEBUG in log4j.properties. If you're using the default log4j.properties file, change the following line from WARN to DEBUG:

log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender

The log4j.properties file is located in the Kafka configuration directory at /etc/kafka/log4j.properties. If you’re using an earlier version of Confluent Platform, or if you’re using your own log4j.properties file, you’ll need to add the following lines to the configuration:

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false

You must restart the broker for the change to take effect. Once enabled, every request being authorized and its associated user name is logged. The log is written to ${kafka.logs.dir}/kafka-authorizer.log. The location of the logs depends on the packaging format: /var/log/kafka for RPM and Debian packages, and $base_dir/logs for the archive format.
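Once DEBUG logging is enabled, the authorizer log can be grepped or parsed. The following sketch pulls the key fields out of a log line; the line format shown is an assumption based on typical authorizer output and may vary by broker version, so adjust the regular expression to match your logs.

```python
import re

# Hedged sketch: extract principal, decision, operation, host, and resource
# from a kafka-authorizer.log line. The format below is an assumed example
# of typical authorizer output, not a guaranteed contract.

LINE_RE = re.compile(
    r"Principal = (?P<principal>\S+) is (?P<decision>Allowed|Denied) "
    r"Operation = (?P<operation>\S+) from host = (?P<host>\S+) "
    r"on resource = (?P<resource>\S+)"
)

def parse_authorizer_line(line):
    """Return a dict of fields, or None if the line doesn't match."""
    m = LINE_RE.search(line)
    return m.groupdict() if m else None

# Assumed example line for illustration:
sample = ("[2024-01-15 10:00:00,123] DEBUG Principal = User:alice is Denied "
          "Operation = Describe from host = 198.51.100.3 "
          "on resource = Topic:LITERAL:test-topic (kafka.authorizer.logger)")
```

A parser like this makes it easy to aggregate denied requests per principal when auditing why a client is being rejected.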