.. _kafka_authorization: Authorization using ACLs ======================== |ak-tm| ships with a pluggable, out-of-the-box Authorizer implementation that uses |zk-full| to store all the ACLs. It is important to set ACLs because otherwise access to resources is limited to super users when an Authorizer is configured. The default behavior is that if a resource has no associated ACLs, then no one is allowed to access the resource, except super users. .. include:: ../includes/cp-demo-tip.rst .. _acl-concepts: ACL concepts ------------ Access Control Lists (ACLs) provide important authorization controls for your enterprise’s |ak-tm| cluster data. Before attempting to create and use ACLs, familiarize yourself with the concepts described in this section; your understanding of them is key to your success when creating and using ACLs to manage access to components and cluster data. Authorizer ~~~~~~~~~~ An authorizer is a server plugin used by |ak-tm| to authorize operations. More specifically, an authorizer controls whether or not to authorize an operation based on the principal and the resource being accessed. The default |ak| authorizer implementation is AclAuthorizer (``kafka.security.authorizer.AclAuthorizer``), which was introduced in |ak-tm| 2.4/|cp| 5.4.0. Prior to that, the authorizer was named SimpleAclAuthorizer (``kafka.security.auth.SimpleAclAuthorizer``). To enable and use the AclAuthorizer, set its full class name for your broker configuration in ``server.properties``: .. codewithvars:: bash authorizer.class.name=kafka.security.authorizer.AclAuthorizer .. note:: While this topic covers AclAuthorizer only, be aware that Confluent also provides the :ref:`kafka_ldap_authorizer` to allow for group and user-principal-based authorization, and the :ref:`confluent_server_authorizer` to allow for proprietary LDAP group-based and role-based access control (:ref:`RBAC `), as well as :ref:`centralized ACLs `. By default, |csa| supports |zk|-based ACLs. AclAuthorizer stores |ak| ACL information in |zk|. However, it does not control access to |zk| nodes. Rather, |zk| has its own ACL security to control access to |zk| nodes. |zk| ACLs control which principal (for example, the broker principal) can update |zk| nodes containing |ak| cluster metadata (such as in-sync replicas, topic configuration, and |ak| ACLs) and nodes used in inter-broker coordination (such as controller election, broker joining, and topic deletion). Kafka ACLs control which principals can perform operations on |ak| resources. |ak| brokers can use |zk| ACLs by enabling :ref:`zk-security` (``zookeeper.set.acl=true``) for the broker configuration. .. _acl-principal: Principal ~~~~~~~~~ A principal is an entity that can be authenticated by the authorizer. Clients of a |ak| broker identify themselves as a particular principal using various security protocols. The way a principal is identified depends upon which security protocol it uses to connect to the |ak| broker (for example: :ref:`mTLS `, :ref:`SASL/GSSAPI `, or :ref:`SASL/PLAIN `). Authentication depends on the security protocol in place (such as SASL, TLS/SSL) to recognize a principal within a |ak| broker. The following examples show the principal name format based on the security protocol being used: - When a client connects to a |ak| broker using the SSL security protocol, the principal name will be in the form of the SSL certificate subject name: ``CN=quickstart.confluent.io,OU=TEST,O=Sales,L=PaloAlto,ST=Ca,C=US``. 
Note that there are no spaces after the comma between subject parts. - When a client connects to a |ak| broker using the SASL security protocol with GSSAPI (Kerberos) mechanism, the principal will be in the Kerberos principal format: ``kafka-client@hostname.com``. For more detail, refer to `Kerberos Principal Names `__. - When a client connects to a Kafka broker using the SASL security protocol with a PLAIN or SCRAM mechanism, the principal will be a simple text string, such as ``alice``, ``admin``, or ``billing_etl_job_03``. The AclAuthorizer only supports individual users and always interprets the principal as the user name. However, other authorizers, such as the LDAP Authorizer, support groups. Therefore, when specifying the principal you must include the type using the prefix ``User:`` or ``Group:`` (case-sensitive). Some examples: ``User:admin``, ``Group:developers``, or ``User:CN=quickstart.confluent.io,OU=TEST,O=Sales,L=PaloAlto,ST=Ca,C=US``. In the following ACL, the plain text principals (``User:alice``, ``User:fred``) are identified as |ak| users who are allowed to run specific operations (read, write) from either of the specified hosts (host-1, host-2) on a specific resource (topic): .. codewithvars:: bash kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --add --allow-principal User:alice --allow-principal User:fred --allow-host host-1 \ --allow-host host-2 --operation read --operation write --topic finance-topic It's a best practice to create one principal per application and give each principal only the ACLs it requires and no more. For example: if Alice is writing three programs that access different topics to automate a billing workflow, she could create three principals: ``billing_etl_job_01``, ``billing_etl_job_02``, and ``billing_etl_job_03``. She would then grant each principal permissions on only the exact topics it needs, and run each program with its specific principal. Alternatively, she could take a middle-ground approach and create a single ``billing_etl_jobs`` principal with access to all of the topics that the billing programs require, and simply run all three with that principal. Alice should not run these programs as her own principal because she would presumably have broader permissions than the jobs actually need. Running with one principal per application also helps significantly with debugging and auditing because it's clearer which application is performing each operation. .. _acl-wildcard-principals: Wildcard principals ^^^^^^^^^^^^^^^^^^^ You can create ACLs for all principals using a wildcard in the principal ``User:*``. ACLs that use a wildcard as the user principal are applied to all users. For example, the following command grants everyone access to the topic ``testTopic``: :: kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \ --allow-principal User:* --operation All --topic testTopic If you use an authorizer that supports group principals, such as |csa|, you can also create ACLs for all group principals using the principal ``Group:*``. ACLs that use the wildcard as the principal are applied to all users who belong to at least one group. You cannot create ACLs that use the wildcard for super users. For example, while the following configuration makes the user principal ``User:*`` a super user, it does not make every user a super user because no wildcard match is performed: :: super.users=User:* .. 
note:: If you are using |csa|, note that role bindings do not support wildcard matching. Hence, assigning a role to ``User:*`` does not grant the role to every user. Refer to :ref:`rbac-overview` for more details about RBAC principals. .. _sasl-with-kerberos-principals: SASL/Kerberos principals ^^^^^^^^^^^^^^^^^^^^^^^^ If you use Kerberos, your |ak| principal is based on your Kerberos principal (for example, ``kafka/kafka1.hostname.com@EXAMPLE.COM``). By default, |ak| only uses the primary name of the Kerberos principal, which is the name that appears before the slash (``/``). Hence, if the broker Kerberos principal is ``kafka/broker1.example.com@EXAMPLE``, then the principal used by the |ak| authorizer is ``kafka``. The hostname is different for every broker. This parsing is automatically implemented using the default value of ``sasl.kerberos.principal.to.local.rules``. For details about Kerberos principal names and configurations, refer to `Kerberos Principals `__. .. note:: If you are using your organization’s Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each |ak| broker in your cluster and for every operating system user that will access |ak| with Kerberos authentication (using clients and tools). Server principals are of type NT_HOSTBASED_SERVICE. Each broker must be able to communicate with all of the other brokers for replication, and when it acts as the controller. You must add the broker principal as a super user; otherwise, |ak| will not work. Configuration options for customizing SASL/Kerberos user name """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""" By default, the |ak| principal will be the primary part of the Kerberos principal. You can change this behavior by specifying a customized rule for ``sasl.kerberos.principal.to.local.rules`` in ``server.properties``. The value of ``sasl.kerberos.principal.to.local.rules`` takes the form of a list, where each rule works in the same way it does in ``auth_to_local`` in the `Kerberos configuration file (krb5.conf) `__. A rule can also force the translated result to be all lowercase or all uppercase; you do this by appending ``/L`` or ``/U`` to the end of the rule. Each rule starts with ``RULE:`` and contains an expression. The following examples show the format and syntax: :: RULE:[n:string](regexp)s/pattern/replacement/ RULE:[n:string](regexp)s/pattern/replacement/g RULE:[n:string](regexp)s/pattern/replacement//L RULE:[n:string](regexp)s/pattern/replacement/g/L RULE:[n:string](regexp)s/pattern/replacement//U RULE:[n:string](regexp)s/pattern/replacement/g/U This rule translates ``user@MYDOMAIN.COM`` to ``user`` while keeping the default rule in place: :: sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT TLS/SSL principal user names ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ As mentioned earlier, principals are recognized based on how users authenticate to the broker, which in turn depends upon the security protocol in use. To use TLS/SSL principals, you must understand how to accurately represent user names. By default, the name of the principal identified by a TLS/SSL certificate is the DN (X.500 Distinguished Name) of that certificate (also known as the Subject), which uses the form ``CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown``. You can use ``ssl.principal.mapping.rules`` to translate the DN to a more manageable principal name, as shown in the example below.
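For example, assuming the default DN layout shown above, a mapping rule along the following lines (a sketch only; adjust the regular expression to match your own certificate DNs) extracts just the CN, so the principal for the certificate above becomes ``writeuser``, while ``DEFAULT`` keeps the full DN for any certificate the pattern does not match:

::

    ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1/,DEFAULT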
Refer to :ref:`kafka_config_ssl_user_name` for details. In the event that SSL is enabled but client authentication is not configured, clients will connect anonymously using the SSL port and will appear to the server with the user name ANONYMOUS. Such a configuration provides encryption and server authentication, but clients connect anonymously. The other case in which the server will see the ANONYMOUS user is if the PLAINTEXT security protocol is being used. By granting read/write permission to the ANONYMOUS user, you are allowing anyone to access the brokers without authentication. As such, you should not grant access to ANONYMOUS users unless the intention is to give everyone the permission. .. _kafka_config_ssl_user_name: Configuration options for customizing TLS/SSL user name """"""""""""""""""""""""""""""""""""""""""""""""""""""" .. include:: ../includes/kafka-ssl-user-dname-mapping-rule.rst .. _acl-operations: Operations ~~~~~~~~~~ An operation is an action performed on a :ref:`resource `. In addition to identifying the resources to which users or groups have access, ACLs also identify the operations those users or groups are authorized to perform. For each resource, an operation is mapped to one or more |ak| APIs or request types applicable for that resource. For example, a READ operation for the Topic resource is mapped to Fetch, OffsetCommit, and TxnOffsetCommit. Or, a WRITE operation for the Topic resource is mapped to Produce and AddPartitionsToTxn. The following tables identify the valid operations available for each resource type in |cp|, and describe the relationship between operations, resources, and APIs: .. _acl-format-operations-resources: Operations available for the Cluster resource type: ================ =============== ========================================= Operation Resource API ================ =============== ========================================= ALTER Cluster AlterReplicaLogDirs ALTER Cluster CreateAcls ALTER Cluster DeleteAcls ALTER_CONFIGS Cluster AlterConfigs CLUSTER_ACTION Cluster Fetch (for replication only) CLUSTER_ACTION Cluster LeaderAndIsr CLUSTER_ACTION Cluster OffsetForLeaderEpoch CLUSTER_ACTION Cluster StopReplica CLUSTER_ACTION Cluster UpdateMetadata CLUSTER_ACTION Cluster ControlledShutdown CLUSTER_ACTION Cluster WriteTxnMarkers CREATE Cluster CreateTopics CREATE Cluster Metadata if ``auto.create.topics.enable`` DESCRIBE Cluster DescribeAcls DESCRIBE Cluster DescribeLogDirs DESCRIBE Cluster ListGroups DESCRIBE_CONFIGS Cluster DescribeConfigs IDEMPOTENT_WRITE Cluster InitProducerId IDEMPOTENT_WRITE Cluster Produce ================ =============== ========================================= Operations available for the Topic resource type: ================ =============== ========================================= Operation Resource API ================ =============== ========================================= ALTER Topic CreatePartitions ALTER_CONFIGS Topic AlterConfigs CREATE Topic Metadata if ``auto.create.topics.enable`` CREATE Topic CreateTopics DELETE Topic DeleteRecords DELETE Topic DeleteTopics DESCRIBE Topic ListOffsets DESCRIBE Topic Metadata DESCRIBE Topic OffsetFetch DESCRIBE Topic OffsetForLeaderEpoch DESCRIBE_CONFIGS Topic DescribeConfigs READ Topic Fetch READ Topic OffsetCommit READ Topic TxnOffsetCommit WRITE Topic Produce WRITE Topic AddPartitionsToTxn ================ =============== ========================================= Operations available for the Group resource type: ================ =============== 
========================================= Operation Resource API ================ =============== ========================================= DELETE Group DeleteGroups DESCRIBE Group DescribeGroup DESCRIBE Group FindCoordinator DESCRIBE Group ListGroups READ Group AddOffsetsToTxn READ Group Heartbeat READ Group JoinGroup READ Group LeaveGroup READ Group OffsetCommit READ Group OffsetFetch READ Group SyncGroup READ Group TxnOffsetCommit ================ =============== ========================================= Operations available for the Delegation Token resource type: ================ =============== ========================================= Operation Resource API ================ =============== ========================================= DESCRIBE DelegationToken DescribeTokens ================ =============== ========================================= Operations available for the Transactional ID resource type: ================ =============== ========================================= Operation Resource API ================ =============== ========================================= DESCRIBE TransactionalId FindCoordinator WRITE TransactionalId Produce WRITE TransactionalId AddPartitionsToTxn WRITE TransactionalId AddOffsetsToTxn WRITE TransactionalId EndTxn WRITE TransactionalId InitProducerId WRITE TransactionalId TxnOffsetCommit ================ =============== ========================================= The operations in the tables above apply both to clients (producers, consumers, admin) and to inter-broker operations of a cluster. In a secure cluster, both client requests and inter-broker operations require authorization. The inter-broker operations are split into two classes: cluster and topic. Cluster operations refer to operations necessary for the management of the cluster, like updating broker and partition metadata, changing the leader and the set of in-sync replicas of a partition, and triggering a controlled shutdown. Because of the way replication of topic partitions works internally, the broker principal must be a super user so that the broker can replicate topics properly from leader to follower. Producers and consumers need to be authorized to perform operations on topics, but they should be configured with different principals compared to the brokers. The main operations that producers and consumers require authorization to execute are WRITE and READ, respectively. Admin users can execute command line tools and require authorization. Operations that an admin user might need authorization for are DELETE, CREATE, and ALTER. You can use wildcard (``*``) resources for producers and consumers so that you only have to set the ACLs once. Implicitly-derived operations ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Certain operations provide additional implicit operation access to users. When granted READ, WRITE, or DELETE, users implicitly derive the DESCRIBE operation. When granted ALTER_CONFIGS, users implicitly derive the DESCRIBE_CONFIGS operation. .. _acl-precedence: ACL precedence ^^^^^^^^^^^^^^ In contexts where you have both allow and deny ACLs, deny ACLs take precedence over allow ACLs. .. _acl-resources: Resources ~~~~~~~~~ Users access and perform operations on specific |ak| and |cp| resources. A resource can be a cluster, group, |ak-tm| topic, transactional ID, or delegation token. ACLs specify which users can access a specified resource, and the operations they are permitted to run against that resource. Within |ak|, the resources are: Cluster The |ak| cluster.
Users who wish to run operations that impact the whole cluster, such as performing a controlled shutdown or creating a new topic, must be assigned privileges on the cluster resource. Delegation Token Delegation tokens are shared secrets between |ak-tm| brokers and clients. Authentication based on delegation tokens is a lightweight authentication mechanism that you can use to complement existing SASL/SSL methods. Refer to :ref:`kafka_sasl_delegate_auth` for more details. Group Groups in the brokers. All protocol calls that work with groups, such as joining a group, require corresponding privileges on the group concerned. Group (``group.id``) can mean Consumer Group, Stream Group (``application.id``), Connect Worker Group, or any other group that uses the Consumer Group protocol, such as a |sr| cluster. Topic All |ak| messages are organized into topics (and partitions). To access a topic, you must have a corresponding operation (such as READ or WRITE) defined in an ACL. Transactional ID A transactional ID (``transactional.id``) identifies a single producer instance across application restarts and provides a way to ensure a single writer; this is necessary for exactly-once semantics (EOS). There can only be one producer active at any time for each ``transactional.id``. When a producer starts up, it first checks whether or not there is a pending transaction by a producer with the same ``transactional.id``. If there is, then it must wait until the transaction has finished (abort or commit). This guarantees that the producer always starts from a consistent state. When transactions are used, the producer must be authorized to use its transactional ID and must have all of the related permissions set. For example, the following ACL allows all users in the system access to an EOS producer: .. codewithvars:: bash kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --add --transactional-id '*' --allow-principal User:* --operation write For additional information about the role of transactional IDs, refer to `Transactions in Apache Kafka `__. The :ref:`acl-operations` available to a user depend on the resources to which the user has been granted access. All resources have a resource identifier, which uniquely identifies them. For example, for the resource type topic, the resource identity is the topic name, and for the resource type group, the resource identity is the group name. You can view the ACLs for a specific resource using the ``--list`` option. For example, to view all ACLs for the topic ``test-topic``, run the following command: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --list --topic test-topic .. _prefixed-acls: Prefixed ACLs ^^^^^^^^^^^^^ You can specify ACL resources using a LITERAL value (the default), the PREFIXED pattern type, or a wildcard (``*``), which allows all. If you identify the resource using LITERAL, |ak| will try to match the full resource name (for example, topic or consumer group) with the resource specified in the ACL. In some cases you may want to use an asterisk (``*``) to specify all resources. If you identify the resource using PREFIXED, |ak| will try to match the prefix of the resource name with the resource specified in the ACL. For example, you can add an ACL for user ``User:kafka/kafka1.host-1.com@bigdata.com`` to produce to any topic with a name that uses the prefix ``Test-``. You can do this by running the following command: ..
codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --add --allow-principal User:kafka/kafka1.host-1.com@bigdata.com \ --producer --topic Test- --resource-pattern-type prefixed In the following example, a program called "BillingPublisher", which was built using the |ak| Java SDK, requires an ACL that allows it to write only to topics that use the prefix ``billing-``: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \ --allow-principal User:BillingPublisher \ --allow-host 198.51.100.0 --producer --topic billing- --resource-pattern-type prefixed Be aware that you cannot use the PREFIXED resource pattern type for a topic while granting access to all groups ``*`` (wildcard) within a single command. Instead, split the permissions across different commands. For example, grant READ and DESCRIBE access to the user for the prefixed topics: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \ --allow-principal User:username --operation Read --operation Describe --topic topicprefix --resource-pattern-type prefixed Then grant the user READ access to all groups: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \ --allow-principal User:username --operation Read --group '*' .. _kafka-auth-superuser: Super Users ^^^^^^^^^^^ By default, if a resource has no associated ACLs, then no one is allowed to access that resource except super users. If you want to change that behavior, you can include the following in ``server.properties``: ``allow.everyone.if.no.acl.found=true``. You can add super users in ``server.properties`` (note that the delimiter is a semicolon because SSL user names may contain a comma) as shown here: .. codewithvars:: bash super.users=User:Bob;User:Alice ACLs and Monitoring Interceptors ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Confluent :ref:`Monitoring Interceptors ` produce to the ``_confluent-monitoring`` topic by default. You can configure the ``_confluent-monitoring`` topic using the ``confluent.monitoring.interceptor.topic`` attribute. Applications or programs that access the ``_confluent-monitoring`` topic must have WRITE and DESCRIBE access to it. If the ``_confluent-monitoring`` topic does not exist, then you must have cluster-level CREATE and DESCRIBE access to create it. You must also have topic-level CREATE, DESCRIBE, READ, and WRITE access on ``_confluent-monitoring``. You can provide access either individually for each client principal that will use interceptors, or using a wildcard entry for all clients. The following example shows an ACL that grants a principal access to the ``_confluent-monitoring`` topic: :: kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --add --topic _confluent-monitoring --allow-principal User:username --operation write --operation describe The |c3| principal requires READ, DESCRIBE, and CREATE access to the ``_confluent-monitoring`` topic. Use the ``control-center-set-acls`` script to set the appropriate permissions for the |c3| principal to access this topic. Refer to :ref:`ui_authentication` for details. Using ACLs ---------- The examples in the following sections use ``bin/kafka-acls`` (the |kacls-cli|) to add, remove, or list ACLs. For detailed information on the supported options, run ``bin/kafka-acls --help``.
Note that ACLs are stored in |zk| and they are propagated to the brokers asynchronously, so there may be a delay before the change takes effect, even after the command returns. You can also use the |ak| :cp-javadoc:`AdminClient|clients/javadocs/org/apache/kafka/clients/admin/AdminClient.html` API to manage ACLs. .. tip:: - If you are using transactions (``--transactional-id``), the IdempotentWrite ACL is implied. - If you are not using transactions, you can use the ``--idempotent`` option to enable the IdempotentWrite ACL. Some of the most common ACL use cases: * To **create** a topic, the principal of the client will require the CREATE and DESCRIBE operations on the ``Topic`` or ``Cluster`` resource. * To **produce** to a topic, the principal of the producer will require the WRITE operation on the ``Topic`` resource. * To **consume** from a topic, the principal of the consumer will require the READ operation on the ``Topic`` and ``Group`` resources. Note that to be able to create, produce, and consume, the servers need to be configured with the appropriate ACLs. The servers need authorization to update metadata (CLUSTER_ACTION) and to read from a topic (READ) for replication purposes. ACL format ~~~~~~~~~~ |ak| ACLs are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H On Resources matching ResourcePattern RP". - Wildcards apply to any resource type. - You can give topic and group wildcard access to users who have permission to access all topics and groups (for example, admin users). If you use this method, you don't have to create a separate rule for each topic and group for the user. For example, you can use this command to grant wildcard access to Alice: .. code:: bash kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add --allow-principal \ User:Alice --operation All --topic '*' --group '*' .. _using-config-properties-file: Using the configuration properties file ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you have used the producer API, consumer API, or Streams API with |ak| before, then you may be aware that the connectivity details to the cluster are specified using configuration properties. While some users may recognize this for applications developed to interact with |ak|, others may be unaware that the administration tools that come with |ak| work in the same way: after you have defined the configuration properties (often in the form of a ``config.properties`` file), both applications and tools can use them to connect to clusters. When you create a configuration properties file in the user home directory, any subsequent command that you issue (be sure to include the path for the configuration file) will read that file and use it to establish connectivity to the |ak| cluster. So the first thing you need to do to interact with your |ak| clusters using native |ak| tools is to generate a configuration properties file. The ``--command-config`` argument supplies the |ak| command line tools with the configuration properties that they require to connect to the |ak| cluster, in the ``.properties`` file format. This typically includes the ``security.protocol`` that the cluster uses to connect, and any information necessary to authenticate to the cluster. For example: :: security.protocol=SASL_PLAINTEXT sasl.mechanism=PLAIN sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \ username="alice" \ password="s3cr3t"; ..
_kafka_adding_acls: Adding ACLs ~~~~~~~~~~~ Suppose you want to add an ACL where the principals ``User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown`` and ``User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown`` are allowed to perform read and write operations on the topic ``test-topic`` from IP 198.51.100.0 and IP 198.51.100.1. You can do that by executing the following: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \ --allow-principal "User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown" \ --allow-principal "User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \ --allow-host 198.51.100.0 --allow-host 198.51.100.1 \ --operation Read --operation Write --topic test-topic By default, any principal that doesn't have an explicit ACL allowing an operation on a resource is denied that operation. In the rare case where you want an ACL that allows access to everyone except a specific principal, you can use the ``--deny-principal`` and ``--deny-host`` options. For example, use the following command to allow all users to Read from ``test-topic`` while denying only ``User:kafka/kafka6.host-1.com@bigdata.com`` from IP 198.51.100.3: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \ --allow-principal User:'*' --allow-host '*' --deny-principal User:kafka/kafka6.host-1.com@bigdata.com --deny-host 198.51.100.3 \ --operation Read --topic test-topic |ak| does not support certificate revocation lists (CRLs), so you cannot revoke a client’s certificate. Hence, the only alternative is to disable the user’s access using an ACL: .. codewithvars:: bash kafka-acls --bootstrap-server localhost:9092 --add --deny-principal "User:CN=Bob,O=Sales" --cluster --topic '*' Note that ``--allow-host`` and ``--deny-host`` only support IP addresses (hostnames are not supported). Also note that IPv6 addresses are supported, and that you can use them in ACLs. The examples above add ACLs to a topic by specifying ``--topic [topic-name]`` as the resource pattern option. Similarly, one can add ACLs to a cluster by specifying ``--cluster`` and to a group by specifying ``--group [group-name]``. In the event that you want to grant permission to all groups, you may do so by specifying ``--group '*'`` as shown in the following command: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \ --allow-principal User:'*' --operation read --topic test --group '*' You can add ACLs on prefixed resource patterns. For example, you can add an ACL that enables a user in the org unit (OU) ``ServiceUsers`` (an organization that uses TLS/SSL authentication) to produce to any topic whose name starts with ``Test-``. You can do that by executing the CLI with the following options: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add --allow-principal \ User:CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown --producer --topic Test- --resource-pattern-type prefixed Note that ``--resource-pattern-type`` defaults to ``literal``, which only affects resources with the exact same name or, in the case of the wildcard resource name ``'*'``, a resource with any name. Removing ACLs ~~~~~~~~~~~~~ Removing ACLs is similar to adding them, except you specify the ``--remove`` option instead of ``--add``.
To remove the ACLs added in the first example above, you can execute the following: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --remove \ --allow-principal "User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown" \ --allow-principal "User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \ --allow-host 198.51.100.0 --allow-host 198.51.100.1 \ --operation Read --operation Write --topic test-topic If you want to remove an ACL that was added to a prefixed resource pattern, run the CLI with the following options: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --remove \ --allow-principal "User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \ --producer --topic Test- --resource-pattern-type prefixed Listing ACLs ~~~~~~~~~~~~ You can list the ACLs for a given resource by specifying the ``--list`` option and the resource. For example, to list all ACLs for ``test-topic``, execute the following: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --list --topic test-topic However, this will only return the ACLs that have been added to this exact resource pattern. Other ACLs can exist that affect access to the topic, for example, any ACLs on the topic wildcard ``'*'`` or any ACLs on prefixed resource patterns. You can explicitly query ACLs on the wildcard resource pattern: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --list --topic '*' It is not necessarily possible to explicitly query for ACLs on prefixed resource patterns that match ``Test-topic`` because the names of such patterns may not be known. You can list all ACLs affecting ``Test-topic`` by using ``--resource-pattern-type match``. For example: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --list --topic Test-topic --resource-pattern-type match This command will list ACLs on all matching literal, wildcard, and prefixed resource patterns. To view an ACL for an internal topic: .. codewithvars:: bash kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --list --topic __consumer_offsets Adding or Removing a Principal as Producer or Consumer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The most common use cases for ACL management are adding or removing a principal as a producer or consumer. To add the user Jane Doe (``User:janedoe@bigdata.com`` on a Kerberos platform) as a producer of ``test-topic``, you can execute the following: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --add --allow-principal User:janedoe@bigdata.com \ --producer --topic test-topic To add ``User:janedoe@bigdata.com`` as a consumer of ``test-topic`` with group ``Group-1``, you can specify the ``--consumer`` and ``--group`` options: .. codewithvars:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --add --allow-principal User:janedoe@bigdata.com \ --consumer --topic test-topic --group Group-1 To remove a principal from a producer or consumer role, you can specify the ``--remove`` option.
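For example, the following sketch mirrors the producer command above, with ``--remove`` in place of ``--add``; it removes the producer ACLs that were granted to ``User:janedoe@bigdata.com`` on ``test-topic``:

.. codewithvars:: bash

       bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
           --remove --allow-principal User:janedoe@bigdata.com \
           --producer --topic test-topic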
Enabling Authorization for Idempotent and Transactional APIs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Producers may be configured with ``enable.idempotence=true`` to ensure that exactly one copy of each message is written to the stream. The principal used by idempotent producers must be authorized to perform ``IdempotentWrite`` on the cluster. To enable Bob to produce messages using an idempotent producer, you can execute the command: .. sourcecode:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --add --allow-principal User:Bob \ --producer --topic test-topic --idempotent Producers may also be configured with a non-empty ``transactional.id`` to enable transactional delivery with reliability semantics that span multiple producer sessions. The principal used by transactional producers must be authorized for ``Describe`` and ``Write`` operations on the configured ``transactional.id``. To enable Alice to produce messages using a transactional producer with ``transactional.id=test-txn``, run the command: .. sourcecode:: bash bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --add --allow-principal User:Alice \ --producer --topic test-topic --transactional-id test-txn Note that idempotent write access is automatically granted to transactional producers that are configured with ACLs for their transactional ID. Creating Non-Super User ACL Administrators ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In the event that you want a non-super user to be able to create or delete ACLs, but do not want to grant them the super user role, a current super user can grant another user (referred to here as the ACL administrator) the ``ALTER --cluster`` access control entry (ACE), which binds an operation (in this case, ALTER) to a resource (in this case, the cluster). After being granted the ``ALTER --cluster`` ACE, the ACL administrator can create and delete ACLs for a given resource in a cluster. .. sourcecode:: bash kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \ --add --allow-principal User:notSuper \ --operation ALTER --cluster .. note:: - If you wish to assign ``ALTER --cluster`` to a group, then ``Group:groupName`` is also valid; however, the authorizer you are using must support groups. - Exercise caution when assigning ``ALTER --cluster`` to users or groups because such users will be able to create and delete ACLs to control their own access to resources as well. Authorization in the REST Proxy and |sr| ---------------------------------------- You may use |ak| ACLs to enforce authorization in the REST Proxy and |sr|. Doing so requires the Confluent :ref:`security plugins `. Debugging --------- It's possible to run with the authorizer logs in ``DEBUG`` mode by making some changes to the ``log4j.properties`` file. If you're using the default ``log4j.properties`` file, change the following line to ``DEBUG`` mode instead of ``WARN``: .. codewithvars:: bash log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender The ``log4j.properties`` file is located in the |ak| config directory at ``/etc/kafka/log4j.properties``. In the event that you're using an earlier version of |cp|, or if you're using your own ``log4j.properties`` file, you'll need to add the following lines to the configuration: ..
codewithvars:: bash log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender log4j.additivity.kafka.authorizer.logger=false You'll need to restart the broker before the change takes effect. Once enabled, the broker logs every request that is authorized and its associated user name. The log is located in ``$kafka_logs_dir/kafka-authorizer.log``. The location of the logs depends on the packaging format: ``kafka_logs_dir`` is ``/var/log/kafka`` for the ``rpm/debian`` packages and ``$base_dir/logs`` for the archive format. .. toctree:: :maxdepth: 1 :hidden: ../security/ldap-authorizer/introduction