.. title:: Authorization in Confluent Platform using ACLs
.. meta::
:description:
Learn how to configure access control lists (ACLs) for Confluent Platform
components and resources, including Kafka clusters.
.. _kafka_authorization:
Authorization using Access Control Lists (ACLs)
###############################################
.. important::
.. include:: ../includes/zk-deprecation.rst
|ak-tm| includes a pluggable authorization framework (Authorizer), configured
using the ``authorizer.class.name`` configuration property in the |ak| broker
configuration file. Two authorizers are available: AclAuthorizer (for |zk|-based
clusters) and StandardAuthorizer (for |kraft|-based clusters). For |zk|-based
clusters, Authorizer stores access control lists (ACLs) in |zk|; for |kraft|-based
clusters, ACLs are stored in the |kraft|-based |ak| cluster metadata. |ak| brokers
use the authorizer to determine whether or not to authorize an operation based on
the principal and the resource being accessed.
Setting ACLs is important -- if a resource does not have associated ACLs, only
super users can access the resource.
To learn more about authorization using ACLs, also see the following resources:
- `Authorization module `__
of the free Confluent Developer course, `Apache Kafka Security `__.
- Example of setting Docker environment variables for |cp| running in |zk| mode in :ref:`Confluent Platform demo `.
For a configuration reference, see the :devx-cp-demo:`docker-compose.yml file|docker-compose.yml` in the demo.
.. _acl-concepts:
ACL concepts
------------
Access control lists (ACLs) provide important authorization controls for your
organization’s |ak-tm| cluster data. Before creating and using ACLs, familiarize
yourself with the concepts described in this section; your understanding is key
to your success when creating and using ACLs to manage access to components and
cluster data.
Authorizer
~~~~~~~~~~
.. note::
While this topic covers |ak-tm| authorizers only, Confluent also provides the
:ref:`confluent_server_authorizer` to allow for proprietary LDAP group-based
and role-based access control (:ref:`RBAC `), as well as
:ref:`centralized ACLs `. By default, |csa|
supports |zk|-based ACLs.
An authorizer is a server plugin used by |ak-tm| to authorize operations. More
specifically, an authorizer controls whether or not to authorize an operation
based on the principal and the resource being accessed. The default |ak|
authorizer implementation, for |zk|-based clusters, is AclAuthorizer
(``kafka.security.authorizer.AclAuthorizer``), which was introduced in
|ak-tm| 2.4/|cp| 5.4.0. Prior to that, the authorizer was named SimpleAclAuthorizer
(``kafka.security.auth.SimpleAclAuthorizer``). For |kraft|-based |ak| clusters,
the authorizer is StandardAuthorizer (``org.apache.kafka.metadata.authorizer.StandardAuthorizer``).
|zk|-based |ak| clusters
^^^^^^^^^^^^^^^^^^^^^^^^
.. important::
.. include:: ../includes/zk-deprecation.rst
To enable and use the AclAuthorizer on a |zk|-based |ak| cluster, set its full
class name for your broker configuration in ``server.properties``:
.. code-block:: text
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
AclAuthorizer stores |ak| ACL information in |zk|. However, it does not
control access to |zk| nodes. Rather, |zk| has its own ACL security to control
access to |zk| nodes. |zk| ACLs control which principal (for example, the broker
principal) can update |zk| nodes containing |ak| cluster metadata (such as in-sync
replicas, topic configuration, and |ak| ACLs) and nodes used in interbroker
coordination (such as controller election, broker joining, and topic deletion).
|ak| ACLs control which principals can perform operations on |ak| resources.
|ak| brokers can use |zk| ACLs by enabling :ref:`zk-security`
(``zookeeper.set.acl=true``) for the broker configuration.
|kraft|-based |ak| clusters
^^^^^^^^^^^^^^^^^^^^^^^^^^^
To enable and use the StandardAuthorizer on a |kraft|-based |ak| cluster, set the full
class name in the configuration file on all nodes (brokers, controllers, and combined
broker/controller nodes):
.. code-block:: text
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
KRaft Principal Forwarding
""""""""""""""""""""""""""
In |kraft| clusters, administrator requests, such as ``CreateTopics`` and
``DeleteTopics``, are sent to the broker listeners by the client. The broker
then forwards the request to the active controller through the first listener
configured in ``controller.listener.names``. Authorization of these requests is
done on the controller node by way of an ``Envelope`` request, which packages both
the underlying request from the client and the client principal. When the controller
receives the forwarded ``Envelope`` request from the broker, it first authorizes the
``Envelope`` request using the authenticated broker principal, and then authorizes
the underlying request using the forwarded principal.
All of this implies that |ak| must understand how to serialize and deserialize the
client principal. The authentication framework allows for customized principals by
overriding the ``principal.builder.class`` configuration. In order for customized
principals to work with |kraft|, the configured class must implement
``org.apache.kafka.common.security.auth.KafkaPrincipalSerde`` so that |ak| knows
how to serialize and deserialize the principals. The default implementation
``org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder``
uses the |ak| RPC format defined in the source code: ``clients/src/main/resources/common/message/DefaultPrincipalData.json``.
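The following is a minimal sketch of a custom principal builder that also implements
``KafkaPrincipalSerde`` so that forwarded requests can carry the client principal. The
class name and the simple ``type:name`` encoding are illustrative assumptions only, not
part of |ak| or |cp|; a production implementation would typically reuse the default RPC
format or another stable encoding.
.. code-block:: java
import java.nio.charset.StandardCharsets;
import javax.net.ssl.SSLPeerUnverifiedException;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
import org.apache.kafka.common.security.auth.KafkaPrincipalSerde;
import org.apache.kafka.common.security.auth.SaslAuthenticationContext;
import org.apache.kafka.common.security.auth.SslAuthenticationContext;

// Hypothetical example class; it would be configured with
// principal.builder.class=com.example.ForwardablePrincipalBuilder
public class ForwardablePrincipalBuilder implements KafkaPrincipalBuilder, KafkaPrincipalSerde {

    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        // Derive the principal name from the authentication context.
        if (context instanceof SaslAuthenticationContext) {
            String name = ((SaslAuthenticationContext) context).server().getAuthorizationID();
            return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, name);
        }
        if (context instanceof SslAuthenticationContext) {
            try {
                String dn = ((SslAuthenticationContext) context).session().getPeerPrincipal().getName();
                return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, dn);
            } catch (SSLPeerUnverifiedException e) {
                return KafkaPrincipal.ANONYMOUS;
            }
        }
        return KafkaPrincipal.ANONYMOUS;
    }

    @Override
    public byte[] serialize(KafkaPrincipal principal) throws SerializationException {
        // Simple "type:name" encoding, assumed here for illustration only.
        return (principal.getPrincipalType() + ":" + principal.getName())
                .getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public KafkaPrincipal deserialize(byte[] bytes) throws SerializationException {
        String encoded = new String(bytes, StandardCharsets.UTF_8);
        int sep = encoded.indexOf(':');
        if (sep < 0)
            throw new SerializationException("Invalid principal encoding: " + encoded);
        return new KafkaPrincipal(encoded.substring(0, sep), encoded.substring(sep + 1));
    }
}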
For details about request forwarding in ``KRaft``, see
`KIP-590 `__.
.. _acl-principal:
Principal
~~~~~~~~~
A principal is an entity that can be authenticated by the authorizer. Clients of
a |ak| broker identify themselves as a particular principal using various security
protocols. The way a principal is identified depends upon which security protocol
it uses to connect to the |ak| broker (for example: :ref:`mTLS `,
:ref:`SASL/GSSAPI `, or :ref:`SASL/PLAIN `).
Authentication depends on the security protocol in place (such as SASL or TLS)
to recognize a principal within a |ak| broker.
The following examples show the principal name format based on the security
protocol being used:
- When a client connects to a |ak| broker using the TLS security protocol,
the principal name will be in the form of the TLS certificate subject name:
``CN=quickstart.confluent.io,OU=TEST,O=Sales,L=PaloAlto,ST=Ca,C=US``.
Note that there are no spaces after the comma between subject parts.
- When a client connects to a |ak| broker using the SASL security protocol with GSSAPI
(Kerberos) mechanism, the principal will be in the Kerberos principal format:
``kafka-client@hostname.com``. For more detail, refer to
`Kerberos Principal Names `__.
- When a client connects to a |ak| broker using the SASL security protocol with
a PLAIN or SCRAM mechanism, the principal is a simple text string, such as
``alice``, ``admin``, or ``billing_etl_job_03``.
The AclAuthorizer only supports individual users and always interprets the principal
as the user name. However, other authorizers support groups. Therefore, when
specifying the principal, you must include the type using the prefix ``User:``
or ``Group:`` (case-sensitive). Here are some examples: ``User:admin``, ``Group:developers``,
or ``User:CN=quickstart.confluent.io,OU=TEST,O=Sales,L=PaloAlto,ST=Ca,C=US``.
In the following ACL, the plain text principals (``User:alice``, ``User:fred``)
are identified as |ak| users who are allowed to run specific operations (read and
write) from either of the specified hosts (host-1, host-2) on a specific resource
(topic):
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:alice \
--allow-principal User:fred \
--allow-host host-1 \
--allow-host host-2 \
--operation read \
--operation write \
--topic finance-topic
To follow best practices, create one principal per application and give each
principal only the ACLs required and no more. For example, if Alice is writing
three programs that access different topics to automate a billing workflow, she
could create three principals: ``billing_etl_job_01``, ``billing_etl_job_02``,
and ``billing_etl_job_03``. She would then grant each principal permissions on
only the required topics and run each program with its specific principal.
Alternatively, she could take a middle-ground approach and create a single
``billing_etl_jobs`` principal with access to all topics that the billing
programs require and run all three with that principal.
Alice should not run these programs as her own principal because she would
presumably have broader permissions than the jobs actually need. Running with
one principal per application also helps significantly with debugging and auditing
because it's clearer which application is performing each operation.
.. _acl-wildcard-principals:
Wildcard principals
^^^^^^^^^^^^^^^^^^^
You can create ACLs for all principals by using a wildcard in the principal ``User:*``.
ACLs that include a wildcard for the user principal apply to all users. For
example, the following command grants everyone access to the topic ``testTopic``:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:'*' \
--operation All \
--topic testTopic
If you use an authorizer that supports group principals, such as |csa|, you can
also create ACLs for all group principals using the principal ``Group:*``. ACLs
that include the wildcard for the principal apply to all users belonging to at
least one group.
Wildcards are not supported for super users. Even though the following example
specifies the user principal ``User:*`` as a super user, it does not grant every
user super user privileges because no wildcard matching is performed:
.. code-block:: text
super.users=User:*
.. note::
If you use |csa|, role bindings do not support wildcard matching. Assigning
a role to ``User:*`` does not grant the role to every user. For details
about RBAC principals, see :ref:`rbac-overview`.
.. _sasl-with-kerberos-principals:
SASL/Kerberos principals
^^^^^^^^^^^^^^^^^^^^^^^^
If you use Kerberos, your |ak| principal is based on your Kerberos principal (for
example, ``kafka/kafka1.hostname.com@EXAMPLE.COM``). By default, |ak| only uses the
primary name of the Kerberos principal, which is the name that appears before
the slash (``/``). If the broker Kerberos principal is ``kafka/broker1.example.com@EXAMPLE.COM``,
then the principal used by the |ak| authorizer is ``kafka``. The hostname is
different for every broker. This parsing is automatically implemented by the
default value of ``sasl.kerberos.principal.to.local.rules``.
For details about Kerberos principal names and configurations, see
`Kerberos Principals `__.
.. note::
If your organization uses a Kerberos or Active Directory server, ask your
Kerberos administrator for a principal for each |ak| broker in your |ak|
cluster and for every operating system user that will access the cluster
with Kerberos authentication (using clients and tools). Server principals
are of type NT_HOSTBASED_SERVICE.
Each |ak| broker must be able to communicate with all of the other brokers for
replication and when it acts as the controller. You must add the broker principal
as a super user, otherwise |ak| will not work.
Configuration options for customizing SASL/Kerberos user name
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
By default, the |ak| principal will be the primary part of the Kerberos principal.
You can change this behavior by specifying a customized rule for
``sasl.kerberos.principal.to.local.rules`` in ``server.properties``. The format
of ``sasl.kerberos.principal.to.local.rules`` takes the form of a list where each
rule works in the same way it does in ``auth_to_local`` in the
`Kerberos configuration file (krb5.conf) `__.
These rules can force the translated result to be all lowercase or all uppercase
by appending ``/L`` or ``/U`` to the end of the rule. Each rule starts with ``RULE:``
and contains an expression. The following examples show the format and syntax:
.. code-block:: text
RULE:[n:string](regexp)s/pattern/replacement/
RULE:[n:string](regexp)s/pattern/replacement/g
RULE:[n:string](regexp)s/pattern/replacement//L
RULE:[n:string](regexp)s/pattern/replacement/g/L
RULE:[n:string](regexp)s/pattern/replacement//U
RULE:[n:string](regexp)s/pattern/replacement/g/U
This rule translates ``user@MYDOMAIN.COM`` to ``user`` while keeping the default
rule in place:
.. code-block:: text
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT
TLS principal user names
^^^^^^^^^^^^^^^^^^^^^^^^
As mentioned earlier, principals are recognized based on how users authenticate
to the |ak| broker, which in turn depends upon the security protocol used. To use
TLS principals, you must understand how to accurately represent user names.
By default, the name of the principal identified by a TLS certificate
is the DN (X.500 Distinguished Name) of that certificate (also known as the
Subject), which uses the form ``CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown``.
You can use ``ssl.principal.mapping.rules`` to translate the DN to a more manageable
principal name. Refer to :ref:`kafka_config_ssl_user_name` for details.
If TLS is enabled but client authentication is not configured, clients connect
anonymously over the TLS port and appear to the server with the user name
ANONYMOUS. Such a configuration provides encryption and server authentication,
but no client authentication. The server also sees the ANONYMOUS user when the
PLAINTEXT security protocol is used. If you grant read and write permission to
the ANONYMOUS user, anyone can access the brokers without authentication.
.. _kafka_config_ssl_user_name:
Configuration options for customizing TLS user name
"""""""""""""""""""""""""""""""""""""""""""""""""""
.. include:: ../includes/kafka-ssl-user-dname-mapping-rule.rst
.. _acl-operations:
Operations
~~~~~~~~~~
An operation is an action performed on a :ref:`resource `. In
addition to identifying the resources to which users or groups have access,
ACLs identify the operations those users or groups are authorized to perform.
For each resource, an operation is mapped to one or more |ak| APIs or request
types for that resource. For example, a READ operation for the Topic resource
is mapped to Fetch, OffsetCommit, and TxnOffsetCommit. Or, a WRITE operation
for the Topic resource is mapped to Produce and AddPartitionsToTxn.
The following tables list the operations available for each resource type in |cp|
and describe the relationship between operations, resources, and APIs:
.. _acl-format-operations-resources:
Cluster resource operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^
================ =============== ==========================================
Operation Resource APIs allowed
================ =============== ==========================================
Alter Cluster AlterReplicaLogDirs,
CreateAcls,
DeleteAcls
AlterConfigs Cluster AlterConfigs
ClusterAction Cluster Fetch (for replication only),
LeaderAndIsr,
OffsetForLeaderEpoch,
StopReplica,
UpdateMetadata,
ControlledShutdown,
WriteTxnMarkers
Create Cluster CreateTopics, Metadata
Describe Cluster DescribeAcls,
DescribeLogDirs,
ListGroups
DescribeConfigs Cluster DescribeConfigs
================ =============== ==========================================
Topic resource type operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
================ =============== ==========================================
Operation Resource APIs allowed
================ =============== ==========================================
Alter Topic CreatePartitions
AlterConfigs Topic AlterConfigs
Create Topic CreateTopics, Metadata
Delete Topic DeleteRecords,
DeleteTopics
Describe Topic ListOffsets,
Metadata,
OffsetFetch,
OffsetForLeaderEpoch
DescribeConfigs Topic DescribeConfigs
Read Topic Fetch,
OffsetCommit,
TxnOffsetCommit
Write Topic Produce,
AddPartitionsToTxn
================ =============== ==========================================
Group resource type operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
================ =============== =========================================
Operation Resource APIs allowed
================ =============== =========================================
Delete Group DeleteGroups
Describe Group DescribeGroup,
FindCoordinator,
ListGroups
Read Group AddOffsetsToTxn,
Heartbeat,
JoinGroup,
LeaveGroup,
OffsetCommit,
OffsetFetch,
SyncGroup,
TxnOffsetCommit
================ =============== =========================================
Token resource type operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
================ =============== =========================================
Operation Resource API allowed
================ =============== =========================================
Describe DelegationToken DescribeTokens
================ =============== =========================================
Transactional ID resource type operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
================ =============== =========================================
Operation Resource APIs allowed
================ =============== =========================================
Describe TransactionalId FindCoordinator
Write TransactionalId Produce,
AddPartitionsToTxn,
AddOffsetsToTxn,
EndTxn,
InitProducerId,
TxnOffsetCommit
================ =============== =========================================
The operations in the tables above are for clients (producers, consumers,
admin) and interbroker operations of a cluster. In a secure cluster, client
requests and interbroker operations require authorization. The interbroker
operations are split into two classes: cluster and topic. Cluster operations
refer to operations necessary for the management of the cluster, like updating
broker and partition metadata, changing the leader and the set of in-sync replicas
of a partition, and triggering a controlled shutdown.
Because of how replication of topic partitions works internally, the broker
principal must be a super user so that the broker can replicate topics properly
from leader to follower.
Producers and consumers need to be authorized to perform operations on topics,
but they should be configured with principals that are different from those of
the brokers. The main operations that producers and consumers require
authorization to execute are WRITE and READ, respectively. Admin users run
command line tools and also require authorization; operations that an admin user
might need authorization for include DELETE, CREATE, and ALTER. You can use
wildcards (``*``) in ACLs for producers and consumers so that you only have to
set them once.
Implicitly-derived operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Certain operations provide additional implicit operation access to users.
When granted READ, WRITE, or DELETE, users implicitly derive the DESCRIBE operation.
When granted ALTER_CONFIGS, users implicitly derive the DESCRIBE_CONFIGS operation.
.. _acl-precedence:
ACL order of precedence
^^^^^^^^^^^^^^^^^^^^^^^
In contexts where you have both allow and deny ACLs, deny ACLs take precedence
over allow ACLs.
.. _acl-resources:
Resources
~~~~~~~~~
Users access and perform operations on specific |ak| and |cp| resources. A
resource can be a cluster, group, |ak| topic, transactional ID, or Delegation
token. ACLs specify which users can access a specified resource and the
operations they can perform on that resource. Within |ak|, resources
include:
Cluster
The |ak| cluster. To run operations that impact the entire |ak| cluster,
such as a controlled shutdown or creating new topics, a principal must be
assigned privileges on the cluster resource.
Delegation Token
Delegation tokens are shared secrets between |ak-tm| brokers and clients.
Authentication based on delegation tokens is a lightweight authentication
mechanism that you can use to complement existing SASL/SSL methods. Refer to
:ref:`kafka_sasl_delegate_auth` for more details.
Group
Groups in the brokers. All protocol calls that work with groups, such as
joining a group, require corresponding privileges on the group involved. A
group (``group.id``) can be a consumer group, a Streams application group
(``application.id``), a Connect worker group, or any other group that uses the
consumer group protocol, such as a |sr| cluster.
Topic
All |ak| messages are organized into topics (and partitions). To access a topic,
you must have a corresponding operation (such as READ or WRITE) defined in an
ACL.
Transactional ID
A transactional ID (``transactional.id``) identifies a single producer
instance across application restarts and provides a way to ensure a single
writer; this is necessary for exactly-once semantics (EOS). Only one producer
can be active for each ``transactional.id``. When a producer starts, it first
checks whether or not there is a pending transaction by a producer with its
own ``transactional.id``. If there is, then it waits until the transaction
has finished (abort or commit). This guarantees that the producer always
starts from a consistent state.
When transactions are used, the producer principal must be authorized to operate
on its transactional ID. For example, the following ACL allows every user in the
system to use an EOS producer:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--transactional-id '*' \
--allow-principal User:'*' \
--operation write
In cases where you need to create ACLs for a |ak| cluster to allow
Streams exactly-once (EOS) processing:
.. code-block:: shell
# Allow Streams EOS:
kafka-acls ...
--add \
--allow-principal User:team1 \
--operation WRITE \
--operation DESCRIBE \
--transactional-id team1-streams-app1 \
--resource-pattern-type prefixed
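To illustrate why these permissions are needed, the following minimal Java producer
sketch uses the transactional API; the topic name, transactional ID, and connection
settings are placeholders. The ``initTransactions()`` call issues the FindCoordinator
and InitProducerId requests that require DESCRIBE and WRITE on the configured
``transactional.id``, and ``send()`` requires WRITE on the topic:
.. code-block:: java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // The principal running this producer needs DESCRIBE and WRITE on this transactional ID.
        props.put("transactional.id", "billing-producer-1");
        // Add the security settings used to authenticate the principal, for example
        // security.protocol, sasl.mechanism, and sasl.jaas.config.

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();   // FindCoordinator, InitProducerId
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("billing-events", "order-42", "amount=10"));  // Produce
            producer.commitTransaction();  // EndTxn
        }
    }
}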
For additional information about the role of transactional IDs, refer to
`Transactions in Apache Kafka `__.
The :ref:`acl-operations` available to a user depend on the resources to which
the user has been granted access. All resources have a unique resource identifier.
For example, for the topic resource type, the resource identity is the topic name,
and for the group resource type, the resource identity is the group name.
You can view the ACLs for a specific resource using the ``--list`` option. For
example, to view all ACLs for the topic ``test-topic`` run the following
command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--list \
--topic test-topic
.. _prefixed-acls:
Use prefixed ACLs
^^^^^^^^^^^^^^^^^
You can specify ACL resources using either a LITERAL value (default), a PREFIXED
pattern type, or a wildcard (``*``), which allows both.
If you identify the resource as LITERAL, |ak| will attempt to match the full
resource name (for example, topic or consumer group) with the resource specified
in the ACL. In some cases, you might want to use an asterisk (``*``) to specify all
resources.
If you identify the resource as PREFIXED, |ak| attempts to match the prefix of the
resource name with the resource specified in ACL.
For example, you can add an ACL for user ``User:kafka/kafka1.host-1.com@bigdata.com``
to produce to any topic with a name that uses the prefix ``Test-``. You can do
this by running the following command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:kafka/kafka1.host-1.com@bigdata.com \
--producer \
--topic Test- \
--resource-pattern-type prefixed
In the following example, a program called "BillingPublisher", which was built
using the |ak| Java SDK, requires an ACL that allows it to write only to
topics that use the prefix ``billing-``:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:BillingPublisher \
--allow-host 198.51.100.0 \
--producer \
--topic billing- \
--resource-pattern-type prefixed
Be aware that you cannot use the PREFIXED resource pattern type for a topic while granting
access to all groups ``*`` (wildcard) within a single command. Instead, split
permissions across different commands. For example, grant READ and DESCRIBE access
to the user for the prefixed topics:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:username \
--operation Read \
--operation Describe \
--topic topicprefix \
--resource-pattern-type prefixed
Then grant user READ access to all groups:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:username \
--operation Read \
--group '*'
.. _kafka-auth-superuser:
Super users
^^^^^^^^^^^
By default, if a resource has no associated ACLs, then only super users can access
that resource. If you want to change that behavior, you can include the following
in ``server.properties``: ``allow.everyone.if.no.acl.found=true``.
.. include:: includes/allow-everyone-if-no-acl-found-warning.rst
You can add super users in ``server.properties`` (note that the delimiter is a
semicolon because TLS/SSL user names might contain a comma) as shown here:
.. code-block:: text
super.users=User:Bob;User:Alice
ACLs and monitoring interceptors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Confluent :ref:`Monitoring Interceptors `
produce to the ``_confluent-monitoring`` topic by default. You can configure the
``_confluent-monitoring`` topic using the ``confluent.monitoring.interceptor.topic``
attribute. Applications and programs that access the ``_confluent-monitoring``
topic must have WRITE and DESCRIBE access. If the ``_confluent-monitoring`` topic
does not exist, you must have cluster-level CREATE and DESCRIBE access to create
it, as well as topic-level CREATE, DESCRIBE, READ, and WRITE access on
``_confluent-monitoring``.
You can provide access either individually for each client principal that will use
interceptors, or using a wildcard entry for all clients. The following example
shows an ACL that grants a principal access to the ``_confluent-monitoring`` topic:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--topic _confluent-monitoring \
--allow-principal User:username \
--operation write \
--operation Describe
The |c3| principal requires READ, DESCRIBE, and CREATE access to the ``_confluent-monitoring``
topic. Use the ``control-center-set-acls`` script to set the appropriate
permissions for the |c3| principal to access this topic. For details, see :ref:`ui_authentication`.
.. _kafka-acl-tool:
Use ACLs
--------
The examples in the following sections use ``kafka-acls`` (the |kacls-cli|)
to add, remove, or list ACLs. For details on the supported options, run
``kafka-acls --help``. Note that because ACLs are stored in |zk| and they are
propagated to the brokers asynchronously, there may be a delay before the
change takes effect, even after the command returns.
You can also use the |ak| :platform:`AdminClient|clients/javadocs/javadoc/org/apache/kafka/clients/admin/AdminClient.html`
API to manage ACLs.
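For example, the following minimal Java sketch uses the ``Admin`` client to add an ACL
that allows ``User:alice`` to read ``test-topic`` and then lists the ACLs on that topic.
The bootstrap address, principal, and topic name are placeholders; supply the same
security settings that you would otherwise pass with ``--command-config``.
.. code-block:: java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AccessControlEntryFilter;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourcePatternFilter;
import org.apache.kafka.common.resource.ResourceType;

public class ManageAclsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Add the same security settings you would otherwise pass with --command-config.

        try (Admin admin = Admin.create(props)) {
            // ALLOW User:alice to READ test-topic from any host.
            AclBinding binding = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "test-topic", PatternType.LITERAL),
                    new AccessControlEntry("User:alice", "*", AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singletonList(binding)).all().get();

            // List the ACLs on the literal resource pattern for test-topic.
            AclBindingFilter filter = new AclBindingFilter(
                    new ResourcePatternFilter(ResourceType.TOPIC, "test-topic", PatternType.LITERAL),
                    AccessControlEntryFilter.ANY);
            admin.describeAcls(filter).values().get().forEach(System.out::println);
        }
    }
}
As with ``kafka-acls``, the principal running this code must itself be authorized to
alter and describe ACLs on the cluster (or be a super user).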
Common ACL use cases include:
* To **create** a topic, the principal of the client requires the CREATE and
DESCRIBE operations on the ``Topic`` or ``Cluster`` resource.
* To **produce** to a topic, the principal of the producer requires the WRITE
operation on the ``Topic`` resource.
* To **consume** from a topic, the principal of the consumer requires the READ
operation on the ``Topic`` and ``Group`` resources.
Note that to create, produce, and consume, the servers need to be configured
with the appropriate ACLs. The servers need authorization to update metadata
(CLUSTER_ACTION) and to read from a topic (READ) for replication purposes.
|zk|-based ACLs do not support use of the
:confluent-cli:`confluent iam acl commands|command-reference/iam/acl/confluent_iam_acl_create.html`,
which are only used with :ref:`centralized ACLs `.
ACL format
~~~~~~~~~~
|ak| ACLs are defined in the general format of
"Principal P is [Allowed/Denied] Operation O From Host H On Resources matching ResourcePattern RP".
- You can use a wildcard (``*``) to match any resource.
- You can give topic and group wildcard access to users who have permission to
access all topics and groups (for example, admin users). If you use this method,
you don't need to create a separate rule for each topic and group for the user.
For example, you can use this command to grant wildcard access to Alice:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:Alice \
--operation All \
--topic '*' \
--group '*'
.. _using-config-properties-file:
Use the configuration properties file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have used the producer API, consumer API, or Streams API with |ak| clusters
before, you know that the connectivity details for a cluster are specified using
configuration properties. The administration tools that ship with |ak| work the
same way: after you define the configuration properties (often in a
``config.properties`` file), both applications and tools can use them to connect
to clusters.
After you create a configuration properties file (for example, in your home
directory), any subsequent command that you issue reads that file, as long as you
include the path to it, and uses it to establish connectivity to the |ak| cluster.
Therefore, the first thing to do before interacting with your |ak| clusters using
the native |ak| tools is to generate a configuration properties file.
The ``--command-config`` argument supplies the |ak| command line tools (such as
``kafka-acls``) with the configuration properties that they require to connect to
the |ak| cluster, in the ``.properties`` file format. Typically, this includes the
``security.protocol`` that the cluster uses and any information necessary to
authenticate to the cluster. For example:
.. code-block:: text
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="alice" \
password="s3cr3t";
.. _kafka_adding_acls:
Add ACLs
~~~~~~~~
Suppose you want to add an ACL where: principals ``User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown``
and ``User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown`` are allowed
to perform read and write operations on the topic ``test-topic`` from IP addresses
198.51.100.0 and 198.51.100.1. You can do that by executing the following:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf --add \
--allow-principal "User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown" \
--allow-principal "User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \
--allow-host 198.51.100.0 \
--allow-host 198.51.100.1 \
--operation Read \
--operation Write \
--topic test-topic
Note that ``--allow-host`` and ``--deny-host`` only accept IP addresses; hostnames
are not supported. Both IPv4 and IPv6 addresses can be used in ACLs.
By default, any principal that does not have an explicit ACL allowing an operation
on a resource is denied access. In the rare cases where you want an ACL that allows
access to everyone except a specific principal, you can use the ``--deny-principal``
and ``--deny-host`` options. For example, use the following command to allow all
users to read from ``test-topic`` but deny ``User:kafka/kafka6.host-1.com@bigdata.com``
from IP address 198.51.100.3:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:'*' \
--allow-host '*' \
--deny-principal User:kafka/kafka6.host-1.com@bigdata.com \
--deny-host 198.51.100.3 \
--operation Read \
--topic test-topic
|ak| does not support certificate revocation lists (CRLs), so you cannot revoke
a client’s certificate. The only alternative is to disable the user’s access
using an ACL:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--add \
--deny-principal "User:CN=Bob,O=Sales" \
--cluster \
--topic '*'
The examples above add ACLs to a topic by specifying ``--topic [topic-name]``
as the resource pattern option. Similarly, one can add ACLs to a cluster by
specifying ``--cluster`` and to a group by specifying ``--group [group-name]``.
If you need to grant permission to all groups, you can specify ``--group='*'``,
as shown in the following command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:'*' \
--operation read \
--topic test \
--group '*'
You can add ACLs on prefixed resource patterns. For example, you can add an ACL
that enables users in the org unit (OU) ``ServiceUsers`` (this organization is
using TLS authentication) to produce to any topic whose name starts with
``Test-``. You can do that by running the following CLI command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown \
--producer \
--topic Test- \
--resource-pattern-type prefixed
Note that ``--resource-pattern-type`` defaults to ``literal``, which only matches
resources with an identical name or, in the case of the wildcard resource name
``'*'``, resources with any name.
.. caution::
The ``--link-id`` option for ``kafka-acls``, available starting with |cp| 7.1.0,
is experimental and should not be used in production deployments. In particular,
do not use ``--link-id`` to create ACLs. If an ACL with ``--link-id`` is created
on the source cluster, it is marked for management by the link ID, and is not
synced to the destination, regardless of ``acl.sync.filters``. Currently,
|cp| does not validate link IDs created with ``kafka-acls``. For details, see
:ref:`cluster-link-acls-migrate`.
Remove ACLs
~~~~~~~~~~~
Removing ACLs is similar to adding them, except that you specify the ``--remove``
option instead of ``--add``. To remove the ACLs added in the first example above,
run the following command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf --remove \
--allow-principal "User:CN=Bob Thomas,OU=Sales,O=Unknown,L=Unknown,ST=NY,C=Unknown" \
--allow-principal "User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" \
--allow-host 198.51.100.0 \
--allow-host 198.51.100.1 \
--operation Read \
--operation Write \
--topic test-topic
If you want to remove the ACL added to the prefixed resource pattern in the
example, run the following CLI command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--remove \
--allow-principal User:CN=Jane Smith,OU=Sales,O=Unknown,L=Unknown,ST=Unknown,C=Unknown \
--producer \
--topic Test- \
--resource-pattern-type Prefixed
List ACLs
~~~~~~~~~
You can list the ACLs for a given resource by specifying the ``--list`` option
and the resource. For example, to list all ACLs for ``test-topic``, run the
following CLI command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--list \
--topic test-topic
However, this only returns the ACLs that have been added to this exact resource
pattern. Other ACLs can exist that affect access to the topic; for example, any
ACLs on the topic wildcard ``'*'``, or any ACLs on prefixed resource patterns.
You can explicitly query ACLs on the wildcard resource pattern by running the
following CLI command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--list \
--topic '*'
You might not be able to explicitly query for ACLs on prefixed resource patterns
that match ``Test-topic`` because the name of such patterns may not be known, but
you can list all ACLs affecting ``Test-topic`` by using ``--resource-pattern-type match``.
For example:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--list \
--topic Test-topic \
--resource-pattern-type match
This command lists ACLs on all matching literal, prefixed, and wildcard resource
patterns.
To view an ACL for an internal topic, run the following CLI command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--list \
--topic __consumer_offsets
Add or remove a principal as a producer or consumer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The most common use cases for ACL management are adding or removing a principal as
a producer or consumer. To add user "Jane Doe" (principal ``User:janedoe@bigdata.com``
on a Kerberos-enabled platform) as a producer of ``test-topic``, run the following
CLI command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add --allow-principal User:janedoe@bigdata.com \
--producer --topic test-topic
To add ``User:janedoe@bigdata.com`` as a consumer of ``test-topic`` with group
``Group-1``, specify the ``--consumer`` and ``--group`` options:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:janedoe@bigdata.com \
--consumer \
--topic test-topic \
--group Group-1
To remove a principal from a producer or consumer role, specify the ``--remove``
option.
Enable authorization for idempotent and transactional APIs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To ensure that exactly one copy of each message is written to the stream, enable
the idempotent producer by setting ``enable.idempotence=true``. The principal used
by idempotent producers must be authorized to perform idempotent writes
(``IdempotentWrite``) on the cluster.
To enable Bob to produce messages using an idempotent producer, you can execute
the command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add --allow-principal User:Bob \
--producer \
--topic test-topic \
--idempotent
To enable transactional delivery with reliability semantics that span multiple
producer sessions, configure a producer with a non-empty ``transactional.id``.
The principal used by transactional producers must be authorized for ``Describe``
and ``Write`` operations on the configured ``transactional.id``.
To enable Alice to produce messages using a transactional producer with
``transactional.id=test-txn``, run the command:
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:Alice \
--producer \
--topic test-topic \
--transactional-id test-txn
Create non-super user ACL administrators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you need a non-super user to create or delete ACLs, but do not want to grant
them the super user role, an existing super user can grant that user (referred to
here as the ACL administrator) the ``ALTER --cluster`` access control entry (ACE),
which binds an operation (in the following example, "alter") to a "cluster"
resource. After granting ``ALTER --cluster`` to the ACL administrator, that user
can create and delete ACLs for a given resource in the cluster.
.. code-block:: shell
kafka-acls --bootstrap-server localhost:9092 \
--command-config adminclient-configs.conf \
--add \
--allow-principal User:notSuper \
--operation ALTER --cluster
.. note::
- If you wish to assign ``ALTER --cluster`` to a group, then ``Group:groupName``
is also valid; however, the Authorizer you are using must be able to handle/allow
groups.
- Exercise caution when assigning ``ALTER --cluster`` to users or groups because
such users will be able to create and delete ACLs to control their own access
to resources as well.
Authorization in the REST Proxy and |sr|
----------------------------------------
You can use |ak| ACLs to enforce authorization in the REST Proxy and |sr| by
using the :ref:`REST Proxy Security Plugin ` and
:ref:`confluentsecurityplugins_schema_registry_security_plugin`.
Debug using authorizer logs
---------------------------
To help debug authorization issues, set the authorizer logs to ``DEBUG`` level in
``log4j.properties``. If you're using the default ``log4j.properties`` file, change
the log level in the following line from ``WARN`` to ``DEBUG``:
.. code-block:: text
log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender
The ``log4j.properties`` file is located in the |ak| configuration directory at
``/etc/kafka/log4j.properties``. If you're using an earlier version of |cp|,
or if you're using your own ``log4j.properties`` file, you'll need to add
the following lines to the configuration:
.. code-block:: text
log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
You must restart the broker before the change takes effect. The broker then logs
every request being authorized, along with the associated user name, to
``${kafka.logs.dir}/kafka-authorizer.log``. The location of the logs depends on
the packaging format: ``kafka.logs.dir`` is ``/var/log/kafka`` for the RPM and
Debian packages and ``$base_dir/logs`` for the archive format.
.. _additional-acl-topics:
Related content
---------------
- :ref:`authorization-acl-with-mds`
- :ref:`confluentsecurityplugins_sracl_authorizer`
- :cloud:`Confluent Replicator to Confluent Cloud ACL Configurations|get-started/examples/ccloud/docs/replicator-to-cloud-configuration-types.html`
- :ksqldb-docs:`Configure Authorization of ksqlDB with Kafka ACLs|operate-and-deploy/installation/server-config/security/#configure-authorization-of-ksqldb-with-kafka-acls`
- :ref:`streams_developer-guide_security-acls`
- :ref:`Cluster Linking Authorization (ACLs) `
- :ref:`c3-auth-acls`
- :confluent-cli:`Confluent CLI confluent iam acl|command-reference/iam/acl/index.html#confluent-iam-acl`
- :cloud:`Access Control Lists (ACLs) for Confluent Cloud|access-management/acl.html`