.. title:: Configure SASL/SCRAM for Confluent Platform

.. meta::
   :description: Learn about SASL/SCRAM and how to configure it for Confluent Platform clusters.

.. _kafka_sasl_auth_scram:

Configure SASL/SCRAM authentication for Confluent Platform
----------------------------------------------------------

SASL/SCRAM Overview
~~~~~~~~~~~~~~~~~~~

Salted Challenge Response Authentication Mechanism (SCRAM), or SASL/SCRAM, is a family of SASL mechanisms that addresses the security concerns of traditional mechanisms that perform username/password authentication, such as PLAIN and DIGEST-MD5. SCRAM provides the following features:

* The challenge-response mechanism of SASL/SCRAM protects against password sniffing on the network and against dictionary attacks on the password file. SCRAM allows the server to authenticate the client without ever transmitting or storing the client's password in plain text.

* Authentication information stored in the authentication database is not sufficient by itself to impersonate the client. The information is salted to prevent a pre-stored dictionary attack if the database is compromised.

For details on how SASL/SCRAM works, see `RFC 5802 <https://tools.ietf.org/html/rfc5802>`__.

|cp| clusters support ``SCRAM-SHA-256`` and ``SCRAM-SHA-512``, which can be used with :ref:`TLS ` to perform secure authentication. The examples below use ``SCRAM-SHA-256``, but you can substitute the configuration for ``SCRAM-SHA-512`` as needed.

The default SCRAM implementation in |ak| stores SCRAM credentials in |kraft| or |zk| and is suitable for use in |cp| installations where |kraft| or |zk| is on a private network. Because of this, you must create SCRAM credentials for users in |kraft| or |zk|.

.. _sasl_scram_kraft-based-clusters:

|kraft|-based clusters
^^^^^^^^^^^^^^^^^^^^^^

If you want |ak| brokers to authenticate to each other using SCRAM, the SCRAM credentials must be created before the brokers are up and running.


To create SCRAM credentials for users in |kraft|, use the ``--add-scram`` option of the ``kafka-storage`` command, like this:

.. code-block:: shell

   kafka-storage format [-h] --config CONFIG \
     --cluster-id CLUSTER_ID \
     --add-scram SCRAM_CREDENTIAL \
     [--release-version RELEASE_VERSION] \
     [--ignore-formatted]

where ``SCRAM_CREDENTIAL`` looks like one of the following:

* ``'SCRAM-SHA-256=[name=alice,password=alice-secret]'``
* ``'SCRAM-SHA-512=[name=alice,iterations=8192,salt="MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=",saltedpassword="mT0yyUUxnlJaC99HXgRTSYlbuqa4FSGtJCJfTMvjYCE="]'``

The ``SCRAM_CREDENTIAL`` argument is a key-value pair where the key specifies the SCRAM mechanism and the value is a set of key-value pairs used to populate the ``UserScramCredentialsRecord``. The ``SCRAM_CREDENTIAL`` subarguments require a ``name`` key and either a ``password`` key or a ``saltedpassword`` key. If you use a ``saltedpassword`` key, you must also supply an ``iterations`` key and a ``salt`` key. Otherwise, the ``iterations`` and ``salt`` keys are optional; if they are not supplied, the iteration count defaults to 4096 and the salt is randomly generated. The values for ``salt`` and ``saltedpassword`` are Base64 encodings of binary data.

The ``kafka-storage`` tool initializes the storage space for each |ak| broker and controller. One of the files it creates is the ``bootstrap.checkpoint`` file, which contains a set of ``UserScramCredentialsRecord`` records that are used to bootstrap the cluster. The ``--add-scram`` option adds a new ``ApiMessageAndVersion`` record to the ``bootstrap.checkpoint`` file. The record contains a ``UserScramCredentialsRecord`` that stores the SCRAM credentials for the specified user. Brokers use this record to authenticate other brokers that connect to them using SCRAM. The record covers only the server side of each connection; the client side of each connection still needs to know the password.

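
To make the relationship between ``password``, ``salt``, ``iterations``, and ``saltedpassword`` concrete, the following is a minimal sketch (not part of the |ak| tooling): RFC 5802 defines the salted password as ``Hi(password, salt, iterations)``, where ``Hi()`` is PBKDF2 with HMAC as the pseudorandom function. The function name and sample values here are illustrative, and SASLprep normalization of the password is omitted for simplicity.

.. code-block:: python

   import base64
   import hashlib

   def scram_salted_password(password: str, salt_b64: str, iterations: int) -> str:
       """Derive the SCRAM-SHA-256 salted password Hi(password, salt, i),
       where Hi() is PBKDF2-HMAC-SHA-256 per RFC 5802. Returns the Base64
       encoding expected by the saltedpassword subargument. SASLprep
       normalization of the password is omitted for simplicity."""
       salt = base64.b64decode(salt_b64)
       derived = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
       return base64.b64encode(derived).decode("ascii")

   # Illustrative values; in practice the salt should be randomly generated.
   example_salt = base64.b64encode(b"example-salt").decode("ascii")
   print(scram_salted_password("alice-secret", example_salt, 8192))

Supplying a precomputed ``saltedpassword`` (together with its ``salt`` and ``iterations``) keeps the plain-text password out of the ``kafka-storage`` command line and out of shell history.
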

|zk|-based clusters
^^^^^^^^^^^^^^^^^^^

To create and manage SCRAM credentials on |zk|-based clusters, add the ``--bootstrap-server`` option to the ``kafka-configs`` command, specifying the bootstrap server and port, the SCRAM configuration, the entity type, and the entity name. For example, to create SCRAM credentials:

.. codewithvars:: bash

   kafka-configs --bootstrap-server localhost:9092 --alter \
     --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' \
     --entity-type users \
     --entity-name alice

   kafka-configs --bootstrap-server localhost:9092 --alter \
     --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' \
     --entity-type users \
     --entity-name admin

If you want |ak| brokers to authenticate to each other using SCRAM, and you want to create SCRAM credentials before the brokers are up and running, you must create the SCRAM credentials for users in |zk| using the ``--zookeeper`` option (you cannot use the ``--bootstrap-server`` option):

.. codewithvars:: bash

   kafka-configs --zookeeper localhost:2181 --alter \
     --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' \
     --entity-type users \
     --entity-name alice

   kafka-configs --zookeeper localhost:2181 --alter \
     --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' \
     --entity-type users \
     --entity-name admin

If iterations are not specified, the default iteration count of 4096 is used. A random salt is created, and the SCRAM identity (consisting of the salt, iterations, StoredKey, and ServerKey) is stored in |zk|.

Security Considerations for SASL/SCRAM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- The default implementation of SASL/SCRAM in |ak| stores SCRAM credentials in |zk|. This is suitable for production use in installations where |zk| is secure and on a private network.

- For cases where you require |ak| brokers to authenticate each other using SCRAM, and you need to create SCRAM credentials before the brokers are up and running, use the ``--zookeeper`` option to create the SCRAM credentials.
- |ak| supports only the strong hash functions SHA-256 and SHA-512, with a minimum iteration count of 4096. Strong hash functions combined with strong passwords and high iteration counts protect against brute-force attacks if |zk| security is compromised.
- SCRAM should be used only with TLS encryption to prevent interception of SCRAM exchanges. This protects against dictionary or brute-force attacks and against impersonation if |zk| is compromised.
- In installations where |zk| is not secure, the default SASL/SCRAM credential store can be overridden with custom callback handlers by configuring ``sasl.server.callback.handler.class``.
- For more details on security considerations, refer to `RFC 5802 <https://tools.ietf.org/html/rfc5802>`__.

The remainder of this page shows you how to configure SASL/SCRAM for each component in |cp|.

.. _sasl_scram_broker:

Brokers
~~~~~~~

.. include:: ../includes/intro_brokers.rst

* :ref:`Confluent Metrics Reporter `

JAAS
^^^^

.. note:: Use of separate JAAS files is supported, but is *not* recommended.
   Instead, use the listener configuration specified in step 5 of
   :ref:`Configuration ` to replace the steps below.

#. First, create the broker's JAAS configuration file in each |ak| broker's configuration directory. For this example, it is named ``kafka_server_jaas.conf``.

#. In each broker's JAAS file, configure a ``KafkaServer`` section. This configuration defines one user (``admin``). The broker uses the ``username`` and ``password`` properties to initiate connections to other brokers. In this example, ``admin`` is the user for interbroker communication.

   .. codewithvars:: bash

      KafkaServer {
         org.apache.kafka.common.security.scram.ScramLoginModule required
         username="admin"
         password="admin-secret";
      };

.. _auth-sasl-scram-broker-config:

Configuration
^^^^^^^^^^^^^

.. include:: ../includes/auth_sasl_scram_broker_config.rst

Run
^^^

.. include:: ../includes/auth_sasl_scram_broker_run.rst

.. _sasl_scram_clients:

Clients
~~~~~~~

.. important:: If you are configuring this for |sr| or |crest|, you must
   prefix each parameter with ``confluent.license``. For example,
   ``sasl.mechanism`` becomes ``confluent.license.sasl.mechanism``. For
   additional information, see :ref:`kafka-rest-and-sasl-ssl-configs`.

.. include:: ../includes/intro_clients.rst

.. include:: ../includes/auth_sasl_scram_client_config.rst

|zk|
~~~~

|zk| does not support SASL/SCRAM authentication, but it does support another mechanism, SASL/DIGEST-MD5.

.. include:: ../includes/intro_zk.rst

.. _sasl_scram_connect-workers:

|kconnect-long|
~~~~~~~~~~~~~~~

.. include:: ../includes/intro_connect.rst

* :ref:`Confluent Monitoring Interceptors `
* :ref:`Confluent Metrics Reporter `

.. include:: ../includes/auth_sasl_scram_connect-workers_config.rst

.. _sasl_scram_replicator:

|crep-full|
~~~~~~~~~~~

.. include:: ../includes/intro_replicator.rst

* :ref:`Kafka Connect `

.. include:: ../includes/auth_sasl_scram_replicator_config.rst

|c3|
~~~~

.. include:: ../includes/intro_c3.rst

* :ref:`Confluent Metrics Reporter `: required on the production cluster being monitored
* :ref:`Confluent Monitoring Interceptors `: optional, if you are using |c3| streams monitoring

.. include:: ../includes/auth_sasl_scram_c3_config.rst

.. _sasl_scram_metrics-reporter:

|cmetric-full|
~~~~~~~~~~~~~~

This section describes how to enable SASL/SCRAM for |cmetric-full|, which is used for |c3| and Auto Data Balancer.

.. include:: ../includes/auth_sasl_scram_metrics-reporter_config.rst

.. _sasl_scram_interceptors:

Confluent Monitoring Interceptors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: ../includes/intro_interceptors.rst

Interceptors for General Clients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: ../includes/auth_sasl_scram_interceptors_config.rst

Interceptors for |kconnect-long|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: ../includes/auth_sasl_scram_interceptors-connect-workers_config.rst

Interceptors for Replicator
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: ../includes/auth_sasl_scram_interceptors-replicator_config.rst

|sr|
~~~~

.. important:: If you are configuring this for |sr| or |crest|, you must
   prefix each parameter with ``confluent.license``. For example,
   ``sasl.mechanism`` becomes ``confluent.license.sasl.mechanism``.

.. include:: ../includes/intro_sr.rst

.. include:: ../includes/auth_sasl_scram_sr_config.rst

|crest|
~~~~~~~

.. important:: If you are configuring this for |sr| or |crest|, you must
   prefix each parameter with ``confluent.license``. For example,
   ``sasl.mechanism`` becomes ``confluent.license.sasl.mechanism``.

Securing Confluent REST Proxy for SASL requires that you configure security between the REST Proxy and the |ak| cluster. For a complete list of all configuration options, refer to :ref:`kafka-rest-security-kafka-auth-sasl`.

.. include:: ../includes/auth_sasl_scram_rest_config.rst
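
The included file above carries the full REST Proxy settings. As a rough sketch of what such a configuration looks like, the embedded |ak| clients in the REST Proxy are typically pointed at the SCRAM-enabled listener with ``client.``-prefixed properties similar to the following. The hostnames and credentials are illustrative and the exact property set is an assumption; defer to the include file and the linked reference for the authoritative list.

.. code-block:: properties

   # kafka-rest.properties -- illustrative values only
   bootstrap.servers=SASL_SSL://kafka1:9093
   client.security.protocol=SASL_SSL
   client.sasl.mechanism=SCRAM-SHA-256
   client.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
     username="alice" \
     password="alice-secret";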