.. _kafka_sasl_auth_gssapi:

Configuring GSSAPI
------------------

SASL/GSSAPI Overview
~~~~~~~~~~~~~~~~~~~~

SASL/GSSAPI is for organizations using Kerberos (for example, by using Active
Directory). You don't need to install a new server just for |ak-tm|. Ask your
Kerberos administrator for a principal for each |ak| broker in your cluster and
for every operating system user that will access |ak| with Kerberos
authentication (via clients and tools).

If you don't already have a Kerberos server, your Linux vendor likely has
packages for Kerberos and a short guide on how to install and configure it
(`Ubuntu `_, `Red Hat `_). Note that if you are using Oracle Java, you must
download JCE policy files for your Java version and copy them to
``$JAVA_HOME/jre/lib/security``.

You must create these principals yourself using the following commands:

.. codewithvars:: bash

   sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
   sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"

It is a Kerberos requirement that all your hosts can be resolved with their
fully-qualified domain names (FQDNs).

The remainder of this page shows you how to configure SASL/GSSAPI for each
component in the Confluent Platform.

GSSAPI Logging
^^^^^^^^^^^^^^

To enable SASL/GSSAPI debug output, set the ``sun.security.krb5.debug`` system
property to ``true``. For example:

.. codewithvars:: bash

   export KAFKA_OPTS=-Dsun.security.krb5.debug=true
   kafka-server-start etc/kafka/server.properties

.. include:: ../../includes/installation-types-zip-tar.rst

.. _sasl_gssapi_broker:

Brokers
~~~~~~~

.. include:: ../includes/intro_brokers.rst

* :ref:`Confluent Metrics Reporter `

.. _jaas-config:

JAAS
^^^^

.. include:: ../includes/auth_sasl_gssapi_broker_jaas.rst

.. _auth-sasl-gssapi-config:

Configuration
^^^^^^^^^^^^^

.. include:: ../includes/auth_sasl_gssapi_broker_config.rst

Run
^^^

.. include:: ../includes/auth_sasl_gssapi_broker_run.rst

.. _sasl_gssapi_clients:

Clients
~~~~~~~

.. include:: ../includes/intro_clients.rst

.. include:: ../includes/auth_sasl_gssapi_client_config.rst

.. _sasl-gssapi-zk-config:

|zk|
~~~~

This section describes how to configure |zk| so that brokers can use
SASL/GSSAPI to authenticate to it.

.. include:: ../includes/intro_zk.rst

.. _zookeeper-jaas:

JAAS
^^^^

.. include:: ../includes/auth_sasl_gssapi_zk_jaas.rst

.. _zookeeper-configuration:

Configuration
^^^^^^^^^^^^^

.. include:: ../includes/zk-auth-sasl.rst

.. note:: The metadata stored in |zk| is such that only brokers will be able
   to modify the corresponding znodes, but znodes are world readable. While
   the data stored in |zk| is not sensitive, inappropriate manipulation of
   znodes can cause cluster disruption.

.. include:: ../includes/auth_sasl_gssapi_zk_config.rst

.. _zookeeper-run:

Run
^^^

.. include:: ../includes/auth_sasl_gssapi_zk_run.rst

.. _sasl_gssapi_connect-workers:

|kconnect-long|
~~~~~~~~~~~~~~~

.. include:: ../includes/intro_connect.rst

* :ref:`Confluent Monitoring Interceptors `
* :ref:`Confluent Metrics Reporter `

.. include:: ../includes/auth_sasl_gssapi_connect-workers_config.rst

.. _sasl_gssapi_replicator:

|crep-full|
~~~~~~~~~~~

.. include:: ../includes/intro_replicator.rst

* :ref:`Kafka Connect `

.. include:: ../includes/auth_sasl_gssapi_replicator_config.rst

|c3|
~~~~

.. include:: ../includes/intro_c3.rst

* :ref:`Confluent Metrics Reporter `: required on the production cluster
  being monitored
* :ref:`Confluent Monitoring Interceptors `: optional if you are using
  |c3-short| streams monitoring

.. include:: ../includes/auth_sasl_gssapi_c3_config.rst

.. _sasl_gssapi_metrics-reporter:

|cmetric-full|
~~~~~~~~~~~~~~

.. include:: ../includes/intro_metrics-reporter.rst

.. include:: ../includes/auth_sasl_gssapi_metrics-reporter_config.rst

.. _sasl_gssapi_interceptors:

Confluent Monitoring Interceptors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: ../includes/intro_interceptors.rst

Interceptors for General Clients
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: ../includes/auth_sasl_gssapi_interceptors_config.rst

Interceptors for |kconnect-long|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: ../includes/auth_sasl_gssapi_interceptors-connect-workers_config.rst

Interceptors for Replicator
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: ../includes/auth_sasl_gssapi_interceptors-replicator_config.rst

|sr|
~~~~

.. include:: ../includes/intro_sr.rst

.. include:: ../includes/auth_sasl_gssapi_sr_config.rst

REST Proxy
~~~~~~~~~~

Securing Confluent REST Proxy for SASL requires that you configure security
between the REST proxy and the |ak| cluster. For a complete list of all
configuration options, refer to :ref:`kafka-rest-security-kafka-auth-sasl`.

.. include:: ../includes/auth_sasl_gssapi_rest_config.rst

.. _rebalancer-sasl-config:

Confluent Rebalancer
~~~~~~~~~~~~~~~~~~~~

To secure Confluent Rebalancer for SASL, specify the :ref:`metrics
configuration options ` for the |ak| cluster in the
``rebalance-metrics-client.properties`` file (or a file name of your choosing):

.. codewithvars:: bash

   confluent.rebalancer.metrics.security.protocol=SASL_SSL
   confluent.rebalancer.metrics.sasl.mechanism=GSSAPI
   confluent.rebalancer.metrics.sasl.kerberos.service.name=kafka
   confluent.rebalancer.metrics.ssl.truststore.location=/kafka.client.truststore.jks
   confluent.rebalancer.metrics.ssl.truststore.password=
   confluent.rebalancer.metrics.sasl.jaas.config= com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="kafka.user.keytab" principal="kafka@KAFKA.SECURE";
   confluent.rebalancer.metrics.ssl.keystore.location=/kafka.server.keystore.com.jks
   confluent.rebalancer.metrics.ssl.keystore.password=
   confluent.rebalancer.metrics.ssl.key.password=

Then pass the configuration in ``rebalance-metrics-client.properties`` at the
``confluent-rebalancer`` command line.
For example:

::

    confluent-rebalancer execute --bootstrap-server :9092 --config-file rebalance-metrics-client.properties --command-config rebalance-admin-client.properties --throttle 100000 --verbose

Note that the ``--config-file`` option specifies connectivity to the metrics
cluster, and that the ``--command-config`` option specifies the admin client's
connectivity to the cluster being rebalanced.

To ensure a secure connection when specifying connectivity for the admin
client (``rebalance-admin-client.properties``), use the same security
configuration as used for ``rebalance-metrics-client.properties``, except you
do not need to include the ``confluent.rebalancer.metrics.`` prefix for the
keys.

Also, if you need to identify a metrics cluster that is different from the one
being rebalanced, you can use the ``--metrics-bootstrap-server`` option. By
default, metrics are retrieved from the cluster specified in the
``--bootstrap-server`` option.

You can also specify ``confluent.rebalancer.metrics.sasl.jaas.config`` by
passing the JAAS configuration file location as a JVM parameter, as shown
here:

::

    export REBALANCER_OPTS="-Djava.security.auth.login.config="

.. _sasl-gssapi-troubleshooting:

Troubleshooting SASL/GSSAPI
~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section provides basic troubleshooting tips to address common errors that
can occur when configuring SASL/GSSAPI.

.. _sasl-gssapi-kerberos-troubleshoot:

Kerberos
^^^^^^^^

The following hostname error may appear in your service logs when hostnames
and principals in Kerberos hosts do not match exactly:

.. code-block:: text

   org.apache.kafka.common.errors.SaslAuthenticationException: An error:
   (java.security.PrivilegedActionException: javax.security.sasl.SaslException:
   GSS initiate failed [Caused by GSSException: No valid credentials provided
   (Mechanism level: Server not found in Kerberos database (7))]) occurred when
   evaluating SASL token received from the Kafka Broker.
This may be caused by Java being unable to resolve the Kafka Broker's hostname
correctly. Try adding ``-Dsun.net.spi.nameservice.provider.1=dns,sun`` to your
client's ``JVMFLAGS`` environment variable. When authenticating using SASL,
you must configure the FQDN of the |ak| brokers: the value returned by
``socketChannel.socket().getInetAddress().getHostName()`` must match the
hostname in ``principal/hostname@realm``. Otherwise, the |ak| client goes to
the ``AUTHENTICATION_FAILED`` state:

.. code-block:: text

   Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]
           at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
           ...
   Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))
           ... 26 more

To avoid such hostname errors:

- Ensure that the value specified for ``sasl.kerberos.service.name`` in the
  properties file matches the primary name in the corresponding keytab file.
  For example, if ``principal="kafka_broker/kafka1.hostname.com@EXAMPLE.COM"``,
  then the primary is ``kafka_broker``, so you must configure
  ``sasl.kerberos.service.name=kafka_broker``.

- Verify that the ``kinit``, ``klist``, and ``kdestroy`` commands work as
  expected to initialize, list, and delete Kerberos credentials using the
  keytab. For example:

  .. code-block:: text

     # To display the current contents of the cache
     klist

     # To acquire new credentials
     kinit -k -t ./filename.keytab kafka_broker/kafka1.hostname.com@EXAMPLE.COM

     # To verify that the credentials were initialized and updated
     klist

     # To remove any newly-initialized credentials (if necessary)
     kdestroy

- Verify that the Kerberos environment is set up correctly between clients and
  servers.
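The principal-to-``service.name`` relationship described above can be checked mechanically. The following is a minimal sketch in plain shell that splits a principal of the form ``primary/hostname@REALM`` into its parts; the principal value is the placeholder from the example above, not a real credential:

```bash
#!/usr/bin/env bash
# Split a Kerberos principal (primary/hostname@REALM) into its components
# using shell parameter expansion, and print the matching
# sasl.kerberos.service.name value. Placeholder principal for illustration.
principal="kafka_broker/kafka1.hostname.com@EXAMPLE.COM"

primary="${principal%%/*}"   # part before the first "/"
rest="${principal#*/}"       # everything after the first "/"
hostname="${rest%@*}"        # part between "/" and "@"
realm="${principal##*@}"     # part after the last "@"

# The primary here must match the sasl.kerberos.service.name setting,
# and the hostname must match the broker's FQDN as resolved by the client.
echo "sasl.kerberos.service.name=${primary}"
echo "host: ${hostname}  realm: ${realm}"
```

Running this prints ``sasl.kerberos.service.name=kafka_broker``, so any other
value for ``sasl.kerberos.service.name`` in the client properties would fail
against a keytab containing this principal.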
Refer to `Exercise 4: Using the Oracle Java SASL API `_ for a simple
Kerberos-based ``SaslTestClient`` and ``SaslServer`` application that uses the
Java SASL API to verify communications between client and server hosts.

LDAP
^^^^

To test and troubleshoot your LDAP configuration when configuring |ak| to
authenticate to LDAP using Kerberos, refer to
:ref:`test-ldap-client-authentication`.