Authentication using SASL

Kafka brokers support client authentication via SASL. Multiple SASL mechanisms can be enabled on a broker simultaneously, while each client has to choose one mechanism. The currently supported mechanisms are GSSAPI (Kerberos) and PLAIN. We start with a general description of how to configure SASL for brokers and clients, followed by mechanism-specific details, and we wrap up with some operational details.

SASL configuration for Kafka brokers

1. Select one or more supported mechanisms to enable in the broker. GSSAPI and PLAIN are the mechanisms currently supported by Kafka.

2. Add a JAAS config file for the selected mechanisms as described in the examples for setting up GSSAPI (Kerberos) or PLAIN.

3. Pass the JAAS config file location as a JVM parameter to each Kafka broker (one way to do this with the start scripts shipped with Kafka is shown after these steps). For example:

-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf

4. Configure a SASL port in server.properties, by adding at least one of SASL_PLAINTEXT or SASL_SSL to the listeners and optionally advertised.listeners properties, each of which should contain one or more comma-separated values:

listeners=SASL_PLAINTEXT://host.name:port
# The following is only needed if the value is different from listeners, but it should contain
# the same security protocols as listeners
advertised.listeners=SASL_PLAINTEXT://host.name:port

If SASL_SSL is used, then SSL must also be configured. Note that a PLAINTEXT port is needed if you intend to use any client that does not support SASL.

Note that advertised.host.name and advertised.port configure a single PLAINTEXT port and are incompatible with secure protocols. Please use advertised.listeners instead.

If you are only configuring a SASL port (or if you want the Kafka brokers to authenticate each other using SASL) then make sure you set the same SASL protocol for inter-broker communication:

security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)

5. Enable one or more SASL mechanisms in server.properties and configure the SASL mechanism for inter-broker communication if using SASL for inter-broker communication:

sasl.enabled.mechanisms=GSSAPI,PLAIN
sasl.mechanism.inter.broker.protocol=GSSAPI (or PLAIN)

6. Follow the steps in GSSAPI (Kerberos) or PLAIN to configure SASL for the enabled mechanisms. To enable multiple mechanisms in the broker, follow the steps here.
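
A note on step 3: if you start the brokers with the scripts shipped with Kafka, one convenient way to pass the JVM parameter is the KAFKA_OPTS environment variable, which the start scripts forward to the broker JVM. This is only a sketch; any other way of setting JVM options works equally well:

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
bin/kafka-server-start.sh config/server.properties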

Important notes:

  1. KafkaServer is a section name in the JAAS file used by each broker. This section tells the broker which principal to use and the location of the keytab where this principal is stored. It allows the broker to log in using the keytab specified in this section.
  2. The Client section is used to authenticate a SASL connection with ZooKeeper. It also allows the brokers to set ACLs on ZooKeeper nodes, which locks these nodes down so that only the brokers can modify them. It is necessary to have the same primary name across all brokers. If you want to use a section name other than Client, set the system property zookeeper.sasl.client to the appropriate name (e.g. -Dzookeeper.sasl.client=ZkClient); an example is shown after these notes.
  3. ZooKeeper uses zookeeper as the service name by default. If you want to change this, set the system property zookeeper.sasl.client.username to the appropriate name (e.g. -Dzookeeper.sasl.client.username=zk).
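
As an example combining notes 2 and 3, suppose the brokers should use a JAAS section named ZkClient and ZooKeeper runs under the service name zk. The brokers would then be started with both system properties set, and the broker JAAS file would contain a section like the following sketch (the login module settings mirror the Kerberos broker example later in this document):

-Dzookeeper.sasl.client=ZkClient -Dzookeeper.sasl.client.username=zk

ZkClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};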

SASL configuration for Kafka Clients

Note

SASL authentication is only supported by the new Java Kafka producer and consumer; the older APIs are not supported.

To configure SASL authentication on the clients:

1. Select a SASL mechanism for authentication and add a JAAS config file for the selected mechanism as described in the examples for setting up GSSAPI (Kerberos) or PLAIN. KafkaClient is the section name in the JAAS file used by Kafka clients.

2. Pass the JAAS config file location as a JVM parameter to each client JVM. For example:

-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf

3. Configure the following properties in producer.properties or consumer.properties:

security.protocol=SASL_PLAINTEXT (or SASL_SSL)
sasl.mechanism=GSSAPI (or PLAIN)

Follow the steps in GSSAPI (Kerberos) or PLAIN to configure SASL for the selected mechanism.
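
The same two properties can also be set programmatically when constructing a client. The following is a minimal sketch for the Java producer, assuming GSSAPI over SASL_PLAINTEXT and an illustrative bootstrap address (the class name is also illustrative); the JAAS file still has to be supplied through the java.security.auth.login.config JVM parameter shown above:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "host.name:port");   // illustrative address
        props.put("security.protocol", "SASL_PLAINTEXT");   // or SASL_SSL
        props.put("sasl.mechanism", "GSSAPI");               // or PLAIN
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // The producer authenticates using the KafkaClient section of the JAAS file
        // passed via -Djava.security.auth.login.config as shown above.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send records as usual
        }
    }
}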

Authentication using SASL/Kerberos

Prerequisites

Kerberos

If your organization is already using a Kerberos server (for example, by using Active Directory), there is no need to install a new server just for Kafka. Otherwise you will need to install one; your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (Ubuntu, Red Hat). Note that if you are using Oracle Java, you will need to download JCE policy files for your Java version and copy them to $JAVA_HOME/jre/lib/security.

Kerberos Principals

If you are using the organization’s Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).

If you have installed your own Kerberos, you will need to create these principals yourself using the following commands:

sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
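
For instance, using the illustrative host name, realm and keytab path that appear in the broker JAAS example later in this section (substitute your own values):

sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/kafka1.hostname.com@EXAMPLE.COM'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka_server.keytab kafka/kafka1.hostname.com@EXAMPLE.COM"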

All hosts must be reachable using hostnames

It is a Kerberos requirement that all your hosts can be resolved with their FQDNs.

Configuring Kafka Brokers

  1. Add a suitably modified JAAS file similar to the one below to each Kafka broker’s config directory; let’s call it kafka_server_jaas.conf for this example (note that each broker should have its own keytab):
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};

// ZooKeeper client authentication
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};

The KafkaServer section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It allows the broker to log in using the keytab specified in this section. See the notes above for more details on ZooKeeper’s SASL configuration.

  2. Pass the name of the JAAS file as a JVM parameter to each Kafka broker:
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf

You may also wish to specify the path to the krb5.conf file (see JDK’s Kerberos Requirements for more details; a minimal sketch of such a file appears after these steps):

-Djava.security.krb5.conf=/etc/kafka/krb5.conf

  3. Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting the Kafka broker.
  4. Configure SASL listeners and mechanisms in server.properties as described here. For example:
listeners=SASL_PLAINTEXT://host.name:port
# The following is only needed if the value is different from listeners, but it should contain
# the same security protocols as listeners
advertised.listeners=SASL_PLAINTEXT://host.name:port
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI

We must also configure the service name in server.properties, which should match the primary name of the Kafka brokers. In the above example, the principal is kafka/kafka1.hostname.com@EXAMPLE.COM and the primary name is kafka:

sasl.kerberos.service.name=kafka
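
If you point the broker at a specific krb5.conf as mentioned in step 2, a minimal sketch of that file might look like the following, assuming the EXAMPLE.COM realm used above; the KDC host name is purely illustrative and the correct values should come from your Kerberos administrator:

[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }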

Configuring Kafka Clients

To configure SASL authentication on the clients:

1. Clients (producers, consumers, Connect workers, etc.) will authenticate to the cluster with their own principal (usually with the same name as the user running the client), so obtain or create these principals as needed. Then create a JAAS file for each principal. The KafkaClient section describes how clients such as the producer and consumer connect to the Kafka broker. The following is an example configuration for a client using a keytab (recommended for long-running processes):

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-client-1@EXAMPLE.COM";
};

For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used along with useTicketCache=true (a usage sketch follows these steps), as in:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true;
};

  2. Pass the name of the JAAS file as a JVM parameter to the client JVM:
-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf

You may also wish to specify the path to the krb5.conf file (see JDK’s Kerberos Requirements for more details).

-Djava.security.krb5.conf=/etc/kafka/krb5.conf

  3. Make sure the keytabs configured in kafka_client_jaas.conf are readable by the operating system user who is starting the Kafka client.
  4. Configure the following properties in producer.properties and/or consumer.properties:
security.protocol=SASL_PLAINTEXT (or SASL_SSL)
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
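
As an illustration of the ticket cache approach from step 1, a console tool session might look like the following sketch. The principal name is the illustrative one used earlier, and the client JAAS file is assumed to contain the useTicketCache=true section shown above:

kinit kafka-client-1@EXAMPLE.COM
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"

The console tool is then run as usual, with the properties from step 4 supplied in its client configuration file.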

Authentication using SASL/PLAIN

SASL/PLAIN is a simple username/password authentication mechanism that is typically used with TLS for encryption to implement secure authentication. Kafka supports a default implementation for SASL/PLAIN which can be extended for production use.

The username is used as the authenticated Principal, which is used in authorization (e.g. ACLs).

Configuring Kafka Brokers

  1. Add a suitably modified JAAS file similar to the one below to each Kafka broker’s config directory; let’s call it kafka_server_jaas.conf for this example:

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret"
   user_alice="alice-secret";
};

This configuration defines two users (admin and alice). The properties username and password in the KafkaServer section are used by the broker to initiate connections to other brokers. In this example, admin is the user for inter-broker communication. The set of properties user_{userName} defines the passwords for all users that connect to the broker, and the broker validates all client connections, including those from other brokers, using these properties.

  2. Pass the JAAS config file location as a JVM parameter to each Kafka broker:

    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
    
  3. Configure SASL listeners and mechanisms in server.properties as described here. For example:

    listeners=SASL_SSL://host.name:port
    security.inter.broker.protocol=SASL_SSL
    sasl.mechanism.inter.broker.protocol=PLAIN
    sasl.enabled.mechanisms=PLAIN
    

Configuring Kafka Clients

To configure SASL authentication on the clients:

1. The KafkaClient section describes how clients such as the producer and consumer connect to the Kafka broker. The following is an example configuration for a client using the PLAIN mechanism:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="alice-secret";
};

2. The properties username and password in the KafkaClient section are used by clients to configure the user for client connections. In this example, clients connect to the broker as user alice.

3. Pass the JAAS config file location as a JVM parameter to each client JVM:

-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf

4. Configure the following properties in producer.properties or consumer.properties:

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
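
Because the example uses SASL_SSL, the clients also need the usual SSL settings so that they trust the broker certificates, for example (the path and password below are illustrative):

ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=test1234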

Use of SASL/PLAIN in production

SASL/PLAIN should only be used with SSL as the transport layer to ensure that clear passwords are not transmitted on the wire without encryption.

The default implementation of SASL/PLAIN in Kafka specifies usernames and passwords in the JAAS configuration file as shown here. To avoid storing passwords on disk, you can plug in your own implementation of javax.security.auth.spi.LoginModule that provides usernames and passwords from an external source. The login module implementation should provide username as the public credential and password as the private credential of the Subject. The default implementation org.apache.kafka.common.security.plain.PlainLoginModule can be used as an example.
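
The following is a rough sketch of such a login module, not a definitive implementation: it reads the credentials from environment variables whose names are made up for illustration; any other external source could be used instead.

import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.spi.LoginModule;

public class EnvPlainLoginModule implements LoginModule {

    @Override
    public void initialize(Subject subject, CallbackHandler callbackHandler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        // Credentials come from the environment instead of the JAAS file.
        // The variable names are illustrative.
        String username = System.getenv("KAFKA_PLAIN_USERNAME");
        String password = System.getenv("KAFKA_PLAIN_PASSWORD");
        // As described above: username goes in as the public credential,
        // password as the private credential of the Subject.
        if (username != null)
            subject.getPublicCredentials().add(username);
        if (password != null)
            subject.getPrivateCredentials().add(password);
    }

    @Override
    public boolean login() { return true; }

    @Override
    public boolean commit() { return true; }

    @Override
    public boolean abort() { return false; }

    @Override
    public boolean logout() { return true; }
}

Such a module would then be named in the KafkaServer (or KafkaClient) section of the JAAS file in place of org.apache.kafka.common.security.plain.PlainLoginModule.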

In production systems, external authentication servers may implement password authentication. Kafka brokers can be integrated with these servers by adding your own implementation of javax.security.sasl.SaslServer. The default implementation included in Kafka in the package org.apache.kafka.common.security.plain can be used as an example to get started.

  • New providers must be installed and registered in the JVM. Providers can be installed by adding provider classes to the normal CLASSPATH or bundled as a jar file and added to {JAVA_HOME}/lib/ext.

  • Providers can be registered statically by adding a provider to the security properties file {JAVA_HOME}/lib/security/java.security where providerClassName is the fully qualified name of the new provider and n is the preference order with lower numbers indicating higher preference (a sketch of such a provider class follows this list):

    security.provider.n=providerClassName
    
  • Alternatively, you can register providers dynamically at runtime by invoking Security.addProvider at the beginning of the client application or in a static initializer in the login module. For example:

    Security.addProvider(new PlainSaslServerProvider());
    
  • For more details, see JCA Reference.
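
As a sketch of what such a provider might look like (both class names here are hypothetical; the PlainSaslServerProvider included in Kafka follows the same pattern), the provider simply maps the SaslServerFactory entry for the PLAIN mechanism to your factory class:

import java.security.Provider;

public class MyPlainSaslServerProvider extends Provider {
    public MyPlainSaslServerProvider() {
        super("My SASL/PLAIN Server Provider", 1.0,
              "Registers a custom SaslServerFactory for the PLAIN mechanism");
        // MyPlainSaslServerFactory is a hypothetical javax.security.sasl.SaslServerFactory
        // implementation that creates the custom SaslServer.
        put("SaslServerFactory.PLAIN", "com.example.MyPlainSaslServerFactory");
    }
}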

Enabling multiple SASL mechanisms in a broker

Specify the configuration for the login modules of all enabled mechanisms in the KafkaServer section of the JAAS config file. For example:

 KafkaServer {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   keyTab="/etc/security/keytabs/kafka_server.keytab"
   principal="kafka/kafka1.hostname.com@EXAMPLE.COM";

   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret"
   user_alice="alice-secret";
};

Enable the SASL mechanisms in server.properties:

sasl.enabled.mechanisms=GSSAPI,PLAIN

Specify the SASL security protocol and mechanism for inter-broker communication in server.properties if required:

security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
sasl.mechanism.inter.broker.protocol=GSSAPI (or PLAIN)

Follow the mechanism-specific steps in GSSAPI (Kerberos) and PLAIN to configure SASL for the enabled mechanisms.

Modifying SASL mechanisms in a Running Cluster

The SASL mechanisms can be modified in a running cluster using the following sequence:

  1. Enable the new SASL mechanism by adding the mechanism to sasl.enabled.mechanisms in server.properties for each broker. Update the JAAS config file to include both mechanisms as described here. Incrementally bounce the cluster nodes, taking into consideration the recommendations for doing rolling restarts to avoid downtime for end users.
  2. Restart clients using the new mechanism (if required).
  3. To change the inter-broker communication mechanism (if required), set sasl.mechanism.inter.broker.protocol in server.properties to the new mechanism and incrementally bounce the cluster again.
  4. To remove the old mechanism (if required), remove it from sasl.enabled.mechanisms in server.properties and remove its entries from the JAAS config file. Incrementally bounce the cluster again.

Note that the sequence above is somewhat involved because it caters for all possible mechanism changes. For example, to add a new mechanism in the brokers and switch the clients over to it, only steps 1 and 2 are required.

Enabling Logging for SASL

To enable SASL debug output, you can set the sun.security.krb5.debug system property to true.
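
For example, when the brokers are started with the scripts shipped with Kafka, the property can be added to KAFKA_OPTS alongside the JAAS setting (a sketch):

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf -Dsun.security.krb5.debug=true"
bin/kafka-server-start.sh config/server.properties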