Use SASL/GSSAPI Authentication in Confluent Platform

Configure GSSAPI in Confluent Platform clusters

SASL/GSSAPI overview

SASL/GSSAPI is for organizations using Kerberos (for example, by using Active Directory). You don’t need to install a new server just for Confluent Platform. Ask your Kerberos administrator for a principal for each Confluent Server broker in your Confluent Platform cluster and for every operating system user that will access Confluent Platform with Kerberos authentication (via clients and tools).

If you don’t already have a Kerberos server, your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (Ubuntu, Red Hat). Note that if you are using Oracle Java, you must download the JCE policy files for your Java version and copy them to $JAVA_HOME/jre/lib/security. If you run your own Kerberos server, create the principals yourself using the following commands:

sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"

It is a Kerberos requirement that all your hosts can be resolved with their fully-qualified domain names (FQDNs).
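
You can spot-check this requirement on each host; for example (a quick sanity check with standard Linux tooling; the hostname is illustrative):

# Print this host's FQDN; it should match the hostname in the Kerberos principal
hostname -f
# Confirm that the broker hostname resolves
getent hosts kafka1.hostname.com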

The remainder of this page shows you how to configure SASL/GSSAPI for each component in Confluent Platform.

GSSAPI logging

To enable SASL/GSSAPI debug output, you can set the sun.security.krb5.debug system property to true. For example:

export KAFKA_OPTS=-Dsun.security.krb5.debug=true
kafka-server-start etc/kafka/server.properties

Tip

These instructions are based on the assumption that you are installing Confluent Platform by using ZIP or TAR archives. For more information, see Install Confluent Platform On-Premises.

Brokers

Configure all brokers in the Kafka cluster to accept secure connections from clients. Any configuration changes made to the broker will require a rolling restart.

Enable security for Kafka brokers as described in the section below. Additionally, if you are using Confluent Control Center or Auto Data Balancer, configure your brokers for the Confluent Metrics Reporter (see Confluent Metrics Reporter below).

JAAS

Note

While use of separate JAAS files is supported, it is not the recommended approach. Instead, use the listener configuration specified in Configuration to replace steps 1 and 2 below. Note that step 3 below is still required.

  1. First create the broker’s JAAS configuration file in each Kafka broker’s configuration directory, let’s call it kafka_server_jaas.conf for this example.

  2. In each broker’s JAAS file, configure a KafkaServer section with a unique principal and keytab, i.e., secret key, for each broker. Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting the Kafka broker.

    // Specifies a unique keytab and principal name for each broker
    KafkaServer {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        keyTab="/etc/security/keytabs/kafka_server.keytab"
        principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
    };
    
  3. If the broker authenticates to ZooKeeper using SASL, refer to ZooKeeper JAAS and ZooKeeper Configuration.

Configuration

  1. Enable GSSAPI mechanism in the server.properties file of every broker.

    # List of enabled mechanisms, can be more than one
    sasl.enabled.mechanisms=GSSAPI
    
    # Specify one of the SASL mechanisms
    sasl.mechanism.inter.broker.protocol=GSSAPI
    
  2. If you want to enable SASL for interbroker communication, add the following to the broker properties file (it defaults to PLAINTEXT). Set the protocol to:

    • SASL_SSL: if TLS/SSL encryption is enabled (TLS/SSL encryption should always be used if SASL mechanism is PLAIN)
    • SASL_PLAINTEXT: if TLS/SSL encryption is not enabled
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    security.inter.broker.protocol=SASL_SSL
    
  3. Tell the Kafka brokers on which ports to listen for client and interbroker SASL connections. You must configure listeners, and optionally advertised.listeners if the value is different from listeners. Set the listener to:

    • SASL_SSL: if TLS/SSL encryption is enabled (TLS/SSL encryption should always be used if SASL mechanism is PLAIN)
    • SASL_PLAINTEXT: if TLS/SSL encryption is not enabled
    # With TLS/SSL encryption
    listeners=SASL_SSL://kafka1:9093
    advertised.listeners=SASL_SSL://localhost:9093
    
    # Without TLS/SSL encryption
    listeners=SASL_PLAINTEXT://kafka1:9093
    advertised.listeners=SASL_PLAINTEXT://localhost:9093
    
  4. Configure both SASL_SSL and PLAINTEXT ports if:

    • SASL is not enabled for interbroker communication
    • Some clients connecting to the cluster do not use SASL

    Example SASL listeners with TLS/SSL encryption, mixed with PLAINTEXT listeners

    # With TLS/SSL encryption
    listeners=PLAINTEXT://kafka1:9092,SASL_SSL://kafka1:9093
    advertised.listeners=PLAINTEXT://localhost:9092,SASL_SSL://localhost:9093
    
    # Without TLS/SSL encryption
    listeners=PLAINTEXT://kafka1:9092,SASL_PLAINTEXT://kafka1:9093
    advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093
    
  5. If you are not using a separate JAAS configuration file, configure JAAS for the Kafka broker listener as follows:

    listener.name.sasl_ssl.gssapi.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
      useKeyTab=true \
      storeKey=true \
      keyTab="/etc/security/keytabs/kafka_server.keytab" \
      principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
    
  6. When using GSSAPI, configure a service name that matches the primary name of the brokers configured in the broker JAAS file. In the earlier JAAS file examples, with principal="kafka/kafka1.hostname.com@EXAMPLE.COM";, the primary is “kafka”.

    sasl.kerberos.service.name=kafka
    

Run

If using a separate JAAS file, pass the name of the JAAS file as a JVM parameter when you start each Kafka broker:

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
kafka-server-start etc/kafka/server.properties

Here are some optional settings that you can pass in as a JVM parameter when you start each broker from the command line.

zookeeper.sasl.client

Use to enable SASL authentication to ZooKeeper.

  • Type: Boolean
  • Default: true
  • Usage example: To pass the parameter as a JVM parameter when you start the broker, specify -Dzookeeper.sasl.client=true.
zookeeper.sasl.client.username

For SASL authentication to ZooKeeper, set this system property to change the username from the default.

  • Type: string
  • Default: zookeeper
  • Usage example: To pass the parameter as a JVM parameter when you start the broker, specify -Dzookeeper.sasl.client.username=zk.
zookeeper.sasl.clientconfig

Specifies the context key in the JAAS login file. This is used to change the section name for SASL authentication to ZooKeeper.

  • Type: string
  • Default: Client
  • Usage example: To pass the parameter as a JVM parameter when you start the broker, specify -Dzookeeper.sasl.clientconfig=ZkClient.
java.security.krb5.conf

Optionally specify the path to the krb5.conf file (see the JDK’s Kerberos Requirements for more details).

  • Type: string
  • Usage example: To pass the parameter as a JVM parameter when you start the broker, specify -Djava.security.krb5.conf=/etc/krb5.conf.
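
For example, a broker started with a separate JAAS file, an explicit krb5.conf path, and Kerberos debug output enabled might combine these JVM parameters (a sketch; the paths are illustrative):

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf \
  -Djava.security.krb5.conf=/etc/krb5.conf \
  -Dsun.security.krb5.debug=true"
kafka-server-start etc/kafka/server.properties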

Clients

The new Producer and Consumer clients support security for Kafka versions 0.9.0 and higher.

If you are using the Kafka Streams API, refer to the Kafka Streams documentation for how to configure equivalent SSL and SASL parameters.

  1. Configure the following properties in a client properties file client.properties.

    sasl.mechanism=GSSAPI
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    security.protocol=SASL_SSL
    
  2. Configure a service name that matches the primary name of the Kafka server configured in the broker JAAS file.

    sasl.kerberos.service.name=kafka
    
  3. Configure the JAAS configuration property with a unique principal, i.e., usually the same name as the user running the client, and keytab, i.e., secret key, for each client.

    sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
        useKeyTab=true \
        storeKey=true \
        keyTab="/etc/security/keytabs/kafka_client.keytab" \
        principal="kafkaclient1@EXAMPLE.COM";
    
  4. For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used along with useTicketCache=true.

    sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
        useTicketCache=true;
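
For example, a sketch of the ticket-cache flow: obtain a ticket with kinit and then run a console client against the client.properties file from step 1 (the keytab path, principal, and topic name are illustrative):

# Obtain a Kerberos ticket for the client principal (or run `kinit` interactively with a password)
kinit -k -t /etc/security/keytabs/kafka_client.keytab kafkaclient1@EXAMPLE.COM
# Produce to a topic using the SASL/GSSAPI client configuration
kafka-console-producer --bootstrap-server kafka1:9093 --topic test-topic \
  --producer.config client.properties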
    

ZooKeeper

This section describes how to configure ZooKeeper so that brokers can use SASL/GSSAPI to authenticate to it.

For further details on ZooKeeper SASL authentication:

  1. Client-Server mutual authentication: between the Kafka Broker (client) and ZooKeeper (server)
  2. Server-Server mutual authentication: between the ZooKeeper nodes within an ensemble

JAAS

  1. Create a JAAS file for each ZooKeeper node, let’s call it zookeeper_jaas.conf for this example.

  2. In each ZooKeeper node’s JAAS file, configure a Server section with a unique principal and keytab, i.e., secret key, for each node. Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting the ZooKeeper node.

    // Specifies a unique keytab and principal name for each ZooKeeper node
    Server {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        keyTab="/path/to/server/keytab"
        storeKey=true
        useTicketCache=false
        principal="zookeeper/yourzkhostname@EXAMPLE.COM";
    };
    
  3. If the broker authenticates to ZooKeeper using SASL, you must also configure a Client section in the broker’s JAAS file. This also allows the brokers to set ACLs on ZooKeeper nodes, which locks down these nodes so that only the brokers can modify them.

    // ZooKeeper client authentication
    Client {
       com.sun.security.auth.module.Krb5LoginModule required
       useKeyTab=true
       storeKey=true
       keyTab="/etc/security/keytabs/kafka_server.keytab"
      principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
    };
    

Configuration

To enable ZooKeeper authentication with SASL, add the following to zookeeper.properties:

authProvider.sasl=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

Note

The metadata stored in ZooKeeper is such that only brokers will be able to modify the corresponding znodes, but znodes are world readable. While the data stored in ZooKeeper is not sensitive, inappropriate manipulation of znodes can cause cluster disruption.

To configure Kafka to set ACLs on data within ZooKeeper (and ensure that it is not world-writeable), in the etc/kafka/server.properties file, enable ZooKeeper ACLs:

zookeeper.set.acl=true
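
Note that enabling zookeeper.set.acl secures only znodes that Kafka creates or updates afterward; for metadata that already exists, Apache Kafka provides the ZkSecurityMigrator tool. A sketch of its use (the connect string is illustrative):

zookeeper-security-migration --zookeeper.acl=secure --zookeeper.connect=localhost:2181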

Note

By default, ZooKeeper uses the fully qualified principal for authorization. If you are defining ZooKeeper ACLs in the broker configuration using the zookeeper.set.acl parameter, use identical principals (which should not include hostnames) across all Kafka brokers. If you do not use identical principals, then you must set both the kerberos.removeHostFromPrincipal and kerberos.removeRealmFromPrincipal parameters to true in the ZooKeeper server configuration file. This configuration ensures that all brokers are authorized in the same way, and that the first part of the principal is the same across all Kafka brokers.
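
For example, the corresponding additions to the ZooKeeper server configuration file would be:

kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true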

It is recommended to limit access to ZooKeeper using network segmentation (only brokers and some admin tools need access to ZooKeeper if the new consumer and new producer are used).

Run

When you start ZooKeeper, pass the name of its JAAS file as a JVM parameter:

export KAFKA_OPTS="-Djava.security.auth.login.config=etc/kafka/zookeeper_jaas.conf"
zookeeper-server-start etc/kafka/zookeeper.properties

Kafka Connect

This section describes how to enable security for Kafka Connect. Securing Kafka Connect requires that you configure security for:

  1. Kafka Connect workers: part of the Kafka Connect API, a worker is really just an advanced client, underneath the covers
  2. Kafka Connect connectors: connectors may have embedded producers or consumers, so you must override the default configurations for Connect producers used with source connectors and Connect consumers used with sink connectors
  3. Kafka Connect REST: Kafka Connect exposes a REST API that can be configured to use TLS/SSL using additional properties

Configure security for Kafka Connect as described in the section below. Additionally, if you are using Confluent Control Center streams monitoring for Kafka Connect, configure security for the Confluent Monitoring Interceptors (see Interceptors for Kafka Connect below).

Configure all the following properties in connect-distributed.properties.

  1. Configure the Connect workers to use SASL/GSSAPI.

    sasl.mechanism=GSSAPI
    sasl.kerberos.service.name=kafka
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    security.protocol=SASL_SSL
    
  2. Configure the JAAS configuration property with a unique principal, i.e., usually the same name as the user running the worker, and keytab, i.e., secret key, for each worker.

    sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
       useKeyTab=true \
       storeKey=true \
       keyTab="/etc/security/keytabs/kafka_client.keytab" \
       principal="connect@EXAMPLE.COM";
    
  3. For the connectors to leverage security, you also have to override the default producer/consumer configuration that the worker uses. Depending on whether the connector is a source or sink connector:

    • Source connector: configure the same properties adding the producer prefix.

      producer.sasl.mechanism=GSSAPI
      producer.sasl.kerberos.service.name=kafka
      # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
      producer.security.protocol=SASL_SSL
      producer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
         useKeyTab=true \
         storeKey=true \
         keyTab="/etc/security/keytabs/kafka_client.keytab" \
         principal="connect@EXAMPLE.COM";
      
    • Sink connector: configure the same properties adding the consumer prefix.

      consumer.sasl.mechanism=GSSAPI
      consumer.sasl.kerberos.service.name=kafka
      # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
      consumer.security.protocol=SASL_SSL
      consumer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
         useKeyTab=true \
         storeKey=true \
         keyTab="/etc/security/keytabs/kafka_client.keytab" \
         principal="connect@EXAMPLE.COM";
      

Confluent Replicator

Confluent Replicator is a type of Kafka source connector that replicates data from a source Kafka cluster to a destination Kafka cluster. An embedded consumer inside Replicator consumes data from the source cluster, and an embedded producer inside the Kafka Connect worker produces data to the destination cluster.

Replicator version 4.0 and earlier requires a connection to ZooKeeper in the origin and destination Kafka clusters. If ZooKeeper is configured for authentication, the client configures the ZooKeeper security credentials via the global JAAS configuration setting -Djava.security.auth.login.config on the Connect workers, and the ZooKeeper security credentials in the origin and destination clusters must be the same.
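
For example, each Connect worker might be started with the global JAAS setting pointing at a file that contains the required Client section (a sketch; the JAAS file name is illustrative, and its contents follow the ZooKeeper JAAS section above):

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_connect_jaas.conf"
connect-distributed etc/kafka/connect-distributed.properties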

To configure Confluent Replicator security, you must configure the Replicator connector as shown below; additionally, you must configure security for the Kafka Connect workers, as described in the Kafka Connect section above.

Configure Confluent Replicator to use SASL/GSSAPI by adding these properties in the Replicator’s JSON configuration file.

{
  "name": "replicator",
  "config": {
    ....
    "src.kafka.security.protocol": "SASL_SSL",
    "src.kafka.sasl.mechanism": "GSSAPI",
    "src.kafka.sasl.kerberos.service.name": "kafka",
    "src.kafka.sasl.jaas.config": "com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/etc/security/keytabs/kafka_client.keytab\" principal=\"replicator@EXAMPLE.COM\";",
    ....
  }
}

See also

To see an example Confluent Replicator configuration, see the SASL source authentication demo script. For demos of common security configurations, see Replicator security demos.

To configure Confluent Replicator for a destination cluster with SASL/GSSAPI authentication, modify the Replicator JSON configuration to include the following:

{
  "name": "replicator",
  "config": {
    ....
    "dest.kafka.security.protocol": "SASL_SSL",
    "dest.kafka.sasl.mechanism": "GSSAPI",
    "dest.kafka.sasl.kerberos.service.name": "kafka",
    "dest.kafka.sasl.jaas.config": "com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/etc/security/keytabs/kafka_client.keytab\" principal=\"replicator@EXAMPLE.COM\";",
    ....
  }
}

Additionally, you can configure the following properties on the Connect worker:

sasl.mechanism=GSSAPI
security.protocol=SASL_SSL
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/etc/security/keytabs/kafka_client.keytab" principal="replicator@EXAMPLE.COM";
sasl.kerberos.service.name=kafka
producer.sasl.mechanism=GSSAPI
producer.security.protocol=SASL_SSL
producer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/etc/security/keytabs/kafka_client.keytab" principal="replicator@EXAMPLE.COM";
producer.sasl.kerberos.service.name=kafka

Tip

If you want to run different security for Replicator running as a connector than that of your Connect workers, you can do this by overriding the security credentials on the destination using the producer.override technique described in Run Replicator on the source cluster. In that case, do not set the above properties on the Connect worker. For example, if you want to use Replicator with SASL_SSL/GSSAPI security, but have Connect workers running RBAC OAUTHBEARER authentication, you can do so. The producer.override settings will cover the Replicator configuration, and your worker configuration can still have the OAUTHBEARER configuration.

For more information, see the general security configuration for Connect workers in Kafka Connect Security Basics, and Replicator Security Overview.

See also

To see an example Confluent Replicator configuration, see the SASL destination authentication demo script. For demos of common security configurations, see Replicator security demos.

Confluent Control Center

Confluent Control Center uses Kafka Streams as a state store, so if all the Kafka brokers in the cluster backing Control Center are secured, then the Control Center application also needs to be secured.

Note

When RBAC is enabled, Control Center cannot be used in conjunction with Kerberos because Control Center cannot support any SASL mechanism other than OAUTHBEARER.

Enable security for the Control Center application as described in the section below. Additionally, configure security for the following components:

  • Confluent Metrics Reporter: required on the production cluster being monitored
  • Confluent Monitoring Interceptors: optional if you are using Control Center streams monitoring
  1. Enable SASL/GSSAPI and the security protocol for Control Center in the etc/confluent-control-center/control-center.properties file.

    confluent.controlcenter.streams.sasl.mechanism=GSSAPI
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    confluent.controlcenter.streams.security.protocol=SASL_SSL
    
  2. Since you are using GSSAPI, configure a Kerberos service name that matches the primary name of the Kafka server configured in the broker JAAS file.

    confluent.controlcenter.streams.sasl.kerberos.service.name=kafka
    
  3. Configure the JAAS configuration property with a unique principal, which is usually the same name as the user running Control Center, and a keytab (secret key).

    confluent.controlcenter.streams.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
     useKeyTab=true \
     storeKey=true \
     keyTab="/etc/security/keytabs/kafka_client.keytab" \
     principal="controlcenter@EXAMPLE.COM";
    

Confluent Metrics Reporter

This section describes how to configure your brokers to enable security for Confluent Metrics Reporter, which is used for Confluent Control Center and Auto Data Balancer.

To configure the Confluent Metrics Reporter for SASL/GSSAPI, make the following configuration changes in the server.properties file in every broker in the production cluster being monitored.

  1. Verify that the Confluent Metrics Reporter is enabled.

    metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
    confluent.metrics.reporter.bootstrap.servers=kafka1:9093
    
  2. Enable the SASL/GSSAPI mechanism for Confluent Metrics Reporter.

    confluent.metrics.reporter.sasl.mechanism=GSSAPI
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    confluent.metrics.reporter.security.protocol=SASL_SSL
    
  3. Since you are using GSSAPI, configure a Kerberos service name that matches the primary name of the Kafka server configured in the broker JAAS file.

    confluent.metrics.reporter.sasl.kerberos.service.name=kafka
    
  4. Configure the JAAS configuration property with a unique principal and keytab, i.e., secret key.

    confluent.metrics.reporter.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
       useKeyTab=true \
       storeKey=true \
       keyTab="/etc/security/keytabs/kafka_client.keytab" \
       principal="client@EXAMPLE.COM";
    

Confluent Monitoring Interceptors

Confluent Monitoring Interceptors are used for Confluent Control Center streams monitoring. This section describes how to enable security for Confluent Monitoring Interceptors in three places:

  1. General clients
  2. Kafka Connect
  3. Confluent Replicator

Important

The typical use case for Confluent Monitoring Interceptors is to provide monitoring data to a separate monitoring cluster that most likely has different configurations. Interceptor configurations do not inherit configurations for the monitored component. If you wish to use configurations from the monitored component, you must add the appropriate prefix. For example, the option confluent.monitoring.interceptor.security.protocol=SSL, if being used for a producer, must be prefixed with producer. and would appear as producer.confluent.monitoring.interceptor.security.protocol=SSL.
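
For instance, the same interceptor option appears in both forms depending on where it is set (a sketch based on the example above):

# For a standalone producer client
confluent.monitoring.interceptor.security.protocol=SSL
# For a producer embedded in a Connect worker, add the producer. prefix
producer.confluent.monitoring.interceptor.security.protocol=SSL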

Interceptors for General Clients

For Confluent Control Center stream monitoring to work with Kafka clients, you must configure SASL/GSSAPI for the Confluent Monitoring Interceptors in each client.

  1. Verify that the client has configured interceptors.
  • Producer:

    interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
    
  • Consumer:

    interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
    
  2. Configure the SASL mechanism and security protocol for the interceptor.

    confluent.monitoring.interceptor.sasl.mechanism=GSSAPI
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    confluent.monitoring.interceptor.security.protocol=SASL_SSL
    
  3. Configure the Kerberos service name for the Confluent Monitoring Interceptors (confluent.monitoring.interceptor.sasl.kerberos.service.name) to match the broker’s Kerberos service name (sasl.kerberos.service.name).

    confluent.monitoring.interceptor.sasl.kerberos.service.name=kafka
    
  4. Configure the JAAS configuration property with a unique principal and keytab, i.e., secret key.

    confluent.monitoring.interceptor.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
       useKeyTab=true \
       storeKey=true \
       keyTab="/etc/security/keytabs/kafka_client.keytab" \
       principal="client@EXAMPLE.COM";
    

Interceptors for Kafka Connect

  1. For Confluent Control Center stream monitoring to work with Kafka Connect, you must configure SASL/GSSAPI for the Confluent Monitoring Interceptors in Kafka Connect. Configure the Connect workers by adding these properties in connect-distributed.properties, depending on whether the connectors are sources or sinks.
  • Source connector: configure the Confluent Monitoring Interceptors SASL mechanism with the producer prefix.

    producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
    producer.confluent.monitoring.interceptor.sasl.mechanism=GSSAPI
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    producer.confluent.monitoring.interceptor.security.protocol=SASL_SSL
    producer.confluent.monitoring.interceptor.sasl.kerberos.service.name=kafka
    
  • Sink connector: configure the Confluent Monitoring Interceptors SASL mechanism with the consumer prefix.

    consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
    consumer.confluent.monitoring.interceptor.sasl.mechanism=GSSAPI
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    consumer.confluent.monitoring.interceptor.security.protocol=SASL_SSL
    consumer.confluent.monitoring.interceptor.sasl.kerberos.service.name=kafka
    
  2. Configure the Kerberos service name for the Confluent Monitoring Interceptors (confluent.monitoring.interceptor.sasl.kerberos.service.name) to match the broker’s Kerberos service name (sasl.kerberos.service.name). Use the correct configuration parameter prefix for source and sink connectors.
  • Source connector: configure the Confluent Monitoring Interceptor to use the Kerberos service name with the producer prefix.

    producer.confluent.monitoring.interceptor.sasl.kerberos.service.name=kafka
    
  • Sink connector: configure the Confluent Monitoring Interceptor to use the Kerberos service name with the consumer prefix.

    consumer.confluent.monitoring.interceptor.sasl.kerberos.service.name=kafka
    
  3. Configure the JAAS configuration property with a unique principal, i.e., usually the same name as the user running the Connect worker, and keytab, i.e., secret key. Use the correct configuration parameter prefix for source and sink connectors.
  • Source connector: configure the Confluent Monitoring Interceptors JAAS configuration with the producer prefix.

    producer.confluent.monitoring.interceptor.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
       useKeyTab=true \
       storeKey=true \
       keyTab="/etc/security/keytabs/kafka_client.keytab" \
       principal="connect@EXAMPLE.COM";
    
  • Sink connector: configure the Confluent Monitoring Interceptors JAAS configuration with the consumer prefix.

    consumer.confluent.monitoring.interceptor.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
       useKeyTab=true \
       storeKey=true \
       keyTab="/etc/security/keytabs/kafka_client.keytab" \
       principal="connect@EXAMPLE.COM";
    

Interceptors for Replicator

For Confluent Control Center stream monitoring to work with Replicator, you must configure SASL for the Confluent Monitoring Interceptors in the Replicator JSON configuration file. Here is an example subset of configuration properties to add.

{
  "name": "replicator",
  "config": {
    ....
    "src.consumer.group.id": "replicator",
    "src.consumer.interceptor.classes": "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor",
    "src.consumer.confluent.monitoring.interceptor.sasl.mechanism": "GSSAPI",
    "src.consumer.confluent.monitoring.interceptor.security.protocol": "SASL_SSL",
    "src.consumer.confluent.monitoring.interceptor.sasl.kerberos.service.name": "kafka",
    "src.consumer.confluent.monitoring.interceptor.sasl.jaas.config": "com.sun.security.auth.module.Krb5LoginModule required \nuseKeyTab=true \nstoreKey=true \nkeyTab=\"/etc/security/keytabs/kafka_client.keytab\" \nprincipal=\"replicator@EXAMPLE.COM\";",
    ....
  }
}

Schema Registry

Schema Registry uses Kafka to persist schemas, and so it acts as a client to write data to the Kafka cluster. Therefore, if the Kafka brokers are configured for security, you should also configure Schema Registry to use security. You may also refer to the complete list of Schema Registry configuration options.

  1. Here is an example subset of schema-registry.properties configuration parameters to add for SASL authentication:

    kafkastore.bootstrap.servers=kafka1:9093
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    kafkastore.security.protocol=SASL_SSL
    kafkastore.sasl.mechanism=GSSAPI
    
  2. Since you are using GSSAPI, configure a service name that matches the primary name of the Kafka server configured in the broker JAAS file.

    kafkastore.sasl.kerberos.service.name=kafka
    
  3. Configure the JAAS configuration property with a unique principal, i.e., usually the same name as the user running Schema Registry, and keytab, i.e., secret key.

    kafkastore.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
       useKeyTab=true \
       storeKey=true \
       keyTab="/etc/security/keytabs/kafka_client.keytab" \
       principal="schemaregistry@EXAMPLE.COM";
    

REST Proxy

Securing Confluent REST Proxy for SASL requires that you configure security between the REST proxy and the Confluent Platform cluster.

For a complete list of all configuration options, refer to SASL Authentication.

  1. Here is an example subset of kafka-rest.properties configuration parameters to add for SASL/GSSAPI authentication:

    client.bootstrap.servers=kafka1:9093
    client.sasl.mechanism=GSSAPI
    # Configure SASL_SSL if TLS/SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
    client.security.protocol=SASL_SSL
    
  2. Configure a service name that matches the primary name of the Kafka server configured in the broker JAAS file.

    client.sasl.kerberos.service.name=kafka
    
  3. Configure the JAAS configuration property with a unique principal, i.e., usually the same name as the user running the REST Proxy, and keytab, i.e., secret key.

    client.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
       useKeyTab=true \
       storeKey=true \
       keyTab="/etc/security/keytabs/kafka_client.keytab" \
       principal="restproxy@EXAMPLE.COM";
    

Confluent Rebalancer

To secure Confluent Rebalancer for SASL, specify the metrics configuration options for the Confluent Platform cluster in the rebalance-metrics-client.properties file (or a file name of your choosing):

confluent.rebalancer.metrics.security.protocol=SASL_SSL
confluent.rebalancer.metrics.sasl.mechanism=GSSAPI
confluent.rebalancer.metrics.sasl.kerberos.service.name=kafka
confluent.rebalancer.metrics.ssl.truststore.location=<path>/kafka.client.truststore.jks
confluent.rebalancer.metrics.ssl.truststore.password=<truststore-password>
confluent.rebalancer.metrics.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="<path>/kafka.user.keytab" principal="kafka@KAFKA.SECURE";
confluent.rebalancer.metrics.ssl.keystore.location=<path>/kafka.server.keystore.com.jks
confluent.rebalancer.metrics.ssl.keystore.password=<keystore-password>
confluent.rebalancer.metrics.ssl.key.password=<key-password>

Then pass the configuration in rebalance-metrics-client.properties at the confluent-rebalancer command line. For example:

confluent-rebalancer execute --bootstrap-server <localhost>:9092 --config-file rebalance-metrics-client.properties --command-config rebalance-admin-client.properties --throttle 100000 --verbose

Note that the --config-file option specifies connectivity to the metrics cluster, and the --command-config option specifies the admin client’s connectivity to the cluster being rebalanced. To ensure a secure connection when specifying connectivity for the admin client (rebalance-admin-client.properties), use the same security configuration as used for rebalance-metrics-client.properties, except that you do not need to include the confluent.rebalancer.metrics. prefix for the keys.
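
For example, rebalance-admin-client.properties might mirror the metrics configuration above without the prefix (a sketch):

security.protocol=SASL_SSL
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
ssl.truststore.location=<path>/kafka.client.truststore.jks
ssl.truststore.password=<truststore-password>
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="<path>/kafka.user.keytab" principal="kafka@KAFKA.SECURE";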

Also, if you need to identify a metrics cluster that is different from the one being rebalanced, you can use the --metrics-bootstrap-server option. By default, metrics are retrieved from the cluster specified in the --bootstrap-server option.

You can also specify confluent.rebalancer.metrics.sasl.jaas.config by passing the JAAS configuration file location as a JVM parameter, as shown here:

export REBALANCER_OPTS="-Djava.security.auth.login.config=<path-to-jaas.conf>"

Troubleshooting SASL/GSSAPI

This section provides basic troubleshooting tips to address common errors that can occur when configuring SASL/GSSAPI.

Kerberos

The following hostname error may appear in your service logs when hostnames and principals in Kerberos hosts do not match exactly:

org.apache.kafka.common.errors.SaslAuthenticationException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and `socketChannel.socket().getInetAddress().getHostName()` must match the hostname in `principal/hostname@realm` Kafka Client will go to AUTHENTICATION_FAILED state.
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]
      at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
  ...
Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))
      ... 26 more

To avoid such hostname errors:

  • Ensure that the value specified for sasl.kerberos.service.name in the properties file matches the primary name in the corresponding keytab file. For example, if principal="kafka_broker/kafka1.hostname.com@EXAMPLE.COM", then the primary is kafka_broker, so you must configure sasl.kerberos.service.name=kafka_broker.

  • Verify that the kinit, klist, and kdestroy commands work as expected to initialize, list, and delete Kerberos credentials using the keytab. For example:

    # To display the current contents of the cache
    klist
    # To acquire new credentials
    kinit -k -t ./filename.keytab kafka_broker/kafka1.hostname.com@EXAMPLE.COM
    # To verify that the credentials were initialized and updated
    klist
    # To remove any newly-initialized credentials (if necessary)
    kdestroy
    
  • Verify that the Kerberos environment is set up correctly between clients and servers. Refer to Exercise 4: Using the Oracle Java SASL API for a simple Kerberos-based SaslTestClient and SaslServer application that uses the Java SASL API to verify communications between client and server hosts.

LDAP

To test and troubleshoot your LDAP configuration when configuring Confluent Platform to authenticate to LDAP using Kerberos, refer to Testing and Troubleshooting LDAP Client Authentication.