
Using the Kafka LDAP Authorizer

Install

Important

To use the Confluent security plugins, you must have a current Confluent Enterprise subscription. Without the license key, you can use Confluent security plugins for a 30-day trial period.

The Confluent security plugins are an extension to Confluent Platform components. The security plugins are installed by default if you are using ZIP or TAR archives, but must be installed manually if you are using DEB or RPM packages.

The default location for the Kafka LDAP Authorizer is:

<path-to-confluent>/share/java/kafka/ldap-plugins-<version>.jar

ZIP and TAR Archives

If you installed Confluent Platform by using ZIP or TAR archives, the security plugins are installed by default and are located in <path-to-confluent>/share/java/ in the individual component directories.

Ubuntu and Debian

If you installed Confluent Platform in an Ubuntu or Debian environment, you must install the plugins separately with this command:

sudo apt-get update && sudo apt-get install confluent-security

RHEL and CentOS

If you installed Confluent Platform in a RHEL, CentOS, or Fedora-based environment, you must install the plugins separately with this command:

sudo yum install confluent-security

Activate the Plugin

After the installation is complete, activate the plugin by setting the authorizer class in the Kafka broker configuration file (e.g. /etc/kafka/server.properties). The URL for your LDAP server, along with the configuration options used to connect to your LDAP server and obtain the mapping of users to groups, should also be added to the broker configuration. Configuration options for the LDAP authorizer are prefixed with ldap.authorizer..
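As a minimal sketch, the single broker property that activates the plugin (the same class name appears in the full configuration later on this page) is:

```properties
# Activate the LDAP authorizer in the broker configuration
authorizer.class.name=io.confluent.kafka.security.ldap.authorizer.LdapAuthorizer
```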

Tutorial: Starting a single-node Kafka cluster with group-based authorization

Follow these instructions to start a single-node Kafka cluster with group-based authorization using groups obtained from your LDAP server. The instructions below use SASL_PLAINTEXT as the security protocol for the Kafka broker and Kafka clients, with SASL/SCRAM-SHA-256 as the SASL mechanism. Instructions to use SASL/GSSAPI to enable both authentication and group-based authorization using a Kerberos server (e.g. Active Directory or Apache Directory Server) are also provided.

Prerequisites

An LDAP server (e.g. Active Directory) must be set up before starting the Kafka cluster. The example below assumes that you have an LDAP server at the URL LDAPSERVER.EXAMPLE.COM:3268 that is accessible using DNS lookup from the host where the broker runs. The example expects a Kerberos-enabled LDAP server, and the LDAP authorizer configuration uses GSSAPI for authentication. These security settings and other configuration options must match your LDAP server configuration.

The example uses the following host, realm, and port; update these to point to your LDAP server.

  • LDAP server host: LDAPSERVER.EXAMPLE.COM
  • LDAP realm: EXAMPLE.COM
  • LDAP port: 3268 (global context port used in the example)

At least one group containing one or more users must exist. The example assumes that your LDAP server contains a group named Kafka Developers and a user named alice who is a member of the Kafka Developers group. Update the user principal and group to match the user and group from your LDAP server that you want to use for the tests.

The users in the example are:

  • kafka : for brokers (groups are not used in the example for authorization of brokers, but broker authorization could also be configured using groups if required)
  • alice : member of group Kafka Developers

If your LDAP server authenticates clients using Kerberos, the LDAP authorizer requires a keytab file; update the keytab path and principal in the authorizer JAAS configuration option ldap.authorizer.sasl.jaas.config.

Start Kafka Cluster with LDAP Authorizer

The instructions below assume that you are running scripts from the installation directory after installing Confluent Platform from an archive. The directories of the scripts and config files should be adjusted to match your installation.

Start ZooKeeper

Start single-node ZooKeeper using the default configuration.

bin/zookeeper-server-start etc/kafka/zookeeper.properties > /tmp/zookeeper.log 2>&1 &

Create users

Create users kafka for brokers and alice for clients.

bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=kafka-secret]' --entity-type users --entity-name kafka
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' --entity-type users --entity-name alice

Configure listeners for broker

Copy the following lines at the end of your broker configuration file (e.g. etc/kafka/server.properties).

listeners=SASL_PLAINTEXT://:9092
advertised.listeners=SASL_PLAINTEXT://127.0.0.1:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config= \
  org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="kafka" \
  password="kafka-secret";

Configure LDAP authorizer

Copy the following lines to the end of your broker configuration file (e.g. etc/kafka/server.properties) and update the configs to match the configuration of your LDAP server.

# Configure authorizer
authorizer.class.name=io.confluent.kafka.security.ldap.authorizer.LdapAuthorizer
# Set Kafka broker user as super user (alternatively, set ACLs before starting brokers)
super.users=User:kafka
# LDAP provider URL
ldap.authorizer.java.naming.provider.url=ldap://LDAPSERVER.EXAMPLE.COM:3268/DC=EXAMPLE,DC=COM
# Refresh interval for LDAP cache. If set to zero, persistent search is used.
ldap.authorizer.refresh.interval.ms=60000
# Security authentication protocol for LDAP context
ldap.authorizer.java.naming.security.authentication=GSSAPI
# Security principal for LDAP context
ldap.authorizer.java.naming.security.principal=ldap@EXAMPLE.COM
# JAAS configuration for the LDAP authorizer; update the keytab path and the principal for the user performing LDAP search
ldap.authorizer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  keyTab="/tmp/keytabs/ldap.keytab" \
  principal="ldap@EXAMPLE.COM" \
  debug="true" \
  storeKey="true" \
  useKeyTab="true";

# Search base for group-based search
ldap.authorizer.group.search.base=CN=Users
# Object class for groups
ldap.authorizer.group.object.class=group
# Name of the attribute from which group name used in ACLs is obtained
ldap.authorizer.group.name.attribute=sAMAccountName
# Regex pattern to obtain group name used in ACLs from the attribute `ldap.authorizer.group.name.attribute`
ldap.authorizer.group.name.attribute.pattern=
# Name of the attribute from which group members (user principals) are obtained
ldap.authorizer.group.member.attribute=member
# Regex pattern to obtain user principal from group member attribute
ldap.authorizer.group.member.attribute.pattern=CN=(.*),CN=Users,DC=EXAMPLE,DC=COM
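To illustrate how the member attribute pattern above extracts a user principal from a group member DN, here is a small sketch. It uses sed to emulate the capture group (the authorizer itself applies a Java regex); the DN shown is an assumed example matching the configuration above.

```shell
# The group member attribute returns full DNs such as:
dn="CN=alice,CN=Users,DC=EXAMPLE,DC=COM"

# The pattern CN=(.*),CN=Users,DC=EXAMPLE,DC=COM captures the user name;
# sed emulates that capture group here for illustration.
principal=$(echo "$dn" | sed -n 's/^CN=\(.*\),CN=Users,DC=EXAMPLE,DC=COM$/\1/p')

echo "$principal"   # alice
```

The extracted value (alice) is the user principal that the authorizer maps to the group Kafka Developers.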

Start Kafka broker

Authorize the broker user kafka for cluster operations. Note that the example uses a user-principal-based ACL for brokers, but brokers may also be configured to use group-based ACLs.

bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --cluster --operation=All --allow-principal=User:kafka

Start your broker with the Kerberos options that enable the LDAP authorizer to authenticate with your LDAP server.

export KAFKA_OPTS="-Djava.security.krb5.kdc=LDAPSERVER.EXAMPLE.COM -Djava.security.krb5.realm=EXAMPLE.COM"
bin/kafka-server-start etc/kafka/server.properties > /tmp/kafka.log 2>&1 &

Test LDAP group-based authorization

Create a topic

bin/kafka-topics --create --topic testtopic --partitions 10 --replication-factor 1 --zookeeper localhost:2181

Configure producer and consumer

Add the following lines to your producer and consumer configuration files (e.g. etc/kafka/producer.properties and etc/kafka/consumer.properties).

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="alice" password="alice-secret";

Run console producer without authorizing user

bin/kafka-console-producer --broker-list localhost:9092 --topic testtopic --producer.config etc/kafka/producer.properties

Type in some messages. You should see authorization failures.

Authorize group and rerun producer

bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --topic=testtopic --producer '--allow-principal=Group:Kafka Developers'
bin/kafka-console-producer --broker-list localhost:9092 --topic testtopic --producer.config etc/kafka/producer.properties

Type in some messages. Records are produced successfully using group-based authorization where the user->group mapping was obtained from your LDAP server.

Run console consumer without access to consumer group

bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic testtopic --from-beginning --consumer.config etc/kafka/consumer.properties

Consumption should fail authorization, since neither the user alice nor the group Kafka Developers to which alice belongs is authorized to consume using the consumer group test-consumer-group.

Authorize group and rerun consumer

bin/kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add --topic=testtopic --group test-consumer-group --allow-principal="Group:Kafka Developers"
bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic testtopic --from-beginning --consumer.config etc/kafka/consumer.properties

Consumption should now succeed, using group-based authorization where the user->group mapping was obtained from your LDAP server.

Authentication and group-based authorization using an LDAP server

A Kerberos-enabled LDAP server (e.g. Active Directory or Apache Directory Server) may be used for authentication as well as group-based authorization if users and groups are managed by this server. The instructions below use SASL/GSSAPI for authentication using AD or DS and obtain group membership of the users from the same server.

The example assumes you have the following three user principals and keytabs for these principals:

  • kafka/localhost@EXAMPLE.COM : Service principal for brokers
  • alice@EXAMPLE.COM : Client principal, member of group Kafka Developers
  • ldap@EXAMPLE.COM : Principal used by LDAP authorizer

Note that the user principals used for authorization are, by default, the short local names (e.g. kafka, alice), and these short principals are used to determine group membership. Brokers may be configured with a custom principal.builder.class or with sasl.kerberos.principal.to.local.rules to override this behaviour. The attributes used for mapping users to groups may also be customized to match your LDAP server.
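For example, a broker could map any principal in the EXAMPLE.COM realm to its short local name with a rule like the following. This is an illustrative sketch, not part of the tutorial configuration; adjust the realm and rules to your deployment.

```properties
# Strip the realm from principals such as alice@EXAMPLE.COM -> alice.
# Rules are applied in order; DEFAULT handles any remaining principals.
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@EXAMPLE\.COM)s/@.*//,DEFAULT
```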

If you have already started the broker using SASL/SCRAM-SHA-256 following the instructions above, stop the server first. The instructions below assume that you have already updated configuration for brokers, producers and consumers as described earlier.

Configure listeners to use GSSAPI by updating the following properties in your broker configuration file (e.g. etc/kafka/server.properties).

sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.kerberos.service.name=kafka
listener.name.sasl_plaintext.gssapi.sasl.jaas.config= \
  com.sun.security.auth.module.Krb5LoginModule required \
  keyTab="/tmp/keytabs/kafka.keytab" \
  principal="kafka/localhost@EXAMPLE.COM" \
  debug="true" \
  storeKey="true" \
  useKeyTab="true";

Add or update the following properties in your producer and consumer configuration files (e.g. etc/kafka/producer.properties and etc/kafka/consumer.properties).

sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  keyTab="/tmp/keytabs/alice.keytab" \
  principal="alice@EXAMPLE.COM" \
  debug="true" \
  storeKey="true" \
  useKeyTab="true";

Restart the broker, and run the producer and consumer as described earlier. Producers and consumers are now authenticated using your Kerberos server. Group information is also obtained from the same server using LDAP.