Configure Security for the Admin REST APIs for Confluent Server

Admin REST APIs Authentication

You can use HTTP Basic Authentication or mutual TLS (mTLS) authentication for communication between a client and the Admin REST APIs. You can use SASL or mTLS for communication between the Admin REST APIs and the brokers the APIs are running on.

Important

Without principal propagation, authentication terminates at the REST Proxy. This means that all requests to Kafka are made as the REST Proxy user. For more information, see Principal Propagation.

HTTP Basic Authentication

With HTTP Basic Authentication, you authenticate with the Admin REST APIs using a username and password pair, which are presented to the REST Proxy server in the Authorization HTTP header.
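Concretely, the header value is the Base64 encoding of username:password. As a quick sketch (using the example credentials configured in the steps that follow):

```shell
# Build the value of the Authorization header by hand.
# thisismyusername/thisismypass are the example credentials used in this section.
TOKEN=$(printf '%s:%s' 'thisismyusername' 'thisismypass' | base64)
echo "Authorization: Basic $TOKEN"
```

A client such as curl builds this header for you when invoked with -u thisismyusername:thisismypass.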

To enable HTTP Basic Authentication:

  1. Add the following configuration to your Apache Kafka® properties file (etc/kafka/server.properties):

    kafka.rest.authentication.method=BASIC
    kafka.rest.authentication.realm=KafkaRest
    kafka.rest.authentication.roles=thisismyrole
    
  2. Create a JAAS configuration file. For an example, see CONFLUENT_HOME/etc/kafka/server-jaas.properties:

    KafkaRest {
        org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
        debug="true"
        file="${CONFLUENT_HOME}/etc/kafka/password.properties";
    };
    

    Tip

    The section name KafkaRest must match the realm specified by kafka.rest.authentication.realm in your Kafka properties file.

  3. Create a password properties file (CONFLUENT_HOME/etc/kafka/password.properties). For example:

    thisismyusername: thisismypass,thisismyrole
    
  4. Start Confluent Server with HTTP Basic authentication:

    KAFKA_OPTS="-Djava.security.auth.login.config=${CONFLUENT_HOME}/etc/kafka/server-jaas.properties" \
    kafka-server-start ${CONFLUENT_HOME}/etc/kafka/server.properties
    
  5. Log in to your Confluent Server with the username thisismyusername and the password thisismypass. The password in your password.properties file can also be hashed. For more information, see this link.

Configuration Options

kafka.rest.authentication.method

Indicates the method the Admin REST APIs use to authenticate requests: either NONE or BASIC. To activate HTTP Basic Authentication, you must set it to BASIC.

  • Type: string
  • Default: “NONE”
  • Importance: high
kafka.rest.authentication.realm

If kafka.rest.authentication.method = BASIC, this configuration specifies which section of the system JAAS configuration file is used to authenticate HTTP Basic Authentication credentials.

  • Type: string
  • Default: “”
  • Importance: high
kafka.rest.authentication.roles

If kafka.rest.authentication.method = BASIC, this configuration specifies which user roles are allowed to authenticate with the Admin REST APIs through HTTP Basic Authentication. If set to *, any role is allowed to authenticate.

  • Type: string
  • Default: “*”
  • Importance: medium

Mutual TLS authentication

With mutual TLS (mTLS) authentication, you can authenticate with an HTTPS-enabled Admin REST APIs endpoint using a client-side X.509 certificate.

To enable mTLS, you must first enable HTTPS on the Admin REST APIs. For the configuration options you must set, see Confluent REST API Configuration Options for HTTPS.

After HTTPS is configured, you must configure the Admin REST APIs truststore so that it can verify incoming client X.509 certificates. For example, you can point the truststore at a keystore loaded with the root CA certificate used to sign the client certificates.

Finally, you can turn mTLS on by setting confluent.http.server.ssl.client.authentication to REQUIRED (the boolean version of this setting, confluent.http.server.ssl.client.auth, is deprecated).
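Taken together, a minimal server.properties sketch might look like the following. The listener address, paths, and passwords are placeholders for your own values, and the keystore/listener settings come from the HTTPS configuration referenced above:

```properties
# Sketch: HTTPS with required client certificates (mTLS) for the Admin REST APIs.
# Paths, ports, and passwords are placeholders.
confluent.http.server.listeners=https://0.0.0.0:8090
confluent.http.server.ssl.keystore.location=/var/private/ssl/server.keystore.jks
confluent.http.server.ssl.keystore.password=<keystore-password>
# Truststore used to verify incoming client certificates:
confluent.http.server.ssl.truststore.location=/var/private/ssl/server.truststore.jks
confluent.http.server.ssl.truststore.password=<truststore-password>
confluent.http.server.ssl.client.authentication=REQUIRED
```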

Configuration Options

confluent.http.server.ssl.client.auth

DEPRECATED: Used for HTTPS. Whether to require the HTTPS client to authenticate using the server’s trust store. Must be set to true to enable mTLS.

Use confluent.http.server.ssl.client.authentication instead.

  • Type: boolean
  • Default: false
  • Importance: medium
confluent.http.server.ssl.client.authentication

Used for HTTPS. Whether to require the HTTPS client to authenticate using the server’s trust store. Must be set to REQUIRED to enable mTLS.

This configuration overrides the deprecated confluent.http.server.ssl.client.auth.

Valid values are NONE, REQUESTED, and REQUIRED. NONE disables TLS client authentication, REQUESTED requests but does not require TLS client authentication, and REQUIRED requires HTTPS clients to authenticate using the server’s truststore.

  • Type: string
  • Default: NONE
  • Importance: medium
confluent.http.server.ssl.truststore.location

Location of the trust store.

  • Type: string
  • Default: “”
  • Importance: high
confluent.http.server.ssl.truststore.password

The password for the trust store file.

  • Type: password
  • Default: “”
  • Importance: high
confluent.http.server.ssl.truststore.type

The type of trust store file.

  • Type: string
  • Default: JKS
  • Importance: medium

Authentication between the Admin REST APIs and Kafka Brokers

The Admin REST APIs running in Confluent Server communicate with the Kafka broker internally using normal Kafka Java clients (by default over the inter-broker listener on the same broker). If that listener is secured, you must configure the corresponding security parameters so the Admin REST APIs Java clients can communicate with Kafka over it.

SASL Authentication

Kafka SASL configurations are described here.

Note that all of the SASL configurations for Admin REST APIs to broker communication are prefixed with client. (or, alternatively, admin.), as in kafka.rest.client.sasl.jaas.config.

To enable SASL authentication with the Kafka broker, set kafka.rest.client.security.protocol to either SASL_PLAINTEXT or SASL_SSL.

Then set kafka.rest.client.sasl.jaas.config with the credentials to be used by the Admin REST APIs to authenticate with Kafka. For example:

kafka.rest.client.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafkarest" password="kafkarest";
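Combined with the security protocol and mechanism settings, a sketch of the relevant server.properties fragment might look like this (the kafkarest/kafkarest credentials are the example values above):

```properties
# Sketch: SASL/PLAIN from the Admin REST APIs to the broker.
kafka.rest.client.security.protocol=SASL_SSL
kafka.rest.client.sasl.mechanism=PLAIN
kafka.rest.client.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="kafkarest" \
  password="kafkarest";
```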

Alternatively you can create a JAAS configuration file, for example CONFLUENT_HOME/etc/kafka/server-jaas.properties:

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafkarest"
  password="kafkarest";
};

The name of the section in the JAAS file must be KafkaClient. Then pass it as a JVM argument:

export KAFKA_OPTS="-Djava.security.auth.login.config=${CONFLUENT_HOME}/etc/kafka/server-jaas.properties"

For details about configuring Kerberos see JDK’s Kerberos Requirements.

Configuration Options

kafka.rest.client.security.protocol

Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

  • Type: string
  • Default: PLAINTEXT
  • Importance: high
kafka.rest.client.sasl.jaas.config

JAAS login context parameters for SASL connections, in the format used by JAAS configuration files. The JAAS configuration file format is described in Oracle’s documentation. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;

  • Type: string
  • Default: null
  • Importance: high
kafka.rest.client.sasl.kerberos.service.name

The Kerberos principal name that Kafka runs as. This can be defined either in Kafka’s JAAS configuration or in Kafka’s configuration.

  • Type: string
  • Default: null
  • Importance: medium
kafka.rest.client.sasl.mechanism

SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

  • Type: string
  • Default: GSSAPI
  • Importance: medium
kafka.rest.client.sasl.kerberos.kinit.cmd

Kerberos kinit command path.

  • Type: string
  • Default: /usr/bin/kinit
  • Importance: low
kafka.rest.client.sasl.kerberos.min.time.before.relogin

Login thread sleep time between refresh attempts.

  • Type: long
  • Default: 60000
  • Importance: low
kafka.rest.client.sasl.kerberos.ticket.renew.jitter

Percentage of random jitter added to the renewal time.

  • Type: double
  • Default: 0.05
  • Importance: low
kafka.rest.client.sasl.kerberos.ticket.renew.window.factor

Login thread will sleep until the specified window factor of time from last refresh to ticket’s expiry has been reached, at which time it will try to renew the ticket.

  • Type: double
  • Default: 0.8
  • Importance: low

Mutual TLS authentication

Kafka TLS/SSL configurations are described here.

Admin REST APIs to Kafka TLS/SSL configurations are described here.

To enable mTLS with the Kafka broker you must set kafka.rest.client.security.protocol to SSL or SASL_SSL.

If the Kafka broker is configured with ssl.client.auth=required, and you configure client certificates for the Admin REST APIs with the kafka.rest.client.ssl.keystore.* settings, the Admin REST APIs authenticate with the Kafka broker over TLS/SSL.
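Under those assumptions, a sketch of the client-side settings follows; the paths and passwords are placeholders for your own keystore and truststore:

```properties
# Sketch: mTLS from the Admin REST APIs Kafka clients to the broker.
kafka.rest.client.security.protocol=SSL
# Truststore used to verify the broker's certificate:
kafka.rest.client.ssl.truststore.location=/var/private/ssl/restproxy.truststore.jks
kafka.rest.client.ssl.truststore.password=<truststore-password>
# Keystore holding the client certificate presented to the broker:
kafka.rest.client.ssl.keystore.location=/var/private/ssl/restproxy.keystore.jks
kafka.rest.client.ssl.keystore.password=<keystore-password>
kafka.rest.client.ssl.key.password=<key-password>
```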

Principal Propagation

This is a commercial component of Confluent Platform.

Principal propagation takes the principal from the authentication mechanism configured for a client to authenticate with REST Proxy and propagates this principal when making requests to the Kafka broker. Without principal propagation, authentication terminates at the REST Proxy. This means that all requests to Kafka are made as the REST Proxy user.

From a security perspective, the propagation process is:

  1. The first JSON request (over HTTP) is authenticated using the configured mechanism.
  2. REST Proxy translates the principal used in the HTTP authentication into a principal that can be authenticated (SSL/SASL) against the Kafka broker.

For example, if you use TLS for both stages, then the client TLS certificates must also be available to the REST Proxy.

Credentials for all principals that will propagate must be present on the REST Proxy server. Note that this is both a technical challenge (for example, TLS principals must map to Kerberos principals) and a security challenge (everything required to impersonate a user is stored in REST Proxy outside the user’s control).

The following sections provide further details and configuration examples.

HTTP Basic Authentication to SASL Authentication

To enable HTTP Basic Authentication to SASL Authentication credentials propagation, you must set kafka.rest.authentication.method to BASIC, kafka.rest.confluent.rest.auth.propagate.method to JETTY_AUTH, and kafka.rest.client.security.protocol to either SASL_PLAINTEXT or SASL_SSL.
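A sketch of those settings together in server.properties:

```properties
# Sketch: propagate HTTP Basic principals to SASL.
kafka.rest.authentication.method=BASIC
kafka.rest.authentication.realm=KafkaRest
kafka.rest.confluent.rest.auth.propagate.method=JETTY_AUTH
kafka.rest.client.security.protocol=SASL_SSL
```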

The security plugin supports all of the SASL mechanisms supported by Kafka clients. Just like a regular Kafka client, the plugin expects a JAAS configuration file, configured through -Djava.security.auth.login.config. All of the principals must be explicitly specified in that file, under the KafkaClient section.

The plugin supports specifying principals using the following mechanisms: GSSAPI, PLAIN, SCRAM-SHA-256, and SCRAM-SHA-512. It ignores any configured sasl.mechanism and picks the mechanism automatically, based on the LoginModule specified for each principal.

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/restproxy-localhost.keytab"
  principal="CN=restproxy/localhost@EXAMPLE.COM";

  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka_client_2.keytab"
  principal="kafka-client-2@EXAMPLE.COM";

  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice-plain"
  password="alice-secret";

  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="alice-scram"
  password="alice-secret";

  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="alice-scram-256"
  password="alice-secret"
  mechanism="SCRAM-SHA-256";
};

Here is the mapping of sasl.mechanism for the configured login modules:

  Principal’s Login Module                                    SASL Mechanism
  com.sun.security.auth.module.Krb5LoginModule                GSSAPI
  org.apache.kafka.common.security.plain.PlainLoginModule     PLAIN
  org.apache.kafka.common.security.scram.ScramLoginModule     SCRAM-SHA-512 (for SCRAM-SHA-256, set mechanism="SCRAM-SHA-256" as an option in ScramLoginModule)

All of the mechanisms except SCRAM-SHA-256 are detected automatically by the plugin; SCRAM-SHA-256 must be specified explicitly as an option in the ScramLoginModule.

Configuration Options

kafka.rest.confluent.rest.auth.propagate.method

The mechanism used to authenticate the Admin REST APIs requests. When broker security is enabled, the principal from this authentication mechanism is propagated to Kafka broker requests. Either JETTY_AUTH or SSL.

  • Type: string
  • Default: “SSL”
  • Importance: low

mTLS to SASL Authentication

To enable mTLS to SASL Authentication, you must set confluent.http.server.ssl.client.authentication to REQUIRED (or the deprecated confluent.http.server.ssl.client.auth to true), kafka.rest.confluent.rest.auth.propagate.method to SSL, and kafka.rest.client.security.protocol to either SASL_PLAINTEXT or SASL_SSL.

The incoming X.500 principal from the client is used as the principal when interacting with the Kafka broker. You can use kafka.rest.confluent.rest.auth.ssl.principal.mapping.rules to map the DN from the client certificate to a name that can be used for principal propagation. For example, a rule like RULE:^CN=(.*?)$/$1/ strips the CN= portion off the DN.
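To illustrate what such a rule does, here is a sketch using sed, which applies the same pattern-and-replacement idea (this is an illustration only, not the plugin itself):

```shell
# Apply the pattern/replacement from RULE:^CN=(.*?)$/$1/ to a sample DN.
DN='CN=kafkarest'
echo "$DN" | sed -E 's/^CN=(.*)$/\1/'
# prints: kafkarest
```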

This requires a JAAS configuration file with a KafkaClient section containing all principals, along with their login modules and options, configured through -Djava.security.auth.login.config.

Configuration Options

kafka.rest.confluent.rest.auth.propagate.method

The mechanism used to authenticate the Admin REST APIs requests. When broker security is enabled, the principal from this authentication mechanism is propagated to Kafka broker requests.

  • Type: string
  • Default: “SSL”
  • Importance: low
kafka.rest.confluent.rest.auth.ssl.principal.mapping.rules

A list of rules for mapping the distinguished name (DN) from the client certificate to a short name. The rules are evaluated in order, and the first rule that matches a principal name is used to map it to a short name; any later rules in the list are ignored. By default, the DN of the X.500 certificate is the principal. Each rule starts with “RULE:” and contains a pattern/replacement expression. The default rule returns the string representation of the X.500 certificate DN. If the DN matches the pattern, the replacement command is run over the name. The rules also support lowercase/uppercase options, to force the translated result to be all lowercase or all uppercase; enable these by adding “/L” or “/U” to the end of the rule.

  • Type: list
  • Default: DEFAULT
  • Importance: low

TLS/SSL Authentication to TLS/SSL Authentication

To enable mTLS to mTLS, you must set confluent.http.server.ssl.client.authentication to REQUIRED (the boolean version of this setting, confluent.http.server.ssl.client.auth, is deprecated), and kafka.rest.confluent.rest.auth.propagate.method to SSL.

Set kafka.rest.kafka.rest.resource.extension.class to io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension. This property is described under configuration options for Role-Based Access Control (RBAC).

For TLS/SSL propagation to work, all of the certificates corresponding to the required principals must be loaded into a single client keystore file. The plugin then picks the appropriate certificate alias based on the logged-on principal when making requests to Kafka. Currently, the logged-on principal must exactly match the X.509 principal of the certificate.

For example, if two clients are integrated with the Admin REST APIs, the setup could be as simple as below:

  • Client A authenticates to the Admin REST APIs using its keystore which contains Certificate-A
  • Client B authenticates to the Admin REST APIs using its keystore which contains Certificate-B
  • The Admin REST APIs’ keystore (kafka.rest.client.ssl.keystore.location) is loaded with Certificate-A and Certificate-B. The plugin then chooses the appropriate certificate based on which client made the request.
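A sketch of the corresponding server.properties fragment for this setup (the keystore path and password are placeholders):

```properties
# Sketch: mTLS on the HTTP side, propagated as mTLS to the broker.
confluent.http.server.ssl.client.authentication=REQUIRED
kafka.rest.confluent.rest.auth.propagate.method=SSL
kafka.rest.kafka.rest.resource.extension.class=io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension
kafka.rest.client.security.protocol=SSL
# Single keystore holding Certificate-A and Certificate-B:
kafka.rest.client.ssl.keystore.location=/var/private/ssl/restproxy-clients.keystore.jks
kafka.rest.client.ssl.keystore.password=<keystore-password>
```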

Configuration Options

kafka.rest.confluent.rest.auth.propagate.method

The mechanism used to authenticate the Admin REST APIs requests. When broker security is enabled, the principal from this authentication mechanism is propagated to Kafka broker requests.

  • Type: string
  • Default: “SSL”
  • Importance: low

License Client Configuration

A Kafka client is used to check the license topic for compliance. Review the following information about how to configure this license client when using principal propagation.

Configure license client authentication

When using principal propagation, license client authentication is inherited from the inter-broker listeners.

Configure license client authorization

When using principal propagation and RBAC or ACLs, you must configure client authorization for the license topic.

Note

Starting with Confluent Platform 6.2.1, the _confluent-command internal topic is available as the preferred alternative to the _confluent-license topic for components such as Schema Registry, REST Proxy, and Confluent Server (which were previously using _confluent-license). Both topics will be supported going forward. Here are some guidelines:

  • New deployments (Confluent Platform 6.2.1 and later) will default to using _confluent-command as shown below.
  • Existing clusters will continue using the _confluent-license topic unless manually changed.
  • Newly created clusters on Confluent Platform 6.2.1 and later will default to creating the _confluent-command topic, and only existing clusters that already have a _confluent-license topic will continue to use it.
  • RBAC authorization

    Run this command to add ResourceOwner for the component user for the Confluent license topic resource (default name is _confluent-command).

    confluent iam rbac role-binding create \
    --role ResourceOwner \
    --principal User:<service-account-id> \
    --resource Topic:_confluent-command \
    --kafka-cluster <kafka-cluster-id>
    
  • ACL authorization

    Run this command to configure Kafka authorization, specifying the bootstrap server, client configuration, and service account ID. This grants create, read, and write on the _confluent-command topic.

    kafka-acls --bootstrap-server <broker-listener> --command-config <client conf> \
    --add --allow-principal User:<service-account-id>  --operation Create --operation Read --operation Write \
    --topic _confluent-command
    

Role-Based Access Control (RBAC)

This is a commercial component of Confluent Platform.

Prerequisites:

To enable token authentication, set the following in the kafka.properties file: kafka.rest.rest.servlet.initializor.classes to io.confluent.common.security.jetty.initializer.InstallBearerOrBasicSecurityHandler, and kafka.rest.kafka.rest.resource.extension.class to io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension.

kafka.rest.rest.servlet.initializor.classes=io.confluent.common.security.jetty.initializer.InstallBearerOrBasicSecurityHandler
kafka.rest.kafka.rest.resource.extension.class=io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension

When token authentication is enabled, the generated token is used to impersonate the API requests. The Admin REST APIs Kafka clients use the SASL_PLAINTEXT or SASL_SSL authentication mechanism to authenticate with Kafka brokers.
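Putting the RBAC-related settings together, a hedged sketch follows; the public key path, metadata URLs, and service credentials are placeholders:

```properties
# Sketch: token authentication + RBAC for the Admin REST APIs.
kafka.rest.rest.servlet.initializor.classes=io.confluent.common.security.jetty.initializer.InstallBearerOrBasicSecurityHandler
kafka.rest.kafka.rest.resource.extension.class=io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension
kafka.rest.client.security.protocol=SASL_SSL
# Public key used to verify tokens issued by the Metadata Service:
kafka.rest.public.key.path=/var/private/pki/tokenPublicKey.pem
kafka.rest.confluent.metadata.bootstrap.server.urls=http://localhost:8080,http://localhost:8081
kafka.rest.confluent.metadata.basic.auth.user.info=<service-user>:<service-password>
```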

Configuration Options

kafka.rest.rest.servlet.initializor.classes

List of custom initialization classes for the Admin REST APIs. To use RBAC, set it to io.confluent.common.security.jetty.initializer.InstallBearerOrBasicSecurityHandler.

  • Type: string
  • Default: “”
  • Importance: high
kafka.rest.kafka.rest.resource.extension.class

List of custom extension classes for the Admin REST APIs. To use RBAC, set it to io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension.

  • Type: string
  • Default: “”
  • Importance: high
kafka.rest.client.security.protocol

Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. To use RBAC, set it to either SASL_PLAINTEXT or SASL_SSL.

  • Type: string
  • Default: PLAINTEXT
  • Importance: high
kafka.rest.public.key.path

Location of the PEM encoded public key file to be used for verifying tokens.

  • Type: string
  • Default: “”
  • Importance: high
kafka.rest.confluent.metadata.bootstrap.server.urls

Comma-separated list of bootstrap metadata server URLs to which this REST Proxy connects. For example: http://localhost:8080,http://localhost:8081

  • Type: string
  • Default: “”
  • Importance: high
kafka.rest.confluent.metadata.basic.auth.user.info

Service user credentials information in the format: user:password.

  • Type: string
  • Default: “”
  • Importance: high