Authentication and Encryption with Confluent Operator¶
This document describes how to configure authentication and network encryption with Confluent Operator. For security concepts in Confluent Platform, see Security.
The following are the authentication and encryption methods supported with Operator:
- Plaintext (no encryption) with SASL PLAIN authentication (default)
- TLS-encrypted network traffic with SASL PLAIN authentication
- TLS-encrypted traffic with no authentication
- TLS-encrypted traffic with mTLS authentication
Security in a Confluent Platform cluster is controlled by the security configured for the Apache Kafka® brokers. To configure Confluent Platform cluster security, configure security for the Kafka broker cluster, make sure it works, and then layer on additional security for the remaining Confluent Platform components. The following list provides the typical order for adding security to your cluster:
- Configure Kafka broker security and validate accessibility.
- Configure security for applicable Confluent Platform components and validate accessibility.
- Configure security for external clients and validate accessibility.
At a high level, the following settings in the configuration file ($VALUES_FILE) control the encryption and authentication settings for Kafka.
kafka:
tls:
enabled: # Specifies the encryption method
authentication:
type: # Specifies the authentication method
The following summarizes the Kafka encryption and authentication settings supported with Operator.

TLS encryption with no authentication

kafka:
  tls:
    enabled: true

TLS encryption and mTLS authentication

kafka:
  tls:
    enabled: true
    authentication:
      type: tls

TLS encryption with SASL PLAIN authentication

kafka:
  tls:
    enabled: true
    authentication:
      type: plain

No encryption with SASL PLAIN authentication

kafka:
  tls:
    enabled: false
    authentication:
      type: plain
The examples in this guide use the following assumptions:
- $VALUES_FILE refers to the configuration file you set up in Create the global configuration file.

  To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file ($VALUES_FILE). However, in your production deployments, use the --set or --set-file option when applying sensitive data with Helm. For example:

  helm upgrade --install kafka \
    --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds,dc=test,dc=com" \
    --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
    --set kafka.enabled=true

- operator is the namespace that Confluent Platform is deployed in.

- All commands are executed in the helm directory under the directory Confluent Operator was downloaded to.
Encryption¶
Operator supports the plaintext (no encryption) and TLS encryption methods for Confluent Platform with plaintext being the default.
Network encryption with TLS¶
Operator supports Transport Layer Security (TLS), an industry-standard encryption protocol, to protect network communications of Confluent Platform.
Certificates and keys¶
TLS relies on keys and certificates to establish trusted connections. This section describes how to manage keys and certificates in preparation to configure TLS encryption for Confluent Platform.
Generate self-signed certificates for testing¶
To validate your security configuration, follow the steps below to create a self-signed certificate for testing. For production deployments, you must generate your own certificates.
Create a root key:
openssl genrsa -out rootCA.key 2048
Create a root certificate:
openssl req -x509 -new -nodes \
  -key rootCA.key \
  -days 3650 \
  -out rootCA.pem \
  -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=TestCA"
Create a server private key:
openssl genrsa -out server.key 2048
Generate a CSR for a server certificate:
openssl req -new -key server.key \
  -out server.csr \
  -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=*.operator.svc.cluster.local"
Sign the certificate:
openssl x509 -req \
  -in server.csr \
  -extensions server_ext \
  -CA rootCA.pem \
  -CAkey rootCA.key \
  -CAcreateserial \
  -out server.crt \
  -days 365 \
  -extfile <(echo "[server_ext]"; echo "extendedKeyUsage=serverAuth,clientAuth"; echo "subjectAltName=DNS:*.operator.svc.cluster.local")
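The five steps above can also be run as one script, finishing with a check that the signed certificate verifies against the test CA. This is a sketch using the same subject names as the steps above; it writes the extension section to a temporary file rather than using process substitution, so it runs under plain sh:

```shell
set -e
# 1-2. Create a root key and a self-signed root certificate.
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 3650 -out rootCA.pem \
  -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=TestCA"

# 3-4. Create a server private key and a certificate signing request.
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr \
  -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=*.operator.svc.cluster.local"

# 5. Sign the CSR with the test CA, adding key usage and SAN extensions.
printf '[server_ext]\nextendedKeyUsage=serverAuth,clientAuth\nsubjectAltName=DNS:*.operator.svc.cluster.local\n' > server_ext.cnf
openssl x509 -req -in server.csr -extensions server_ext -extfile server_ext.cnf \
  -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out server.crt -days 365

# Sanity check: the server certificate must chain back to the root CA.
openssl verify -CAfile rootCA.pem server.crt
```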
The above steps create the following files, which map to the cacerts, fullchain, and privkey keys in the tls section of the config file ($VALUES_FILE):

- rootCA.pem contains cacerts.
- server.crt contains fullchain (the full chain).
- server.key contains privkey (the private key).
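Put together, the generated files slot into the config file along these lines. This is a sketch: the PEM bodies are elided here and must be pasted in full, or passed on the command line with --set-file as described later in this document:

```yaml
kafka:
  tls:
    enabled: true
    cacerts: |-
      ... contents of rootCA.pem
    fullchain: |-
      ... contents of server.crt
    privkey: |-
      ... contents of server.key
```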
SAN attributes for certificates¶
When creating certificates for use by the Kafka brokers, configure the Subject Alternative Name (SAN) attribute. SAN allows you to specify multiple hostnames for a single certificate.
There are two ways to configure SAN attributes for accessing the Kafka clusters from outside of Kubernetes.
Note
You access the Kafka clusters from outside of Kubernetes through the external load balancers. For retrieving the load balancer domain, see Configure external load balancers.
If you are permitted by your organization to use wildcard domains in your certificate SANs, use the following SAN attribute when generating a certificate for the Kafka brokers:
*.<kafka-load-balancer-domain>
If wildcards are not permitted, you must provide multiple SAN attributes. Use the following SAN attributes, which are based on the Kafka broker prefix, bootstrap prefix, and number of brokers:
<broker-prefix>0.<kafka-load-balancer-domain> <broker-prefix>1.<kafka-load-balancer-domain> ... <broker-prefix><N-1>.<kafka-load-balancer-domain> <bootstrap-prefix>.<kafka-load-balancer-domain>
For example, if your Kafka load balancer domain is confluent-platform.example.com, your broker prefix is b (the default if you don’t explicitly set the broker prefix when configuring the Kafka load balancer), your bootstrap prefix is kafka (the default), and you plan to deploy three brokers, use the following SANs when generating your certificate:

b0.confluent-platform.example.com
b1.confluent-platform.example.com
b2.confluent-platform.example.com
kafka.confluent-platform.example.com
Note that you do not get the elasticity of Kafka with this configuration: adding brokers beyond the number covered by the certificate SANs requires generating and installing a new certificate.
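For the non-wildcard case, the signing step from the self-signed example earlier can carry the per-broker SANs. The snippet below is a sketch using the example domain and prefixes above; the test CA, file names, and domain are illustrative, and in production you would submit the CSR to your organization's CA instead:

```shell
set -e
# Illustrative test CA (use your organization's CA in production).
openssl req -x509 -newkey rsa:2048 -nodes -keyout rootCA.key -out rootCA.pem \
  -days 365 -subj "/CN=TestCA"

# Broker key and CSR; the CN matches the bootstrap name.
openssl req -new -newkey rsa:2048 -nodes -keyout broker.key -out broker.csr \
  -subj "/CN=kafka.confluent-platform.example.com"

# Sign with one SAN per broker plus the bootstrap name.
printf 'subjectAltName=DNS:b0.confluent-platform.example.com,DNS:b1.confluent-platform.example.com,DNS:b2.confluent-platform.example.com,DNS:kafka.confluent-platform.example.com\n' > san.cnf
openssl x509 -req -in broker.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial \
  -out broker.crt -days 365 -extfile san.cnf

# List the SANs embedded in the certificate.
openssl x509 -in broker.crt -noout -text | grep -A1 "Subject Alternative Name"
```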
Manage credentials outside of the configuration file¶
For increased security, you can keep credentials, such as private keys, outside of your config file ($VALUES_FILE). Furthermore, certificate data can be large and cumbersome; if you already have this data in separate files, you may want to avoid copying and pasting it into the $VALUES_FILE and dealing with YAML syntax.

You can pass the contents of these files when issuing Helm commands rather than putting them directly in the config file ($VALUES_FILE). For example:
helm upgrade --install kafka ./confluent-operator \
--values $VALUES_FILE \
--namespace operator \
--set kafka.enabled=true \
--set kafka.tls.enabled=true \
--set kafka.metricReporter.tls.enabled=true \
--set-file kafka.tls.cacerts=/tmp/ca-bundle.pem \
--set-file kafka.tls.fullchain=/tmp/server-cert.pem \
--set-file kafka.tls.privkey=/tmp/server-key.pem
Kafka configuration for TLS encryption¶
Enable and configure TLS encryption in the kafka section in your config file ($VALUES_FILE) as shown in the snippet below:
kafka:
tls:
enabled: true ----- [1]
fullchain: |- ----- [2]
-----BEGIN CERTIFICATE-------
... omitted
-----END CERTIFICATE---------
privkey: |- ----- [3]
-----BEGIN RSA PRIVATE KEY---
... omitted
-----END RSA PRIVATE KEY-----
The following should be set:

- [1] Set enabled: true to enable TLS encryption.
- [2] Provide a full chain certificate. A fullchain consists of a root CA, any intermediate CAs, and finally the certificate for the broker, in that order.
- [3] Provide the private key of the certificate. The privkey contains the private key associated with the broker certificate.
When you enable TLS network encryption, the encryption is applied only to external clients. Traffic going to the Kafka brokers through the internal listener (port 9071) is not encrypted. This especially applies to other Confluent Platform components deployed by Operator.
Component configuration for TLS encryption¶
This section describes how to set up the rest of the Confluent Platform components so that they (a) successfully communicate with Kafka configured with TLS encryption, and (b) serve TLS-encrypted traffic themselves.
These are general guidelines and do not cover all security dependencies. Review the component values.yaml files in the <operator-home>/helm/confluent-operator/charts folder for additional information about component dependencies.

Configure the components as below in the configuration file ($VALUES_FILE):
<component>:
tls:
enabled: true ----- [1]
cacerts: |- ----- [2]
-----BEGIN CERTIFICATE------
... omitted
-----END CERTIFICATE--------
fullchain: |- ----- [3]
-----BEGIN CERTIFICATE------
... omitted
-----END CERTIFICATE--------
privkey: |- ----- [4]
-----BEGIN RSA PRIVATE KEY--
... omitted
-----END RSA PRIVATE KEY----
- [1] External communication encryption mode. To serve TLS-encrypted traffic on its external listener, set enabled: true.
- [2] One or more concatenated Certificate Authorities (CAs) for the component to trust. If the component is expected to communicate with Kafka over TLS, cacerts should at least contain the intermediate or root CA used to issue the certificate or full chain that the Kafka cluster presents.
- [3] Provide the full certificate chain that the component will use to serve TLS-encrypted traffic. The fullchain consists of a root CA, any intermediate CAs, and finally the certificate for the component, in that order.
- [4] Provide the private key associated with the component certificate.
JMX and Jolokia configuration for TLS encryption¶
Confluent Operator supports TLS encryption for the JMX and Jolokia endpoints of every Confluent Platform component.
TLS is set up at port 7203 for JMX and at port 7777 for Jolokia.
To enable TLS encryption for JMX and Jolokia endpoints, set the following for each desired component in the configuration file ($VALUES_FILE):
<component>:
tls:
jmxTLS: true ----- [1]
fullchain: ----- [2]
privkey: ----- [3]
- [1] Set jmxTLS: true to enable TLS encryption for JMX and Jolokia endpoints.
- [2] Provide the full TLS certificate chain.
- [3] Provide the private key for the certificate.
The TLS certificates used to secure the JMX and Jolokia endpoints are the same ones used in general by the component. There are no separate fullchain and privkey settings just for the JMX and Jolokia endpoints.
Authentication¶
Operator supports the SASL PLAIN and mTLS authentication methods for Confluent Platform with SASL PLAIN being the default.
Authentication with SASL PLAIN¶
When SASL PLAIN authentication is configured, both external clients and internal Confluent Platform components provide a username and password to authenticate with the Kafka brokers.
Kafka configuration for SASL PLAIN authentication¶
SASL PLAIN is the default authentication method when you use Operator to manage Confluent Platform, so you do not need to explicitly enable it. If you prefer to explicitly enable SASL PLAIN to clearly document the configuration, set it in the kafka section in the configuration file ($VALUES_FILE) as shown below:
kafka:
tls:
authentication:
type: plain
Add global Confluent Platform user¶
Add the inter-broker user for Kafka. Other Confluent Platform components also use this user to authenticate to Kafka.
Specify the user name and password in the sasl section in your config file ($VALUES_FILE) as follows:
global:
sasl:
plain:
username: <username>
password: <password>
You can provide the above sensitive data using a Helm command line flag rather than directly in the config file ($VALUES_FILE). For example, to provide a password from the command line:
helm upgrade --install kafka ./confluent-operator \
--values $VALUES_FILE \
--namespace operator \
--set kafka.enabled=true \
--set global.sasl.plain.password=<my-password>
Add custom SASL users¶
To add SASL users, list them in the kafka section of the configuration file ($VALUES_FILE) as below:
kafka:
sasl:
plain:
- <user1>=<password1>
- <user2>=<password2>
- <user3>=<password3>
This setting is dynamic, so you can add users without restarting the running Kafka cluster.
Component configuration for SASL PLAIN authentication¶
To set up the rest of the Confluent Platform components to authenticate with Kafka using SASL PLAIN, set the Kafka authentication type to plain (type: plain) in the configuration file ($VALUES_FILE):
<component>:
dependencies:
kafka:
tls:
authentication:
type: plain
Authentication with mTLS¶
Kafka configuration for mTLS authentication¶
Enable and configure mutual TLS (mTLS) authentication in the kafka section in your config file ($VALUES_FILE). To enable mTLS authentication, you must also enable TLS encryption. The following snippet enables mTLS authentication:
kafka:
tls:
enabled: true ----- [1]
authentication:
type: tls ----- [2]
cacerts: |- ----- [3]
-----BEGIN CERTIFICATE-----
... omitted
-----END CERTIFICATE-------
fullchain: |- ----- [4]
-----BEGIN CERTIFICATE-----
... omitted
-----END CERTIFICATE-------
privkey: |- ----- [5]
-----BEGIN RSA PRIVATE KEY-
... omitted
-----END RSA PRIVATE KEY---
- [1] Set enabled: true to enable TLS encryption.
- [2] Set type: tls to enable mTLS authentication.
- [3] Provide CA certificates consisting of any CAs you wish the Kafka brokers to use to authenticate certificates presented by clients.
- [4] Provide a full chain certificate that consists of a root CA, any intermediate CAs, and finally the certificate for the Kafka broker, in that order.
- [5] Provide the private key associated with the Kafka broker certificate.
Component configuration for mTLS authentication¶
To set up the rest of the Confluent Platform components to use mTLS to authenticate to Kafka, configure the components as below in the configuration file ($VALUES_FILE).

These are general guidelines and do not cover all security dependencies. Review the component values.yaml files in the <operator-home>/helm/confluent-operator/charts folder for additional information about component dependencies.
<component>:
dependencies:
kafka:
tls:
enabled: true ----- [1]
authentication:
type: tls ----- [2]
bootstrapEndpoint: ----- [3]
tls:
cacerts: |- ----- [4]
-----BEGIN CERTIFICATE-------
... omitted
-----END CERTIFICATE---------
fullchain: |- ----- [5]
-----BEGIN CERTIFICATE-------
... omitted
-----END CERTIFICATE---------
privkey: |- ----- [6]
-----BEGIN RSA PRIVATE KEY---
... omitted
-----END RSA PRIVATE KEY-----
- [1] External communication encryption mode. enabled: true is required for mTLS authentication.
- [2] Authentication method. Set type: tls to use mTLS authentication.
- [3] Set the destination endpoint for the Kafka cluster as below.

  For the component to communicate with Operator-deployed Kafka over Kafka’s internal listener:

  - If the Kafka cluster is deployed to the same namespace as this component: <kafka-cluster-name>:9071
  - If the Kafka cluster is deployed to a different namespace than this component: <kafka-cluster-name>.<kafka-namespace>.svc.cluster.local:9071

  The <kafka-cluster-name> is the value set in name under the kafka component section in your config file ($VALUES_FILE).

  If you want to encrypt the internal communication (internal: true) and TLS is enabled, set bootstrapEndpoint to port 9092 instead of 9071. Currently, internal TLS communication is achieved by routing through the external listener port (9092), which is encrypted with TLS. The internal port is not encrypted.

  For the component to communicate with Operator-deployed Kafka over Kafka’s external listener, use:

  <bootstrap-prefix>.<load-balancer-domain>:9092

  These values are set in loadBalancer:bootstrapPrefix and loadBalancer:domain under the kafka component in the config file ($VALUES_FILE). If you have not set bootstrapPrefix, use the default value of kafka.

  For the component to communicate with a Kafka cluster that was not deployed by this Operator, set this to that cluster’s bootstrap URL.

- [4] Provide CA certificate(s) for the component to trust the certificates presented by the Kafka brokers, consisting of the CA(s) used to issue the broker certificates.
- [5] Provide the full certificate chain that the component will use to authenticate. The fullchain consists of a root CA, any intermediate CAs, and finally the certificate for the component, in that order.
- [6] Provide the private key for the certificate in [5].
For example, if you configured the Kafka brokers with mTLS to encrypt internal and external communication, the configuration parameters in the Replicator security section would resemble the following:
replicator:
name: replicator
tls:
cacerts: |-
-----BEGIN CERTIFICATE-----
... omitted
-----END CERTIFICATE-----
fullchain: |-
-----BEGIN CERTIFICATE-----
... omitted
-----END CERTIFICATE-----
privkey: |-
-----BEGIN RSA PRIVATE KEY-----
... omitted
-----END RSA PRIVATE KEY-----
dependencies:
kafka:
tls:
enabled: true
internal: true
authentication:
type: "tls"
bootstrapEndpoint: kafka.operator.svc.cluster.local:9092
JMX and Jolokia configuration for mTLS authentication¶
Confluent Operator supports enabling mTLS authentication for the JMX and Jolokia endpoints of every Confluent Platform component.
Note: Currently, Operator does not support SASL PLAIN authentication for the JMX and Jolokia endpoints.
mTLS is set up at port 7203 for JMX and at port 7777 for Jolokia.
To enable mTLS authentication for JMX and Jolokia, set the following for each desired component in the config file ($VALUES_FILE):
<component>:
tls:
jmxTLS: true ----- [1]
jmxAuthentication:
type: tls ----- [2]
cacerts: ----- [3]
fullchain: ----- [4]
privkey: ----- [5]
- [1] Set jmxTLS: true to enable TLS encryption for JMX and Jolokia endpoints.
- [2] Set type: tls to enable mTLS authentication for JMX and Jolokia endpoints.
- [3] Provide CA certificates to trust clients authenticating via mTLS.
- [4] Provide the full TLS certificate chain.
- [5] Provide the private key for the certificate.
CA certificates, the full chain, and the private key values are used to secure both inter-component and external communication for the JMX and Jolokia endpoints.
Client configuration for encryption and authentication¶
Security configurations of clients depend on the Kafka security settings. Kafka security configuration information can be retrieved from the Kafka cluster using the following commands (using the example cluster name kafka running in the namespace operator).
For internal clients (running inside the Kubernetes cluster):
kubectl get kafka kafka -ojsonpath='{.status.internalClient}' -n operator
For external clients (running outside the Kubernetes cluster):
kubectl get kafka kafka -ojsonpath='{.status.externalClient}' -n operator
To configure clients, refer to the following:
- If the Kafka cluster is using TLS encryption, configure the client as described in Encryption with SSL.
- If the Kafka cluster is using mTLS encryption and authentication, configure the client as described in Encryption and Authentication with SSL.
- If the Kafka cluster is using TLS encryption with SASL PLAIN authentication, configure the client as described in Encryption and Authentication with SASL.
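As an illustration, a Java client connecting to a cluster configured with TLS encryption and SASL PLAIN authentication would use properties along these lines. The endpoint, credentials, and truststore path below are placeholders; see Encryption and Authentication with SASL for the full set of options:

```properties
bootstrap.servers=kafka.confluent-platform.example.com:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<username>" password="<password>";
# Truststore containing the CA that signed the broker certificates
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore-password>
```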
Confluent Control Center encryption and authentication¶
Control Center supports the following encryption and authentication methods:
- Encryption: A REST endpoint over HTTPS on port 9021
- Authentication: Basic or LDAP
Based on the authentication method you select, configure the associated settings for the method in the config file ($VALUES_FILE):
controlcenter:
auth:
basic:
enabled: false
##
## map with key as user and value as password and role
property: {}
# property:
# admin: Developer1,Administrators
# disallowed: no_access
ldap:
enabled: false
nameOfGroupSearch: c3users
property: {}
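For example, to turn on basic authentication with the users from the commented template above (the usernames, passwords, and roles shown are illustrative):

```yaml
controlcenter:
  auth:
    basic:
      enabled: true
      # key is the user; value is password and role
      property:
        admin: Developer1,Administrators
        disallowed: no_access
```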