Configure Authentication and Encryption with Confluent Operator¶
This document describes how to configure authentication and network encryption with Confluent Operator. For security concepts in Confluent Platform, see Security.
The following are the authentication and encryption methods supported with Operator:
- Plaintext (no encryption) with SASL PLAIN authentication (default)
- TLS-encrypted network traffic with SASL PLAIN authentication
- TLS-encrypted traffic with no authentication
- TLS-encrypted traffic with mTLS authentication
See also
Deployment templates are provided for the two recommended secure production setups. You can take these templates, add your environment and certificate information, and use them for deployment.
Security in a Confluent Platform cluster is controlled by the security configured for the Apache Kafka® brokers. To configure Confluent Platform cluster security, configure security for the Kafka broker cluster, make sure it works, and then layer on additional security for the remaining Confluent Platform components. The following list provides the typical order for adding security to your cluster:
- Configure Kafka broker security and validate accessibility.
- Configure security for applicable Confluent Platform components and validate accessibility.
- Configure security for external clients and validate accessibility.
At a high level, the following settings in the configuration file ($VALUES_FILE) control the encryption and authentication settings for Kafka.
kafka:
tls:
enabled: # Specifies the encryption method
authentication:
type: # Specifies the authentication method
The following summarizes the Kafka encryption and authentication settings supported with Operator.
TLS encryption with no authentication:

kafka:
  tls:
    enabled: true

TLS encryption and mTLS authentication:

kafka:
  tls:
    enabled: true
    authentication:
      type: tls

TLS encryption with SASL PLAIN authentication:

kafka:
  tls:
    enabled: true
    authentication:
      type: plain

No encryption with SASL PLAIN authentication:

kafka:
  tls:
    enabled: false
    authentication:
      type: plain
Important
To add or update the certificates for Kafka, you need to restart the Kafka cluster by either restarting the Kafka pods or rolling the Kafka cluster.
The examples in this guide use the following assumptions:

- $VALUES_FILE refers to the configuration file you set up in Create the global configuration file.
- To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file ($VALUES_FILE). However, in your production deployments, use the --set or --set-file option when applying sensitive data with Helm. For example:

  helm upgrade --install kafka \
    --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds\,dc=test\,dc=com" \
    --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
    --set kafka.enabled=true

- operator is the namespace that Confluent Platform is deployed in.
- All commands are executed in the helm directory under the directory Confluent Operator was downloaded to.
Encryption¶
Operator supports the plaintext (no encryption) and TLS encryption methods for Confluent Platform, with plaintext being the default.
Network encryption with TLS¶
Operator supports Transport Layer Security (TLS), an industry-standard encryption protocol, to protect network communications of Confluent Platform.
Certificates and keys¶
TLS relies on keys and certificates to establish trusted connections. This section describes how to manage keys and certificates in preparation to configure TLS encryption for Confluent Platform.
Generate self-signed certificates for testing¶
To validate your security configuration, follow the steps below to create a self-signed certificate for testing. For production deployments, you need to provide your own certificates.
Create a root key:
openssl genrsa -out rootCA.key 2048
Create a root certificate:
openssl req -x509 -new -nodes \
  -key rootCA.key \
  -days 3650 \
  -out rootCA.pem \
  -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=TestCA"
Create a server private key:
openssl genrsa -out server.key 2048
Generate a CSR for a server certificate:
openssl req -new -key server.key \
  -out server.csr \
  -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=*.operator.svc.cluster.local"
Sign the certificate:
openssl x509 -req \
  -in server.csr \
  -extensions server_ext \
  -CA rootCA.pem \
  -CAkey rootCA.key \
  -CAcreateserial \
  -out server.crt \
  -days 365 \
  -extfile \
  <(echo "[server_ext]"; echo "extendedKeyUsage=serverAuth,clientAuth"; echo "subjectAltName=DNS:*.operator.svc.cluster.local")
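Before wiring the files into your config, it is worth confirming that the signed certificate actually chains back to the root CA and matches its private key. The sketch below repeats the generation commands from the steps above (same file names) so that it runs standalone, then performs the checks:

```shell
# Regenerate a throwaway CA and server certificate (same commands as above)
# so this sketch is self-contained.
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 3650 -out rootCA.pem \
  -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=TestCA"
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr \
  -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=*.operator.svc.cluster.local"
printf '%s\n' "[server_ext]" \
  "extendedKeyUsage=serverAuth,clientAuth" \
  "subjectAltName=DNS:*.operator.svc.cluster.local" > server_ext.cnf
openssl x509 -req -in server.csr -extensions server_ext -extfile server_ext.cnf \
  -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out server.crt -days 365

# The signed certificate should validate against the root CA ("server.crt: OK").
openssl verify -CAfile rootCA.pem server.crt

# The private key and the certificate should share the same RSA modulus.
[ "$(openssl x509 -in server.crt -noout -modulus)" = \
  "$(openssl rsa -in server.key -noout -modulus)" ] && echo "key matches certificate"
```

If either check fails, fix the certificate before deploying; Kafka brokers will not start cleanly with a mismatched key and certificate.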
The above steps create the following files, which map to the cacerts, fullchain, and privkey keys in the tls section of the config file ($VALUES_FILE):

- rootCA.pem contains cacerts.
- server.crt contains fullchain (the full chain).
- server.key contains privkey (the private key).
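Plugged into the config file, the generated files fill the tls keys as in the following sketch (file contents elided; paste each file's PEM content as the block scalar value):

```yaml
kafka:
  tls:
    enabled: true
    cacerts: |-
      # contents of rootCA.pem
    fullchain: |-
      # contents of server.crt
    privkey: |-
      # contents of server.key
```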
SAN attributes for certificates¶
When creating certificates for use by the Kafka brokers, configure the Subject Alternative Name (SAN) attribute. SAN allows you to specify multiple hostnames for a single certificate.
There are two ways to configure SAN attributes for accessing the Kafka clusters from outside of Kubernetes.
If you are permitted by your organization to use wildcard domains in your certificate SANs, use the following SAN attributes when generating certificates:
SAN attribute for the external bootstrap and broker addresses:
*.<kafka-domain>
SAN attribute for the internal addresses:
*.<component-name>.<namespace>.svc.cluster.local
The <component-name> is the value set in name: under the component section in your config file ($VALUES_FILE).
If wildcards are not permitted, you must provide multiple SAN attributes. Use the following SAN attributes, which are based on the Kafka broker prefix, the bootstrap prefix, and the number of brokers:
SAN attributes for the external bootstrap and broker addresses:
<bootstrap-prefix>.<kafka-domain>
<broker-prefix>0.<kafka-domain>
<broker-prefix>1.<kafka-domain>
...
<broker-prefix><N-1>.<kafka-domain>
For example, if you have the following configuration:

- Your Kafka domain is confluent-platform.example.com.
- Your broker prefix is b (this is the default if you don't explicitly set the broker prefix when configuring Kafka).
- Your bootstrap prefix is kafka (this is the default).
- You are deploying three Kafka brokers in the operator namespace.

Use the following SANs when generating your external address certificates:

b0.confluent-platform.example.com
b1.confluent-platform.example.com
b2.confluent-platform.example.com
kafka.confluent-platform.example.com
SAN attributes for the internal Kafka bootstrap address and the specific internal addresses for each broker:
<kafka-component-name>.<kafka-namespace>.svc.cluster.local
<kafka-component-name>-0.<kafka-component-name>.<kafka-namespace>.svc.cluster.local
<kafka-component-name>-1.<kafka-component-name>.<kafka-namespace>.svc.cluster.local
...
<kafka-component-name>-<N-1>.<kafka-component-name>.<kafka-namespace>.svc.cluster.local
The <kafka-component-name> is the value set in name: under the kafka component section in your config file ($VALUES_FILE).

For example, if you have the following configuration:

- Your Kafka component name is kafka (this is the default).
- You are deploying three Kafka brokers in the operator namespace.

Use the following SANs when generating your internal address certificates:

kafka.operator.svc.cluster.local
kafka-0.kafka.operator.svc.cluster.local
kafka-1.kafka.operator.svc.cluster.local
kafka-2.kafka.operator.svc.cluster.local
SAN attributes for the internal addresses of other components:
<component-name>.<namespace>.svc.cluster.local
The <component-name> is the value set in name: under the component section in your config file ($VALUES_FILE).
Note that when you scale up your Kafka cluster, your certificate must include DNS SANs for the additional brokers. To avoid regenerating certificates before adding brokers, consider including more DNS SANs than your initial broker count. For example, if you plan to start with three brokers but may scale up to six, include DNS SANs for six (or more) brokers in your initial certificate so that you can scale up without creating a new certificate.
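When wildcards are not permitted, the explicit SAN list is easiest to manage in an extension file. The sketch below signs a certificate carrying the example external and internal names from this section; a throwaway CA, key, and CSR are generated so the example runs standalone:

```shell
# Throwaway CA and server key/CSR for illustration only.
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -days 3650 -out rootCA.pem -subj "/CN=TestCA"
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=kafka.confluent-platform.example.com"

# One DNS entry per bootstrap, broker, and internal address (example values).
cat > san.cnf <<'EOF'
[server_ext]
extendedKeyUsage = serverAuth,clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = kafka.confluent-platform.example.com
DNS.2 = b0.confluent-platform.example.com
DNS.3 = b1.confluent-platform.example.com
DNS.4 = b2.confluent-platform.example.com
DNS.5 = kafka.operator.svc.cluster.local
DNS.6 = kafka-0.kafka.operator.svc.cluster.local
DNS.7 = kafka-1.kafka.operator.svc.cluster.local
DNS.8 = kafka-2.kafka.operator.svc.cluster.local
EOF

openssl x509 -req -in server.csr -extensions server_ext -extfile san.cnf \
  -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out server.crt -days 365

# Inspect the SAN list embedded in the signed certificate.
openssl x509 -in server.crt -noout -text | grep -A 1 "Subject Alternative Name"
```

Adding the spare broker entries (DNS.9, DNS.10, and so on) to the alt_names section now is how you pre-provision SANs for future scale-up.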
If you enable TLS encryption for the JMX and Jolokia endpoints, see JMX and Jolokia configuration for TLS encryption for the additional SAN requirement for JMX.
Manage credentials outside of the configuration file¶
For increased security, you can keep credentials, such as private keys, outside of your config file ($VALUES_FILE). Furthermore, certificate data can be large and cumbersome; if you already have this data in separate files, you may want to avoid copying and pasting it into the $VALUES_FILE and dealing with YAML syntax.

You can pass the contents of these files when issuing Helm commands rather than putting them directly in the config file ($VALUES_FILE). For example:
helm upgrade --install kafka ./confluent-operator \
--values $VALUES_FILE \
--namespace operator \
--set kafka.enabled=true \
--set kafka.tls.enabled=true \
--set kafka.metricReporter.tls.enabled=true \
--set-file kafka.tls.cacerts=/tmp/ca-bundle.pem \
--set-file kafka.tls.fullchain=/tmp/server-cert.pem \
--set-file kafka.tls.privkey=/tmp/server-key.pem
Kafka configuration for TLS encryption¶
Enable and configure TLS encryption in the kafka section in your config file ($VALUES_FILE) as shown in the snippet below:
kafka:
  tls:
    enabled: true          ----- [1]
    interbrokerTLS: true   ----- [2]
    internalTLS: true      ----- [3]
    fullchain: |-          ----- [4]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-            ----- [5]
      -----BEGIN RSA PRIVATE KEY-----
      ... omitted
      -----END RSA PRIVATE KEY-----
  configOverrides:
    server:
      - listener.name.internal.ssl.principal.mapping.rules=    ----- [6]
      - listener.name.replication.ssl.principal.mapping.rules= ----- [7]
The following should be set:

- [1] Set enabled: true to enable TLS encryption.
- [2] Set interbrokerTLS: true to enable TLS encryption for inter-broker communication and communication between Kafka and Replicator. [1] must be set to true for this setting to take effect.
  Setting interbrokerTLS: true is supported only for fresh installations. Upgrading an existing deployment to enable inter-broker TLS is not supported.
  If [1] and [2] are set to true, the internal address must be included in the Kafka certificate.
  For interbrokerTLS: true, add the inter-broker listener principal mapping rules to the Kafka server configuration using configOverrides in [7].
- [3] Set internalTLS: true to enable TLS encryption for inter-component communication. [1] must be set to true for this setting to take effect.
  For internalTLS: true, add the internal listener principal mapping rules to the Kafka server configuration using configOverrides in [6].
- [4] Provide a full chain certificate. A fullchain consists of a root CA, any intermediate CAs, and finally the certificate for the broker, in that order.
  The external, internal, and inter-broker listeners use the same certificate.
  The certificate you provide must have DNS SANs for the external bootstrap and broker addresses, the internal Kafka bootstrap address, and the specific internal addresses for each broker. For information on configuring SANs in a certificate, see SAN attributes for certificates.
- [5] Provide the private key of the certificate. The privkey contains the private key associated with the broker certificate.
- [6] Add the following internal listener principal mapping rule when internalTLS: true:

  kafka:
    configOverrides:
      server:
        - "listener.name.internal.ssl.principal.mapping.rules=RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L, DEFAULT"

- [7] Add the following inter-broker (replication) listener principal mapping rule when interbrokerTLS: true:

  kafka:
    configOverrides:
      server:
        - "listener.name.replication.ssl.principal.mapping.rules=RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L, DEFAULT"
Component configuration for TLS encryption¶
This section describes how to set up the rest of the Confluent Platform components so that they (a) successfully communicate with Kafka configured with TLS encryption, and (b) serve TLS-encrypted traffic themselves.
These are general guidelines and do not provide all security dependencies. Review the component values.yaml files in the <operator-home>/helm/confluent-operator/charts folder for additional information about component dependencies.

Configure the components as below in the configuration file ($VALUES_FILE):
<component>:
  tls:
    enabled: true       ----- [1]
    internalTLS: true   ----- [2]
    cacerts: |-         ----- [3]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    fullchain: |-       ----- [4]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-         ----- [5]
      -----BEGIN RSA PRIVATE KEY-----
      ... omitted
      -----END RSA PRIVATE KEY-----
- [1] External communication encryption mode. To serve TLS-encrypted traffic on its external listener, set enabled: true.
- [2] Internal communication encryption mode. To serve TLS-encrypted traffic on its internal listener, set internalTLS: true. [1] must be set to true for this setting to take effect.
- [3] Provide one or more concatenated Certificate Authorities (CAs) for the component to trust the certificates presented by the Kafka brokers.
- [4] Provide the full certificate chain that the component will use to serve TLS-encrypted traffic. The fullchain consists of a root CA, any intermediate CAs, and finally the certificate for the component, in that order.
  Some components, such as Replicator, Connect, and Schema Registry, can be configured with TLS for internal traffic. This means traffic between replicas, for example, Replicator to Replicator in the same cluster, can be encrypted with the same certificates as those used for their external listener.
- [5] Provide the private key associated with the component certificate.
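As a concrete sketch, a Schema Registry component serving TLS-encrypted traffic on both its external and internal listeners might be configured as follows. The component name schemaregistry is an assumed example, and the certificate contents are elided as in the snippets above:

```yaml
schemaregistry:
  tls:
    enabled: true
    internalTLS: true
    cacerts: |-
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    fullchain: |-
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-
      -----BEGIN RSA PRIVATE KEY-----
      ... omitted
      -----END RSA PRIVATE KEY-----
```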
JMX and Jolokia configuration for TLS encryption¶
Confluent Operator supports TLS encryption for the JMX and Jolokia endpoints of every Confluent Platform component.
TLS is set up at port 7203 for JMX and at port 7777 for Jolokia.
To enable TLS encryption for JMX and Jolokia endpoints, set the following for each desired component in the configuration file ($VALUES_FILE):
<component>:
tls:
jmxTLS: true ----- [1]
fullchain: ----- [2]
privkey: ----- [3]
- [1] Set jmxTLS: true to enable TLS encryption for JMX and Jolokia endpoints.
  Setting jmxTLS: true is only supported in fresh installations and is not supported in upgrades.
  With jmxTLS: true, include the following in the SAN attribute of the Kafka certificate:

  *.<kafka-component-name>.<kafka-namespace>

  If a wildcard (*) is not allowed in your SAN, include the following:

  <kafka-component-name>.<kafka-namespace>
  <kafka-component-name>-0.<kafka-component-name>.<kafka-namespace>
  <kafka-component-name>-1.<kafka-component-name>.<kafka-namespace>
  ...
  <kafka-component-name>-<N-1>.<kafka-component-name>.<kafka-namespace>

- [2] Provide the full TLS certificate chain.
- [3] Provide the private key for the certificate.

The TLS certificates used to secure the JMX and Jolokia endpoints are the same certificates used in general by the component. There is no separate fullchain or privkey setting just for the JMX and Jolokia endpoints.
Authentication¶
Operator supports the SASL PLAIN and mTLS authentication methods for Confluent Platform, with SASL PLAIN being the default.
Authentication with SASL PLAIN¶
When SASL PLAIN authentication is configured, both external clients and internal Confluent Platform components provide a username and password to authenticate with the Kafka brokers.
Kafka configuration for SASL PLAIN authentication¶
SASL PLAIN is the default authentication method when you use Operator to manage Confluent Platform, so you do not need to explicitly enable it. If you prefer to explicitly enable SASL PLAIN to clearly document the configuration, set it in the kafka section in the configuration file ($VALUES_FILE) as shown below:
kafka:
tls:
authentication:
type: plain
Add global Confluent Platform user¶
Add the inter-broker user for Kafka. Other Confluent Platform components also use this user to authenticate to Kafka.
Specify the user name and password in the sasl section in your config file ($VALUES_FILE) as follows:
global:
sasl:
plain:
username: <username>
password: <password>
You can provide the above sensitive data using a Helm command-line flag rather than directly in the config file ($VALUES_FILE). For example, to provide a password from the command line:
helm upgrade --install kafka ./confluent-operator \
--values $VALUES_FILE \
--namespace operator \
--set kafka.enabled=true \
--set global.sasl.plain.password=<my-password>
Add custom SASL users¶
To add SASL users, add them in the kafka section of the configuration file ($VALUES_FILE) as below:
kafka:
sasl:
plain:
- <user1>=<password1>
- <user2>=<password2>
- <user3>=<password3>
This setting is dynamic; you can add users without restarting the running Kafka cluster.
Component configuration for SASL PLAIN authentication¶
To set up the rest of the Confluent Platform components to authenticate with Kafka using SASL PLAIN, set the Kafka authentication type to plain (type: plain) in the configuration file ($VALUES_FILE):
<component>:
dependencies:
kafka:
tls:
authentication:
type: plain
Authentication with mTLS¶
Kafka configuration for mTLS authentication¶
Enable and configure mutual TLS (mTLS) authentication in the kafka section in your config file ($VALUES_FILE). To enable mTLS authentication, you must also enable TLS encryption. The following snippet enables mTLS authentication:
kafka:
  tls:
    enabled: true        ----- [1]
    authentication:
      type: tls          ----- [2]
    cacerts: |-          ----- [3]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    fullchain: |-        ----- [4]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-          ----- [5]
      -----BEGIN RSA PRIVATE KEY-----
      ... omitted
      -----END RSA PRIVATE KEY-----
- [1] Set enabled: true to enable TLS encryption.
- [2] Set type: tls to enable mTLS authentication.
- [3] Provide CA certs that consist of any CAs you want the Kafka brokers to use to authenticate certificates presented by clients.
- [4] Provide a full chain certificate that consists of a root CA, any intermediate CAs, and finally the certificate for the Kafka broker, in that order.
- [5] Provide the private key associated with the Kafka broker certificate.
Component configuration for mTLS authentication¶
To set up the rest of the Confluent Platform components to use mTLS to authenticate to Kafka, configure the components as below in the configuration file ($VALUES_FILE).

These are general guidelines and do not provide all security dependencies. Review the component values.yaml files in the <operator-home>/helm/confluent-operator/charts folder for additional information about component dependencies.
<component>:
  dependencies:
    kafka:
      tls:
        enabled: true        ----- [1]
        authentication:
          type: tls          ----- [2]
      bootstrapEndpoint:     ----- [3]
  tls:
    cacerts: |-              ----- [4]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    fullchain: |-            ----- [5]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-              ----- [6]
      -----BEGIN RSA PRIVATE KEY-----
      ... omitted
      -----END RSA PRIVATE KEY-----
- [1] External communication encryption mode. enabled: true is required for mTLS authentication.
- [2] Authentication method. Set type: tls to use mTLS authentication.
- [3] Set the destination endpoint for the Kafka cluster as below.

  For the component to communicate with Operator-deployed Kafka over Kafka's internal listener:

  - If the Kafka cluster is deployed in the same namespace as this component: <kafka-component-name>:9071
  - If the Kafka cluster is deployed in a different namespace from this component: <kafka-component-name>.<kafka-namespace>.svc.cluster.local:9071

  The <kafka-component-name> is the value set in name: under the kafka component section in your config file ($VALUES_FILE).

  For the component to communicate with Operator-deployed Kafka over Kafka's external listener:

  - Using a load balancer: <bootstrap-prefix>.<domain>:9092
    These values are set in loadBalancer:bootstrapPrefix and loadBalancer:domain under the kafka section. If you have not set bootstrapPrefix, use the default value of kafka.
  - Using NodePort: <host>:<port-offset>
    These values are set in nodePort:host and nodePort:portOffset under the kafka section.
  - Using static host-based routing: <bootstrap-prefix>.<domain>:9092
    These values are set in staticForHostBasedRouting:bootstrapPrefix and staticForHostBasedRouting:domain under the kafka section. If you have not set bootstrapPrefix, use the default value of kafka.
  - Using static port-based routing: <host>:<port-offset>
    These values are set in staticForPortBasedRouting:host and staticForPortBasedRouting:portOffset under the kafka section.

  For the component to communicate with a Kafka cluster that was not deployed by this Operator, set this to that cluster's bootstrap URL.

- [4] Provide CA certificate(s) for the component to trust the certificates presented by the Kafka brokers.
- [5] Provide the full certificate chain that the component will use to authenticate. The fullchain consists of a root CA, any intermediate CAs, and finally the certificate for the component, in that order.
- [6] Provide the private key for the certificate in [5].
For example, if you configured the Kafka brokers with mTLS to encrypt internal and external communication, the configuration parameters in the Replicator security section would resemble the following:
replicator:
name: replicator
tls:
cacerts: |-
-----BEGIN CERTIFICATE-----
... omitted
-----END CERTIFICATE-----
fullchain: |-
-----BEGIN CERTIFICATE-----
... omitted
-----END CERTIFICATE-----
privkey: |-
-----BEGIN RSA PRIVATE KEY-----
... omitted
-----END RSA PRIVATE KEY-----
dependencies:
kafka:
tls:
enabled: true
internal: true
authentication:
type: "tls"
bootstrapEndpoint: kafka.operator.svc.cluster.local:9092
JMX and Jolokia configuration for mTLS authentication¶
Confluent Operator supports enabling mTLS authentication for the JMX and Jolokia endpoints of every Confluent Platform component.
Note: Currently, Operator does not support SASL PLAIN authentication for the JMX and Jolokia endpoints.
mTLS is set up at port 7203 for JMX and at port 7777 for Jolokia.
To enable mTLS authentication for JMX and Jolokia, set the following for each desired component in the config file ($VALUES_FILE):
<component>:
tls:
jmxTLS: true ----- [1]
jmxAuthentication:
type: tls ----- [2]
cacerts: ----- [3]
fullchain: ----- [4]
privkey: ----- [5]
- [1] Set jmxTLS: true to enable TLS encryption for JMX and Jolokia endpoints.
  Setting jmxTLS: true is only supported in fresh installations and is not supported in upgrades.
- [2] Set type: tls to enable mTLS authentication for JMX and Jolokia endpoints.
- [3] Provide CA certificates to trust clients authenticating via mTLS.
- [4] Provide the full TLS certificate chain.
- [5] Provide the private key for the certificate.
CA certificates, the full chain, and the private key values are used to secure both inter-component and external communication for the JMX and Jolokia endpoints.
Client configuration for encryption and authentication¶
Security configurations of clients depend on the Kafka security settings. You can retrieve the Kafka security configuration from the Kafka cluster using the following commands (using the example name kafka running in the namespace operator):
For internal clients (running inside the Kubernetes cluster):
kubectl get kafka kafka -ojsonpath='{.status.internalClient}' -n operator
For external clients (running outside the Kubernetes cluster):
kubectl get kafka kafka -ojsonpath='{.status.externalClient}' -n operator
To configure clients, refer to the following:
- If the Kafka cluster is using TLS encryption, configure the client as described in Encryption with SSL.
- If the Kafka cluster is using mTLS encryption and authentication, configure the client as described in Encryption and Authentication with SSL.
- If the Kafka cluster is using TLS encryption with SASL PLAIN authentication, configure the client as described in Encryption and Authentication with SASL.
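As a sketch, a Java client connecting to a cluster that uses TLS encryption with SASL PLAIN authentication might use a properties file like the following. The bootstrap address, credentials, and truststore path are placeholders rather than values from this document; confirm the actual settings with the status commands above:

```properties
# Assumed values - substitute your bootstrap endpoint, credentials, and truststore.
bootstrap.servers=kafka.confluent-platform.example.com:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<username>" password="<password>";
# Truststore containing the CA that signed the Kafka broker certificates.
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore-password>
```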
Confluent Control Center encryption and authentication¶
Control Center supports the following encryption and authentication methods:

- Encryption: A REST endpoint over HTTPS on port 9021
- Authentication: Basic or LDAP
Warning
- When deployed with Operator, Confluent Control Center cannot connect to Schema Registry using basic authentication.
- Confluent Control Center does not support connecting to Connect and ksqlDB using basic authentication.
Based on the authentication method you select, configure the associated settings for the method in the config file ($VALUES_FILE):
controlcenter:
auth:
basic:
enabled: false
##
## map with key as user and value as password and role
property: {}
# property:
# admin: Developer1,Administrators
# disallowed: no_access
ldap:
enabled: false
nameOfGroupSearch: c3users
property: {}