Configure Network Encryption with Confluent for Kubernetes¶
This document describes how to configure network encryption with Confluent for Kubernetes (CFK). For security concepts in Confluent Platform, see Security.
To secure network communications of Confluent components, CFK supports Transport Layer Security (TLS), an industry-standard encryption protocol.
TLS relies on keys and certificates to establish trusted connections. This section describes how to manage keys and certificates when you configure TLS encryption for Confluent Platform.
CFK supports the following mechanisms to enable TLS encryption:
- Auto-generated certificates
CFK auto-generates the server certificates, using the certificate authority (CA) that you provide.
If all access and communication to Confluent services is within the Kubernetes network, auto-generated certificates are recommended.
- User-provided certificates
You provide the private key, public key, and CA.
If you need to enable access to Confluent services from an external-to-Kubernetes domain, user-provided certificates are recommended.
- Separate certificates for internal and external communications
You provide separate TLS certificates for the internal and external communications so that you do not mix external and internal domains in the certificate SAN.
This feature is supported for ksqlDB, Schema Registry, MDS, and Kafka REST services, starting with the CFK 2.6.0 and Confluent Platform 7.4.0 releases.
- Dynamic Kafka certificate updates
When you rotate certificates by providing new server certificates, CFK automatically updates the configurations to use the new certificates. By default, this update triggers a rolling restart of the affected Confluent Platform pods.
To minimize disruption during rolling restarts of Kafka brokers, you can enable dynamic certificate loading for Kafka and Kafka REST service. CFK will update TLS private keys and certificates without rolling the Kafka cluster.
This feature is only supported at the individual listener level.
Use auto-generated TLS certificates¶
You can use auto-generated certificates to encrypt internal traffic within a Kubernetes network. You bring the CA you want Confluent for Kubernetes (CFK) to use, and CFK automates provisioning, securing, and operating certificates for internal networking communications.
Starting in CFK 2.4.0, you have an option to customize auto-generated certificates in the CFK Helm values. The certificate rotation and renewal policy will be defined based on the following settings that you specify:
- The secret name or Vault directory path for the certificates
- The validity period of the certificates
- The renewal time for the certificates
- The certificate Subject Alternative Names (SANs)
For upgrading from the legacy, default auto-generated certificates to the configurable certificates, see Upgrade default auto-generated certificates to configurable auto-generated certificates.
Configure auto-generated certificates¶
Use a certificate manager to create a CA certificate and key with the file names tls.crt and tls.key, respectively.
Define the secret for the CA certificate pair:
- Create a Kubernetes secret with the keys tls.crt and tls.key. This CA key pair will be used to sign the certificates. To create a secret with the CA certificate and key for auto-generated certificates:
  kubectl create secret tls <secret name> \
    --cert=/path/to/ca.pem \
    --key=/path/to/ca-key.pem
  If you select this option, you must follow the next step and set managedCerts.enabled: true and managedCerts.caCertificate.secretRef=<secret name>.
- To use Vault for auto-generated secrets, set up Vault and specify the path to the CA pair certificates in the next step. The tls.crt and tls.key files must be present in the directory.
  If you select this option, you must follow the next step and set managedCerts.enabled: true and managedCerts.caCertificate.directoryPathInContainer=<path>.
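If you do not already have a CA pair, the following is a minimal sketch of generating a self-signed CA with openssl. The file names match the kubectl command above, and the subject name is a placeholder, not a CFK requirement:

# Generate a CA private key and a self-signed CA certificate (valid for 10 years).
# The subject is a placeholder; replace it with values for your organization.
openssl genrsa -out ca-key.pem 4096
openssl req -x509 -new -nodes \
  -key ca-key.pem \
  -days 3650 \
  -subj "/CN=confluent-cfk-ca" \
  -out ca.pem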
(Optional) If you want to customize certificate settings, configure the settings in the CFK Helm values (in values.yaml).
Edit the Helm values file, and then apply the changes with the helm upgrade command:
managedCerts:
  enabled:                     --- [1]
  caCertificate:               --- [2]
    secretRef:                 --- [3]
    directoryPathInContainer:  --- [4]
  certDurationInDays:          --- [5]
  renewBeforeInDays:           --- [6]
  sans:                        --- [7]
[1] Set to true to enable and configure CFK-managed certificates at the Helm level. When this is set to true, the settings under managedCerts: are used for auto-generated certificates. The clusters will roll after this setting is enabled.
[2] CA certificate pair for auto-generated certificates in this CFK deployment. Use secretRef or directoryPathInContainer to provide the certificates.
[3] Set to the name of the CA pair secret. The expected keys are tls.crt and tls.key for the CA certificate and the CA certificate key, respectively. When you use the Kubernetes secrets method to provide TLS certificates, CFK automates creating and configuring the keystore and truststore.
[4] Set to the path where the CA pair certificates are mounted. The tls.crt and tls.key files must be present in the Vault directory path. When you use the Vault directory path (directoryPathInContainer), CFK does not automate the creation of the keystore and truststore; you need to create them first. directoryPathInContainer overrides secretRef if both are defined.
[5] Set to the number of days for which the auto-generated certificates are valid. The default value is 60 days. After CFK is deployed, you can update this setting at the CR level with an annotation. See Manage auto-generated certificates.
[6] Set to the number of days before expiration at which the auto-generated certificates are renewed. The default value is 30 days. After CFK is deployed, you can update this setting at the CR level with an annotation. See Manage auto-generated certificates.
[7] SANs to add to all certificates auto-generated by this CFK deployment. Use this only for adding wildcard SANs. Modifying this setting rolls all Confluent clusters and regenerates the certificates for all Confluent Platform clusters managed by CFK.
For example:
managedCerts:
  enabled: true
  caCertificate:
    secretRef: my-capair-secret
  renewBeforeInDays: 20
  certDurationInDays: 50
  sans: "*.global"
(Optional) As an alternative to the previous step, you can pass the settings in the helm upgrade command. For example:
helm upgrade --install confluent-operator ./confluent-for-kubernetes \
  --set managedCerts.caCertificate.secretRef=my-capair-secret \
  --set managedCerts.certDurationInDays=50 \
  --set managedCerts.renewBeforeInDays=20 \
  -n confluent
Configure each component’s custom resource (CR) to use auto-generated certificates:
spec:
  tls:
    autoGeneratedCerts: true
CFK will create the required server certificates and store them as Kubernetes secrets for Confluent components to use:
kubectl get secrets
NAME TYPE
...
zookeeper-generated-jks kubernetes.io/tls
kafka-generated-jks kubernetes.io/tls
...
The generated server certificates expire in 365 days for the default auto-generated certificates. The customized auto-generated certificates expire as set in certDurationInDays in the Helm values.
For a tutorial scenario on using auto-generated certs, see the quickstart tutorial.
Provide custom TLS certificates¶
When you provide TLS certificates, CFK takes the provided files and configures Confluent components accordingly.
For each component, the following TLS certificate information should be provided:
- The CA for the component to trust, including the CAs used to issue server certificates for any Confluent component cluster. These are required so that peer-to-peer communication (e.g. between Kafka brokers) and communication between components (e.g. from Connect workers to Kafka) will work.
- The component's server certificate (public key)
- The component's server private key
You can provide the TLS certificate information in the following formats:
- Kubernetes secrets that contain the certificates
- A HashiCorp Vault directory path that contains the certificates
Define SAN¶
The server certificate Subject Alternative Name (SAN) list must be properly defined and cover all hostnames that the Confluent component will be accessed on:
- If TLS is enabled for internal network encryption, include the internal network name, <component>.<namespace>.svc.cluster.local, in the SAN list.
- If TLS is enabled for external network communication, include the external domain name in the SAN list.
The following are the internal and external SANs of each Confluent component that need to be included in the component certificate SAN. The examples use the default component prefixes.
- Kafka
  - Internal bootstrap access SAN: <customResourceName>.<namespace>.svc.cluster.local
    - Example: kafka.confluent.svc.cluster.local
  - Internal access SAN: kafka-<x>.<customResourceName>.<namespace>.svc.cluster.local
    <x> is the ordinal number of brokers, 0 to (number of brokers - 1).
    - Example: kafka-0.kafka.confluent.svc.cluster.local
    - The range can be handled through a wildcard domain, for example, *.kafka.confluent.svc.cluster.local.
  - External bootstrap domain SAN: <bootstrap_prefix>.my-external-domain
    - Example: kafka-bootstrap.acme.com
  - External broker SAN: <broker_prefix><x>.my-external-domain
    - Example: b0.acme.com
    - The range can be handled through a wildcard domain, for example, *.acme.com.
- MDS
  - Internal access SAN: kafka-<x>.<customResourceName>.<namespace>.svc.cluster.local
    <x> is the ordinal number of brokers, 0 to (number of brokers - 1).
    - Example: kafka-0.kafka.confluent.svc.cluster.local
  - External domain SAN: <mds_prefix>.my-external-domain
    - Example: mds.my-external-domain
- ZooKeeper
  - Internal bootstrap access SAN: <customResourceName>.<namespace>.svc.cluster.local
  - Internal access SAN: zookeeper-<x>.<customResourceName>.<namespace>.svc.cluster.local
    <x> is the ordinal number of ZooKeeper servers, 0 to (number of servers - 1).
    - Example: zookeeper-0.zookeeper.confluent.svc.cluster.local
  - No external access domain
- Schema Registry
  - Internal bootstrap access SAN: <customResourceName>.<namespace>.svc.cluster.local
  - Internal access SAN: schemaregistry-<x>.<customResourceName>.<namespace>.svc.cluster.local
    <x> is the ordinal number of Schema Registry servers, 0 to (number of servers - 1).
    - Example: schemaregistry-0.schemaregistry.confluent.svc.cluster.local
  - External domain SAN: <schemaregistry_prefix>.my-external-domain
- REST Proxy
  - Internal access SAN: kafkarestproxy-<x>.<customResourceName>.<namespace>.svc.cluster.local
    <x> is the ordinal number of REST Proxy servers, 0 to (number of servers - 1).
    - Example: kafkarestproxy-0.kafkarestproxy.confluent.svc.cluster.local
  - External domain SAN: <kafkarestproxy_prefix>.my-external-domain
- Connect
  - Internal bootstrap access SAN: <customResourceName>.<namespace>.svc.cluster.local
  - Internal access SAN: connect-<x>.<customResourceName>.<namespace>.svc.cluster.local
    <x> is the ordinal number of Connect servers, 0 to (number of servers - 1).
    - Example: connect-0.connect.confluent.svc.cluster.local
  - External domain SAN: <connect_prefix>.my-external-domain
- ksqlDB
  - Internal bootstrap access SAN: <customResourceName>.<namespace>.svc.cluster.local
  - Internal access SAN: ksqldb-<x>.<customResourceName>.<namespace>.svc.cluster.local
    <x> is the ordinal number of ksqlDB servers, 0 to (number of servers - 1).
    - Example: ksqldb-0.ksqldb.confluent.svc.cluster.local
  - External domain SAN: <ksqldb_prefix>.my-external-domain
- Control Center
  - Internal bootstrap access SAN: <customResourceName>.<namespace>.svc.cluster.local
  - Internal access SAN: controlcenter-0.<customResourceName>.<namespace>.svc.cluster.local
    - Example: controlcenter-0.controlcenter.confluent.svc.cluster.local
  - External domain SAN: <controlcenter_prefix>.my-external-domain
For an example of how to create certificates with appropriate SAN configurations, see the Create your own certificates tutorial.
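As an illustration only, the following sketch shows an openssl v3 extension configuration listing the SAN entries for a Kafka server certificate that is accessed both internally and externally. It assumes the default prefixes and the example domain acme.com used above:

# Hypothetical SAN extension section for an openssl signing request or x509 extension file.
[ san_ext ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = kafka.confluent.svc.cluster.local     # internal bootstrap
DNS.2 = *.kafka.confluent.svc.cluster.local   # internal brokers (wildcard)
DNS.3 = kafka-bootstrap.acme.com              # external bootstrap
DNS.4 = *.acme.com                            # external brokers (wildcard)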
Provide TLS keys and certificates in PEM format¶
Prepare the following files:
- ca.pem: This contains the list of CAs to trust, in PEM-encoded format. List the certificates by simply concatenating them, one below the other, for example:
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
- server.pem: This contains the full server certificate chain in PEM-encoded format.
- server-key.pem: This contains the PEM-encoded server certificate private key.
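Before creating the secret, you can optionally sanity-check the prepared files. The following is a sketch using the file names above:

# Check that the server certificate chain in server.pem is trusted by the CAs in ca.pem.
openssl verify -CAfile ca.pem server.pem

# Confirm that the server certificate lists the expected SAN entries.
openssl x509 -in server.pem -noout -text | grep -A 1 "Subject Alternative Name"

# Confirm that the private key parses as a valid PEM-encoded key.
openssl pkey -in server-key.pem -noout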
Create a Kubernetes secret with the following keys:
kubectl create secret generic kafka-tls \
  --from-file=fullchain.pem=server.pem \
  --from-file=cacerts.pem=ca.pem \
  --from-file=privkey.pem=server-key.pem
Alternatively, you can create a Kubernetes secret with the following keys:
kubectl create secret generic kafka-tls \
  --from-file=tls.crt=server.pem \
  --from-file=ca.crt=ca.pem \
  --from-file=tls.key=server-key.pem
The tls.crt, ca.crt, and tls.key keys are typically present in secrets created by cert-manager, a popular open source tool to manage certificates. For convenience, CFK supports this convention, but the expected contents of the files and how they are used within Confluent Platform are identical whether you use the *.pem keys or the *.crt and *.key keys.
Cluster Linking and Schema Exporter require their TLS secret key to be in the PKCS 8 key format. You can convert the key to PKCS 8 with the following command:
openssl pkcs8 -topk8 \
  -in <input key> \
  -out <output in the PKCS 8 format> \
  -nocrypt
Configure the component CR to use the secret:
spec:
  tls:
    secretRef: kafka-tls
Provide TLS keys and certificates in Java KeyStore format¶
For the Confluent Platform component custom resources (CRs) that create statefulset pods, you can provide Java KeyStore formatted TLS keys and certificates using the CR API.
Java KeyStore format keys and certificates are not supported directly through the application resource CRDs, such as ClusterLink CRD and SchemaExporter CRD.
To provide TLS keys and certificates for Confluent component CRs:
Prepare the following files:
- keystore.jks: PKCS12 format keystore, containing the component server key.
- truststore.jks: PKCS12 format truststore, containing the certificates to trust.
- jksPassword.txt: Password for the JKS.
  Create the jksPassword.txt file with jksPassword=<password_for_jks>:
  echo -n "jksPassword=<password_for_jks>" > jksPassword.txt
Create a Kubernetes secret:
kubectl create secret generic kafka-tls \
  --from-file=keystore.jks=keystore.jks \
  --from-file=truststore.jks=truststore.jks \
  --from-file=jksPassword.txt=jksPassword.txt
Configure the secret in the component CR:
spec:
  tls:
    secretRef: kafka-tls
Use separate TLS certificates for internal and external communications¶
When TLS is enabled for a Confluent component, the global certificate is used for both internal and external traffic by default, as shown below:
kind: <component>
spec:
tls:
secretRef: <certificate secret>
For ksqlDB, Schema Registry, MDS, and Kafka REST services, you have an option to configure the components and services to use separate TLS certificates for internal and external listeners.
After you configure listeners as described below, review the spec.dependencies section in other Confluent Platform component CRs. If other Confluent Platform components have a dependency on the component you updated to use separate certificates, a matching TLS secret and an endpoint must be provided in order to establish the connection successfully.
Provide a TLS certificate for the internal listener¶
Configure ksqlDB, Schema Registry, MDS, or Kafka REST service to use separate TLS certificates for its internal listener.
For ksqlDB or Schema Registry, specify the certificate in the internal listener section in its CR:
kind: <component>
spec:
  listeners:
    internal:
      tls:
        enabled: true
        secretRef: <internal TLS cert>
For MDS or Kafka REST service, specify the certificate in the internal listener section in Kafka CR:
kind: Kafka
spec:
  services:
    mds or kafkaRest:
      listeners:
        internal:
          tls:
            enabled: true
            secretRef: <internal TLS cert>
You can configure the external listener to use the global TLS certificate or its own external listener certificate.
The global certificates are specified in:
- spec.tls.secretRef for ksqlDB and Schema Registry
- spec.services.<mds or kafkaRest>.tls.secretRef for MDS and Kafka REST services
Provide a TLS certificate for the external listener¶
Configure ksqlDB, Schema Registry, MDS, or Kafka REST service to use separate TLS certificates for its external listener.
For ksqlDB and Schema Registry, specify the certificate in the external listener section in its CR:
kind: <component>
spec:
  listeners:
    external:
      tls:
        enabled: true
        secretRef: <external TLS cert>
For MDS or Kafka REST service, specify the certificate in the external listener section in Kafka CR:
kind: Kafka
spec:
  services:
    mds or kafkaRest:
      listeners:
        external:
          tls:
            enabled: true
            secretRef: <external TLS cert>
You can configure the internal listener to use the global TLS certificate or its own internal listener certificate.
The global certificates are specified in:
- spec.tls.secretRef for ksqlDB and Schema Registry
- spec.services.<mds or kafkaRest>.tls.secretRef for MDS and Kafka REST services
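For example, the following sketch of a ksqlDB or Schema Registry CR keeps the global certificate (used here by the external listener) and adds a dedicated certificate for the internal listener; the secret references are placeholders:

kind: <component>
spec:
  tls:
    secretRef: <global TLS cert>        # used by the external listener in this example
  listeners:
    internal:
      tls:
        enabled: true
        secretRef: <internal TLS cert>  # dedicated internal listener certificate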
Dynamic Kafka certificate update¶
You can enable dynamic TLS certificate rotation for Kafka listeners and the Kafka REST service so that the Kafka cluster does not roll when certificates change.
This feature is only supported:
- With certificates configured with a Kubernetes secret (secretRef)
- With PEM certificates
- At the individual Kafka listener level
Global, auto-generated, or other component certificates cannot be used for dynamic certificate loading.
The following are example snippets of the supported scenarios:
spec:
  listeners:
    internal:
      authentication:
        type: mtls
      tls:
        enabled: true
        secretRef: tls-certs
services:
  kafkaRest:
    authentication:
      type: mtls
    tls:
      enabled: true
      secretRef: tls-certs-rest
To enable dynamic certificate loading:
Create the _confluent-operator topic as described in Manage Topics. The following is an example KafkaTopic CR:
apiVersion: platform.confluent.io/v1beta1
kind: KafkaTopic
metadata:
  name: confluent-operator
  namespace: operator
spec:
  name: _confluent-operator
  kafkaRestClassRef:
    name: kafkarestclass
Apply the annotations to enable dynamic certificate update and to force reconcile:
kubectl annotate kafka kafka platform.confluent.io/enable-dynamic-configs="true"
kubectl annotate kafka kafka platform.confluent.io/force-reconcile="true"
The Kafka pods roll the first time the feature is enabled.
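With dynamic certificate loading enabled, rotating a listener certificate amounts to updating the referenced Kubernetes secret in place. The following is a sketch using the example secret name tls-certs and the PEM key names from the earlier section:

# Recreate the secret with the renewed PEM files; per the dynamic update behavior above,
# CFK reloads the listener certificates without rolling the Kafka cluster.
kubectl create secret generic tls-certs \
  --from-file=tls.crt=server.pem \
  --from-file=ca.crt=ca.pem \
  --from-file=tls.key=server-key.pem \
  --dry-run=client -o yaml | kubectl apply -f -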