Configure Network Encryption for Confluent Platform Using Confluent for Kubernetes
This document describes how to configure network encryption with Confluent for Kubernetes (CFK). For security concepts in Confluent Platform, see Security.
To secure network communications of Confluent components, CFK supports Transport Layer Security (TLS), an industry-standard encryption protocol.
TLS relies on keys and certificates to establish trusted connections. This section describes how to manage keys and certificates when you configure TLS encryption for Confluent Platform.
CFK supports the following mechanisms to enable TLS encryption:
- Auto-generated certificates
CFK auto-generates the server certificates, using the certificate authority (CA) that you provide.
If all access and communication to Confluent services is within the Kubernetes network, auto-generated certificates are recommended.
- User-provided certificates
You provide the private key, public key, and CA.
If you need to enable access to Confluent services from an external-to-Kubernetes domain, user-provided certificates are recommended.
- Separate certificates for internal and external communications
You provide separate TLS certificates for the internal and external communications so that you do not mix external and internal domains in the certificate SAN.
This feature is supported for ksqlDB, Schema Registry, MDS, and Kafka REST services, starting in the CFK 2.6.0 and Confluent Platform 7.4.0 releases.
- Dynamic Kafka certificate updates
When you rotate certificates by providing new server certificates, CFK automatically updates the configurations to use those new certificates. By default, this update triggers a rolling restart of the affected Confluent Platform pods.
To minimize disruption during rolling restarts of Kafka brokers, you can enable dynamic certificate loading for Kafka and Kafka REST service. CFK will update TLS private keys and certificates without rolling the Kafka cluster.
This feature is only supported at the individual listener level.
Use auto-generated TLS certificates
You can use auto-generated certificates to encrypt internal traffic within a Kubernetes network. You bring the certificate authority (CA) you want Confluent for Kubernetes (CFK) to use, and CFK automates provisioning, securing, and operating certificates for internal networking communications.
The generated server certificates expire in 365 days.
For a tutorial scenario on using auto-generated certs, see the quickstart tutorial.
There are three ways to configure auto-generated certificates, listed below in order of preference:
Use annotations for CA certificates in component custom resource (CR).
Configure managed certificates at the CFK operator level using Helm values.
Use the legacy CA pair (ca-pair-sslcerts) if none of the above are configured.
Use the TLS secret named ca-pair-sslcerts for the auto-generated certificate feature.
Configure auto-generated certificates using annotations
Apply the following managed certificate annotations at the component CR level. These configurations override all other CA certificate configurations, such as the other CA certificates set at the CFK level.
platform.confluent.io/managed-cert-ca-pair-secret
platform.confluent.io/managed-cert-duration-in-days
platform.confluent.io/managed-cert-renew-before-in-days
platform.confluent.io/managed-cert-add-sans
For example:
kubectl annotate kafka kafka platform.confluent.io/managed-cert-ca-pair-secret="my-custom-ca-pair"
kubectl annotate kafka kafka platform.confluent.io/managed-cert-duration-in-days=60
kubectl annotate kafka kafka platform.confluent.io/managed-cert-renew-before-in-days=30
kubectl annotate kafka kafka platform.confluent.io/managed-cert-add-sans="kafka.operator,kafka.operator.local"
Configure auto-generated certificates using Helm values
Starting in CFK 2.4.0, you can customize auto-generated certificates in the CFK Helm values. The certificate rotation and renewal policy will be defined based on the following settings that you specify:
The secret name or Vault directory path for the certificates
The validity period of the certificates
The renewal time for the certificates
The certificate Subject Alternative Names (SANs)
For upgrading from the legacy, default auto-generated certificates to the configurable certificates, see Upgrade default auto-generated certificates to configurable auto-generated certificates.
To configure auto-generated certificates:
Use a certificate manager to create a CA certificate and key with the file names tls.crt and tls.key, respectively.
Define the secret for the CA certificate pairs.
Create a Kubernetes secret with the keys tls.crt and tls.key. This CA key pair will be used to sign the certificates.
To create a secret with a CA certificate and key for auto-generated certificates:
kubectl create secret tls <secret name> \
  --cert=/path/to/ca.pem \
  --key=/path/to/ca-key.pem
You can assign any name to the secret (<secret name>); there is no requirement for the name.
If you select this option, you must follow the next step and set managedCerts.enabled: true and managedCerts.caCertificate.secretRef=<secret name>.
To use Vault for auto-generated secrets, set up Vault and specify the path to the CA pair certificates in the next step. The tls.crt and tls.key files must be present in the directory.
If you select this option, you must follow the next step and set managedCerts.enabled: true and managedCerts.caCertificate.directoryPathInContainer=<path>.
If you want to customize certificate settings, configure the settings in the CFK Helm values (in values.yaml).
Edit the Helm values file, and then apply the changes with the helm upgrade command:

managedCerts:
  enabled:                    --- [1]
  caCertificate:              --- [2]
    secretRef:                --- [3]
    directoryPathInContainer: --- [4]
  certDurationInDays:         --- [5]
  renewBeforeInDays:          --- [6]
  sans:                       --- [7]
[1] Set to true to enable and configure CFK-managed certificates at the Helm level.
When this is set to true, the settings under managedCerts: will be used for auto-generated certificates. The clusters will roll after this setting is enabled.
[2] CA certificate pair for auto-generated certificates in this CFK deployment. Use secretRef or directoryPathInContainer to provide the certificates.
[3] Set to the CA pair secret reference name.
The expected keys are tls.crt and tls.key for the CA certificate and CA certificate key, respectively.
When you use the Kubernetes secrets method to provide TLS certificates, CFK automates creating and configuring the keystore and truststore.
[4] Set to the path where CA pair certificates are mounted. The tls.crt and tls.key files must be present in the Vault directory path.
When you use the Vault directory path (directoryPathInContainer), CFK does not automate the creation of the keystore and truststore. You need to create the keystore and truststore first. directoryPathInContainer overrides secretRef if both are defined.
[5] Set to the number of days for which the auto-generated certificates are valid. The default value is 60 days.
After CFK is deployed, you can update this setting at the CR level with an annotation. See Manage auto-generated certificates.
[6] Set to the renewal time for auto-generated certificates. The default value is 30 days.
After CFK is deployed, you can update this setting at the CR level with an annotation. See Manage auto-generated certificates.
[7] SANs to be added to all auto-generated certificates generated by this CFK. Use this only for adding wildcard SANs.
Modifying this setting will roll all Confluent clusters and will regenerate the certificates for all Confluent Platform clusters managed by CFK.
For example:
managedCerts:
  enabled: true
  caCertificate:
    secretRef: my-capair-secret
  renewBeforeInDays: 20
  certDurationInDays: 50
  sans: "*.global"
(Optional) As an alternative to the previous step, you can pass the settings in the helm upgrade command. For example:

helm upgrade --install confluent-operator ./confluent-for-kubernetes \
  --set managedCerts.caCertificate.secretRef=my-capair-secret \
  --set managedCerts.certDurationInDays=50 \
  --set managedCerts.renewBeforeInDays=20 \
  -n confluent
Configure each component’s custom resource (CR) to use auto-generated certificates:
spec:
  tls:
    autoGeneratedCerts: true
CFK will create the required server certificates and store them as Kubernetes secrets for Confluent components to use:
kubectl get secrets
NAME TYPE
...
zookeeper-generated-jks kubernetes.io/tls
kafka-generated-jks kubernetes.io/tls
...
The generated server certificates expire in 365 days for the default auto-generated certificates. The customized auto-generated certificates expire as set in certDurationInDays in the Helm values.
For a tutorial scenario on using auto-generated certs, see the quickstart tutorial.
Configure auto-generated certificates using the legacy CA pair secret (ca-pair-sslcerts)
If neither of the two preferred options above is configured, you can use the TLS secret ca-pair-sslcerts for auto-generated certificates.
Provide a root certificate authority as a Kubernetes Secret named ca-pair-sslcerts. Provide the certificate authority public and private key in the following format:

kubectl create secret tls ca-pair-sslcerts \
  --cert=/path/to/ca.pem \
  --key=/path/to/ca-key.pem
Configure each component to use auto-generated certificates.
spec:
  tls:
    autoGeneratedCerts: true
CFK will create the required server certificates and store them as Kubernetes secrets, for Confluent components to use. For example:
kubectl get secrets
NAME TYPE
...
zookeeper-generated-jks kubernetes.io/tls
kafka-generated-jks kubernetes.io/tls
...
Provide custom TLS certificates
When you provide TLS certificates, CFK takes the provided files and configures Confluent components accordingly.
For each component, the following TLS certificate information should be provided:
The CA for the component to trust, including the CAs used to issue server certificates for any Confluent component cluster
These are required so that peer-to-peer communication (e.g. between Kafka brokers) and communication between components (e.g. from Connect workers to Kafka) will work.
The component’s server certificate (public key)
The component’s server private key
CFK supports the following TLS certificate formats:
TLS Group 1: PEM-encoded files
This group uses Privacy-Enhanced Mail (PEM) files, specifically fullchain.pem, privkey.pem, and cacerts.pem.
In this configuration, CFK automatically generates the necessary keystore and truststore from these PEM files.
TLS Group 2: Java KeyStore files
This group involves the use of Java KeyStore (JKS) files, specifically keystore.jks, truststore.jks, and an optional password file, jksPassword.txt. CFK expects the .jks files in PKCS12 format.
This group is supported by providing a direct truststore and keystore, which allows more control over the naming of aliases for certificates.
TLS Group 3: PEM-encoded files
Similar to TLS Group 1, TLS Group 3 also relies on PEM files but expects specific file names: tls.crt, tls.key, and ca.crt.
CFK handles the conversion of these files into the required keystore and truststore structures, similar to TLS Group 1.
You can provide custom TLS certificates in the following formats.
When using Kubernetes secrets for the certificates:
Use PEM-encoded files: TLS Group 1 or TLS Group 3.
Use Java KeyStore files: TLS Group 2.
When using HashiCorp Vault directory path for the certificates:
Use Java KeyStore files: TLS Group 2.
Important
Use a unique TLS secret for each Confluent Platform component. With separate secrets, you have more control over restarts of the components when you update certificates. You can avoid potential startup failures due to a parallel restart.
Define SAN
The certificate must have a Subject Alternative Name (SAN) list, and the SAN list must cover all hostnames that the Confluent component will be accessed on:
If TLS for internal network encryption is enabled, include the internal network name, <component>.<namespace>.svc.cluster.local, in the SAN list.
If TLS for external network encryption is enabled, include the external domain name in the SAN list.
The following are the internal and external SANs of each Confluent component that need to be included in the component certificate SAN. The examples use the default component prefixes.
- Kafka
Internal bootstrap access SAN:
<customResourceName>.<namespace>.svc.cluster.local
Example:
kafka.confluent.svc.cluster.local
Internal access SAN:
<customResourceName>-<x>.<customResourceName>.<namespace>.svc.cluster.local
<x> is the ordinal number of the broker, 0 to (number of brokers - 1).
Example:
kafka-0.kafka.confluent.svc.cluster.local
The range can be handled through a wildcard domain, for example, *.kafka.confluent.svc.cluster.local.
External bootstrap domain SAN:
<bootstrap_prefix>.my-external-domain
Example:
kafka-bootstrap.acme.com
External broker SAN:
<broker_prefix><x>.my-external-domain
Example:
b0.acme.com
The range can be handled through a wildcard domain, for example, *.acme.com.
- MDS
Internal access SAN:
<customResourceName>-<x>.<customResourceName>.<namespace>.svc.cluster.local
<x> is the ordinal number of the broker, 0 to (number of brokers - 1).
Example:
kafka-0.kafka.confluent.svc.cluster.local
External domain SAN:
<mds_prefix>.my-external-domain
Example:
mds.my-external-domain
- KRaft
Internal bootstrap access SAN:
<customResourceName>.<namespace>.svc.cluster.local
Internal access SAN:
<customResourceName>-<x>.<customResourceName>.<namespace>.svc.cluster.local
<x> is the ordinal number of the KRaft controller, 0 to (number of servers - 1).
Example:
kraftcontroller-0.kraftcontroller.confluent.svc.cluster.local
External domain SAN:
<kraftcontroller_prefix>.my-external-domain
- ZooKeeper
Internal bootstrap access SAN:
<customResourceName>.<namespace>.svc.cluster.local
Internal access SAN:
<customResourceName>-<x>.<customResourceName>.<namespace>.svc.cluster.local
<x> is the ordinal number of the ZooKeeper server, 0 to (number of servers - 1).
Example:
zookeeper-0.zookeeper.confluent.svc.cluster.local
No external access domain
Important
Starting with Confluent Platform version 8.0, ZooKeeper is no longer part of Confluent Platform.
- Schema Registry
Internal bootstrap access SAN:
<customResourceName>.<namespace>.svc.cluster.local
Internal access SAN:
<customResourceName>-<x>.<customResourceName>.<namespace>.svc.cluster.local
<x> is the ordinal number of the Schema Registry server, 0 to (number of servers - 1).
Example:
schemaregistry-0.schemaregistry.confluent.svc.cluster.local
External domain SAN:
<schemaregistry_prefix>.my-external-domain
- REST Proxy
Internal access SAN:
<customResourceName>-<x>.<customResourceName>.<namespace>.svc.cluster.local
<x> is the ordinal number of the REST Proxy server, 0 to (number of servers - 1).
Example:
kafkarestproxy-0.kafkarestproxy.confluent.svc.cluster.local
External domain SAN:
<kafkarestproxy_prefix>.my-external-domain
- Connect
Internal bootstrap access SAN:
<customResourceName>.<namespace>.svc.cluster.local
Internal SAN:
<customResourceName>-<x>.<customResourceName>.<namespace>.svc.cluster.local
<x> is the ordinal number of the Connect server, 0 to (number of servers - 1).
Example:
connect-0.connect.confluent.svc.cluster.local
External domain SAN:
<connect_prefix>.my-external-domain
- ksqlDB
Internal bootstrap access SAN:
<customResourceName>.<namespace>.svc.cluster.local
Internal access SAN:
<customResourceName>-<x>.<customResourceName>.<namespace>.svc.cluster.local
<x> is the ordinal number of the ksqlDB server, 0 to (number of servers - 1).
Example:
ksqldb-0.ksqldb.confluent.svc.cluster.local
External domain SAN:
<ksqldb_prefix>.my-external-domain
- Control Center (Legacy)
Internal bootstrap access SAN:
<customResourceName>.<namespace>.svc.cluster.local
Internal access SAN:
<customResourceName>-0.<customResourceName>.<namespace>.svc.cluster.local
Example:
controlcenter-0.controlcenter.confluent.svc.cluster.local
External domain SAN:
<controlcenter_prefix>.my-external-domain
For an example of how to create certificates with appropriate SAN configurations, see the Create your own certificates tutorial.
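As a quick illustration of the SAN requirements above, the following hedged sketch uses openssl to issue a test certificate whose SAN list covers the internal Kafka names and an example external domain. The domain names, subjects, and file names are illustrative only, and the throwaway CA stands in for whatever CA you actually use:

```shell
# Throwaway CA (illustrative; not a production CA)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca-key.pem -out ca.pem -subj "/CN=demo-ca"

# Server key and certificate signing request for the Kafka brokers
openssl req -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr -subj "/CN=kafka"

# Sign the CSR, injecting internal (bootstrap + wildcard broker) and external SANs
printf "subjectAltName=DNS:kafka.confluent.svc.cluster.local,DNS:*.kafka.confluent.svc.cluster.local,DNS:kafka-bootstrap.acme.com,DNS:*.acme.com" > san.ext
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -days 365 -out server.pem -extfile san.ext

# Inspect the SAN list on the resulting certificate
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
```

The wildcard entry *.kafka.confluent.svc.cluster.local covers every broker ordinal, which avoids reissuing the certificate when the broker count changes.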
Provide TLS keys and certificates in PEM format
Prepare the following files:
ca.pem: This contains the list of CAs to trust, in PEM-encoded format. List the certificates by concatenating them, one below the other, for example:

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

server.pem: This contains the full server certificate chain in PEM-encoded format.
server-key.pem: This contains the PEM-encoded server certificate private key.
Create a Kubernetes secret with the following keys, referred to as TLS Group 1 in CFK:

kubectl create secret generic kafka-tls \
  --from-file=fullchain.pem=server.pem \
  --from-file=cacerts.pem=ca.pem \
  --from-file=privkey.pem=server-key.pem
Alternatively, you can create a Kubernetes secret with the following keys, referred to as TLS Group 3 in CFK:

kubectl create secret generic kafka-tls \
  --from-file=tls.crt=server.pem \
  --from-file=ca.crt=ca.pem \
  --from-file=tls.key=server-key.pem
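The tls.crt/tls.key/ca.crt key names match what cert-manager writes into the secrets it manages. As a hedged sketch, a cert-manager Certificate resource that would produce such a secret might look like the following; the issuer name and DNS names are illustrative, not taken from this document:

```
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kafka-tls
spec:
  secretName: kafka-tls
  dnsNames:
    - kafka.confluent.svc.cluster.local
    - "*.kafka.confluent.svc.cluster.local"
  issuerRef:
    name: my-ca-issuer
    kind: Issuer
```

The resulting secret can then be referenced directly from the component CR via spec.tls.secretRef.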
The tls.crt, ca.crt, and tls.key keys are typically present in secrets created by cert-manager, a popular open source tool to manage certificates. For convenience, CFK supports this convention, but the expected contents of the files and how they are used within Confluent Platform are identical whether using the *.pem keys or the *.crt and *.key keys.
Cluster Linking and Schema Exporter require their TLS secret key to be in the PKCS 8 key format. You can convert the key to PKCS 8 with the following command:

openssl pkcs8 -topk8 \
  -in <input key> \
  -out <output in the PKCS 8 format> -nocrypt
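For a self-contained demonstration of the conversion above, the following sketch generates a throwaway RSA key and converts it; the file names are illustrative:

```shell
# Generate a test RSA key, then convert it to PKCS 8 (unencrypted)
openssl genrsa -out test-key.pem 2048
openssl pkcs8 -topk8 -in test-key.pem -out test-key-pkcs8.pem -nocrypt

# PKCS 8 keys use the generic header, a quick way to confirm the format
head -1 test-key-pkcs8.pem   # prints "-----BEGIN PRIVATE KEY-----"
```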
Configure the component CR to use the secret:
spec:
  tls:
    secretRef: kafka-tls
Provide TLS keys and certificates in Java KeyStore format
For the Confluent Platform component custom resources (CRs) that create statefulset pods, you can provide Java KeyStore formatted TLS keys and certificates using the CR API.
Java KeyStore format keys and certificates are not supported directly through the application resource CRDs, such as ClusterLink CRD and SchemaExporter CRD.
To provide TLS keys and certificates for Confluent component CRs:
Prepare the following files:
keystore.jks: PKCS12-format keystore, containing the component server key.
truststore.jks: PKCS12-format truststore, containing the certificates to trust.
jksPassword.txt: Password that CFK uses for both the JKS keystore and truststore.
Create the jksPassword.txt file with jksPassword=<password_for_jks>:

echo -n "jksPassword=<password_for_jks>" > jksPassword.txt
Create a Kubernetes secret with the following keys, referred to as TLS Group 2 in CFK:

kubectl create secret generic kafka-tls \
  --from-file=keystore.jks=keystore.jks \
  --from-file=truststore.jks=truststore.jks \
  --from-file=jksPassword.txt=jksPassword.txt
Configure in the component CR:
spec:
  tls:
    secretRef: kafka-tls
You can find an example script that generates a JKS truststore and keystore in the CFK example GitHub repository. It is not recommended to use the example script in a production environment.
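As a hedged, self-contained sketch of producing the three files above with openssl alone (a throwaway self-signed certificate stands in for a real CA-issued chain, and the names and the changeit password are illustrative):

```shell
# Throwaway server key and self-signed certificate for the demo
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout server-key.pem -out server.pem -subj "/CN=kafka"
cp server.pem ca.pem   # stand-in for the real CA chain

# PKCS12 keystore with the server key and certificate
openssl pkcs12 -export -in server.pem -inkey server-key.pem \
  -name kafka -passout pass:changeit -out keystore.jks

# PKCS12 truststore containing only the CA certificate
openssl pkcs12 -export -nokeys -in ca.pem \
  -name caroot -passout pass:changeit -out truststore.jks

# Password file in the format CFK expects
echo -n "jksPassword=changeit" > jksPassword.txt
```

The .jks extension is kept because CFK expects these file names, even though the contents are PKCS12, which matches the TLS Group 2 requirement.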
Use separate TLS certificates for internal and external communications
When TLS is enabled for a Confluent component, by default, the global certificate is used for both internal and external traffic, as shown below:
kind: <component>
spec:
tls:
secretRef: <certificate secret>
For Kafka, ksqlDB, Schema Registry, MDS, and Kafka REST services, you have an option to configure the components and services to use separate TLS certificates for internal and external listeners.
After you configure listeners as described below, review the spec.dependencies section in other Confluent Platform component CRs. If other Confluent Platform components depend on the component you updated to use separate certificates, a matching TLS secret and endpoint must be provided to establish the connection successfully.
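For example, a hedged sketch of a Connect CR whose dependency entry matches a Kafka internal listener that uses its own certificate; the endpoint, port, and secret name are illustrative:

```
kind: Connect
spec:
  dependencies:
    kafka:
      bootstrapEndpoint: kafka.confluent.svc.cluster.local:9071
      tls:
        enabled: true
        secretRef: kafka-internal-tls
```

The secretRef here would need to carry a CA that can validate the certificate the Kafka internal listener presents.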
Provide a TLS certificate for the internal listener
Configure Kafka, ksqlDB, Schema Registry, MDS, or Kafka REST service to use separate TLS certificates for its internal listener.
For Kafka, ksqlDB, or Schema Registry, specify the certificate in the internal listener section of its CR:

kind: <component>
spec:
  listeners:
    internal:
      tls:
        enabled: true
        secretRef: <internal TLS cert>
For the MDS or Kafka REST service, specify the certificate in the internal listener section of the Kafka CR:

kind: Kafka
spec:
  services:
    <mds or kafkaRest>:
      listeners:
        internal:
          tls:
            enabled: true
            secretRef: <internal TLS cert>
You can configure the external listener to use the global TLS certificate or its own external listener certificate.
The global certificates are specified in:
spec.tls.secretRef for ksqlDB and Schema Registry
spec.services.<mds or kafkaRest>.tls.secretRef for MDS and Kafka REST services
Provide a TLS certificate for the external listener
Configure Kafka, ksqlDB, Schema Registry, MDS, or Kafka REST service to use separate TLS certificates for its external listener.
For Kafka, ksqlDB, or Schema Registry, specify the certificate in the external listener section of its CR:

kind: <component>
spec:
  listeners:
    external:
      tls:
        enabled: true
        secretRef: <external TLS cert>
For the MDS or Kafka REST service, specify the certificate in the external listener section of the Kafka CR:

kind: Kafka
spec:
  services:
    <mds or kafkaRest>:
      listeners:
        external:
          tls:
            enabled: true
            secretRef: <external TLS cert>
You can configure the internal listener to use the global TLS certificate or its own internal listener certificate.
The global certificates are specified in:
spec.tls.secretRef for ksqlDB and Schema Registry
spec.services.<mds or kafkaRest>.tls.secretRef for MDS and Kafka REST services
Enable dynamic Kafka certificate update
You can enable dynamic TLS certificate rotation for Kafka listeners and the Kafka REST service so that the Kafka cluster does not roll when certificates change.
This feature is only supported:
With certificates configured with a Kubernetes secret (secretRef)
With PEM certificates
At the individual Kafka listener level
Global, auto-generated, or other component certificates cannot be used for dynamic certificate loading.
The following are example snippets of the supported scenarios:
spec:
  listeners:
    internal:
      authentication:
        type: mtls
      tls:
        enabled: true
        secretRef: tls-certs
services:
  kafkaRest:
    authentication:
      type: mtls
    tls:
      enabled: true
      secretRef: tls-certs-rest
To enable dynamic certificate loading:
Configure the password encoder secret for Kafka as described in Manage Password Encoder Secrets.
This is a requirement to enable dynamic certificate rotation.
Create the _confluent-operator topic as described in Manage Topics.
The following is an example KafkaTopic CR:

apiVersion: platform.confluent.io/v1beta1
kind: KafkaTopic
metadata:
  name: confluent-operator
  namespace: operator
spec:
  name: _confluent-operator
  kafkaRestClassRef:
    name: kafkarestclass
Apply the annotations to enable dynamic certificate update and to force reconcile:
kubectl annotate kafka kafka platform.confluent.io/enable-dynamic-configs="true"
kubectl annotate kafka kafka platform.confluent.io/force-reconcile="true"
The Kafka pods roll the first time the feature is enabled.
Disable SNI validation
Starting in Confluent Platform 8.0, Server Name Indication (SNI) validation is enabled by default, and any client that does not pass SNI headers cannot communicate with Confluent Platform components.
SNI validation is not enabled by default in Confluent Platform 7.x.
If you need to run clients without SNI validation in Confluent Platform 8.x, set sni.host.check.enabled=false using configuration overrides in the component custom resource:
kind: <component>
spec:
configOverrides:
server:
- sni.host.check.enabled=false
Configure client-side field level encryption using CFK
Client-side field level encryption (CSFLE) is designed to enable encryption and decryption of sensitive data fields at the client level before the data is sent to Kafka. With CSFLE, the actual encryption and decryption operations happen within your client application; Confluent Platform, including deployment via Confluent for Kubernetes (CFK), facilitates key management and schema configuration but does not itself handle encryption or decryption.
For an overview of CSFLE in Confluent Platform, see Client-Side Field Level Encryption Overview.
For detailed CSFLE configuration steps, see Use Client-Side Field Level Encryption on Confluent Platform.
For example CSFLE configurations in Confluent Platform, see the Client-Side Field Level Encryption example configuration.
Requirements and considerations
Confluent Platform 8.0 or later with the CSFLE Add-On enabled.
CSFLE in Confluent Platform is supported only in non-shared mode.
Schema Registry supports CSFLE mainly for DEK permissions checks.
The client performs all key management service (KMS) interactions, including encryption and decryption.
CSFLE supports Java clients for producing/consuming encrypted messages.
CSFLE is not available in Confluent CLI, Connect, ksqlDB, Flink, or non-Java clients at this time.
CSFLE is not integrated with Control Center.
Only string and byte type Avro fields are supported for CSFLE tagging and encryption.
The kafka-avro-console-producer and kafka-avro-console-consumer tools work with CSFLE.
Supported KMS types include local KEK, AWS KMS (Amazon Web Services), and HashiCorp Vault. For the full list, see Supported KMS types.
The CSFLE API is protected using Confluent Platform Role-Based Access Control (RBAC).
Configure CSFLE in CFK
To deploy CSFLE with CFK:
Deploy Confluent Platform components.
Specifically, configure Schema Registry with the required configuration for CSFLE using the configOverrides.server property in the SchemaRegistry custom resource (CR) YAML. For example:

kind: SchemaRegistry
spec:
  replicas: 1
  image:
    application: confluentinc/cp-schema-registry
    init: confluentinc/confluent-init-container
  configOverrides:
    server:
      - resource.extension.class=io.confluent.kafka.schemaregistry.rulehandler.RuleSetResourceExtension,io.confluent.dekregistry.DekRegistryResourceExtension
      - confluent.license.addon.csfle=<cp-enterprise-license-key>
The value for confluent.license.addon.csfle is the same as your main Confluent Platform Enterprise license key.
Grant necessary RBAC permissions for users and key resources (topics, subjects, and KEKs). For example:
Give the ResourceOwner role to Schema Registry’s internal Kafka client for the _dek_registry_keys topic.
Grant user roles for topics, subjects, and KEK resources.
You can use the ConfluentRolebinding CR or the confluent iam rbac role-binding create command. For example:
confluent iam rbac role-binding create \
  --principal User:sr \
  --role ResourceOwner \
  --resource Topic:_dek_registry_keys \
  --kafka-cluster <CLUSTER_ID>
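The same binding can be expressed declaratively with a ConfluentRolebinding CR. The following is a hedged sketch; the metadata names, namespace, and cluster scoping details are illustrative and should be adapted to your deployment:

```
apiVersion: platform.confluent.io/v1beta1
kind: ConfluentRolebinding
metadata:
  name: sr-dek-topic
  namespace: confluent
spec:
  principal:
    type: user
    name: sr
  role: ResourceOwner
  resourcePatterns:
    - name: _dek_registry_keys
      resourceType: Topic
      patternType: LITERAL
```

Applying the CR lets CFK reconcile the role binding alongside the rest of the deployment instead of managing it imperatively with the CLI.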
To enable RBAC for the DEK Registry, see Access control (RBAC) for CSFLE.
Register Schemas and RuleSets using the REST API, tagging fields for encryption.
Define the schema for the topic and add tags to the schema fields in the schema that you want to encrypt.
Define an encryption policy that specifies rules to use to encrypt the tags.
Register the KEK resource using the Schema Registry REST API or the register-deks command.
An example JSON payload for the REST API:
{
  "name": "my-kek",
  "kmsType": "local-kms",
  "kmsKeyId": "mykey",
  "shared": false
}
For details on managing CSFLE keys, see Manage CSFLE keys.
Configure the Java client with KMS credentials or local secret, and produce messages with encrypted fields.
Clients must be configured to provide the secret or KMS credentials at runtime. For local KEK, this is usually a base64 string; for AWS KMS, environment variables for credentials must be set.
Example Java client properties:
props.put("rule.executors._default_.param.secret", "pgbju8SjcaWJOtTSgeBckA==");
props.put("schema.registry.basic.auth.user.info", "testadmin:testadmin");
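Building on the two properties above, the following hedged sketch shows a fuller producer configuration. The endpoints, credentials, and the KEK secret value are illustrative placeholders, not values from a real deployment:

```java
import java.util.Properties;

public class CsfleProducerConfig {
    // Sketch only: endpoint names, credentials, and the KEK secret below are
    // illustrative placeholders.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.confluent.svc.cluster.local:9071");
        props.put("schema.registry.url", "https://schemaregistry.confluent.svc.cluster.local:8081");
        // Credentials the serializer uses against the RBAC-protected Schema Registry
        props.put("basic.auth.credentials.source", "USER_INFO");
        props.put("schema.registry.basic.auth.user.info", "testadmin:testadmin");
        // Secret for a local KEK; with AWS KMS, credentials come from
        // environment variables instead of a property
        props.put("rule.executors._default_.param.secret", "pgbju8SjcaWJOtTSgeBckA==");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("basic.auth.credentials.source")); // prints "USER_INFO"
    }
}
```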
Only fields of type string or byte with the correct tag are supported for encryption.
When a message is produced with CSFLE, the tagged field is encrypted using the configured KEK.
Consumers must provide the correct key/credential to decrypt and read the tagged field.
Troubleshooting
Encryption/decryption problems: Check client logs and Schema Registry logs to diagnose DEK permission issues. Most issues involve a missing or incorrect KEK or client KMS setup.
RBAC issues: Ensure correct role-bindings are created for all required resources (topics, subjects, KEKs).