Manage Client-Side Encryption in Confluent Cloud for Self-Managed Connectors
Client-Side Field Level Encryption (CSFLE) and client-side payload encryption (CSPE) are security features that safeguard sensitive data, such as personally identifiable information (PII), by encrypting individual fields or the complete message payload at both the producer and the consumer. By encrypting and decrypting data on the client side, CSFLE and CSPE ensure that access to sensitive information is tightly controlled, granting only authorized stakeholders access to the data they are permitted to see.
Important
CSFLE or CSPE is supported for self-managed connectors only when both Schema Registry and Kafka are running in Confluent Cloud.
Confluent Platform 7.5.4 or later, or Confluent Platform 7.6.1 or later, is required to use CSFLE or CSPE with self-managed connectors when Schema Registry and Kafka are running in Confluent Cloud.
CSFLE or CSPE for self-managed connectors, when Schema Registry and Kafka are both running on Confluent Platform, is supported only on Confluent Platform 8.0 and later.
Limitations
Refer to the following for usage limitations:
The connector does not support automatic schema registration. You must manually register schemas before creating the connectors.
The connector only supports encryption for fields of type `string` or `bytes` for CSFLE.
CSFLE does not support `string` and `bytes` fields within nested JSON_SR formats when `value.converter.decimal.format` is set to `BASE64`. To work around this limitation, set `value.converter.decimal.format` to `NUMERIC`.
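For example, if you use the JSON Schema (JSON_SR) converter for values, the workaround looks roughly like the following in the connector configuration. This is a minimal sketch; the converter class line is shown only for context and assumes the standard Confluent JSON Schema converter:

```
value.converter=io.confluent.connect.json.JsonSchemaConverter
# BASE64 is not supported for CSFLE with nested string and bytes fields; use NUMERIC instead
value.converter.decimal.format=NUMERIC
```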
Note
Reporter topics are not covered by CSFLE or CSPE. When using reporter topics, ensure that the error and success responses do not contain any sensitive information.
Supported connectors
The following table lists each connector and the minimum version that supports CSFLE.
| Connector | Minimum supported version |
|---|---|
|  | 12.2.9 |
|  | 2.0.1 |
|  | 1.4.1 |
|  | 1.3.27 |
|  | 1.2.6 |
|  | 10.6.0 |
|  | 2.6.10 |
|  | 2.0.3 |
|  | 1.0.5 |
|  | 2.0.10 |
|  | 2.6.10 |
|  | 1.6.27 |
|  | 1.6.27 |
|  | 1.1.7 |
| Kafka Connect for Azure Cosmos DB (Source and Sink) | 1.17.0 |
|  | 2.0.4 |
|  | 1.0.9 |
|  | 2.5.7 |
|  | 2.0.10 |
|  | 1.2.6 |
|  | 1.0.19 |
| Datadog Logs Sink | 1.3.0 |
|  | 2.4.2 |
|  | 2.5.4 |
|  | 2.5.4 |
|  | 14.1.2 |
|  | 1.2.4 |
|  | 10.2.1 |
|  | 10.2.1 |
|  | 2.1.8 |
|  | 1.2.9 |
|  | 1.0.16 |
| Google Firebase Realtime Database Connector (Source and Sink) | 1.2.6 |
|  | 2.6.10 |
|  | 1.2.4 |
|  | 1.0.9 |
|  | 1.7.8 |
|  | 0.2.5 |
|  | 2.1.15 |
|  | 12.2.9 |
|  | 1.2.11 |
|  | 10.8.2 |
|  | 1.2.13 |
|  | 2.1.15 |
|  | 12.2.9 |
| MongoDB Atlas Sink | 1.15.0 |
|  | 1.0.7 |
|  | 1.1.0 |
|  | 1.0.10 |
|  | 0.0.8 |
|  | 2.0.25 |
|  | 2.0.25 |
|  | 2.5.4 |
|  | 3.2.11 |
|  | 1.3.2 |
| Snowflake Sink | 3.1.1 |
|  | 1.2.8 |
|  | 2.1.15 |
|  | 2.2.1 |
|  | 1.1.5 |
|  | 2.0.67 |
|  | 1.5.10 |
|  | 2.1.15 |
|  | 1.2.9 |
|  | 1.3.2 |
|  | 1.0.18 |
|  | 1.3.4 |
Requirements
To use CSFLE or CSPE in Confluent Cloud with self-managed connectors, you must meet the following requirements:
Your Kafka cluster and Schema Registry must both be provisioned and running in Confluent Cloud.
Manage client-side encryption
At a high level, you can manage client-side encryption for connectors using the following two-step process:
Configure CSFLE
You can choose between the following two methods:
Configure CSFLE without sharing KEK
If you do not want to share your Key Encryption Key (KEK) with Confluent, follow the steps below:
Define the schema for the topic and add tags to the fields in the schema that you want to encrypt.
Create encryption keys for each KMS.
Add encryption rules that specify the encryption key you want to use to encrypt the tagged fields.
Grant the DeveloperWrite permission on the encryption key and the DeveloperRead permission for the Schema Registry API keys.
Add the following parameters in the connector configuration:
For AWS, pass the following configuration parameters:
| Parameter | Description |
|---|---|
| `rule.executors._default_.param.access.key.id` | The AWS access key identifier. |
| `rule.executors._default_.param.secret.access.key` | The AWS secret access key. |
For Azure, pass the following configuration parameters:
| Parameter | Description |
|---|---|
| `rule.executors._default_.param.tenant.id` | The Azure tenant identifier. |
| `rule.executors._default_.param.client.id` | The Azure client identifier. |
| `rule.executors._default_.param.client.secret` | The Azure client secret. |
For Google Cloud, pass the following configuration parameters:
| Parameter | Description |
|---|---|
| `rule.executors._default_.param.account.type` | The Google Cloud account type. |
| `rule.executors._default_.param.client.id` | The Google Cloud client identifier. |
| `rule.executors._default_.param.client.email` | The Google Cloud client email address. |
| `rule.executors._default_.param.private.key.id` | The Google Cloud private key identifier. |
| `rule.executors._default_.param.private.key` | The Google Cloud private key. |
For HashiCorp Vault, pass the following configuration parameters:
| Parameter | Description |
|---|---|
| `rule.executors._default_.param.token.id` | The token identifier for HashiCorp Vault. |
| `rule.executors._default_.param.namespace` | The namespace for HashiCorp Vault Enterprise (optional). |
For more information, see CSFLE without sharing access to your Key Encryption Keys (KEKs).
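For example, a minimal sketch of the CSFLE-related portion of a connector configuration that uses AWS KMS might look like the following. This assumes an Avro value converter; the Schema Registry endpoint, API key, and AWS credential values in angle brackets are placeholders, and `csfle.enabled` is described further in the enablement section below:

```
# Use schemas already registered in Confluent Cloud Schema Registry
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=https://<schema-registry-endpoint>
value.converter.basic.auth.credentials.source=USER_INFO
value.converter.basic.auth.user.info=<sr-api-key>:<sr-api-secret>
value.converter.auto.register.schemas=false
value.converter.use.latest.version=true

# Enable CSFLE for the connector
csfle.enabled=true

# AWS KMS credentials used by the default rule executor to access the KEK
rule.executors._default_.param.access.key.id=<aws-access-key-id>
rule.executors._default_.param.secret.access.key=<aws-secret-access-key>
```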
Configure CSPE
You can choose between the following two methods:
Configure CSPE without sharing KEK
If you do not want to share your Key Encryption Key (KEK) with Confluent, follow the steps below:
Define the schema for the topic that you want to encrypt.
Create encryption keys for each KMS.
Add encryption rules that specify the encryption key you want to use for encryption.
Grant the DeveloperWrite permission on the encryption key and the DeveloperRead permission for the Schema Registry API keys.
Add the following parameters in the connector configuration:
For AWS, pass the following configuration parameters:
| Parameter | Description |
|---|---|
| `rule.executors._default_.param.access.key.id` | The AWS access key identifier. |
| `rule.executors._default_.param.secret.access.key` | The AWS secret access key. |
For Azure, pass the following configuration parameters:
| Parameter | Description |
|---|---|
| `rule.executors._default_.param.tenant.id` | The Azure tenant identifier. |
| `rule.executors._default_.param.client.id` | The Azure client identifier. |
| `rule.executors._default_.param.client.secret` | The Azure client secret. |
For Google Cloud, pass the following configuration parameters:
| Parameter | Description |
|---|---|
| `rule.executors._default_.param.account.type` | The Google Cloud account type. |
| `rule.executors._default_.param.client.id` | The Google Cloud client identifier. |
| `rule.executors._default_.param.client.email` | The Google Cloud client email address. |
| `rule.executors._default_.param.private.key.id` | The Google Cloud private key identifier. |
| `rule.executors._default_.param.private.key` | The Google Cloud private key. |
For HashiCorp Vault, pass the following configuration parameters:
| Parameter | Description |
|---|---|
| `rule.executors._default_.param.token.id` | The token identifier for HashiCorp Vault. |
| `rule.executors._default_.param.namespace` | The namespace for HashiCorp Vault Enterprise (optional). |
For more information, see CSFLE without sharing access to your Key Encryption Keys (KEKs).
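The pattern is the same for CSPE. For example, with HashiCorp Vault, the rule executor parameters from the table above would appear in the connector configuration roughly as follows; the token and namespace values are placeholders, and the namespace parameter applies only to HashiCorp Vault Enterprise:

```
# HashiCorp Vault token used by the default rule executor to access the KEK
rule.executors._default_.param.token.id=<vault-token>
# Optional: required only for HashiCorp Vault Enterprise namespaces
rule.executors._default_.param.namespace=<vault-namespace>
```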
Enable CSFLE or CSPE for connectors
To enable CSFLE or CSPE for connectors, set the following parameters to the values shown below in the connector configuration:
Note
If you do not add these values in the connector configuration, CSFLE or CSPE might not work properly.
csfle.enabled=true
value.converter.auto.register.schemas=false
value.converter.use.latest.version=true
key.converter.auto.register.schemas=false
key.converter.use.latest.version=true
Note
To fetch the latest value schema from Schema Registry, use `value.converter.latest.cache.ttl.sec`, which defines the time interval, in seconds, after which the connector fetches the latest version of the value schema. By default, its value is set to `-1`. To enable it, set this parameter to the desired time interval in seconds.
Similarly, use `key.converter.latest.cache.ttl.sec` to define the time interval, in seconds, after which the converter fetches the latest key schema from Schema Registry. The default value is `-1`. Change this value to the desired time interval in seconds.
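For example, to refresh the latest key and value schemas every 300 seconds (an illustrative value) alongside the required settings above, the relevant portion of the connector configuration might look like this:

```
csfle.enabled=true
value.converter.auto.register.schemas=false
value.converter.use.latest.version=true
value.converter.latest.cache.ttl.sec=300
key.converter.auto.register.schemas=false
key.converter.use.latest.version=true
key.converter.latest.cache.ttl.sec=300
```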