Secrets Management¶
Confluent Platform secrets allow you to store and manage sensitive information, such as passwords and API tokens. Compliance requirements often dictate that services should not store sensitive data as clear text in files. This sensitive data can include passwords, such as the values for the configuration parameters ssl.key.password, ssl.keystore.password, and ssl.truststore.password, or any other sensitive data in configuration files or log files. Secrets are administered with the confluent secret commands.
When you run a confluent secret command, the configuration file is modified to include a variable that directs the configuration resolution system to pull the value from a secret provider. A second file (the secrets file), which contains the encrypted secrets, is also created.
Secrets use envelope encryption, a standard and highly secure method of protecting sensitive data. A user specifies a master passphrase, which is used along with a cryptographic salt value to derive a master encryption key. The master encryption key is then used to generate a separate data encryption key.
The data encryption key is used to encrypt the sensitive data in the configuration files, and the master encryption key is used to wrap the data encryption key. The service can later decrypt these values. If an unauthorized user gains access to a configuration file, they cannot see the encrypted values and they have no way to decrypt them without knowing the master encryption key.
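For example, a minimal sketch of the substitution for a hypothetical ssl.keystore.password entry, assuming the secrets file lives at /usr/secrets/security.properties (the actual paths and key names depend on your environment and mirror the examples later on this page). After encryption, the configuration file contains a variable instead of the clear-text value, and the secrets file contains the encrypted value:
# /etc/kafka/server.properties (after encryption)
ssl.keystore.password = ${securepass:/usr/secrets/security.properties:server.properties/ssl.keystore.password}
# /usr/secrets/security.properties (encrypted value plus key metadata)
server.properties/ssl.keystore.password = ENC[AES/CBC/PKCS5Padding,data:...,iv:...,type:str]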
See also
For a tutorial on how to use secrets, see Tutorial: Secret Protection.
How secrets extend existing Apache Kafka® security¶
Secrets extend the protection provided by Apache Kafka® in KIP-226 for brokers, KIP-297 for Connect, and KIP-421 for the automatic resolution of variables specified in external component configurations.
Secrets for dynamic broker configurations¶
You can use secrets protection for Apache Kafka® dynamic broker configurations (KIP-226) as follows:
- If you want to store all of your keystore configurations encrypted in a file, you can do so. The Kafka broker can use this keystore configuration instead of the dynamic broker configuration for password encryption only. As is the case for other components, this approach does not enable dynamic updates. ZooKeeper is not used in this approach.
- If you want to encrypt passwords just for the broker and support dynamic updates (for example, for SSL keystores), then you can use dynamic broker configurations without new secret protection. When using this approach, encrypted passwords are stored in ZooKeeper.
- If you want to use a common file for encrypted passwords of all components and the broker, then you should use dynamic updates for the broker. In such cases, you can use secret protection in conjunction with KIP-226, where a combination of the secret file and ZooKeeper are used.
Secrets and configuration variables¶
Secrets extend protection for variables in configuration files (KIP-421), including Connect (KIP-297), as follows:
- Running a confluent secret command encrypts secrets stored within configuration files by replacing each secret with a variable (a tuple ${providerName:[path:]key}, where providerName is the name of a ConfigProvider, path is an optional string, and key is a required string). Running confluent secret also adds all the information Kafka components require to fetch the actual, decrypted value. The secrets themselves are stored in an encrypted format in an external file.
- With the help of the Configuration Provider (as described in KIP-421), upon startup all components (such as the broker, Connect, producers, consumers, and admin clients) can automatically fetch the decrypted secret from the external file.
- When using secrets to encrypt Kafka dynamic configurations, you must reload the configuration using the AdminClient and the kafka-configs.sh script as described by KIP-226, as shown in the following example.
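For reference, a minimal sketch of a KIP-226 style dynamic configuration reload with the kafka-configs.sh tool; the bootstrap address, broker ID, and listener keystore property are placeholders for your own environment:
kafka-configs.sh --bootstrap-server localhost:9092 \
   --entity-type brokers --entity-name 0 \
   --alter --add-config 'listener.name.external.ssl.keystore.location=/var/private/ssl/kafka.keystore.jks'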
Limitations¶
Confluent Secrets cannot be used to encrypt the following:
- A JAAS file, or any files referenced by a JAAS configuration. However, you can encrypt a JAAS configuration parameter that is declared in a properties file. For more information, see the example in Encrypt JAAS configuration parameters.
- The password.properties file (referenced by the Jetty PropertyFileLoginModule, which authenticates users by checking for their credentials in a password file).
- The zookeeper.properties file.
- Any librdkafka-based clients. To encrypt librdkafka-based clients, you must use a solution from within your own source libraries.
- Passwords in systemd override.conf files.
- If you use the Confluent CLI for secrets protection, and you include a backslash character without escaping it in a property value that you're encrypting, the following error occurs: Error: properties: Line xxx: invalid unicode literal, where xxx is the line with the backslash. You must escape the backslash character when entering a value in a Confluent Platform configuration file or when using the Confluent CLI. For example, if a password property value is 1\o/-r@c6pD, which includes a valid backslash character, then you must enter it as 1\\o/-r@c6pD in the server.properties file, as shown in the example after this list.
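For example, a minimal sketch of the escaped value as it would appear in server.properties (ssl.keystore.password is used here only as an illustrative parameter name):
ssl.keystore.password=1\\o/-r@c6pD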
Quick start¶
Prerequisite: The Confluent CLI must be installed.
Create a directory for storing the security.properties file. For example:
mkdir /usr/secrets/
Generate the master encryption key based on a passphrase.
Typically, a passphrase is much longer than a password and is easily remembered as a string of words (for example, "Data in motion"). You can specify the passphrase either in clear text on the command line or store it in a file. A best practice is to enter the passphrase into a file and then pass it to the CLI (specified as --passphrase @<passphrase.txt>). By using a file, you avoid having the passphrase show up in plain text in your command history.
Choose a location for the secrets file on your local host (not a location where Confluent Platform services run). The secrets file contains encrypted secrets for the master encryption key, data encryption key, and configuration parameters, along with metadata, such as which cipher was used for encryption.
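Before running the command, you can create the passphrase file. A minimal sketch, in which the file name, location, permissions, and passphrase are all placeholders:
printf 'Data in motion loves secret protection' > /usr/secrets/passphrase.txt
chmod 600 /usr/secrets/passphrase.txt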
confluent secret master-key generate \
   --local-secrets-file /usr/secrets/security.properties \
   --passphrase @<passphrase.txt>
Your output should resemble:
Save the master key. It cannot be retrieved later.
+------------+----------------------------------------------+
| Master Key | abC12DE+3fG45Hi67J8KlmnOpQr9s0Tuv+w1x2y3zab= |
+------------+----------------------------------------------+
Save the master key because it cannot be retrieved later.
Export the master key in the CONFLUENT_SECURITY_MASTER_KEY environment variable, or add the master key to a bash script.
Important
The subsequent confluent secret commands will fail if the environment variable is not set.
export CONFLUENT_SECURITY_MASTER_KEY=abC12DE+3fG45Hi67J8KlmnOpQr9s0Tuv+w1x2y3zab=
Encrypt the specified configuration parameters.
This step encrypts the properties specified by --config in the configuration file specified by --config-file. The property values are read from the configuration file, encrypted, and written to the local secrets file specified by --local-secrets-file. In place of the property values, instructions are written into the configuration file that allow the configuration resolution system to retrieve the secret values at runtime.
The file path you specify in --remote-secrets-file is written into the configuration instructions and identifies where the resolution system can locate the secrets file at runtime. If you are running the secrets command centrally and distributing the secrets file to each node, specify the eventual path of the secrets file in --remote-secrets-file. If you plan to run the secrets command on each node, then --remote-secrets-file should match the location specified by --local-secrets-file.
Note
Updates specified with the --local-secrets-file flag modify the security.properties file. For every broker where you specify --local-secrets-file, you can store the security.properties file in a different location, which you specify using the --remote-secrets-file flag.
For example, when encrypting a broker:
- In --local-secrets-file, specify the file where the Confluent CLI will add or modify encrypted parameters. This modifies the security.properties file.
- In --remote-secrets-file, specify the location of the security.properties file that the broker will reference.
If the --config flag is not specified, any property that contains the string password is encrypted in the configuration file. When running encrypt, use a comma to specify multiple keys, for example: --config "config.storage.replication.factor,config.storage.topic". This option is not available when using the add or update commands.
Use the following example command to encrypt the config.storage.replication.factor and config.storage.topic parameters:
confluent secret file encrypt --config-file /etc/kafka/connect-distributed.properties \
   --local-secrets-file /usr/secrets/security.properties \
   --remote-secrets-file /usr/secrets/security.properties \
   --config "config.storage.replication.factor,config.storage.topic"
You should see an entry similar to the following in your connect-distributed.properties configuration file. This example shows the encrypted config.storage.replication.factor parameter:
config.storage.replication.factor = ${securepass:/usr/secrets/security.properties:connect-distributed.properties/config.storage.replication.factor}
Decrypt the encrypted configuration parameter.
confluent secret file decrypt \
   --local-secrets-file /usr/secrets/security.properties \
   --config-file /etc/kafka/connect-distributed.properties \
   --output-file decrypt.txt
You should see the decrypted parameter. This example shows the decrypted config.storage.replication.factor parameter:
config.storage.replication.factor=1
Tip
For more information about the security.properties file, see the properties file reference.
Production¶
To operationalize this workflow, you can augment your orchestration tooling to distribute the master key and secrets file to the destination hosts. These hosts may include Kafka brokers, Connect workers, Schema Registry instances, ksqlDB servers, Control Center, or any service using password encryption. The confluent secret commands are flexible enough to accommodate whatever secret distribution model you prefer: you can either perform the secret generation and configuration modification on each destination host directly, or do it all on a single host and then distribute the secret data to the destination hosts. Here are the tasks to distribute the secret data:
- Export the master encryption key into the environment on every host that will have a configuration file with password protection.
- Distribute the secrets file: copy the secrets file /path/to/security.properties from the local host on which you have been working to /path/to/security.properties on the destination hosts (one possible approach is sketched after this list).
- Propagate the necessary configuration file changes: update the configuration file on all hosts so that the configuration parameter now has the tuple for secrets.
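For example, a minimal sketch of one possible distribution flow, assuming SSH access to the destination hosts; the host name, paths, and service unit name are placeholders:
scp /usr/secrets/security.properties broker1.example.com:/usr/secrets/security.properties
If the destination host runs Confluent Platform under systemd, one option for exporting the master key into the service environment is a drop-in override file, for example /etc/systemd/system/confluent-server.service.d/override.conf:
[Service]
Environment="CONFLUENT_SECURITY_MASTER_KEY=<your-master-key>"
Note that, per the limitations above, the master key value stored in an override.conf file cannot itself be protected by secrets.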
Usage examples¶
Here are some usage examples for the confluent secret commands.
Important
The Confluent CLI must be installed.
Add remote encrypted configs from file¶
This command adds new encrypted configuration parameters (--config) to the specified file (--config-file). The encrypted secrets are stored in the local secrets file (--local-secrets-file).
Tip
You can specify multiple key-value pairs by separating each configuration parameter with a newline character, for example: --config "ssl.keystore.password = sslPassword \n ssl.truststore.password = password".
If you include a configuration key such as group.id but do not include a key/value pair, or if you submit an empty list, you will receive an error.
confluent secret file add --config-file /etc/kafka/connect-distributed.properties \
--local-secrets-file /usr/secrets/security.properties \
--remote-secrets-file /usr/secrets/security.properties \
--config group.id=connect-cluster
After running this command your properties file should resemble:
group.id = ${securepass:/usr/secrets/security.properties:connect-distributed.properties/group.id}
Using prefixes in secrets configurations¶
Secrets config.providers do not propagate to prefixes such as client.*. Thus, when using prefixes with secrets, you must specify config.providers and config.providers.securepass.class under the prefix:
client.config.providers=securepass
client.config.providers.securepass.class=io.confluent.kafka.security.config.provider.SecurePassConfigProvider
Control Center
Each component that communicates with a secured Control Center instance requires a specific configuration to be set by its prefix. When configuring config.providers and config.providers.securepass.class for Confluent Control Center secrets configurations, specify:
confluent.controlcenter.streams.client.config.providers=securepass
confluent.controlcenter.streams.client.config.providers.securepass.class=io.confluent.kafka.security.config.provider.SecurePassConfigProvider
When Confluent Control Center is communicating with multiple Kafka clusters:
confluent.controlcenter.kafka.<name>.client.config.providers=securepass
confluent.controlcenter.kafka.<name>.client.config.providers.securepass.class=io.confluent.kafka.security.config.provider.SecurePassConfigProvider
Encrypt JAAS configuration parameters¶
All Confluent Platform components support an embedded JAAS configuration, which provides a secure way to specify your JAAS configuration. Secrets also support embedded JAAS configurations. This method allows for a more granular level of security: rather than encrypting the entire JAAS configuration, as you would when using a properties file, you can encrypt the password only. In this way, your JAAS configuration details remain visible in logs, but your password does not.
Secrets does not support using a static JAAS configuration file that is passed at runtime, as is done with brokers.
Note
- The ZooKeeper Client JAAS configuration is not supported by secrets protection.
- You cannot encrypt multiple passwords in the same JAAS configuration using the Confluent CLI secrets encryption command.
Before performing any operation on the JAAS configuration, you must first provide the configuration key in the predefined path format: <entry name>/<LoginModule>/<key>.
The following is an example JAAS configuration in a properties file (kafka/server.properties):
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=false \
    adminpassword=tempPass \
    useTicketCache=true \
    doNotPrompt=true;
Note
You can specify the path to adminpassword as: sasl.jaas.config/com.sun.security.auth.module.Krb5LoginModule/adminpassword
The standard CLI syntax for encrypting, adding, or updating a secrets JAAS configuration is:
confluent secret file encrypt --config-file <path-to-config-file> \
   --local-secrets-file <path-to-local-secrets-file> \
   --remote-secrets-file <path-to-remote-secrets-file> \
   --config <key>
Of course, you must replace encrypt with add or update, depending on the configuration task you are performing.
This example shows the command and options for encrypting the password in a secrets JAAS configuration:
confluent secret file encrypt --config-file /etc/kafka/server.properties \
   --local-secrets-file /usr/secrets/security.properties \
   --remote-secrets-file /usr/secrets/security.properties \
   --config sasl.jaas.config/com.sun.security.auth.module.Krb5LoginModule/adminpassword
This example shows the command and options for adding a new secrets JAAS configuration in which the password is encrypted:
confluent secret file add --config-file /etc/kafka/server.properties \
   --local-secrets-file /usr/secrets/security.properties \
   --remote-secrets-file /usr/secrets/security.properties \
   --config sasl.jaas.config/com.sun.security.auth.module.Krb5LoginModule/adminpassword
This example shows the command and options for updating the encrypted password in the secrets JAAS configuration:
confluent secret file update --config-file /etc/kafka/server.properties \
   --local-secrets-file /usr/secrets/security.properties \
   --remote-secrets-file /usr/secrets/security.properties \
   --config sasl.jaas.config/com.sun.security.auth.module.Krb5LoginModule/adminpassword
The encrypted data that you see after executing any of the preceding commands should look similar to the following:
_metadata.master_key.0.salt = 3B++ViaAMaSwOOnxYI2bbeCtvZXRV5mxEOfb2FO3DnU=
_metadata.symmetric_key.0.created_at = 2020-03-06 14:07:34.248521 -0700 MST m=+0.011810143
_metadata.symmetric_key.0.envvar = CONFLUENT_SECURITY_MASTER_KEY
_metadata.symmetric_key.0.length = 32
_metadata.symmetric_key.0.iterations = 1000
_metadata.symmetric_key.0.salt = 4aqTrl8kdnQdVbGwkbeQHUiLA235/RKGC8zOXTHwQaI=
_metadata.symmetric_key.0.enc = ENC[AES/CBC/PKCS5Padding,data:bP93/lcsQVY5tzh4NWvD9tWO/yyTGwdAEgYHwpUokjiomma7QoH8X/jhlB7zibGd,iv:A7zk7hBwuataNy+ToT346w==,type:str]
server.properties/sasl.jaas.config/com.sun.security.auth.module.Krb5LoginModule/adminpassword = ENC[AES/CBC/PKCS5Padding,data:KdjkpudhWKoVe+6G35OYDw==,iv:5MgMsMT1o8d1JlXE0966Bg==,type:str]
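After you run the encrypt command, the sasl.jaas.config entry in server.properties itself is rewritten so that only the password is replaced with a variable. The following is a sketch of the result, assuming the paths used in these examples; the exact formatting produced by the CLI may differ:
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=false \
    adminpassword=${securepass:/usr/secrets/security.properties:server.properties/sasl.jaas.config/com.sun.security.auth.module.Krb5LoginModule/adminpassword} \
    useTicketCache=true \
    doNotPrompt=true;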
Encrypt JSON configuration parameters¶
Secrets supports the encryption of JSON configuration parameters.
Before performing any operation on the JSON configuration parameters, you must first provide the configuration key in the path format <entry-name>.<key>. For example, consider the following JSON configuration:
{
"name": "security configuration",
"credentials": {
"password": "password",
"ssl.keystore.location": "/usr/ssl"
}
}
Using the path format <entry-name>.<key>, you can specify the path to password as credentials.password and the path to ssl.keystore.location as credentials.ssl\\.keystore\\.location.
The standard CLI syntax for encrypting, adding, updating, or decrypting a secrets JSON configuration is:
confluent secret file encrypt --config-file <path-to-config-file> \
   --local-secrets-file <path-to-local-secrets-file> \
   --remote-secrets-file <path-to-remote-secrets-file> \
   --config <key>
Of course, you must replace encrypt with add, update, or decrypt, depending on the configuration task you are performing.
This example shows the command and options for encrypting the password in the secrets JSON configuration:
./confluent secret file encrypt \
   --config-file /etc/kafka/sample.json \
   --local-secrets-file /usr/secrets/security.properties \
   --remote-secrets-file /usr/secrets/security.properties \
   --config credentials.password
Note
The --config key for JSON configurations must be separated by . in the path syntax.
This example shows the command and options for adding a new secrets JSON configuration in which the password is encrypted:
./confluent secret file add \
   --config-file /etc/kafka/sample.json \
   --local-secrets-file /usr/secrets/security.properties \
   --remote-secrets-file /usr/secrets/security.properties \
   --config credentials.password
This example shows the command and options for updating the encrypted password in a secrets JSON configuration:
./confluent secret file update \
   --config-file /etc/kafka/sample.json \
   --local-secrets-file /usr/secrets/security.properties \
   --remote-secrets-file /usr/secrets/security.properties \
   --config credentials.password
The encrypted data that you see after executing the preceding commands should look similar to the following:
_metadata.master_key.0.salt = 3B++ViaAMaSwOOnxYI2bbeCtvZXRV5mxEOfb2FO3DnU=
_metadata.symmetric_key.0.created_at = 2020-03-06 14:07:34.248521 -0700 MST m=+0.011810143
_metadata.symmetric_key.0.envvar = CONFLUENT_SECURITY_MASTER_KEY
_metadata.symmetric_key.0.length = 32
_metadata.symmetric_key.0.iterations = 1000
_metadata.symmetric_key.0.salt = 4aqTrl8kdnQdVbGwkbeQHUiLA235/RKGC8zOXTHwQaI=
_metadata.symmetric_key.0.enc = ENC[AES/CBC/PKCS5Padding,data:bP93/lcsQVY5tzh4NWvD9tWO/yyTGwdAEgYHwpUokjiomma7QoH8X/jhlB7zibGd,iv:A7zk7hBwuataNy+ToT346w==,type:str]
server.properties/sasl.jaas.config/com.sun.security.auth.module.Krb5LoginModule/adminpassword = ENC[AES/CBC/PKCS5Padding,data:KdjkpudhWKoVe+6G35OYDw==,iv:5MgMsMT1o8d1JlXE0966Bg==,type:str]
sample.json/credentials.password = ENC[AES/CBC/PKCS5Padding,data:4cCPvtf9Sgpf6amU358NDw==,iv:Aq/OmYfGIdbyw78LRe5gHQ==,type:str]
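For reference, the following is a sketch of how sample.json might look after the password is encrypted, assuming the value is replaced with the same securepass tuple used for properties files; the exact substitution format written by the CLI may differ:
{
  "name": "security configuration",
  "credentials": {
    "password": "${securepass:/usr/secrets/security.properties:sample.json/credentials.password}",
    "ssl.keystore.location": "/usr/ssl"
  }
}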
This example shows the command and options for decrypting the password in a secrets JSON configuration:
./confluent secret file decrypt \
   --config-file /etc/kafka/sample.json \
   --local-secrets-file /usr/secrets/security.properties \
   --remote-secrets-file /usr/secrets/security.properties \
   --config credentials.password
Rotate keys¶
It is recommended that you rotate keys using a single properties file.
This command rotates the master key or data key.
Rotate the master key (--master-key): Generates a new master key and re-encrypts the data with the new master key. The new master key is stored in an environment variable.
confluent secret file rotate --master-key \
   --local-secrets-file /usr/secrets/security.properties \
   --passphrase @/User/bob/secret.properties \
   --passphrase-new @/User/bob/secretNew.properties
Rotate the data key (--data-key): Generates a new data key and re-encrypts the file with the new data key.
confluent secret file rotate --data-key \
   --local-secrets-file /usr/secrets/security.properties \
   --passphrase @/User/bob/secret.properties
Docker configuration¶
When you enable security for the Confluent Platform, you must pass secrets (for example,
credentials, certificates, keytabs, Kerberos config) to the container. The images
handle this by using the credentials available in the secrets directory.
The containers specify a Docker volume for secrets, which the admin must map to a directory on the host that contains the required secrets. For example, if the security.properties file is located on the host at /scripts/security, and you want it mounted at /secrets in the Docker container, then you would specify:
volumes:
- ./scripts/security:/secrets
To configure secrets protection in Docker images, you must manually add the following configuration to the docker-compose.yml file:
CONFLUENT_SECURITY_MASTER_KEY: <your-master-key>
<COMPONENT>_CONFIG_PROVIDERS: "securepass"
<COMPONENT>_CONFIG_PROVIDERS_SECUREPASS_CLASS: "io.confluent.kafka.security.config.provider.SecurePassConfigProvider"
<COMPONENT> can be any of the following:
- KAFKA
- KSQL
- CONNECT
- SCHEMA_REGISTRY
- CONTROL_CENTER
For details about Docker configuration options, refer to Docker Configuration Parameters for Confluent Platform.
For a Kafka broker, your configuration should look like the following:
CONFLUENT_SECURITY_MASTER_KEY: <your-master-key>
KAFKA_CONFIG_PROVIDERS: "securepass"
KAFKA_CONFIG_PROVIDERS_SECUREPASS_CLASS: "io.confluent.kafka.security.config.provider.SecurePassConfigProvider"
Note
For each secret you want to use, you must specify where to find it. For example, suppose you want to use a secure secret for the sasl.jaas.config configuration, the secrets file is volume-mounted in your Docker container at the path /secrets/security.properties, and you want to use the server.properties/sasl.jaas.config key in that file. You would then use the following configuration. Note that in the docker-compose.yml file you must escape the $ character by using $$, as shown here:
KAFKA_SASL_JAAS_CONFIG: $${securepass:/secrets/security.properties:server.properties/sasl.jaas.config}
Script commands¶
You can script the commands by using stdin or from a file:
- To pipe from stdin, use a dash (-), for example --passphrase -.
- To read from a file, use @<path-to-file>, for example --passphrase @/User/bob/secret.properties.
confluent secret master-key generate \
--local-secrets-file /usr/demo/security.properties \
--passphrase @/Users/user.name/tmp/demo/masterkey.properties
echo -e "demoMasterKey" | confluent secret master-key generate --local-secrets-file
/usr/demo/security.properties --passphrase -
Fix corrupt master key¶
If your master key becomes corrupt or is lost, you must create new passwords in the properties file (for example, ssl.keystore.password) and generate a new master key for the encrypted configuration parameters. The following example updates the ssl.keystore.password in the /etc/kafka/server.properties file.
Create a new password in your properties file.
ssl.keystore.password=mynewpassword
Generate a new master key.
confluent secret master-key generate \
   --local-secrets-file <path-to-file>/security.properties \
   --passphrase @/User/bob/secret.properties
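Export the new master key so that the subsequent confluent secret commands can find it. The value shown is a placeholder for the key returned by the previous command:
export CONFLUENT_SECURITY_MASTER_KEY=<new-master-key>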
Encrypt the configs with the new master key.
confluent secret file encrypt --config-file /etc/kafka/server.properties \
   --local-secrets-file <path-to-file>/security.properties \
   --remote-secrets-file <path-to-file> \
   --config ssl.keystore.password
Properties file reference¶
The security.properties file looks like this:
1 _metadata.master_key.0.salt = HfCqzb0CSXSo/mEx2Oc0lUqY5hP3vGa/SayR5wxQogI=
2 _metadata.symmetric_key.0.created_at = 2019-07-16 16:36:05.480926 -0700 PDT m=+0.006733395
3 _metadata.symmetric_key.0.envvar = CONFLUENT_SECURITY_MASTER_KEY
4 _metadata.symmetric_key.0.length = 32
5 _metadata.symmetric_key.0.iterations = 1000
6 _metadata.symmetric_key.0.salt = hCC00OJG2VzhHhLMB6hZSuE9KBKutMK8BxFhq8OUirg=
7 _metadata.symmetric_key.0.enc = ENC[AES/CBC/PKCS5Padding,data:TDSUb8f6IzUtgAffkQ8jZ55QU1sn+OTbvr2+FzX1bkjnrV4d6uwqtsTxzltiG8nO,iv:1ieTVqWxOC06rDcO9XQuOQ==,type:str]
8 server.properties/ssl.keystore.password = ENC[AES/CBC/PKCS5Padding,data:jOGowFcgq4q1MqcJEGWCsg==,iv:3iqk+FJAbnW7MOYEiPkyFA==,type:str]
- Line 1: _metadata.master_key.0.salt is the salt used for generating the master key.
- Line 2: _metadata.symmetric_key.0.created_at is the timestamp when the data key was created.
- Line 3: _metadata.symmetric_key.0.envvar is the environment variable that holds the master key.
- Line 4: _metadata.symmetric_key.0.length is the length of the data key in bytes.
- Line 5: _metadata.symmetric_key.0.iterations is the number of iterations used for encryption.
- Line 6: _metadata.symmetric_key.0.salt is the salt used for generating the data key.
- Line 7: _metadata.symmetric_key.0.enc is the data key wrapped using the master key, in the format ENC[<algorithm>,data:<data>,iv:<initialization-vector>,type:<type>]. The algorithm format used is:
  - Symmetric encryption algorithm: Advanced Encryption Standard (AES)
  - Encryption mode: Cipher block chaining (CBC)
  - Padding scheme: PKCS5Padding
- Line 8: server.properties/ssl.keystore.password is an encrypted configuration parameter, stored in the same ENC[...] format.
Suggested reading¶
Blog post: Shoulder Surfers Beware: Confluent Now Provides Cross-Platform Secret Protection