Protect Data in Motion with TLS Encryption in Confluent Platform¶
TLS encryption overview¶
By default, Confluent Platform clusters communicate in PLAINTEXT, meaning that all data is sent in plain text (unencrypted). To encrypt data in motion (also called data in transit) between services and components in your Confluent Platform cluster, configure all Confluent Platform services and components to use TLS encryption. TLS encryption is supported for all Confluent Platform services and components, including Confluent Server brokers, Connect workers, Kafka clients, and REST Proxy.
For details on configuring TLS encryption for Confluent Server brokers and Confluent Platform services, see the sections below.
Confluent Platform supports Transport Layer Security (TLS) encryption based on OpenSSL, an open source cryptography toolkit that provides an implementation of the TLS and Secure Sockets Layer (SSL) protocols. SSL was the predecessor of TLS and has been deprecated since June 2015; for historical reasons, TLS configuration properties are often named SSL.
Enabling TLS encryption might have a performance impact due to the overhead of encrypting and decrypting data.
TLS uses private-key/certificate pairs, which are used during the TLS handshake process.
- Each Confluent Server broker needs its own private-key/certificate pair, and the Kafka client uses the certificate to authenticate to the Confluent Server broker.
- Each logical client needs a private-key/certificate pair if client authentication is enabled, and the Confluent Server broker uses the certificate to authenticate the Kafka client.
You can configure each Confluent Server broker and logical client with a truststore, which is used to determine which certificates (broker or logical client identities) to trust (authenticate). You can configure the truststore in many ways. Consider the following two examples:
- The truststore contains one or many certificates: the Confluent Server broker or logical Kafka client trusts any certificate listed in the truststore.
- The truststore contains a Certificate Authority (CA): the Confluent Server broker or logical Kafka client trusts any certificate that was signed by the CA in the truststore.
Using the CA method is more convenient because adding a new Confluent Server broker or Kafka client doesn’t require changes to the truststore. For an example of the CA method, see Single Trust Store Configuration.
However, with the CA method, Confluent Platform clusters do not conveniently support blocking authentication for individual Confluent Server brokers or Kafka clients that were previously trusted using this mechanism (certificate revocation is typically done using Certificate Revocation Lists or the Online Certificate Status Protocol), so you would have to rely on authorization to block access.
In contrast, if you use one or many certificates, blocking authentication is achieved by removing the Confluent Server broker certificate or Kafka client certificate from the truststore.
See also
For an example that shows how to set Docker environment variables for Confluent Platform running in ZooKeeper mode, see the Confluent Platform demo. Refer to the demo’s docker-compose.yml file for a configuration reference.
Mutual (mTLS) authentication¶
If you configure TLS encryption, you can optionally configure mutual TLS (mTLS) authentication. By default, TLS encryption enables one-way authentication, in which the client authenticates the server certificate; in that case, you can use a separate mechanism, such as SASL, for client authentication. For bidirectional authentication, where the broker also authenticates the client certificate, use mTLS.
When you use mutual TLS (mTLS) authentication, the Confluent Server broker authenticates the Kafka client and the Kafka client also authenticates the Confluent Server broker. This bidirectional, or mutual, authentication provides an additional layer of security for your Confluent Platform cluster. Note that mTLS is not supported for Kafka clients authenticating to ksqlDB.
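The one-way versus mutual distinction can be illustrated with Python's standard-library ssl module. This is a minimal sketch of the handshake policy only; the commented-out CA path is hypothetical, and Kafka configures this behavior through broker properties rather than code:

```python
import ssl

# One-way TLS: the server presents a certificate, but by default it does
# not request or verify a certificate from the client.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
print(server_ctx.verify_mode == ssl.CERT_NONE)   # True: no client auth by default

# Mutual TLS (mTLS): the server also requires and verifies a client
# certificate signed by a CA it trusts.
server_ctx.verify_mode = ssl.CERT_REQUIRED
# server_ctx.load_verify_locations("/var/ssl/private/ca.pem")  # hypothetical path

# A TLS client always verifies the server certificate, even in one-way TLS.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(client_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The same policy is expressed in Kafka through `ssl.client.auth=required` on the broker, rather than in application code.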
Create TLS keys and certificates¶
Refer to the Security Tutorial, which describes how to create TLS keys and certificates.
Configure TLS encryption for Confluent Server brokers¶
Configure all Confluent Server brokers in your Confluent Platform cluster to accept secure connections from clients. Any configuration changes made to brokers require a rolling restart.
Enable security for Confluent Server brokers as described in the section below. Additionally, if you are using Confluent Control Center, Auto Data Balancer, or Self-Balancing Clusters, configure security for those components as described in their respective sections.
Configure the password, truststore, and keystore in the server.properties file of every Confluent Server broker. Because this configuration stores passwords directly in the broker configuration file, restrict access to these files using file system permissions.

ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
ssl.truststore.password=test1234
ssl.keystore.location=/var/ssl/private/kafka.server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
To enable TLS for interbroker communication, add the following to the Confluent Server broker properties file (the default is PLAINTEXT):

security.inter.broker.protocol=SSL
Configure the ports on which the Confluent Server brokers listen for client and interbroker TLS (SSL) connections. You must configure listeners, and optionally advertised.listeners if its value is different from listeners.

listeners=SSL://kafka1:9093
advertised.listeners=SSL://<localhost>:9093
Configure both SSL and PLAINTEXT ports if:

- TLS is not enabled for interbroker communication
- Some Kafka clients connecting to the cluster do not use TLS

listeners=PLAINTEXT://kafka1:9092,SSL://kafka1:9093
advertised.listeners=PLAINTEXT://<localhost>:9092,SSL://<localhost>:9093
Note that advertised.host.name and advertised.port configure a single PLAINTEXT port and are incompatible with secure protocols. Use advertised.listeners instead.
Optional settings¶
Here are some optional settings for Confluent Server brokers:
ssl.cipher.suites
The named list of supported cipher suites, which are named combinations of authentication, encryption, MAC, and key exchange algorithms used to negotiate the security settings for a network connection using TLS.
- Type: list
- Default: null (by default, all supported cipher suites are enabled)
- Importance: medium

ssl.enabled.protocols
The comma-separated list of protocols enabled for TLS connections. Kafka clients and Confluent Server brokers use TLSv1.3 if both support it, and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2).
- Type: list
- Default: TLSv1.2,TLSv1.3
- Importance: medium

ssl.truststore.type
The file format of the truststore file.
- Type: string
- Default: JKS
- Importance: medium
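The ssl.enabled.protocols semantics map directly onto the protocol-version bounds of a TLS context. A minimal sketch using Python's standard-library ssl module, purely illustrative (the broker configures this through JVM properties, not code like this):

```python
import ssl

# Equivalent of ssl.enabled.protocols=TLSv1.2,TLSv1.3:
# allow only TLS 1.2 and TLS 1.3 on this context.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

# The handshake negotiates the highest version both sides allow, so two
# such endpoints use TLS 1.3, falling back to TLS 1.2 with older peers.
print(ctx.minimum_version.name, ctx.maximum_version.name)
```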
Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the JCE Unlimited Strength Jurisdiction Policy Files must be obtained and installed in the JDK or JRE. See JCA Providers Documentation for more information.
Configure TLS encryption for Kafka clients¶
The Producer and Consumer clients support security for Kafka versions 0.9.0 and higher. If you are using the Kafka Streams API, see how to configure equivalent SSL and SASL parameters.
If client authentication is not required by the Confluent Server broker, the following is a minimal configuration example that you can store in a client properties file, client-ssl.properties. Because this configuration stores passwords directly in the Kafka client configuration file, it is important to restrict access to these files using file system permissions.
bootstrap.servers=kafka1:9093
security.protocol=SSL
ssl.truststore.location=/var/ssl/private/kafka.client.truststore.jks
ssl.truststore.password=test1234
If the broker requires TLS client authentication, the client must provide a keystore as well. The additional required configurations are described in mTLS authentication.
Here are examples using the Kafka tools kafka-console-producer and kafka-console-consumer to pass in the client-ssl.properties file with the properties specified above:
kafka-console-producer --bootstrap-server kafka1:9093 \
--topic test \
--producer.config client-ssl.properties
kafka-console-consumer --bootstrap-server kafka1:9093 \
--topic test \
--consumer.config client-ssl.properties \
--from-beginning
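For Python clients, the same settings translate to the confluent-kafka (librdkafka) configuration dialect, which takes a PEM CA file via ssl.ca.location rather than a JKS truststore. A hedged sketch; the CA path is hypothetical and constructing the Producer is commented out so the snippet stands alone:

```python
# Equivalent client TLS configuration for confluent-kafka (librdkafka).
# Note: librdkafka reads a PEM CA bundle via ssl.ca.location instead of a
# JKS truststore; the path below is a hypothetical example.
conf = {
    "bootstrap.servers": "kafka1:9093",
    "security.protocol": "SSL",
    "ssl.ca.location": "/var/ssl/private/ca.pem",
}

# from confluent_kafka import Producer
# producer = Producer(conf)  # would connect over TLS, trusting the CA above
print(sorted(conf))
```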
Optional settings¶
Here are some commonly used optional settings for Kafka clients:
ssl.provider
The name of the security provider used for TLS connections. The default value is the default security provider of the JVM.
- Type: string
- Default: null
- Importance: medium

ssl.cipher.suites
The named list of supported cipher suites, which are named combinations of authentication, encryption, MAC, and key exchange algorithms used to negotiate the security settings for a network connection using TLS.
- Type: list
- Default: null (by default, all supported cipher suites are enabled)
- Importance: medium

ssl.enabled.protocols
The comma-separated list of protocols enabled for TLS connections. The default is TLSv1.2,TLSv1.3 when running with Java 11 or later, and TLSv1.2 otherwise. With the default value for Java 11 (TLSv1.2,TLSv1.3), Kafka clients and brokers prefer TLSv1.3 if both support it, and fall back to TLSv1.2 (assuming both support at least TLSv1.2).
- Type: list
- Default: TLSv1.2,TLSv1.3
- Importance: medium

ssl.truststore.type
The file format of the truststore file.
- Type: string
- Default: JKS
- Importance: medium
Configure TLS encryption for ZooKeeper¶
The version of ZooKeeper bundled with Kafka supports TLS encryption. For details, see Add Security to Running Clusters in Confluent Platform.
Configure TLS encryption for Connect workers¶
This section describes how to enable security for Kafka Connect. Securing Kafka Connect requires that you configure security for:
- Kafka Connect workers: part of the Kafka Connect API; under the covers, a worker is an advanced Kafka client
- Kafka Connect connectors: connectors may have embedded producers or consumers, so you must override the default configurations for Connect producers used with source connectors and Connect consumers used with sink connectors
- Kafka Connect REST: Kafka Connect exposes a REST API that can be configured to use TLS/SSL using additional properties
Configure security for Kafka Connect as described in the section below. Additionally, if you are using Confluent Control Center streams monitoring for Kafka Connect, configure security for:
Configure the top-level settings in the Connect workers to use TLS by adding these properties in connect-distributed.properties. These top-level settings are used by the Connect worker for group coordination and to read and write to the internal topics that track the cluster's state (for example, configurations and offsets).
bootstrap.servers=kafka1:9093
security.protocol=SSL
ssl.truststore.location=/var/ssl/private/kafka.client.truststore.jks
ssl.truststore.password=test1234
Connect workers manage the producers used by source connectors and the consumers used by sink connectors. So, for the connectors to leverage security, you also have to override the default producer or consumer configuration that the worker uses. Depending on whether the connector is a source or sink connector:
For source connectors, configure the same properties, but add the producer prefix:

producer.bootstrap.servers=kafka1:9093
producer.security.protocol=SSL
producer.ssl.truststore.location=/var/ssl/private/kafka.client.truststore.jks
producer.ssl.truststore.password=test1234
For sink connectors, configure the same properties, but add the consumer prefix:

consumer.bootstrap.servers=kafka1:9093
consumer.security.protocol=SSL
consumer.ssl.truststore.location=/var/ssl/private/kafka.client.truststore.jks
consumer.ssl.truststore.password=test1234
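The override pattern above is purely mechanical: each worker-level security property is repeated under a producer. or consumer. prefix. A small illustrative helper (hypothetical, not part of Kafka Connect) makes the rule explicit:

```python
def prefixed(config: dict, prefix: str) -> dict:
    """Copy a base security config under a Connect override prefix."""
    return {f"{prefix}{key}": value for key, value in config.items()}

base = {
    "bootstrap.servers": "kafka1:9093",
    "security.protocol": "SSL",
    "ssl.truststore.location": "/var/ssl/private/kafka.client.truststore.jks",
    "ssl.truststore.password": "test1234",
}

source_overrides = prefixed(base, "producer.")  # for source connectors
sink_overrides = prefixed(base, "consumer.")    # for sink connectors
print(source_overrides["producer.security.protocol"])  # SSL
```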
Configure TLS encryption for the Connect REST API¶
For TLS encryption with the Connect REST API, configure the following additional properties:
listeners
List of REST listeners in the format protocol://host:port,protocol2://host2:port, where the protocol is either http or https.

rest.advertised.listener
Configures the listener used for communication between workers. Valid values are http and https. If the listeners property is not defined, or if it contains an http listener, the default value for this field is http. When the listeners property is defined and contains only https listeners, the default value is https.

ssl.client.auth
Valid values are none, requested, and required. Controls whether:
1. the client is required to do TLS client authentication (required)
2. the client can decide to skip TLS client authentication (requested)
3. TLS client authentication is disabled (none)

listeners.https.ssl.*
You can use the listeners.https. prefix with a TLS configuration parameter to override the default TLS configuration that is shared with the connections to the Confluent Server broker. If at least one parameter with this prefix exists, the implementation uses only the TLS parameters with this prefix and ignores all TLS parameters without this prefix. If no parameter with the listeners.https. prefix exists, the parameters without a prefix are used.
Note that if the listeners.https.ssl.* properties are not defined, the ssl.* properties are used.
For a list of all REST API ssl.* properties, see Standalone REST Proxy Configuration Options.
Here is an example that sets the ssl.* properties to use TLS connections to the Confluent Server broker. Because listeners includes https, these same settings are used to configure the Connect TLS endpoint:
listeners=https://myhost:8443
rest.advertised.listener=https
rest.advertised.host.name=<localhost>
rest.advertised.host.port=8083
ssl.client.auth=requested
ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
ssl.truststore.password=test1234
ssl.keystore.location=/var/ssl/private/kafka.server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
To configure the Connect TLS endpoint differently from the TLS connections to the Confluent Server broker, define the listeners.https.ssl.* properties with the correct settings. Note that as soon as any listeners.https.ssl.* properties are specified, none of the top-level ssl.* properties apply, so be sure to define all of the necessary listeners.https.ssl.* properties:
listeners=https://myhost:8443
rest.advertised.listener=https
rest.advertised.host.name=<localhost>
rest.advertised.host.port=8083
listeners.https.ssl.client.authentication=requested
listeners.https.ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
listeners.https.ssl.truststore.password=test1234
listeners.https.ssl.keystore.location=/var/ssl/private/kafka.server.keystore.jks
listeners.https.ssl.keystore.password=test1234
listeners.https.ssl.key.password=test1234
listeners.https.ssl.endpoint.identification.algorithm=HTTPS
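The all-or-nothing override rule can be stated compactly: if any listeners.https.ssl.* key is present, only prefixed keys count; otherwise the top-level ssl.* keys apply. A hypothetical sketch of that resolution logic (not REST Proxy's actual implementation):

```python
PREFIX = "listeners.https."

def effective_tls_config(props: dict) -> dict:
    """Resolve the TLS settings the HTTPS listener would actually use."""
    prefixed = {
        key[len(PREFIX):]: value
        for key, value in props.items()
        if key.startswith(PREFIX + "ssl.")
    }
    if prefixed:
        # Any listeners.https.ssl.* key present: top-level ssl.* is ignored.
        return prefixed
    return {k: v for k, v in props.items() if k.startswith("ssl.")}

props = {
    "ssl.keystore.location": "/var/ssl/private/kafka.server.keystore.jks",
    "listeners.https.ssl.keystore.location": "/var/ssl/private/https.keystore.jks",
}
print(effective_tls_config(props)["ssl.keystore.location"])
# /var/ssl/private/https.keystore.jks
```

Because the prefixed keystore is defined, the top-level keystore is ignored entirely, which is why all required listeners.https.ssl.* properties must be spelled out once any one of them is used.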
Configure TLS encryption for Replicator¶
Confluent Replicator is a type of Kafka source connector that replicates data from a source to destination Kafka cluster. An embedded consumer inside Replicator consumes data from the source cluster, and an embedded producer inside the Kafka Connect worker produces data to the destination cluster.
Replicator version 4.0 and earlier requires a connection to ZooKeeper in the origin and destination Kafka clusters. If ZooKeeper is configured for authentication, the client configures the ZooKeeper security credentials via the global JAAS configuration setting -Djava.security.auth.login.config on the Connect workers, and the ZooKeeper security credentials in the origin and destination clusters must be the same.
To configure Confluent Replicator security, you must configure the Replicator connector as shown below, and additionally configure security for the associated Connect workers.
To add TLS encryption to the Confluent Replicator embedded consumer, modify the Replicator JSON properties file.
Here is an example subset of configuration properties to add for TLS encryption:
{
  "name":"replicator",
  "config":{
    ....
    "src.kafka.ssl.truststore.location":"/etc/kafka/secrets/kafka.connect.truststore.jks",
    "src.kafka.ssl.truststore.password":"confluent",
    "src.kafka.security.protocol":"SSL"
    ....
  }
}
See also
To see an example Confluent Replicator configuration, see the TLS source demo script. For demos of common security configurations see: Replicator security demos
To configure Confluent Replicator for a destination cluster with TLS encryption, modify the Replicator JSON configuration to include the following:
{
  "name":"replicator",
  "config":{
    ....
    "dest.kafka.ssl.truststore.location":"/etc/kafka/secrets/kafka.connect.truststore.jks",
    "dest.kafka.ssl.truststore.password":"confluent",
    "dest.kafka.security.protocol":"SSL"
    ....
  }
}
Additionally, the following properties are required in the Connect worker configuration:
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/kafka.connect.truststore.jks
ssl.truststore.password=confluent
producer.security.protocol=SSL
producer.ssl.truststore.location=/etc/kafka/secrets/kafka.connect.truststore.jks
producer.ssl.truststore.password=confluent
For details, see the general security configuration for Connect workers.
See also
For an example Confluent Replicator configuration, see TLS destination demo script.
For demos of common security configurations, see: Replicator security demos.
Configure TLS encryption for Confluent Control Center¶
Confluent Control Center uses Kafka Streams as a state store, so if all the Kafka brokers in the cluster backing Control Center are secured, then the Control Center application also needs to be secured.
Note
When RBAC is enabled, Control Center cannot be used in conjunction with Kerberos because Control Center cannot support any SASL mechanism other than OAUTHBEARER.
Enable security for the Control Center application as described in the section below. Additionally, configure security for the following components:
- Confluent Metrics Reporter: required on the production cluster being monitored
- Confluent Monitoring Interceptors: optional if you are using Control Center streams monitoring
Enable TLS encryption for Confluent Control Center in the etc/confluent-control-center/control-center.properties file.
confluent.controlcenter.streams.security.protocol=SSL
confluent.controlcenter.streams.ssl.truststore.location=/var/ssl/private/kafka.client.truststore.jks
confluent.controlcenter.streams.ssl.truststore.password=test1234
Configure TLS encryption for Metrics Reporter¶
This section describes how to enable TLS encryption for Confluent Metrics Reporter, which is used for Confluent Control Center and Auto Data Balancer.
To add TLS encryption for the Confluent Metrics Reporter, add the following to the server.properties file on the Confluent Server brokers in the Confluent Platform cluster being monitored.
confluent.metrics.reporter.bootstrap.servers=kafka1:9093
confluent.metrics.reporter.security.protocol=SSL
confluent.metrics.reporter.ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
confluent.metrics.reporter.ssl.truststore.password=test1234
Configure TLS encryption for Confluent Monitoring Interceptors¶
Confluent Monitoring Interceptors are used for Confluent Control Center streams monitoring. This section describes how to enable security for Confluent Monitoring Interceptors in three places:
- General clients
- Kafka Connect
- Confluent Replicator
Important
The typical use case for Confluent Monitoring Interceptors is to provide monitoring data to a separate monitoring cluster that most likely has different configurations. Interceptor configurations do not inherit configurations for the monitored component. If you wish to use configurations from the monitored component, you must add the appropriate prefix. For example, the option confluent.monitoring.interceptor.security.protocol=SSL, if being used for a producer, must be prefixed with producer. and would appear as producer.confluent.monitoring.interceptor.security.protocol=SSL.
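The prefixing rule composes mechanically: the component prefix (producer. or consumer.) goes in front of the full interceptor property name. A tiny illustrative check (the helper function is hypothetical):

```python
def interceptor_key(component_prefix: str, option: str) -> str:
    """Build a fully-prefixed Confluent Monitoring Interceptor property name."""
    return f"{component_prefix}confluent.monitoring.interceptor.{option}"

key = interceptor_key("producer.", "security.protocol")
print(key)  # producer.confluent.monitoring.interceptor.security.protocol
```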
Interceptors for general clients¶
For Confluent Control Center stream monitoring to work with Kafka clients, you must configure TLS encryption for the Confluent Monitoring Interceptors in each client.
Verify that the client has configured interceptors.
Producer:
interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
Consumer:
interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
Configure TLS encryption for the interceptor:

confluent.monitoring.interceptor.bootstrap.servers=kafka1:9093
confluent.monitoring.interceptor.security.protocol=SSL
confluent.monitoring.interceptor.ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
confluent.monitoring.interceptor.ssl.truststore.password=test1234
Interceptors for Kafka Connect¶
For Confluent Control Center stream monitoring to work with Kafka Connect, you must configure TLS encryption for the Confluent Monitoring Interceptors in Kafka Connect. Configure the Connect workers by adding the following properties in connect-distributed.properties, depending on whether the connectors are sources or sinks.
Source connector: configure the Confluent Monitoring Interceptors for TLS encryption, adding the producer prefix:

producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
producer.confluent.monitoring.interceptor.bootstrap.servers=kafka1:9093
producer.confluent.monitoring.interceptor.security.protocol=SSL
producer.confluent.monitoring.interceptor.ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
producer.confluent.monitoring.interceptor.ssl.truststore.password=test1234
Sink connector: configure the Confluent Monitoring Interceptors for TLS encryption, adding the consumer prefix:

consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
consumer.confluent.monitoring.interceptor.bootstrap.servers=kafka1:9093
consumer.confluent.monitoring.interceptor.security.protocol=SSL
consumer.confluent.monitoring.interceptor.ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
consumer.confluent.monitoring.interceptor.ssl.truststore.password=test1234
Interceptors for Replicator¶
For Confluent Control Center stream monitoring to work with Replicator, you must configure TLS for the Confluent Monitoring Interceptors in the Replicator JSON configuration file. Here is an example subset of configuration properties to add for TLS encryption.
{
  "name":"replicator",
  "config":{
    ....
    "src.consumer.group.id": "replicator",
    "src.consumer.interceptor.classes": "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor",
    "src.consumer.confluent.monitoring.interceptor.bootstrap.servers": "kafka1:9093",
    "src.consumer.confluent.monitoring.interceptor.security.protocol": "SSL",
    "src.consumer.confluent.monitoring.interceptor.ssl.truststore.location": "/var/ssl/private/kafka.client.truststore.jks",
    "src.consumer.confluent.monitoring.interceptor.ssl.truststore.password": "test1234",
    ....
  }
}
Enable TLS encryption in a Self-Balancing cluster¶
To enable TLS encryption in a Self-Balancing cluster, add the following to the server.properties file on the Confluent Server brokers in the Confluent Platform cluster.
confluent.rebalancer.metrics.security.protocol=SSL
confluent.rebalancer.metrics.ssl.truststore.location=/etc/kafka/secrets/kafka.client.truststore.jks
confluent.rebalancer.metrics.ssl.truststore.password=confluent
confluent.rebalancer.metrics.ssl.keystore.location=/etc/kafka/secrets/kafka.client.keystore.jks
confluent.rebalancer.metrics.ssl.keystore.password=confluent
confluent.rebalancer.metrics.ssl.key.password=confluent
Schema Registry¶
Schema Registry uses Kafka to persist schemas, and so it acts as a client to write data to the Kafka cluster. Therefore, if the Kafka brokers are configured for security, you should also configure Schema Registry to use security. You may also refer to the complete list of Schema Registry configuration options.
Here is an example subset of schema-registry.properties configuration parameters to add for TLS encryption:
kafkastore.bootstrap.servers=SSL://kafka1:9093
kafkastore.security.protocol=SSL
kafkastore.ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
kafkastore.ssl.truststore.password=test1234
Configure TLS encryption for REST Proxy¶
To use TLS encryption with Confluent REST Proxy, you need to configure security between:
- REST clients and the REST Proxy (HTTPS)
- REST Proxy and the Confluent Platform cluster
For the complete list of options, see REST Proxy configuration options.
First, configure HTTPS between REST clients and the REST Proxy. Here is an example subset of kafka-rest.properties configuration parameters to configure HTTPS:
ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
ssl.truststore.password=test1234
Then, configure TLS encryption between REST Proxy and the Confluent Platform cluster. Here is an example subset of kafka-rest.properties configuration parameters to add for TLS encryption:
client.bootstrap.servers=kafka1:9093
client.security.protocol=SSL
client.ssl.truststore.location=/var/ssl/private/kafka.server.truststore.jks
client.ssl.truststore.password=test1234
Enable TLS logging¶
Enable TLS debug logging at the JVM level by starting the Confluent Server broker and clients with the javax.net.debug system property. For example:
export KAFKA_OPTS=-Djavax.net.debug=all
kafka-server-start etc/kafka/server.properties
Tip
These instructions are based on the assumption that you are installing Confluent Platform by using ZIP or TAR archives. For more information, see Install Confluent Platform On-Premises.
Once you start the Confluent Server broker, you should see the following in server.log:
with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)
To verify that the server keystore and truststore are configured correctly, run the following openssl s_client command:

openssl s_client -debug -connect localhost:9093 -tls1

Note: for this command to succeed, TLSv1 must be listed under ssl.enabled.protocols. With the default ssl.enabled.protocols of TLSv1.2,TLSv1.3, use -tls1_2 or -tls1_3 instead of -tls1.
In the output of this command you should see the server certificate:
-----BEGIN CERTIFICATE-----
{variable sized random bytes}
-----END CERTIFICATE-----
subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=Joe Smith
issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com
For details, see Debugging SSL/TLS Connections [Java Documentation].
If the certificate is not returned by the openssl command, or if there are other error messages, your keys or certificates are not set up correctly. Review your configurations.
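You can automate the "did a certificate come back?" check by scanning the openssl s_client output for a PEM block. A minimal sketch; the sample output below is abbreviated and hypothetical:

```python
import re

# Abbreviated, hypothetical openssl s_client output.
sample_output = """\
-----BEGIN CERTIFICATE-----
MIIB...base64-encoded certificate data...
-----END CERTIFICATE-----
subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=Joe Smith
issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka
"""

# A successful handshake includes the server certificate as a PEM block.
pem = re.search(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
    sample_output,
    re.DOTALL,
)
print("certificate returned" if pem else "no certificate: check keystore/truststore")
# certificate returned
```

In practice you would feed this the captured stdout of the openssl command (for example, via subprocess) instead of a sample string.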