Configuring security¶
The deployment example steps use SASL/PLAIN security (SASL with plain text authentication). This allows you to deploy components quickly for testing in a development environment. You need greater security for your production environment. The ascending authentication/encryption levels supported for a production environment are listed below:
- SASL with PLAIN authentication (default)
- SASL/SSL with PLAIN authentication
- TLS one-way (server) authentication
- TLS two-way (client) authentication
- TLS all-way authentication
Note
The Helm bundle contains example YAML files that show you how to construct the parameters for each security type. Scripts that automate the manual configuration steps are also provided.
This document provides instructions for configuring security. For security concepts, see the Confluent topic Security.
Tip
See Configuring ACLs to enable Access Control Lists (ACLs).
High-level workflow¶
Security in a Confluent Platform cluster revolves around the security configured for the Apache Kafka® brokers. For this reason, the workflow for setting up security starts with configuring security for the Kafka broker cluster, making sure it works, and then layering on additional security for the remaining components. The following list provides the typical order for adding security to your cluster:
- Configure Kafka broker security and validate accessibility.
- Configure security for applicable Confluent Platform components and validate accessibility.
- Configure security for external clients and validate accessibility.
Tip
One way to set up production-level security is to add and then validate increased security layered on top of a SASL/PLAIN development environment. Once this is configured properly, you can duplicate these settings when deploying your production environment.
Certificates and keys¶
The following describes certificates, keys, and Subject Alternative Name (SAN) requirements.
Certificates¶
Depending on the security configuration in place, you will likely need to generate certificate keys for each direction of communication and for access by remote clients. For example, a one-way (server) certificate must be created for access to a Kafka broker (server). For two-way communication, both a server and a client certificate are required. You can use the same certificate for both client and server authentication.
cacerts must be configured to trust the Kafka broker certificate when TLS is enabled. Generally, the cacerts section of the security configuration contains all Certificate Authority (CA) keys to allow connections from trusted remote clients. The instructions in Kafka cluster security show how these certificate keys are structured in the <provider>.yaml file.
pem files¶
The following describe the certificate configurations.
- cacerts provides a list of certificates issued by a Certificate Authority (CA). A broker or client trusts any certificate signed by the CA. The certificate keys are placed in the <provider>.yaml component locations shown in the examples.
- fullchain provides a CA key, an intermediate CA key (if available), and a TLS certificate key. These are placed, in order, in the <provider>.yaml component locations shown in the examples.
- privkey contains a private key. The private key is placed in the <provider>.yaml component locations shown in the examples.
You can pass the .pem key contents when installing components, instead of manually adding the keys to the <provider>.yaml file. An example Helm install command for Kafka with TLS two-way security enabled is shown below:
helm install \
-f ./providers/gcp.yaml \
--name kafka \
--namespace operator \
--set kafka.enabled=true \
--set kafka.tls.enabled=true \
--set kafka.metricReporter.tls.enabled=true \
--set kafka.tls.cacerts="$(cat /tmp/client-ca.pem)" \
--set kafka.tls.fullchain="$(cat /tmp/server-bundle.pem)" \
--set kafka.tls.privkey="$(cat /tmp/server-key.pem)" \
./confluent-operator
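Because the install command above embeds the PEM contents with `$(cat ...)`, a malformed or empty file fails silently until the brokers reject the TLS handshake. A minimal sketch of a pre-flight check (the `pem_ok` helper is hypothetical, not part of the bundle; the paths are the ones used in the command above):

```shell
# Hypothetical helper: verify a file contains at least one PEM-encoded block
# before its contents are passed to helm via --set "$(cat ...)".
pem_ok() {
  grep -q -- '-----BEGIN ' "$1" && grep -q -- '-----END ' "$1"
}

# Usage with the paths from the install command above:
# pem_ok /tmp/client-ca.pem     || echo "client-ca.pem is not a PEM file"
# pem_ok /tmp/server-bundle.pem || echo "server-bundle.pem is not a PEM file"
# pem_ok /tmp/server-key.pem    || echo "server-key.pem is not a PEM file"
```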
SAN attributes¶
When creating certificates, make sure to configure the Subject Alternative Name (SAN) attribute. This allows a single certificate to support multiple hosts. The following show different ways to configure this attribute.
External access¶
The following SAN examples are based on the properties below:
## Kafka Cluster
##
kafka:
  name: kafka
  replicas: 3
  resources:
    requests:
      cpu: 200m
      memory: 1Gi
  loadBalancer:
    enabled: true
    domain: "mydevplatform.gcp.cloud"
For the example above:
- The SAN attribute is *.mydevplatform.gcp.cloud if a wildcard certificate can be used.
- If no wildcard certificate is allowed, the individual SAN attributes are based on the Kafka broker prefix/number and component name. For example:

  b0.mydevplatform.gcp.cloud
  b1.mydevplatform.gcp.cloud
  b2.mydevplatform.gcp.cloud
  kafka.mydevplatform.gcp.cloud
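The per-broker hostname pattern above can be sketched as a small helper that enumerates the SAN entries from the replica count and domain (the `broker_sans` function and the `b<n>` prefix are taken from the example; this is an illustrative sketch, not a tool in the bundle):

```shell
# Sketch: enumerate the SAN hostnames needed when no wildcard certificate
# is allowed, using the broker prefix/number pattern from the example.
broker_sans() {
  replicas="$1"; domain="$2"
  i=0
  while [ "$i" -lt "$replicas" ]; do
    echo "b${i}.${domain}"     # one entry per broker
    i=$((i + 1))
  done
  echo "kafka.${domain}"       # plus the component name entry
}

broker_sans 3 mydevplatform.gcp.cloud
```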
Internal access only¶
The example below shows the SAN attribute examples for a Kafka cluster deployed without external access, using the defaults with the example name kafka and the namespace operator:
*.operator.svc.cluster.local
*.kafka.operator.svc.cluster.local
Kafka cluster security¶
The following provides security configuration information for Kafka.
Advertised listeners¶
Advertised listener configurations for Kafka brokers deployed by Confluent Operator are as follows:
- Port 9071 - Internal clients running inside the Kubernetes cluster.
- Port 9072 - Inter-broker communication.
- Port 9092 - External clients running outside the Kubernetes cluster.
When an external load balancer is configured, the list of Kafka services and listener ports resembles the following:
kafka ClusterIP None <none> 9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP 21h
kafka-0-internal ClusterIP 10.47.247.181 <none> 9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP 21h
kafka-0-lb LoadBalancer 10.47.245.192 192.50.14.35 9092:32183/TCP 21h
kafka-1-internal ClusterIP 10.47.251.31 <none> 9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP 21h
kafka-1-lb LoadBalancer 10.47.251.8 192.50.28.28 9092:31028/TCP 21h
kafka-2-internal ClusterIP 10.47.242.124 <none> 9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP 21h
kafka-2-lb LoadBalancer 10.47.250.236 192.50.64.18 9092:32636/TCP 21h
kafka-bootstrap-lb LoadBalancer 10.47.250.151 192.50.34.20 9092:30840/TCP 21h
Security properties¶
Security for each component is configured in the <provider>.yaml file based on properties provided in the values.yaml file for each component. For example, here are the unmodified, default properties for the Kafka component in the <provider>.yaml file:
tls:
  enabled: false
  fullchain: |-
  privkey: |-
  cacerts: |-
This shows that TLS is disabled and that the cluster is using the default SASL/PLAIN security configuration.
SASL/PLAIN¶
SASL with PLAIN authentication (SASL/PLAIN) is the default security applied if you do not make any security modifications to the default YAML files provided by Confluent. SASL/PLAIN should only be used for development environments or for inter-broker or broker-to-ZooKeeper connections.
When SASL/PLAIN is configured, the advertised listener ports are configured with the following protocols:
- Port 9071 - SASL/PLAIN
- Port 9072 - SASL/PLAIN
- Port 9092 - SASL/PLAIN
You enter a username and password for internal/external component access with SASL/PLAIN. This is set up in the global: sasl: section of the <provider>.yaml file as shown below:
global:
  ... omitted
  sasl:
    plain:
      username: <username>
      password: <password>
SASL/SSL with PLAIN authentication¶
When SASL/SSL is configured, the advertised listener ports are configured with the following protocols:
- External listener: SASL_SSL, port 9092
- Internal listener: SASL_PLAINTEXT, port 9071
- Inter-broker communication: SASL_PLAINTEXT, port 9072
Internal Clients

- All Confluent components/clients running inside the Kubernetes cluster communicate with the Kafka cluster using the SASL_PLAINTEXT mechanism.
- All Confluent components/clients must configure the Kafka endpoints as kafka:9071.
- You enter a SASL/PLAIN username and password in the following <provider>.yaml section for connections over internal ports 9071 and 9072:

  global:
    ... omitted
    sasl:
      plain:
        username: <username>
        password: <password>
External Clients

- Configure the load balancer DNS using a wildcard certificate in the format CN=*.<DOMAIN_NAME>.
- Clients communicate with the SASL_SSL mechanism on port 9092.
- Clients use the Kafka broker endpoint format kafka-<DOMAIN_NAME>:9092.
- You add certificates in the tls section for each component as shown in the example below:

  tls:
    enabled: true
    authentication:
      type: "plain"
    fullchain: |-
      -----BEGIN CERTIFICATE-----
      MIIDojCCAoqgAwIBAgIUP7i11Hoa6d1fDRT2LlTm307kkQ4wDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      afaRd3LdUKjlLD1rMlyHyL1dwvmdk7duNuEFVbEavFaItIDv9YHJYVXUU+HraLk/
      deV0pNzJwC651CXsW76y1QEVWxh4mw
      ... omitted
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIEBDCCAuygAwIBAgIUJjq3xmbvuLux+5KL4HPZcXeX1CwwDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      w6Jq+K9cV+WsgyBnraqrZZGJ/MwzLyEAEXKdNLAM8Zzw2b9nd50uwmK+WZN3do9N
      YVub8e1qjBG/N95da29XH1iJnhcLHkuz
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpQIBAAKCAQEAwjCQDpZwnPumRqYAkUzAVf9J8bVYZC52iH7lyZXL4Yc7DK+b
      UytMH979UXMqzcgsPMnWULIfMTQJ0o4FWgujsGsH0iyzegO6deQXBPBWzRQmtFJG
      +QGE+sKc26HZyZwgbrpTUkKIDkOd3XDZmpi+Lcaw8m1zSEM6ww+wg+gqbJO6MQ+7
      17EXU/66VnF5+P0wcNAButK8kAkWAe1rKKsjMm8pLe3E41LGF8G5gXI
      ... omitted
      -----END RSA PRIVATE KEY-----
TLS one-way server authentication¶
When TLS one-way server authentication is configured, the advertised listener ports are configured with the following protocols:
- External listener: SSL, port 9092
- Internal listener: PLAINTEXT, port 9071
- Inter-broker communication: PLAINTEXT, port 9072
Internal Clients

- All Confluent components/clients running inside the Kubernetes cluster communicate with the Kafka cluster through the PLAINTEXT mechanism.
- All Confluent components/clients must configure the Kafka endpoints as kafka:9071.
- The example below shows the TLS entries for this security configuration:

  tls:
    internal: false
    enabled: true
    bootstrapEndpoint: kafka:9071
External Clients

- Configure the load balancer DNS using a wildcard certificate in the format CN=*.<DOMAIN_NAME>.
- Clients communicate with the SSL mechanism on port 9092.
- Clients use the Kafka broker endpoint format kafka-<DOMAIN_NAME>:9092.
- You add certificates in the tls section for each component as shown in the example below:

  tls:
    enabled: true
    fullchain: |-
      -----BEGIN CERTIFICATE-----
      MIIDojCCAoqgAwIBAgIUP7i11Hoa6d1fDRT2LlTm307kkQ4wDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDIwNDUxMDBaFw0yNDAyMDEwNDUxMDBaMGkx
      ... omitted
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDIwNDUxMDBaFw0yNDAyMDEwNDUxMDBaMGkx
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpAIBAAKCAQEA7heoOiJpwzzqOLb8elUJlQWDqWjvPW0NagTlmp8uHmOLGaB6
      lTppt9grn8ERy6qX+l6EbT452PyeKwA34wwae42rn9rgDY0v/0eFDJa0Wnht8FHE
      /CiwUscuVinH1S28KJ6B0xXMUe9r+XcR+h70/QmFhnj+SQf77tzoopiOzvZLI4+s
      8OqSueDFkvo7VJtFVkIGZflwKRooeVxkB/IU3xoSftq9zH1hsDvWohhk2/Mforh1
      ... omitted
      -----END RSA PRIVATE KEY-----
TLS two-way client (mutual) authentication¶
When TLS two-way authentication is configured, the advertised listener ports are configured with the following protocols:
- External listener: SSL, port 9092
- Internal listener: PLAINTEXT, port 9071
- Inter-broker communication: PLAINTEXT, port 9072
Components and Clients

- Configure the load balancer DNS using a wildcard certificate in the format CN=*.<DOMAIN_NAME>.
- Clients communicate using the SSL mechanism on port 9092.
- Clients use the Kafka broker endpoint format kafka-<DOMAIN_NAME>:9092.
- All Confluent components running inside the Kubernetes cluster communicate with Kafka brokers using the SSL mechanism with TLS authentication.
- All components/clients must configure Kafka endpoints in the format kafka-<DOMAIN_NAME>:9092.
- The example below shows the TLS entries for this security configuration:

  tls:
    enabled: true
    internal: true
    authentication:
      type: "tls"
    bootstrapEndpoint: kafka.operator.svc.cluster.local:9092
You add certificates in the tls section for each component as shown in the example below:

  tls:
    enabled: true
    authentication:
      type: "tls"
    cacerts: |-
      -----BEGIN CERTIFICATE-----
      MIIDojCCAoqgAwIBAgIUP7i11Hoa6d1fDRT2LlTm307kkQ4wDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDIwNDUxMDBaFw0yNDAyMDEwNDUxMDBaMGkx
      ... omitted
      -----END CERTIFICATE-----
    fullchain: |-
      -----BEGIN CERTIFICATE-----
      MIIEQzCCAyugAwIBAgIUZQom3lnhPEaSCdFcPXzZK3ghN3cwDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDcyMjQzMDBaFw0yNDAyMDYyMjQzMDBaMC4x
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpAIBAAKCAQEA7heoOiJpwzzqOLb8elUJlQWDqWjvPW0NagTlmp8uHmOLGaB6
      lTppt9grn8ERy6qX+l6EbT452PyeKwA34wwae42rn9rgDY0v/0eFDJa0Wnht8FHE
      /CiwUscuVinH1S28KJ6B0xXMUe9r+XcR+h70/QmFhnj+SQf77tzoopiOzvZLI4+s
      8OqSueDFkvo7VJtFVkIGZflwKRooeVxkB/IU3xoSftq9zH1hsDvWohhk2/Mforh1
      ... omitted
      -----END RSA PRIVATE KEY-----
TLS all-way client authentication¶
When TLS all-way authentication is configured, the advertised listener ports are configured with the following protocols:
- External listener: SSL, port 9092
- Internal listener: PLAINTEXT, port 9071
- Inter-broker communication: PLAINTEXT, port 9072
Note
All Confluent Platform components use the same certificate for client/server authentication in this security configuration.
Components and Clients

- Configure the load balancer DNS using a wildcard certificate in the format CN=*.<DOMAIN_NAME>.
- Clients communicate using the SSL mechanism on port 9092.
- Clients use the Kafka broker endpoint format kafka-<DOMAIN_NAME>:9092.
- All Confluent components running inside the Kubernetes cluster communicate with Kafka brokers using the SSL mechanism.
- All components/clients must configure Kafka endpoints in the format kafka-<DOMAIN_NAME>:9092.
- The example below shows the TLS entries for this security configuration:

  tls:
    enabled: true
    internal: true
    bootstrapEndpoint: kafka.operator.svc.cluster.local:9092
You add certificates in the tls section for each component as shown in the example below:

  tls:
    enabled: true
    cacerts: |-
      -----BEGIN CERTIFICATE-----
      MIIDojCCAoqgAwIBAgIUP7i11Hoa6d1fDRT2LlTm307kkQ4wDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDIwNDUxMDBaFw0yNDAyMDEwNDUxMDBaMGkx
      ... omitted
      -----END CERTIFICATE-----
    fullchain: |-
      -----BEGIN CERTIFICATE-----
      MIIEQzCCAyugAwIBAgIUZQom3lnhPEaSCdFcPXzZK3ghN3cwDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDcyMjQzMDBaFw0yNDAyMDYyMjQzMDBaMC4x
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpAIBAAKCAQEA7heoOiJpwzzqOLb8elUJlQWDqWjvPW0NagTlmp8uHmOLGaB6
      lTppt9grn8ERy6qX+l6EbT452PyeKwA34wwae42rn9rgDY0v/0eFDJa0Wnht8FHE
      /CiwUscuVinH1S28KJ6B0xXMUe9r+XcR+h70/QmFhnj+SQf77tzoopiOzvZLI4+s
      8OqSueDFkvo7VJtFVkIGZflwKRooeVxkB/IU3xoSftq9zH1hsDvWohhk2/Mforh1
      ... omitted
      -----END RSA PRIVATE KEY-----
Component and client security¶
The following sections provide information about setting up Confluent Platform component and client security.
Client security¶
All client security configurations depend on Kafka security dependencies in component configurations. Refer to the following when configuring clients:
- If the Kafka cluster is using TLS one-way server authentication, configure the client as described in Encryption with SSL.
- If the Kafka cluster is using TLS two-way or all-way client authentication, configure the client as described in Encryption and Authentication with SSL.
- If the Kafka cluster is using SASL/SSL with PLAIN authentication, configure the client as described in Encryption and Authentication with SASL.
Client configuration information can also be retrieved from the Kafka cluster using the following commands (using the example name kafka running on the namespace operator).
- For internal clients (running inside the Kubernetes cluster):

  kubectl get kafka kafka -ojsonpath='{.status.internalClient}' -n operator

- For external clients (running outside the Kubernetes cluster), only available if the Kafka cluster is deployed for external access:

  kubectl get kafka kafka -ojsonpath='{.status.externalClient}' -n operator
The following provide configuration dependencies and additional component security requirements.
Important
The following are general guidelines and do not provide all security
dependencies. Review the component values.yaml
files before configuring
security. There are extensive comments that provide additional information
about component dependencies.
Kafka security dependencies in component configurations¶
Cluster components have a security dependency structure based on the security parameters created when setting up Kafka security. For example, if you configured TLS all-way client authentication for the Kafka brokers, the configuration parameters in the Replicator security section would resemble the following:
replicator:
  name: replicator
  replicas: 2
  dependencies:
    kafka:
      tls:
        enabled: true (Note 1)
        internal: true (Note 2)
        authentication:
          type: "tls" (Note 3)
      brokerCount: 3
      bootstrapEndpoint: kafka.operator.svc.cluster.local:9092 (Note 4)
Kafka dependency notes identified above:
- Note 1: If true, the destination Kafka cluster is running in secure mode. If false, it is running in insecure mode.
- Note 2: If true, all communication to the destination Kafka cluster is encrypted. If false, communication to the destination Kafka cluster is not encrypted.
- Note 3: The supported authentication types are tls and plain.
- Note 4: The destination bootstrap endpoint for the Kafka cluster. If internal: false, always use kafka:9071 or kafka.operator.svc.cluster.local:9071 (for a Kafka cluster with the name kafka deployed on namespace operator). If enabled: true and internal: true, configure using an internal or external domain name on port 9092. For example, kafka.<providerdomain>:9092. Bootstrap values are defined as follows:
  - kafka.operator.svc.cluster.local:9092: The Kafka cluster is not accessible outside the Kubernetes cluster.
  - kafka.<providerdomain>:9092: The Kafka cluster is accessible outside the Kubernetes cluster.
  - <myprefix>.<providerdomain>:9092: The Kafka cluster is accessible outside the Kubernetes cluster using <myprefix>.<providerdomain>.
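The Note 4 rules above can be sketched as a small helper. This is an illustrative sketch only, assuming the example cluster name kafka and namespace operator; it is not part of the Helm bundle:

```shell
# Sketch of the Note 4 bootstrap endpoint rules for a cluster named "kafka"
# in the "operator" namespace (both names from the examples above).
bootstrap_endpoint() {
  internal="$1"; domain="$2"
  if [ "$internal" = "true" ]; then
    # encrypted: an internal or external domain name on port 9092
    echo "kafka.${domain}:9092"
  else
    # unencrypted: the internal service name on port 9071
    echo "kafka.operator.svc.cluster.local:9071"
  fi
}

bootstrap_endpoint true mydevplatform.gcp.cloud
bootstrap_endpoint false unused
```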
Note
Make sure to create the appropriate SAN attributes for certificates.
Confluent Control Center dependencies¶
Control Center has several dependencies, given its role as the graphical user
interface to Confluent Platform. The following shows the values.yaml
file dependencies
that may need to be used, based on how Kafka security and other component
security parameters are configured.
dependencies:
  ## Kafka cluster C3 uses internally
  ##
  c3KafkaCluster:
    tls:
      ## If true, TLS configuration is enabled
      ##
      enabled: false
      ## Supported authentication types: plain, tls
      ##
      authentication:
        type: ""
      ## If true, inter-communication will be encrypted if TLS is enabled. The bootstrapEndpoint will have an FQDN name.
      ## If false, the security setting is configured to use either SASL_PLAINTEXT or PLAINTEXT
      internal: false
      ## If above tls.internal is true, configure with the Kafka bootstrap DNS configuration on port 9092, e.g. <kafka.name>.<domain>:9092
      ## If above tls.internal is false, configure with the Kafka service name on port 9071, e.g. <kafka.name>:9071, or the FQDN of the Kafka service name, e.g. <name>.<namespace>.svc.cluster.local:9071
      bootstrapEndpoint: ""
    ## Broker initial count configuration
    ##
    brokerCount: 3
    ## Zookeeper service name with port 2181, e.g. zookeeper:2181
    ##
    zookeeper:
      endpoint: ""
  ## C3 monitoring clusters
  ##
  monitoringKafkaClusters: []
  # monitoringKafkaClusters:
  # - name: kafka-test
  #   tls:
  #     enabled: true
  #     internal: false
  #     authentication:
  #       type: plain
  #     bootstrapEndpoint: "kafka-prod1:9071"
  #   ## Use if the destination Kafka cluster has a different username/password
  #   ## than the global username/password configured for the SASL security protocol.
  #   username: test-demo
  #   password: test-demo-password
  # - name: kafka-test1
  #   tls:
  #     enabled: true
  #     internal: false
  #     bootstrapEndpoint: "kafka-prod2:9071"
  ## KSQL configurations
  ##
  ksql:
    enabled: false
    # this one should be reachable from the machine where C3 is installed (use the internal ksql service name with port 9088, e.g. http://ksql:9088)
    url: ""
    # this one should be reachable from the machine where the browser using C3 is running
    # e.g. http|s://schemaregistry.example.com:|8088|8443
    advertisedUrl: ""
    tls:
      enabled: false
      authentication:
        type: ""
  ##
  ## SchemaRegistry Configurations
  schemaRegistry:
    enabled: false
    ## e.g. http|s://schemaregistry:8443|8083
    ##
    url: ""
    tls:
      enabled: false
      authentication:
        type: ""
  ## Connect cluster configurations
  ##
  connectCluster:
    enabled: false
    ## connect cluster endpoint
    ## http|s://<connector-svc-name>:8083|8443
    url: ""
    tls:
      enabled: false
      authentication:
        type: ""
    # ZooKeeper connection string for the Kafka cluster backing the Connect cluster.
    # If unspecified, falls back to the c3KafkaCluster's zookeeper configurations
    zookeeperEndpoint: ""
    # Bootstrap servers for the Kafka cluster backing the Connect cluster
    # If unspecified, falls back to the c3KafkaCluster's bootstrapEndpoint configurations
    kafkaBootstrapEndpoint: ""
  timeout: 15000
Other component dependencies¶
Other components have additional dependencies which depend on the Kafka security
configuration and how the Kafka security dependencies in component configurations are configured. Extensive
comments are supplied in the component YAML files. Review each component
values.yaml
file to better understand the dependencies that may need to be
used for each component. The following provides general information about each
dependency.
Schema Registry parameters: Adding these security parameters is based on the Kafka security configuration and whether different endpoint security is required for Schema Registry. Read the comments in the component values.yaml file for additional details.

  ## Schema Registry Configuration
  schemaRegistry:
    enabled: false
    ## Example http|s://schemaregistry:8443|8083
    ##
    url: ""
    tls:
      enabled: false
      authentication:
        type: ""

Producer parameters: Adding these security parameters is based on the Kafka security configuration and whether different endpoint security is required for producers. Read the comments in the component values.yaml file for additional details.

  producer:
    tls:
      enabled: false
      internal: false
      authentication:
        type: ""
      bootstrapEndpoint: ""

Consumer parameters: Adding these security parameters is based on the Kafka security configuration and whether different endpoint security is required for consumers. Read the comments in the component values.yaml file for additional details.

  consumer:
    tls:
      enabled: false
      internal: false
      authentication:
        type: ""
      bootstrapEndpoint: ""

Monitoring Interceptor parameters: Adding these security parameters is based on the Kafka security configuration and whether different endpoint security is required for interceptors. Read the comments in the component values.yaml file for additional details.

  interceptor:
    ## Enable Interceptor
    enabled: false
    ## Period the interceptor should use to publish messages, in milliseconds
    ##
    publishMs: 15000
    ## Producer Interceptor
    ##
    producer:
      tls:
        enabled: false
        internal: false
        authentication:
          type: ""
        bootstrapEndpoint: ""
    ## Consumer Interceptor
    ##
    consumer:
      tls:
        enabled: false
        internal: false
        authentication:
          type: ""
        bootstrapEndpoint: ""
Replicator parameters¶
The source/destination Kafka cluster security for replication is set up through the REST endpoint for Replicator. Use the following commands to find the correct security configuration for the source/destination Kafka cluster.
- Kafka cluster (source/destination) deployed with external access on the namespace operator for kafka-src and kafka-dest:

  kubectl get kafka kafka-dest -ojsonpath='{.status.externalClient}' -n operator

  This is only available if the Kafka cluster kafka-dest is deployed for replication and external access.

  kubectl get kafka kafka-src -ojsonpath='{.status.externalClient}' -n operator

  This is only available if the Kafka cluster kafka-src is deployed for replication and external access.

- Kafka cluster (source/destination) deployed for replication with internal access only on the namespace operator for kafka-src and kafka-dest:

  kubectl get kafka kafka-dest -ojsonpath='{.status.internalClient}' -n operator
  kubectl get kafka kafka-src -ojsonpath='{.status.internalClient}' -n operator
Change the bootstrap endpoint if the source/destination Kafka cluster is running in a different namespace. You need to provide the full internal bootstrap DNS name for the Kafka cluster. For example, kafka-src.operator.svc.cluster.local:9071 and kafka-dest.operator.svc.cluster.local:9071.
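The internal bootstrap DNS naming above follows the standard Kubernetes pattern <name>.<namespace>.svc.cluster.local. A minimal sketch (the `internal_bootstrap` helper is hypothetical, for illustration only):

```shell
# Sketch: build the full internal bootstrap DNS name for a Kafka cluster
# from its name and namespace, on the internal client port 9071.
internal_bootstrap() {
  name="$1"; namespace="$2"
  echo "${name}.${namespace}.svc.cluster.local:9071"
}

internal_bootstrap kafka-src operator
internal_bootstrap kafka-dest operator
```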
Refer to the following for more information about Replicator security:
- TLS (1 way) : Confluent Replicator (Encryption with SSL).
- TLS (2 way) : Confluent Replicator (Encryption and Authentication with SSL).
- SASL/SSL with PLAIN: Confluent Replicator (Encryption and Authentication with SASL).
Note
Monitoring interceptors are configured through the REST endpoint for Replicator as described above.
Control Center authentication¶
Confluent Control Center supports a REST endpoint on HTTPS over port 9021. Confluent Control Center supports either basic or LDAP authentication. Basic and LDAP authentication are configured through the YAML file as shown below:
## C3 authentication
##
auth:
  basic:
    enabled: false
    ##
    ## map with key as user and value as password and role
    property: {}
    # property:
    #   admin: Developer1,Administrators
    #   disallowed: no_access
  ldap:
    enabled: false
    nameOfGroupSearch: c3users
    property: {}
    ... omitted property list
Configuring ACL Authorization for Operator¶
You modify kafka/values.yaml to deploy Kafka with ACLs. This file is available in the extracted bundle from Confluent at the path /helm/confluent-operator/charts/kafka. The example configures a superuser named test.
Set the following in the Kafka values.yaml
section.
options:
  acl:
    enabled: true
    # value for super.users server property, takes form User:UserName;User:UserTwo;
    supers: "User:test"
In the example above, the test user is the superuser that all components use when communicating with the Kafka cluster. This is required so that components can create internal topics. If you do not set a superuser, a message similar to the following is displayed:
[INFO] 2019-04-11 23:03:16,560 [kafka-request-handler-1] kafka.authorizer.logger logAuditMessage - Principal = User:test is Denied Operation = Describe from host = 10.20.4.73 on resource = Cluster:LITERAL:kafka-cluster
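The super.users value follows the form noted in the values.yaml comment above, User:UserName;User:UserTwo;. A minimal sketch of composing that string (the `make_supers` helper is hypothetical; the trailing semicolon follows the comment's form, though the single-user example above omits it):

```shell
# Sketch: compose the super.users property value from a list of user names,
# in the User:UserName;User:UserTwo; form described in the values.yaml comment.
make_supers() {
  out=""
  for u in "$@"; do
    out="${out}User:${u};"
  done
  echo "$out"
}

make_supers test
```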
SSL Security with ACL enabled¶
The user name is retrieved from a certificate. The Subject section of the certificate is the user name. For example, for the following certificate's subject (C=US, ST=CA, L=Palo Alto), the user name is configured as User:L=Palo Alto,ST=CA,C=US.
Note
- Any change to options:acl:supers in values.yaml triggers a Kafka cluster rolling upgrade.
- User:ANONYMOUS is the default user for internal clients and inter-broker communication.
Certificate:
  Data:
    Version: 3 (0x2)
    Serial Number:
      omitted...
  Signature Algorithm: sha256WithRSAEncryption
    Issuer: C=US, ST=Palo Alto, L=CA, O=Company, OU=Engineering, CN=TestCA
    Validity
      Not Before: Mar 28 16:37:00 2019 GMT
      Not After : Mar 26 16:37:00 2024 GMT
    Subject: C=US, ST=CA, L=Palo Alto
    Subject Public Key Info:
      Public Key Algorithm: rsaEncryption
        Public-Key: (2048 bit)
        Modulus:
          omitted...
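The subject-to-principal mapping shown above (the RDNs in reverse order, comma-separated, prefixed with User:) can be sketched as a tiny helper. This is a naive illustration that splits on ", " only; real DN parsing must handle escaped commas, and the `dn_to_principal` name is hypothetical:

```shell
# Sketch: derive the ACL principal from a certificate subject by reversing
# the RDN order, matching the example mapping above.
# Naive: assumes RDNs are separated by ", " with no escaped commas.
dn_to_principal() {
  printf 'User:'
  echo "$1" | awk -F', ' '{ for (i = NF; i >= 1; i--) printf "%s%s", $i, (i > 1 ? "," : "\n") }'
}

dn_to_principal "C=US, ST=CA, L=Palo Alto"
```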
For SSL configurations, the PLAINTEXT protocol is used for ports 9071 (internal clients) and 9072 (inter-broker communication). If Kafka is configured with TLS mutual authentication, the certificate user name is required in addition to User:ANONYMOUS. If the certificate user name is not configured, the following error is displayed:
[INFO] 2019-04-11 23:03:16,560 [kafka-request-handler-1] kafka.authorizer.logger logAuditMessage - Principal = User:L=Palo Alto,ST=CA,C=US is Denied Operation = Describe from host = 10.20.4.73 on resource = Cluster:LITERAL:kafka-cluster
Once ACLs are configured for Operator, follow the Authorization using ACLs instructions to enable ACL authorization for Kafka objects.