.. _co-security:

====================
Configuring security
====================

The :ref:`deployment example steps ` use SASL/PLAIN security (SASL with plain text
authentication). This allows you to get components deployed quickly for testing in a
development environment. You need greater security for your production environment. The
ascending authentication/encryption levels supported for a production environment are
listed below:

#. :ref:`SASL with PLAIN authentication (default) <co-sasl-plain>`
#. :ref:`SASL/SSL with PLAIN authentication <co-sasl-ssl>`
#. :ref:`TLS one-way (server) authentication <co-tls-one-way>`
#. :ref:`TLS two-way (client) authentication <co-tls-two-way>`
#. :ref:`TLS all-way authentication <co-tls-all-way>`

.. note:: The Helm bundle contains example YAML files that show you how to construct the
   parameters for each security type. Scripts that automate manual configuration steps
   are also provided.

This document provides instructions for configuring security. For security concepts, see
the Confluent topic :ref:`security`.

.. tip:: See :ref:`Configuring ACLs <co-configure-acls>` to enable **Access Control
   Lists (ACLs)**.

.. _co-high-level-security-workflow:

High-level workflow
-------------------

Security in a |cp| cluster revolves around the security configured for the |ak-tm|
brokers. For this reason, the workflow for setting up security starts with
:ref:`configuring security <co_kafka_security_config>` for the |ak| broker cluster,
making sure it works, and then layering on additional security for the remaining
components.

The following list provides the typical order for adding security to your cluster:

#. Configure |ak| broker security and validate accessibility.
#. Configure security for applicable |cp| components and validate accessibility.
#. Configure security for external clients and validate accessibility.

.. tip:: One way to set up production-level security is to add and then validate
   increased security layered on top of a SASL/PLAIN development environment. Once this
   is configured properly, you can duplicate these settings when deploying your
   production environment.

.. _co-certificates-keys:

Certificates and keys
---------------------

The following describes certificates, keys, and Subject Alternative Name (SAN)
requirements.

------------
Certificates
------------

Depending on the security configuration in place, you will likely need to generate
certificate keys for each direction of communication and for access by remote clients.
For example, one-way (server) certificates need to be created for access to a |ak|
broker (server). For two-way communication, both a server and a client certificate are
required. You can use the same certificate for both client and server authentication.

:ref:`cacerts <co-pem-files>` must be configured to trust the |ak| broker certificate
when TLS is enabled. Generally, the ``cacerts`` section of the security configuration
contains all Certificate Authority (CA) keys to allow connections from trusted remote
clients.

The instructions in :ref:`co_kafka_security_config` show how these certificate keys are
structured in the ```` file.

.. _co-pem-files:

---------
pem files
---------

The following define the certificate configurations:

* **cacerts** provides a list of certificates issued by a Certificate Authority (CA). A
  broker or client trusts any certificate signed by the CA. Certificate keys are placed
  in the ``.yaml`` component locations shown in the examples.
* **fullchain** provides a CA key, an intermediate CA key (if available), and a TLS
  certificate key. These are placed, in order, in the ``.yaml`` component locations
  shown in the examples.
* **privkey** contains a private key. The private key is placed in the ``.yaml`` file.
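The following is a minimal sketch of how these files can be generated with ``openssl``
for testing. The file names, subject values, and domain are illustrative only; in
production you would use certificates issued by your own CA:

::

  # Create a test CA (cacerts).
  openssl genrsa -out /tmp/ca-key.pem 2048
  openssl req -x509 -new -key /tmp/ca-key.pem -days 365 \
    -subj "/C=US/ST=CA/L=Palo Alto/O=Confluent/CN=TestCA" -out /tmp/ca.pem

  # Create a server key (privkey) and a CA-signed certificate with a wildcard SAN.
  openssl genrsa -out /tmp/server-key.pem 2048
  openssl req -new -key /tmp/server-key.pem \
    -subj "/CN=kafka.mydevplatform.gcp.cloud" -out /tmp/server.csr
  openssl x509 -req -in /tmp/server.csr -CA /tmp/ca.pem -CAkey /tmp/ca-key.pem \
    -CAcreateserial -days 365 \
    -extfile <(printf "subjectAltName=DNS:*.mydevplatform.gcp.cloud") \
    -out /tmp/server.pem

  # Concatenate the server certificate and the CA certificate to build fullchain.
  cat /tmp/server.pem /tmp/ca.pem > /tmp/server-bundle.pem

For a two-way (client) configuration, repeat similar steps with a client CA so that you
have a file to supply as ``cacerts`` (for example, the ``/tmp/client-ca.pem`` used in
the Helm command below).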
An example Helm install command for |ak| with :ref:`TLS two-way security <co-tls-two-way>`
enabled is shown below:

::

  helm install \
    -f ./providers/gcp.yaml \
    --name kafka \
    --namespace operator \
    --set kafka.enabled=true \
    --set kafka.tls.enabled=true \
    --set kafka.metricReporter.tls.enabled=true \
    --set kafka.tls.cacerts="$(cat /tmp/client-ca.pem)" \
    --set kafka.tls.fullchain="$(cat /tmp/server-bundle.pem)" \
    --set kafka.tls.privkey="$(cat /tmp/server-key.pem)" \
    ./confluent-operator

.. _co-san-attributes:

--------------
SAN attributes
--------------

When creating certificates, make sure to configure the Subject Alternative Name (SAN)
attribute. This allows a single certificate to support multiple hosts. The following
sections show different ways to configure this attribute.

External access
^^^^^^^^^^^^^^^

The following SAN examples are based on the properties below:

::

  ## Kafka Cluster
  ##
  kafka:
    name: kafka
    replicas: 3
    resources:
      requests:
        cpu: 200m
        memory: 1Gi
    loadBalancer:
      enabled: true
      domain: "mydevplatform.gcp.cloud"

For the example above:

* The SAN attribute is ``*.mydevplatform.gcp.cloud`` if a wildcard certificate can be
  used.
* If no wildcard certificate is allowed, the individual SAN attributes are based on the
  |ak| broker prefix/number and component name. For example:

  ::

    b0.mydevplatform.gcp.cloud
    b1.mydevplatform.gcp.cloud
    b2.mydevplatform.gcp.cloud
    kafka.mydevplatform.gcp.cloud

Internal access only
^^^^^^^^^^^^^^^^^^^^

The following shows example SAN attributes for a |ak| cluster deployed without external
access, using the defaults with the example name *kafka* and the namespace *operator*:

* ``*.operator.svc.cluster.local``
* ``*.kafka.operator.svc.cluster.local``

.. _co_kafka_security_config:

|ak| cluster security
---------------------

The following provides security configuration information for |ak|.

--------------------
Advertised listeners
--------------------

Advertised listener configurations for |ak| brokers deployed by |co-long| are as follows:

* **Port 9071** - Internal clients running inside the Kubernetes cluster.
* **Port 9072** - Inter-broker communication.
* **Port 9092** - External clients running outside the Kubernetes cluster.

When an :ref:`external load balancer ` is configured, the list of |ak| services and
listener ports resembles the following:

::

  kafka                ClusterIP      None            <none>         9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP   21h
  kafka-0-internal     ClusterIP      10.47.247.181   <none>         9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP   21h
  kafka-0-lb           LoadBalancer   10.47.245.192   192.50.14.35   9092:32183/TCP                                 21h
  kafka-1-internal     ClusterIP      10.47.251.31    <none>         9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP   21h
  kafka-1-lb           LoadBalancer   10.47.251.8     192.50.28.28   9092:31028/TCP                                 21h
  kafka-2-internal     ClusterIP      10.47.242.124   <none>         9071/TCP,9072/TCP,9092/TCP,7203/TCP,7777/TCP   21h
  kafka-2-lb           LoadBalancer   10.47.250.236   192.50.64.18   9092:32636/TCP                                 21h
  kafka-bootstrap-lb   LoadBalancer   10.47.250.151   192.50.34.20   9092:30840/TCP                                 21h

-------------------
Security properties
-------------------

Security for each component is configured in the ``.yaml`` file based on properties
provided in the ``values.yaml`` file for each component. For example, here are the
unmodified, default properties for the |ak| component in the ``.yaml`` file:

::

  tls:
    enabled: false
    fullchain: |-
    privkey: |-
    cacerts: |-

This shows that TLS is disabled and that the cluster is using the default SASL/PLAIN
security configuration.
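To confirm what was deployed, you can list the |ak| services and query the cluster
status. The following sketch assumes the example name *kafka* and the namespace
*operator*:

::

  # List the Kafka services and their listener ports.
  kubectl get services -n operator | grep kafka

  # Print the client security settings the cluster currently advertises for
  # internal clients (also described later in this document).
  kubectl get kafka kafka -ojsonpath='{.status.internalClient}' -n operator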
.. _co-sasl-plain:

SASL/PLAIN
^^^^^^^^^^

SASL with PLAIN authentication (SASL/PLAIN) is the default security applied if you do
not make any security modifications to the default YAML files provided by Confluent.
SASL/PLAIN should only be used for development environments or for inter-broker or
broker-to-|zk| connections.

When SASL/PLAIN is configured, the advertised listener ports are configured with the
following protocols:

* **Port 9071** - SASL/PLAIN
* **Port 9072** - SASL/PLAIN
* **Port 9092** - SASL/PLAIN

.. _co-global-sasl:

You enter a username and password for internal/external component access with
SASL/PLAIN. This is set up in the ``global: sasl:`` section in the ``.yaml`` file as
shown below:

::

  global:
    ... omitted
    sasl:
      plain:
        username:
        password:

.. _co-sasl-ssl:

SASL/SSL with PLAIN authentication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When SASL/SSL is configured, the advertised listener ports are configured with the
following protocols:

* External Listener: ``SASL_SSL``, PORT 9092
* Internal Listener: ``SASL_PLAINTEXT``, PORT 9071
* Inter-broker communication: ``SASL_PLAINTEXT``, PORT 9072

**Internal Clients**

* All Confluent components/clients running inside the Kubernetes cluster communicate
  with the |ak| cluster using the ``SASL_PLAINTEXT`` protocol.
* All Confluent components/clients must configure the |ak| endpoints as ``kafka:9071``.

You enter a SASL/PLAIN username and password in the following ``.yaml`` section for
connections over internal ports 9071 and 9072:

::

  global:
    ... omitted
    sasl:
      plain:
        username:
        password:

**External Clients**

* Configure the Load Balancer DNS using a wildcard certificate in the format ``CN=*.``.
* Clients communicate with the ``SASL_SSL`` protocol on port 9092.
* Clients use the |ak| broker endpoint format ``kafka-:9092``.

You add certificates in the ``tls`` section for each component as shown in the example
below:

::

  tls:
    enabled: true
    authentication:
      type: "plain"
    fullchain: |-
      -----BEGIN CERTIFICATE-----
      MIIDojCCAoqgAwIBAgIUP7i11Hoa6d1fDRT2LlTm307kkQ4wDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      afaRd3LdUKjlLD1rMlyHyL1dwvmdk7duNuEFVbEavFaItIDv9YHJYVXUU+HraLk/
      deV0pNzJwC651CXsW76y1QEVWxh4mw
      ... omitted
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIEBDCCAuygAwIBAgIUJjq3xmbvuLux+5KL4HPZcXeX1CwwDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      w6Jq+K9cV+WsgyBnraqrZZGJ/MwzLyEAEXKdNLAM8Zzw2b9nd50uwmK+WZN3do9N
      YVub8e1qjBG/N95da29XH1iJnhcLHkuz
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpQIBAAKCAQEAwjCQDpZwnPumRqYAkUzAVf9J8bVYZC52iH7lyZXL4Yc7DK+b
      UytMH979UXMqzcgsPMnWULIfMTQJ0o4FWgujsGsH0iyzegO6deQXBPBWzRQmtFJG
      +QGE+sKc26HZyZwgbrpTUkKIDkOd3XDZmpi+Lcaw8m1zSEM6ww+wg+gqbJO6MQ+7
      17EXU/66VnF5+P0wcNAButK8kAkWAe1rKKsjMm8pLe3E41LGF8G5gXI
      ... omitted
      -----END RSA PRIVATE KEY-----
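As an illustration, an external Java client for this configuration needs a truststore
that contains the CA certificate, plus the SASL credentials. The following is a sketch
only; the file names, passwords, placeholders, and bootstrap host are hypothetical and
must be replaced with your own values:

::

  # Trust the CA that signed the broker certificate (illustrative file names).
  keytool -importcert -trustcacerts -noprompt \
    -alias confluent-ca \
    -file /tmp/ca.pem \
    -keystore /tmp/client.truststore.jks \
    -storepass changeit

  # Client properties for SASL_SSL with PLAIN authentication.
  cat > /tmp/client-sasl-ssl.properties <<'EOF'
  security.protocol=SASL_SSL
  sasl.mechanism=PLAIN
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="<sasl-username>" password="<sasl-password>";
  ssl.truststore.location=/tmp/client.truststore.jks
  ssl.truststore.password=changeit
  EOF

  # Test against the external bootstrap endpoint on port 9092.
  kafka-console-producer --broker-list kafka.mydevplatform.gcp.cloud:9092 \
    --producer.config /tmp/client-sasl-ssl.properties --topic test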
.. _co-tls-one-way:

TLS one-way server authentication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When TLS one-way server authentication is configured, the advertised listener ports are
configured with the following protocols:

* External Listener: ``SSL``, PORT 9092
* Internal Listener: ``PLAINTEXT``, PORT 9071
* Inter-broker communication: ``PLAINTEXT``, PORT 9072

**Internal Clients**

* All Confluent components/clients running inside the Kubernetes cluster communicate
  with the |ak| cluster through the ``PLAINTEXT`` protocol.
* All Confluent components/clients must configure the |ak| endpoints as ``kafka:9071``.

The example below shows the ``tls`` entries for this security configuration.

::

  tls:
    internal: false
    enabled: true
    bootstrapEndpoint: kafka:9071

**External Clients**

* Configure the Load Balancer DNS using a wildcard certificate in the format ``CN=*.``.
* Clients communicate with the ``SSL`` protocol on port 9092.
* Clients use the |ak| broker endpoint format ``kafka-:9092``.

You add certificates in the ``tls`` section for each component as shown in the example
below:

::

  tls:
    enabled: true
    fullchain: |-
      -----BEGIN CERTIFICATE-----
      MIIDojCCAoqgAwIBAgIUP7i11Hoa6d1fDRT2LlTm307kkQ4wDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDIwNDUxMDBaFw0yNDAyMDEwNDUxMDBaMGkx
      ... omitted
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDIwNDUxMDBaFw0yNDAyMDEwNDUxMDBaMGkx
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpAIBAAKCAQEA7heoOiJpwzzqOLb8elUJlQWDqWjvPW0NagTlmp8uHmOLGaB6
      lTppt9grn8ERy6qX+l6EbT452PyeKwA34wwae42rn9rgDY0v/0eFDJa0Wnht8FHE
      /CiwUscuVinH1S28KJ6B0xXMUe9r+XcR+h70/QmFhnj+SQf77tzoopiOzvZLI4+s
      8OqSueDFkvo7VJtFVkIGZflwKRooeVxkB/IU3xoSftq9zH1hsDvWohhk2/Mforh1
      ... omitted
      -----END RSA PRIVATE KEY-----
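To enable this configuration at install time, you can pass the server certificate
material with ``--set`` flags, similar to the two-way Helm example earlier on this page.
The following is a sketch that assumes the same provider file and PEM file names;
because clients are not authenticated with certificates, ``cacerts`` is not supplied:

::

  helm install \
    -f ./providers/gcp.yaml \
    --name kafka \
    --namespace operator \
    --set kafka.enabled=true \
    --set kafka.tls.enabled=true \
    --set kafka.tls.fullchain="$(cat /tmp/server-bundle.pem)" \
    --set kafka.tls.privkey="$(cat /tmp/server-key.pem)" \
    ./confluent-operator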
.. _co-tls-two-way:

TLS two-way client (mutual) authentication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When TLS two-way authentication is configured, the advertised listener ports are
configured with the following protocols:

* External Listener: ``SSL``, PORT 9092
* Internal Listener: ``PLAINTEXT``, PORT 9071
* Inter-broker communication: ``PLAINTEXT``, PORT 9072

**Components and Clients**

* Configure the Load Balancer DNS using a wildcard certificate in the format ``CN=*.``.
* Clients communicate using the ``SSL`` protocol on port 9092.
* Clients use the |ak| broker endpoint format ``kafka-:9092``.
* All Confluent components running inside the Kubernetes cluster communicate with |ak|
  brokers using the ``SSL`` protocol with TLS authentication.
* All components/clients must configure |ak| endpoints in the format ``kafka-:9092``.

The example below shows the ``tls`` entries for this security configuration.

::

  tls:
    enabled: true
    internal: true
    authentication:
      type: "tls"
    bootstrapEndpoint: kafka.operator.svc.cluster.local:9092

You add certificates in the ``tls`` section for each component as shown in the example
below:

::

  tls:
    enabled: true
    authentication:
      type: "tls"
    cacerts: |-
      -----BEGIN CERTIFICATE-----
      MIIDojCCAoqgAwIBAgIUP7i11Hoa6d1fDRT2LlTm307kkQ4wDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDIwNDUxMDBaFw0yNDAyMDEwNDUxMDBaMGkx
      ... omitted
      -----END CERTIFICATE-----
    fullchain: |-
      -----BEGIN CERTIFICATE-----
      MIIEQzCCAyugAwIBAgIUZQom3lnhPEaSCdFcPXzZK3ghN3cwDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDcyMjQzMDBaFw0yNDAyMDYyMjQzMDBaMC4x
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpAIBAAKCAQEA7heoOiJpwzzqOLb8elUJlQWDqWjvPW0NagTlmp8uHmOLGaB6
      lTppt9grn8ERy6qX+l6EbT452PyeKwA34wwae42rn9rgDY0v/0eFDJa0Wnht8FHE
      /CiwUscuVinH1S28KJ6B0xXMUe9r+XcR+h70/QmFhnj+SQf77tzoopiOzvZLI4+s
      8OqSueDFkvo7VJtFVkIGZflwKRooeVxkB/IU3xoSftq9zH1hsDvWohhk2/Mforh1
      ... omitted
      -----END RSA PRIVATE KEY-----

.. _co-tls-all-way:

TLS all-way client authentication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When TLS all-way authentication is configured, the advertised listener ports are
configured with the following protocols:

* External Listener: ``SSL``, PORT 9092
* Internal Listener: ``PLAINTEXT``, PORT 9071
* Inter-broker communication: ``PLAINTEXT``, PORT 9072

.. note:: All |cp| components use the same certificate for client/server authentication
   in this security configuration.

**Components and Clients**

* Configure the Load Balancer DNS using a wildcard certificate in the format ``CN=*.``.
* Clients communicate using the ``SSL`` protocol on port 9092.
* Clients use the |ak| broker endpoint format ``kafka-:9092``.
* All Confluent components running inside the Kubernetes cluster communicate with |ak|
  brokers using the ``SSL`` protocol.
* All components/clients must configure |ak| endpoints in the format ``kafka-:9092``.

The example below shows the ``tls`` entries for this security configuration.

::

  tls:
    enabled: true
    internal: true
    bootstrapEndpoint: kafka.operator.svc.cluster.local:9092

You add certificates in the ``tls`` section for each component as shown in the example
below:

::

  tls:
    enabled: true
    cacerts: |-
      -----BEGIN CERTIFICATE-----
      MIIDojCCAoqgAwIBAgIUP7i11Hoa6d1fDRT2LlTm307kkQ4wDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDIwNDUxMDBaFw0yNDAyMDEwNDUxMDBaMGkx
      ... omitted
      -----END CERTIFICATE-----
    fullchain: |-
      -----BEGIN CERTIFICATE-----
      MIIEQzCCAyugAwIBAgIUZQom3lnhPEaSCdFcPXzZK3ghN3cwDQYJKoZIhvcNAQEL
      BQAwaTELMAkGA1UEBhMCVVMxEjAQBgNVBAgTCVBhbG8gQWx0bzELMAkGA1UEBxMC
      Q0ExEjAQBgNVBAoTCUNvbmZsdWVudDEUMBIGA1UECxMLRW5naW5lZXJpbmcxDzAN
      BgNVBAMTBlRlc3RDQTAeFw0xOTAyMDcyMjQzMDBaFw0yNDAyMDYyMjQzMDBaMC4x
      ... omitted
      -----END CERTIFICATE-----
    privkey: |-
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpAIBAAKCAQEA7heoOiJpwzzqOLb8elUJlQWDqWjvPW0NagTlmp8uHmOLGaB6
      lTppt9grn8ERy6qX+l6EbT452PyeKwA34wwae42rn9rgDY0v/0eFDJa0Wnht8FHE
      /CiwUscuVinH1S28KJ6B0xXMUe9r+XcR+h70/QmFhnj+SQf77tzoopiOzvZLI4+s
      8OqSueDFkvo7VJtFVkIGZflwKRooeVxkB/IU3xoSftq9zH1hsDvWohhk2/Mforh1
      ... omitted
      -----END RSA PRIVATE KEY-----

.. _co_component_security_config:

Component and client security
-----------------------------

The following sections provide information about setting up |cp| component and client
security.

---------------
Client security
---------------

All client security configurations depend on :ref:`co_kafka-dependencies`. Refer to the
following when configuring clients:

* If the |ak| cluster is using **TLS one-way server authentication**, configure the
  client as described in `Encryption with SSL `_.
* If the |ak| cluster is using **TLS two-way or all-way client authentication**,
  configure the client as described in `Encryption and Authentication with SSL `_.
* If the |ak| cluster is using **SASL/SSL with PLAIN authentication**, configure the
  client as described in `Encryption and Authentication with SASL `_.

Client configuration information can also be retrieved from the |ak| cluster using the
following commands (using the example name *kafka* running on the namespace *operator*):

* For internal clients (running inside the Kubernetes cluster):
  ``kubectl get kafka kafka -ojsonpath='{.status.internalClient}' -n operator``
* For external clients (running outside the Kubernetes cluster):
  ``kubectl get kafka kafka -ojsonpath='{.status.externalClient}' -n operator`` (only
  available if the |ak| cluster is deployed for external access).
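As an additional connectivity check from outside the Kubernetes cluster, you can run
``openssl s_client`` against the external bootstrap endpoint. This is a sketch only; the
host name and PEM file names are illustrative:

::

  # TLS one-way: verify the broker certificate chain.
  openssl s_client -connect kafka.mydevplatform.gcp.cloud:9092 \
    -CAfile /tmp/ca.pem </dev/null

  # TLS two-way or all-way: also present a client certificate and key.
  openssl s_client -connect kafka.mydevplatform.gcp.cloud:9092 \
    -CAfile /tmp/ca.pem -cert /tmp/client.pem -key /tmp/client-key.pem </dev/null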
The following sections provide configuration dependencies and additional component
security requirements.

.. important:: The following are general guidelines and do not provide all security
   dependencies. Review the component ``values.yaml`` files before configuring security.
   There are extensive comments that provide additional information about component
   dependencies.

.. _co_kafka-dependencies:

-------------------------------------------------------
|ak| security dependencies in component configurations
-------------------------------------------------------

Cluster components have a security dependency structure based on the security parameters
created when setting up :ref:`Kafka security <co_kafka_security_config>`. For example, if
you configured :ref:`co-tls-all-way` for the |ak| brokers, the configuration parameters
in the |crep| security section would resemble the following:

::

  replicator:
    name: replicator
    replicas: 2
    dependencies:
      kafka:
        tls:
          enabled: true                                             (Note 1)
          internal: true                                            (Note 2)
          authentication:
            type: "tls"                                             (Note 3)
        brokerCount: 3
        bootstrapEndpoint: kafka.operator.svc.cluster.local:9092    (Note 4)

**Kafka dependency notes identified above:**

* Note 1: If true, the destination |ak| cluster is running in secure mode. If false, it
  is running in insecure mode.
* Note 2: If true, all communication to the destination |ak| cluster is encrypted. If
  false, communication to the destination |ak| cluster is not encrypted.
* Note 3: The supported authentication types are ``tls`` and ``plain``.
* Note 4: The destination bootstrap endpoint for the |ak| cluster. If
  ``internal: false``, always use ``kafka:9071`` or
  ``kafka.operator.svc.cluster.local:9071`` (for a |ak| cluster with the name ``kafka``
  deployed on namespace ``operator``). If ``enabled: true`` and ``internal: true``,
  configure using an internal or external domain name on port ``9092``. For example,
  ``kafka.:9092``. Bootstrap values are defined as follows:

  - ``kafka.operator.svc.cluster.local:9092``: The |ak| cluster is not accessible
    outside the Kubernetes cluster.
  - ``kafka.:9092``: The |ak| cluster is accessible outside the Kubernetes cluster.
  - ``.:9092``: The |ak| cluster is accessible outside the Kubernetes cluster using
    ``.``.

.. note:: Make sure to create the appropriate :ref:`co-san-attributes` for certificates.
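If you install |crep| with Helm, the dependency values shown above can also be supplied
on the command line. The following is a sketch based on the example above; the release
name, provider file, and exact ``--set`` paths are assumptions that you should verify
against the |crep| ``values.yaml``:

::

  helm install \
    -f ./providers/gcp.yaml \
    --name replicator \
    --namespace operator \
    --set replicator.enabled=true \
    --set replicator.dependencies.kafka.tls.enabled=true \
    --set replicator.dependencies.kafka.tls.internal=true \
    --set replicator.dependencies.kafka.tls.authentication.type=tls \
    --set replicator.dependencies.kafka.brokerCount=3 \
    --set replicator.dependencies.kafka.bootstrapEndpoint=kafka.operator.svc.cluster.local:9092 \
    ./confluent-operator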
-----------------
|c3| dependencies
-----------------

|c3-short| has several dependencies, given its role as the graphical user interface to
|cp|. The following shows the ``values.yaml`` file dependencies that may need to be
used, based on how |ak| security and other component security parameters are configured.

::

  dependencies:
    ## Kafka cluster C3 uses internally ##
    c3KafkaCluster:
      tls:
        ## If true, TLS configuration is enabled ##
        enabled: false
        ## Supported authentication types: plain, tls ##
        authentication:
          type: ""
        ## If true, inter-communication will be encrypted if TLS is enabled. The bootstrapEndpoint will have an FQDN name.
        ## If false, the security setting is configured to use either SASL_PLAINTEXT or PLAINTEXT
        internal: false
      ## If above tls.internal is true, configure with the Kafka bootstrap DNS configuration on port 9092 e.g. .:9092
      ## If above tls.internal is false, configure with the Kafka service name on port 9071 e.g. :9071 or the FQDN of the Kafka service name e.g. ..svc.cluster.local:9071
      bootstrapEndpoint: ""
      ## Broker initial count configuration ##
      brokerCount: 3
      ## Zookeeper service name with port 2181 e.g. zookeeper:2181 ##
      zookeeper:
        endpoint: ""
    ## C3 monitoring clusters ##
    monitoringKafkaClusters: []
    # monitoringKafkaClusters:
    # - name: kafka-test
    #   tls:
    #     enabled: true
    #     internal: false
    #     authentication:
    #       type: plain
    #   bootstrapEndpoint: "kafka-prod1:9071"
    #   ## Use if the destination Kafka cluster has a different username/password
    #   ## than the global username/password configured for the SASL security protocol.
    #   username: test-demo
    #   password: test-demo-password
    # - name: kafka-test1
    #   tls:
    #     enabled: true
    #     internal: false
    #   bootstrapEndpoint: "kafka-prod2:9071"
    ## KSQL configurations ##
    ksql:
      enabled: false
      # this one should be reachable from the machine where C3 is installed (use the internal ksql service name with port 9088, e.g. http://ksql:9088)
      url: ""
      # this one should be reachable from the machine where the browser using C3 is running
      # e.g. http|s://ksql.example.com:8088|8443
      advertisedUrl: ""
      tls:
        enabled: false
        authentication:
          type: ""
    ## SchemaRegistry Configurations ##
    schemaRegistry:
      enabled: false
      ## e.g. http|s://schemaregistry:8443|8083 ##
      url: ""
      tls:
        enabled: false
        authentication:
          type: ""
    ## Connect cluster configurations ##
    connectCluster:
      enabled: false
      ## connect cluster endpoint
      ## http|s://:8083|8443
      url: ""
      tls:
        enabled: false
        authentication:
          type: ""
      # ZooKeeper connection string for the Kafka cluster backing the Connect cluster.
      # If unspecified, falls back to the c3KafkaCluster's zookeeper configurations
      zookeeperEndpoint: ""
      # Bootstrap servers for the Kafka cluster backing the Connect cluster.
      # If unspecified, falls back to the c3KafkaCluster's bootstrapEndpoint configurations
      kafkaBootstrapEndpoint: ""
    timeout: 15000
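When installing |c3-short| with Helm, these dependency values can be overridden the same
way. The following sketch assumes the chart exposes them under a ``controlcenter`` key
and that the |ak| cluster is reached over the internal endpoints; verify the exact paths
against the |c3-short| ``values.yaml``:

::

  helm install \
    -f ./providers/gcp.yaml \
    --name controlcenter \
    --namespace operator \
    --set controlcenter.enabled=true \
    --set controlcenter.dependencies.c3KafkaCluster.bootstrapEndpoint=kafka:9071 \
    --set controlcenter.dependencies.c3KafkaCluster.brokerCount=3 \
    --set controlcenter.dependencies.c3KafkaCluster.zookeeper.endpoint=zookeeper:2181 \
    ./confluent-operator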
----------------------------
Other component dependencies
----------------------------

Other components have additional dependencies that depend on the |ak| security
configuration and how the :ref:`co_kafka-dependencies` are configured. Extensive
comments are supplied in the component YAML files. Review each component ``values.yaml``
file to better understand the dependencies that may need to be used for each component.
The following provides general information about each dependency.

* **Schema Registry parameters**: Whether these security parameters are needed depends
  on the |ak| security configuration and on whether different endpoint security is
  required for |sr|. Read the comments in the component ``values.yaml`` file for
  additional details.

  ::

    ## Schema Registry Configuration
    schemaRegistry:
      enabled: false
      ## Example http|s://schemaregistry:8443|8083 ##
      url: ""
      tls:
        enabled: false
        authentication:
          type: ""

* **Producer parameters**: Whether these security parameters are needed depends on the
  |ak| security configuration and on whether different endpoint security is required for
  producers. Read the comments in the component ``values.yaml`` file for additional
  details.

  ::

    producer:
      tls:
        enabled: false
        internal: false
        authentication:
          type: ""
      bootstrapEndpoint: ""

* **Consumer parameters**: Whether these security parameters are needed depends on the
  |ak| security configuration and on whether different endpoint security is required for
  consumers. Read the comments in the component ``values.yaml`` file for additional
  details.

  ::

    consumer:
      tls:
        enabled: false
        internal: false
        authentication:
          type: ""
      bootstrapEndpoint: ""

* **Monitoring Interceptor parameters**: Whether these security parameters are needed
  depends on the |ak| security configuration and on whether different endpoint security
  is required for interceptors. Read the comments in the component ``values.yaml`` file
  for additional details.

  ::

    interceptor:
      ## Enable Interceptor
      enabled: false
      ## Period the interceptor should use to publish messages in milliseconds ##
      publishMs: 15000
      ## Producer Interceptor ##
      producer:
        tls:
          enabled: false
          internal: false
          authentication:
            type: ""
        bootstrapEndpoint: ""
      ## Consumer Interceptor ##
      consumer:
        tls:
          enabled: false
          internal: false
          authentication:
            type: ""
        bootstrapEndpoint: ""

.. _co_replicator_security:

-----------------
|crep| parameters
-----------------

The source/destination |ak| cluster security for replication is set up through the REST
endpoint for |crep|. Use the following commands to find the correct security
configuration for the source/destination |ak| cluster.

* Kafka cluster (source/destination) deployed with external access on the namespace
  *operator* for *kafka-src* and *kafka-dest*:

  * ``kubectl get kafka kafka-dest -ojsonpath='{.status.externalClient}' -n operator``.
    This is only available if the |ak| cluster ``kafka-dest`` is deployed for
    replication and external access.
  * ``kubectl get kafka kafka-src -ojsonpath='{.status.externalClient}' -n operator``.
    This is only available if the |ak| cluster ``kafka-src`` is deployed for replication
    and external access.

* Kafka cluster (source/destination) deployed for replication with internal access only
  on the namespace *operator* for *kafka-src* and *kafka-dest*:

  * ``kubectl get kafka kafka-dest -ojsonpath='{.status.internalClient}' -n operator``
  * ``kubectl get kafka kafka-src -ojsonpath='{.status.internalClient}' -n operator``

Change the bootstrap endpoint if the source/destination |ak| cluster is running in a
different namespace. You need to provide the full internal bootstrap DNS name for a |ak|
cluster. For example, ``kafka-src.operator.svc.cluster.local:9071`` and
``kafka-dest.operator.svc.cluster.local:9071``.

Refer to the following for more information about |crep| security:

* TLS (one-way): `Confluent Replicator (Encryption with SSL) `_.
* TLS (two-way): `Confluent Replicator (Encryption and Authentication with SSL) `_.
* SASL/SSL with PLAIN: `Confluent Replicator (Encryption and Authentication with SASL) `_.

.. note:: Monitoring interceptors are configured through the REST endpoint for |crep| as
   described above.

.. _co_c3_security:

-------------------------
|c3-short| authentication
-------------------------

|c3| supports a REST endpoint on ``HTTPS`` over port ``9021``. |c3| supports either
basic or LDAP authentication. Basic and LDAP authentication are configured through the
YAML file as shown below:

::

  ## C3 authentication ##
  auth:
    basic:
      enabled: false
      ##
      ## map with key as user and value as password and role
      property: {}
      # property:
      #   admin: Developer1,Administrators
      #   disallowed: no_access
    ldap:
      enabled: false
      nameOfGroupSearch: c3users
      property: {}
      ... omitted property list
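After basic authentication is enabled, you can verify it against the |c3-short| REST
endpoint. This is a sketch only; the host name, user, and CA file are illustrative and
based on the examples on this page:

::

  # Without credentials, the request should be rejected; with a user from the
  # property map (for example admin/Developer1 above), it should succeed.
  curl --cacert /tmp/ca.pem https://controlcenter.mydevplatform.gcp.cloud:9021/
  curl --cacert /tmp/ca.pem -u admin:Developer1 https://controlcenter.mydevplatform.gcp.cloud:9021/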
.. _co-configure-acls:

Configuring ACL authorization for |co|
--------------------------------------

You modify ``kafka/values.yaml`` to deploy |ak| with ACLs. This file is available in the
extracted bundle from Confluent in the path ``/helm/confluent-operator/charts/kafka``.

The example below configures a superuser named ``test``. Set the following in the |ak|
``values.yaml`` section:

::

  options:
    acl:
      enabled: true
      # value for the super.users server property, takes the form User:UserName;User:UserTwo;
      supers: "User:test"

In the example above, the ``test`` user is the superuser that all components use when
communicating with the |ak| cluster. This is required so that components can create
internal topics. If you do not set a superuser, a message similar to the following is
displayed:

::

  [INFO] 2019-04-11 23:03:16,560 [kafka-request-handler-1] kafka.authorizer.logger logAuditMessage - Principal = User:test is Denied Operation = Describe from host = 10.20.4.73 on resource = Cluster:LITERAL:kafka-cluster

-----------------------------
SSL security with ACL enabled
-----------------------------

A user name is retrieved through a certificate. The ``Subject`` section of the
certificate is the user name. For example, for the following certificate subject
(C=US, ST=CA, L=Palo Alto), the user name is configured as
``User:L=Palo Alto,ST=CA,C=US``.

.. note::

   - Any changes to ``options:acl:supers`` in ``values.yaml`` trigger a |ak| cluster
     rolling upgrade.
   - ``User:ANONYMOUS`` is the default user for internal clients and inter-broker
     communications.

::

  Certificate:
      Data:
          Version: 3 (0x2)
          Serial Number:
              omitted...
      Signature Algorithm: sha256WithRSAEncryption
          Issuer: C=US, ST=Palo Alto, L=CA, O=Company, OU=Engineering, CN=TestCA
          Validity
              Not Before: Mar 28 16:37:00 2019 GMT
              Not After : Mar 26 16:37:00 2024 GMT
          Subject: C=US, ST=CA, L=Palo Alto
          Subject Public Key Info:
              Public Key Algorithm: rsaEncryption
                  Public-Key: (2048 bit)
                  Modulus:
                      omitted...

For SSL configurations, the PLAINTEXT protocol is used for ports 9071 (internal clients)
and 9072 (inter-broker communication). If |ak| is configured with
:ref:`TLS mutual authentication <co-tls-two-way>`, the certificate user name is required
in addition to ``User:ANONYMOUS``. If the certificate user name is not configured, the
following error is displayed:

::

  [INFO] 2019-04-11 23:03:16,560 [kafka-request-handler-1] kafka.authorizer.logger logAuditMessage - Principal = User:L=Palo Alto,ST=CA,C=US is Denied Operation = Describe from host = 10.20.4.73 on resource = Cluster:LITERAL:kafka-cluster

Once ACLs are configured for |co|, follow the :ref:`kafka_authorization` instructions to
enable ACL authorization for |ak| objects.
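As a closing example, the certificate principal shown above could be granted the denied
operation with the ``kafka-acls`` tool run from a broker pod. This is a sketch only; the
pod name, the |zk| endpoint, and the availability of the tool on the container path are
assumptions based on the examples in this document:

::

  kubectl -n operator exec -it kafka-0 -- kafka-acls \
    --authorizer-properties zookeeper.connect=zookeeper:2181 \
    --add \
    --allow-principal "User:L=Palo Alto,ST=CA,C=US" \
    --operation Describe \
    --cluster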