.. _schemaregistry_security:

Secure |sr| for |cp|
--------------------

This page describes security considerations, configuration, and management details for |sr| on |cp|.

Features
~~~~~~~~

|sr-long| currently supports all Kafka security features, including:

* Encryption

  * :ref:`TLS/SSL encryption ` with a secure Kafka cluster
  * :ref:`End-user REST API calls over HTTPS`

* Authentication

  * :ref:`TLS/SSL authentication` with a secure Kafka cluster
  * :ref:`SASL authentication` with a secure Kafka cluster
  * :ref:`Authentication with ZooKeeper over SASL`
  * Jetty authentication, as described in the :ref:`Role-Based Access Control` steps

* Authorization (provided through the :ref:`confluentsecurityplugins_schema_registry_security_plugin`)

  * :ref:`Role-Based Access Control`
  * :ref:`confluentsecurityplugins_sracl_authorizer`
  * :ref:`confluentsecurityplugins_topicacl_authorizer`
  * :ref:`Schema Registry Authorization (reference of supported operations and resource URIs) `

For configuration details, see the :ref:`configuration options`.

.. seealso:: For a configuration example that uses |sr| configured with security to a secure Kafka cluster, see the :ref:`Confluent Platform demo `.

Schema Registry to Kafka Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Kafka Store
^^^^^^^^^^^

.. include:: ../includes/backend.rst

All Kafka security features are supported by |sr|. Relatively few services need access to |sr|, and they are typically internal, so you can restrict access to |sr| itself through firewall rules and/or network segmentation.

.. seealso:: :ref:`config-acls-schemas-topic`

.. _schema_registry_zk_sasl_auth:

|zk|
^^^^

.. important::

   - |zk| leader election was removed in |cp| 7.0.0. :ref:`Kafka leader election should be used instead `. To learn more, see :ref:`kafka_adding_security`, especially the |zk| section, which describes how to enable security between |ak| brokers and |zk|.
   - On older versions of |cp| (5.4.x and earlier), if both ``kafkastore.connection.url`` (|zk|) and :ref:`kafka-bootstrap-servers` (|ak|) were configured, |zk| was used for leader election and was also needed for the Topic ACL authorizer in the |sr| security plugin. With :ref:`confluentsecurityplugins_sracl_authorizer`, the plugin no longer requires ``kafkastore.connection.url``.

|sr| supports both unauthenticated and SASL authentication to |zk|. Setting up |zk| SASL authentication for |sr| is similar to |ak|'s setup: create a keytab for |sr|, create a JAAS configuration file, and set the appropriate JAAS Java properties.

In addition to the keytab and JAAS setup, be aware of the :ref:`zookeeper-set-acl` setting. When set to ``true``, this setting enables |zk| ACLs, which limit access to znodes.

Important: if ``zookeeper.set.acl`` is set to ``true``, |sr|'s service name must be the same as |ak|'s, which is ``kafka`` by default. Otherwise, |sr| will fail to create the ``_schemas`` topic, which causes a "leader not available" error in the DEBUG log. The |sr| log shows ``org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata`` when |ak| does not set |zk| ACLs but |sr| does. |sr|'s service name can be set either with ``kafkastore.sasl.kerberos.service.name`` or in the JAAS file. If |sr| has a different service name than |ak|, ``zookeeper.set.acl`` must be set to ``false`` in both |sr| and |ak|.

.. _clients-to-sr-security-configs:

Clients to |sr|
~~~~~~~~~~~~~~~

.. _schema_registry_http_https:

Configuring the REST API for HTTP or HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, |sr| allows clients to make REST API calls over HTTP. You can configure |sr| to allow HTTP, HTTPS, or both at the same time. The following configuration determines the protocol used by |sr|:

``listeners``
  Comma-separated list of listeners that listen for API requests over HTTP, HTTPS, or both.
  If a listener uses HTTPS, the appropriate TLS/SSL configuration parameters need to be set as well.

  * Type: list
  * Default: "http://0.0.0.0:8081"
  * Importance: high

On the clients, configure ``schema.registry.url`` to match the configured |sr| listener.

.. _sr-https-additional:

Additional configurations for HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you configure an HTTPS listener, there are several additional configurations for |sr|.

First, configure the appropriate TLS/SSL configurations for the keystore and, optionally, the truststore for the |sr| cluster (for example, in ``schema-registry.properties``). The truststore is required only when ``ssl.client.auth`` is set to true.

.. sourcecode:: bash

   ssl.truststore.location=/etc/kafka/secrets/kafka.client.truststore.jks
   ssl.truststore.password=<password>
   ssl.keystore.location=/etc/kafka/secrets/kafka.client.keystore.jks
   ssl.keystore.password=<password>
   ssl.key.password=<password>

You may specify which protocol to use for calls between the instances of |sr|. The secondary-to-primary node calls for writes and deletes use the specified protocol.

``inter.instance.protocol``
  The protocol used while making calls between the instances of |sr|. The secondary-to-primary node calls for writes and deletes use the specified protocol. The default value is ``http``. When ``https`` is set, the ``ssl.keystore.`` and ``ssl.truststore.`` configs are used while making the call. The ``schema.registry.inter.instance.protocol`` name is deprecated; use ``inter.instance.protocol`` instead.

  * Type: string
  * Default: "http"
  * Importance: low

Starting with 5.4, |cp| provides dedicated |sr| client configuration properties, as shown in the :devx-cp-demo:`example|docker-compose.yml`.

.. tip:: Clients to |sr| include both:

   - Client applications created or used by developers.
   - |cp| components such as |c3|, |kconnect-long|, |ksqldb|, and so forth.

To configure clients to use HTTPS to |sr|, set the following properties or environment variables:
1. On the client, configure ``schema.registry.url`` to match the configured listener for HTTPS.

   .. sourcecode:: bash

      schema.registry.url: "https://<host>:<port>"

2. On the client, configure the environment variables to set the TLS/SSL keystore and truststore in one of two ways:

   - (Recommended) Use the |sr| dedicated properties to configure the client:

     .. sourcecode:: bash

        schema.registry.ssl.truststore.location=/etc/kafka/secrets/kafka.client.truststore.jks
        schema.registry.ssl.truststore.password=<password>
        schema.registry.ssl.keystore.location=/etc/kafka/secrets/kafka.client.keystore.jks
        schema.registry.ssl.keystore.password=<password>
        schema.registry.ssl.key.password=<password>

     The naming conventions for |c3| configuration differ slightly from the other clients. To configure |c3-short| as an HTTPS client to |sr|, specify these dedicated properties in the |c3-short| config file:

     .. sourcecode:: bash

        confluent.controlcenter.schema.registry.schema.registry.ssl.truststore.location=/etc/kafka/secrets/kafka.client.truststore.jks
        confluent.controlcenter.schema.registry.schema.registry.ssl.truststore.password=<password>
        confluent.controlcenter.schema.registry.schema.registry.ssl.keystore.location=/etc/kafka/secrets/kafka.client.keystore.jks
        confluent.controlcenter.schema.registry.schema.registry.ssl.keystore.password=<password>
        confluent.controlcenter.schema.registry.schema.registry.ssl.key.password=<password>

     .. seealso:: :ref:`controlcenter-ssl-sr` under :ref:`Configuring TLS/SSL for Control Center ` provides a detailed explanation of the naming conventions used in this configuration.

   - (Legacy, on client) Set environment variables depending on the client (one of ``KAFKA_OPTS``, ``SCHEMA_REGISTRY_OPTS``, ``KSQL_OPTS``):

     .. sourcecode:: bash

        export KAFKA_OPTS="-Djavax.net.ssl.trustStore=/etc/kafka/secrets/kafka.client.truststore.jks \
          -Djavax.net.ssl.trustStorePassword=<password> \
          -Djavax.net.ssl.keyStore=/etc/kafka/secrets/kafka.client.keystore.jks \
          -Djavax.net.ssl.keyStorePassword=<password>"
.. important::

   - If you use the legacy method of defining TLS/SSL values in system environment variables, the TLS/SSL settings apply to every Java component running on this JVM. For example, on |kconnect|, every :connect-common:`connector|overview.html` will use the given truststore. Consider a scenario where you are using an Amazon Web Services (AWS) connector such as S3 or Kinesis, and do not have the AWS certificate chain in the given truststore. The connector fails with the following error:

     .. sourcecode:: bash

        com.amazonaws.SdkClientException: Unable to execute HTTP request: sun.security.validator.ValidatorException: PKIX path building failed

     This does not apply if you use the dedicated |sr| client configurations.

   - For ``kafka-avro-console-producer`` and ``kafka-avro-console-consumer``, you must pass the |sr| properties on the command line. Here is an example for the producer:

     .. code:: bash

        ./kafka-avro-console-producer --broker-list localhost:9093 --topic myTopic \
          --producer.config ~/etc/kafka/producer.properties \
          --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}' \
          --property schema.registry.url=https://localhost:8081 \
          --property schema.registry.ssl.truststore.location=/etc/kafka/security/schema.registry.client.truststore.jks \
          --property schema.registry.ssl.truststore.password=myTrustStorePassword

For more examples of using the producer and consumer command line utilities, see :ref:`sr-test-drive-avro`, :ref:`sr-test-drive-json-schema`, :ref:`sr-test-drive-protobuf`, and the demo in :ref:`schema_validation`.

.. seealso:: To learn more, see these demos and examples: :ref:`cp-demo`, :devx-examples:`Kafka Client Application Examples|clients`, and :ref:`sr-over-https-api-examples` in the API Usage Examples.

Migrating from HTTP to HTTPS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To upgrade |sr| to allow REST API calls over HTTPS in an existing cluster:

- Add or modify the ``listeners`` config to include HTTPS.
  For example: ``http://0.0.0.0:8081,https://0.0.0.0:8082``

- Configure |sr| with the appropriate TLS/SSL configurations to set up the keystore and, optionally, the truststore.
- Do a rolling bounce of the cluster.

This process enables HTTPS, but still defaults to HTTP so |sr| instances can communicate before all nodes have been restarted. They continue to use HTTP as the default until configured otherwise. To switch to HTTPS as the default and disable HTTP support, perform the following steps:

- Enable HTTPS as described above (both HTTP and HTTPS will be enabled).
- Configure ``inter.instance.protocol`` to ``https`` on all nodes.
- Do a rolling bounce of the cluster.
- Remove the HTTP listener from ``listeners`` on all nodes.
- Do a rolling bounce of the cluster.

.. _schema_registry_basic_http_auth:

Configuring the REST API for Basic HTTP Authentication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: ../includes/basic-auth.rst

Governance
~~~~~~~~~~

To provide data governance with |sr-long|:

#. Disable auto schema registration.
#. Restrict access to the ``_schemas`` topic.
#. Restrict access to |sr| operations.

Disabling Auto Schema Registration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: ../includes/auto-schema-registration.rst

Once a client application disables automatic schema registration, it can no longer dynamically register new schemas from within the application. However, it can still retrieve existing schemas from |sr|, assuming proper authorization.

.. _config-acls-schemas-topic:

Authorizing Access to the Schemas Topic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you enable :ref:`Kafka authorization `, you must grant the |sr| service principal the ability to perform the following :ref:`operations on the specified resources`:

- ``Read`` and ``Write`` access to the internal **_schemas** topic. This ensures that only authorized users can make changes to the topic.
- ``DescribeConfigs`` on the schemas topic, to verify that the topic exists.
- ``Describe`` on the schemas topic, giving the |sr| service principal the ability to list the schemas topic.
- ``DescribeConfigs`` on the internal consumer offsets topic.
- Access to the |sr| cluster (``group``).
- ``Create`` permissions on the |ak| cluster.

.. sourcecode:: bash

   export KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-jaas-config-file>"

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --producer --consumer --topic _schemas --group schema-registry

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --operation DescribeConfigs --topic _schemas

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --operation Describe --topic _schemas

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --operation Read --topic _schemas

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --operation Write --topic _schemas

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --operation Describe --topic __consumer_offsets

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --operation Create --cluster kafka-cluster

If you are using the :ref:`confluentsecurityplugins_sracl_authorizer`, you also need ``Read``, ``Write``, and ``DescribeConfigs`` permissions on the internal **_schemas_acl** topic:
.. sourcecode:: bash

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --producer --consumer --topic _schemas_acl --group schema-registry

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --operation Read --topic _schemas_acl

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --operation Write --topic _schemas_acl

   bin/kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf --add \
     --allow-principal 'User:<sr-principal>' --allow-host '*' \
     --operation DescribeConfigs --topic _schemas_acl

.. tip::

   - The group, which serves as the |sr| cluster ID, defaults to **schema-registry**. To customize, specify a value for ``schema.registry.group.id``.
   - The internal topic that holds schemas defaults to the topic name **_schemas**. To customize, specify a value for ``kafkastore.topic``.
   - The internal topic that holds ACL schemas defaults to the topic name **_schemas_acl** (that is, ``{{kafkastore.topic}}_acls``). To customize, specify a value for ``confluent.schema.registry.acl.topic``.

.. note::

   - **Removing world-level permissions:** In previous versions of |sr|, we recommended making the **_schemas** topic world readable and writable. Now that |sr| supports SASL, the world-level permissions can be dropped.

Authorizing Schema Registry Operations with the Security Plugin
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The :ref:`Schema Registry security plugin ` provides authorization for various |sr| operations. It authenticates incoming requests and authorizes them through the configured authorizer. This allows schema evolution management to be restricted to administrative users, while application users are given read-only access.
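To make the client-side view concrete, the following is a minimal sketch of how a read-only application client might build a REST call (for example, listing subjects) against a secured |sr| endpoint, using only the Python standard library. The endpoint URL, CA file path, and credentials shown are illustrative assumptions, not values defined by this document.

```python
import base64
import ssl
import urllib.request


def build_subjects_request(base_url, user=None, password=None):
    """Build a GET /subjects request, adding HTTP Basic credentials if given."""
    req = urllib.request.Request(base_url.rstrip("/") + "/subjects")
    if user is not None:
        # Basic auth header: base64("user:password")
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        req.add_header("Authorization", "Basic " + token)
    return req


# Usage sketch (hypothetical endpoint, CA file, and credentials):
# ctx = ssl.create_default_context(cafile="/etc/kafka/secrets/ca.pem")
# req = build_subjects_request("https://localhost:8081", "appuser", "secret")
# with urllib.request.urlopen(req, context=ctx) as resp:
#     print(resp.read())
```

Passing an ``ssl.SSLContext`` built from the cluster's CA certificate plays the same role as the ``schema.registry.ssl.truststore.*`` properties do for Java clients.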
Related Content
~~~~~~~~~~~~~~~

- Blog post: `Ensure Data Quality and Data Evolvability with a Secured Schema Registry `_