|sr| Deployment Architectures for |cp|
======================================

|sr| on |cp| can be deployed using a single primary source, with |ak| leader election (|zk| leader election was removed in |cp| 7.0.0). You can also set up multiple |sr| servers for high availability deployments, where you switch to a secondary |sr| cluster if the primary goes down, and for data migration, either one time or as a continuous feed. This section describes these architectures.

.. _schemaregistry_single_master:

Single Primary Architecture
---------------------------

|sr| is designed to work as a distributed service using a single primary architecture. In this configuration, at most one |sr| instance is the primary at any given moment (ignoring pathological 'zombie primaries'). Only the primary is capable of publishing writes to the underlying |ak| log, but all nodes are capable of directly serving read requests. Secondary nodes serve registration requests indirectly by forwarding them to the current primary and returning the response supplied by the primary.

Starting with |cp| 4.0, primary election is accomplished with the |ak| group protocol. (|zk| based primary election was removed in |cp| 7.0.0.)

.. note::

   Make sure not to mix election modes among the nodes in the same cluster.
   Mixing modes leads to multiple primaries and operational issues.

---------------------------------
|ak| Coordinator Primary Election
---------------------------------

.. figure:: ../images/schema-registry-design-kafka.png
   :align: center

   |ak| based Schema Registry

|ak| based primary election is used when :ref:`kafkastore-connection-url` is not configured and the |ak| bootstrap brokers are specified in ``kafkastore.bootstrap.servers``. The |ak| group protocol chooses one of the primary-eligible nodes (those with ``leader.eligibility=true``) as the primary. |ak| based primary election should be used in all cases. (|zk| based leader election was removed in |cp| 7.0.0. See :ref:`schemaregistry_zk_migration`.)

|sr| is also designed for multi-datacenter configurations. See :ref:`schemaregistry_mirroring` for more details.

---------------------
|zk| Primary Election
---------------------

.. important::

   |zk| leader election was removed in |cp| 7.0.0. |ak| leader election should
   be used instead. See :ref:`schemaregistry_zk_migration` for full details.

.. _sr-high-availability-single-primary:

------------------------------------------
High Availability for Single Primary Setup
------------------------------------------

Many services in |cp| are effectively stateless (they store state in |ak| and load it on demand at start-up) and can redirect requests automatically. You can treat these services as you would any other stateless application and get high availability features effectively for free by deploying multiple instances. Each instance loads all of the |sr| state, so any node can serve a READ, and all nodes know how to forward requests to the primary for WRITEs.

A recommended approach for high availability is to list the URLs of multiple |sr| servers in the client ``schema.registry.url`` property, thereby using the entire cluster of |sr| instances and providing a method for failover. This also makes it easy to handle changes to the set of servers without having to reconfigure and restart all of your applications. The same strategy applies to REST Proxy and |kconnect-long|. With just a few nodes, |sr| can fail over easily using a multi-node deployment and the single primary election protocol.
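For example, a client configuration along the following lines lists every |sr| instance in ``schema.registry.url``. This is a minimal sketch; the host names, ports, and serializer classes shown are illustrative assumptions, not values prescribed by this guide:

.. code-block:: properties

   # List every Schema Registry instance so the client can fail over to another
   # URL if the first instance is unreachable (hosts and ports are placeholders).
   schema.registry.url=http://schemaregistry-1:8081,http://schemaregistry-2:8081,http://schemaregistry-3:8081

   # Typical serializer settings that consume schema.registry.url, shown only as
   # context for where the property is applied.
   key.serializer=org.apache.kafka.common.serialization.StringSerializer
   value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer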
Alternatively, you can use a single URL for ``schema.registry.url`` and still use the entire cluster of |sr| instances. However, this configuration does not support failover to different |sr| instances in a dynamic DNS or virtual IP setup because only one ``schema.registry.url`` is surfaced to |sr| clients.

Examples and Runbooks
---------------------

The following sections describe single and multi-datacenter deployments using |ak| leader election. Note that |zk| leader election was removed in |cp| 7.0.0. Use |ak| leader election instead. To upgrade to |ak| leader election, see :ref:`schemaregistry_zk_migration`.

.. _schemaregistry_single-dc:

-----------------------
Single Datacenter Setup
-----------------------

Within a single datacenter or location, a multi-node, multi-broker cluster provides |ak| data replication across the nodes. Producers write data to, and consumers read data from, topic partition leaders. Leaders replicate data to followers so that messages are copied to more than one broker.

You can configure parameters on producers and consumers to optimize your single cluster deployment for various goals, including message durability and high availability. |ak| :ref:`producers can set the acks configuration parameter` to control when a write is considered successful. For example, setting producers to ``acks=all`` requires that other brokers in the cluster acknowledge receiving the data before the leader broker responds to the producer. If a leader broker fails, the |ak| cluster recovers when a follower broker is elected leader, and client applications can continue to write and read messages through the new leader.

|ak| Election
^^^^^^^^^^^^^

Recommended Deployment
######################

.. figure:: ../images/single-dc-setup.png
   :align: center
   :scale: 50%

   Single datacenter with |ak| intra-cluster replication

The image above shows a single datacenter, DC A. For this example, |ak| is used for leader election, which is recommended.

.. note::

   Previously, you could also set up a single cluster with |zk| leader
   election, but this option was removed in |cp| 7.0.0 in favor of |ak|
   leader election.

Important Settings
##################

- ``kafkastore.bootstrap.servers`` - This should point to the primary |ak| cluster (DC A in this example).
- ``schema.registry.group.id`` - The ``schema.registry.group.id`` is used as the consumer ``group.id``. For a single datacenter setup, make this setting the same for all nodes in the cluster. When set, ``schema.registry.group.id`` overrides ``group.id`` for the |ak| group when |ak| is used for leader election. (Without this configuration, ``group.id`` defaults to ``schema-registry``.)
- ``leader.eligibility`` - In a single datacenter setup, all |sr| instances will be local to the |ak| cluster and should have ``leader.eligibility`` set to true.
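To tie these settings together, here is a minimal sketch of what the |sr| properties for a node in this single-datacenter example might contain; the broker addresses and group ID are placeholder assumptions:

.. code-block:: properties

   # Point to the Kafka cluster in DC A (placeholder broker addresses).
   kafkastore.bootstrap.servers=PLAINTEXT://dc-a-broker1:9092,PLAINTEXT://dc-a-broker2:9092

   # Use the same group ID on every Schema Registry node in this cluster.
   # When set, it overrides group.id for Kafka-based leader election.
   schema.registry.group.id=schema-registry-dc-a

   # In a single-datacenter setup, every instance is eligible to become primary.
   leader.eligibility=true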
Run Book
#########

If you have |sr| running in a single datacenter and the primary node goes down, what do you do? First, note that the remaining |sr| instances can continue to serve requests.

- If one |sr| node goes down, another node is elected leader and the cluster auto-recovers.
- Restart the failed node, and it will come back as a follower (since a new leader was elected in the meantime).

.. _schemaregistry_mirroring:

----------------------
Multi-Datacenter Setup
----------------------

Spanning multiple datacenters (DCs) with your |sr-long| synchronizes data across sites, further protects against data loss, and reduces latency. The recommended multi-datacenter deployment designates one datacenter as "primary" and all others as "secondary". If the "primary" datacenter fails and is unrecoverable, you must manually designate what was previously a "secondary" datacenter as the new "primary" per the steps in the Run Books below.

|ak| Election
^^^^^^^^^^^^^

Recommended Deployment
######################

.. figure:: ../images/multi-dc-setup-kafka.png
   :align: center

   Multi datacenter with |ak| based primary election

The image above shows two datacenters, DC A and DC B. Either could be on-premises, in :cloud:`Confluent Cloud|index.html`, or part of a :ref:`bridge to cloud ` solution. Each of the two datacenters has its own |ak-tm| cluster, |zk| cluster, and |sr|. The |sr| nodes in both datacenters link to the primary |ak| cluster in DC A, and the secondary datacenter (DC B) forwards |sr| writes to the primary (DC A). Note that |sr| nodes and hostnames must be addressable and routable across the two sites to support this configuration.

|sr| instances in DC B have ``leader.eligibility`` set to false, meaning that none can be elected leader during steady-state operation with both datacenters online.

To protect against complete loss of DC A, |ak| cluster A (the source) is replicated to |ak| cluster B (the target). This is achieved by running the :ref:`Replicator ` local to the target cluster (DC B). In this active-passive setup, |crep| runs in one direction, copying |ak| data and configurations from the active DC A to the passive DC B.

The |sr| instances in both datacenters point to the internal ``_schemas`` topic in DC A. For the purposes of disaster recovery, you must replicate the :ref:`internal schemas topic ` itself. If DC A goes down, the system fails over to DC B, so DC B needs a copy of the ``_schemas`` topic.

.. tip::

   Keep in mind, this failover scenario does not require the same overall
   configuration needed to :ref:`migrate schemas `. So, do not set
   ``schema.registry.topic`` or ``schema.subject.translator.class``, as you
   would for a schema migration.

Producers write data to just the active cluster. Depending on the overall design, consumers can read data from the active cluster only, leaving the passive cluster for disaster recovery, or from both clusters to optimize reads on a geo-local cache. In the event of a partial or complete disaster in one datacenter, applications can fail over to the secondary datacenter.

ACLs and Security
#################

In a multi-DC setup with ACLs enabled, the schemas ACL topic must be replicated. In the case of an outage, the ACLs are cached along with the schemas, so |sr| continues to serve READs with ACLs enforced if the primary |ak| cluster goes down.

- For an overview of security strategies and protocols for |sr|, see :ref:`schemaregistry_security`.
- To learn how to configure ACLs on roles related to |sr|, see :ref:`confluentsecurityplugins_sracl_authorizer`.
- To learn how to define |ak| topic based ACLs, see :ref:`confluentsecurityplugins_topicacl_authorizer`.
- To learn about using role-based authorization with |sr|, see :ref:`schemaregistry_rbac`.
- To learn more about |crep| security, see :ref:`replicator_security_overview` in the |crep| documentation.

Important Settings
##################

- ``kafkastore.bootstrap.servers`` - This should point to the primary |ak| cluster (DC A in this example).
- ``schema.registry.group.id`` - Use this setting to override the ``group.id`` for the |ak| group used when |ak| is used for leader election. Without this configuration, ``group.id`` defaults to ``schema-registry``. If you want to run more than one |sr| cluster against a single |ak| cluster, you should make this setting unique for each cluster.
- ``leader.eligibility`` - A |sr| server with ``leader.eligibility`` set to false is guaranteed to remain a follower during any leader election. |sr| instances in a "secondary" datacenter should have this set to false, and |sr| instances local to the shared |ak| (primary) cluster should have this set to true.

Hostnames must be reachable and resolvable across datacenters to support forwarding of new schemas from DC B to DC A.
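As an illustration of these settings, a |sr| properties sketch for a node in the "secondary" datacenter (DC B) might look like the following; the broker addresses and group ID are placeholder assumptions:

.. code-block:: properties

   # Point to the primary Kafka cluster in DC A, not the local cluster in DC B
   # (placeholder broker addresses).
   kafkastore.bootstrap.servers=PLAINTEXT://dc-a-broker1:9092,PLAINTEXT://dc-a-broker2:9092

   # All Schema Registry nodes that form one cluster share the same group ID.
   schema.registry.group.id=schema-registry-multi-dc

   # Nodes in the secondary datacenter must never be elected primary.
   leader.eligibility=false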
Setup
#####

Assuming you have |sr| running, here are the recommended steps to add |sr| instances in a new "secondary" datacenter (call it DC B):

#. In DC B, make sure |ak| has ``unclean.leader.election.enable`` set to false.
#. In DC B, run |crep| with |ak| in the "primary" datacenter (DC A) as the source and |ak| in DC B as the target.
#. In |sr| config files in DC B, set ``kafkastore.bootstrap.servers`` to point to the |ak| cluster in DC A and set ``leader.eligibility`` to false.
#. Start your new |sr| instances with these configs.

Run Book
########

If you have |sr| running in multiple datacenters and you lose your "primary" datacenter, what do you do? First, note that the remaining |sr| instances running on the "secondary" can continue to serve any request that does not result in a write to |ak|. This includes GET requests on existing IDs and POST requests on schemas already in the registry. They will be unable to register new schemas.

- If possible, revive the "primary" datacenter by starting |ak| and |sr| as before.
- If you must designate a new datacenter (call it DC B) as "primary", reconfigure ``kafkastore.bootstrap.servers`` in DC B to point to its local |ak| cluster and update the |sr| config files to set ``leader.eligibility`` to true.
- Restart your |sr| instances with these new configs in a rolling fashion.

Multi-Cluster |sr|
------------------

In the previous disaster recovery scenarios, a single |sr| typically serves multiple environments, with each environment potentially containing multiple |ak| clusters. Starting with version 5.4.1, |cp| supports the ability to run multiple schema registries and associate a unique |sr| with each |ak| cluster in multi-cluster environments. Rather than disaster recovery, the primary goal of these types of deployments is the ability to scale by adding special purpose registries to support governance across diverse and massive datasets in large organizations. To learn more about this configuration, see :ref:`multi-cluster-sr`.

Related Content
---------------

- For information about multi-cluster and multi-datacenter deployments in general, see :ref:`multi_dc`.
- For a broader explanation of disaster recovery design configurations and use cases, see the whitepaper on `Disaster Recovery for Multi-Datacenter Apache Kafka Deployments `_.
- For an overview of schema management in |cp|, including details of single primary architecture, see :ref:`schemaregistry_intro` and :ref:`sr-high-availability-single-primary`.
- |sr| is also available in |ccloud|; for details on how to lift and shift or extend existing clusters to cloud, see :ref:`schemaregistry_migrate`.