Schema Registry Multi Datacenter Setup

Deploying Confluent Schema Registry across multiple datacenters (DCs) synchronizes data across sites, protects against data loss, and reduces latency. The recommended multi-datacenter deployment designates one datacenter as “primary” and all others as “secondary”. If the “primary” datacenter fails and is unrecoverable, you must manually promote one of the former “secondary” datacenters to “primary”, following the steps in the Run Books below.

Kafka Election

Important Settings

kafkastore.bootstrap.servers This should point to the primary Kafka cluster (DC A in this example).

schema.registry.group.id Use this setting to override the group.id of the Kafka group used for primary election. Without this configuration, group.id defaults to “schema-registry”. If you want to run more than one Schema Registry cluster against a single Kafka cluster, you should make this setting unique for each cluster.

master.eligibility A Schema Registry server with master.eligibility set to false is guaranteed to remain a secondary during any primary election. Schema Registry instances in a “secondary” datacenter should have this set to false, and Schema Registry instances local to the shared Kafka (primary) cluster should have this set to true.

Hostnames must be reachable and resolve across datacenters to support forwarding of new schemas from DC B to DC A.
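As a concrete sketch of these settings (hostnames and ports such as `kafka-a:9092` and the group ID are placeholders, not required values), the config files for instances in each datacenter might look like this:

```properties
# schema-registry.properties in DC A (the "primary" datacenter)
# kafka-a:9092 stands in for the primary Kafka cluster's bootstrap servers
kafkastore.bootstrap.servers=PLAINTEXT://kafka-a:9092
schema.registry.group.id=schema-registry-prod
master.eligibility=true
```

```properties
# schema-registry.properties in DC B (a "secondary" datacenter)
# Note: still points at the primary Kafka cluster in DC A, not the local one
kafkastore.bootstrap.servers=PLAINTEXT://kafka-a:9092
schema.registry.group.id=schema-registry-prod
master.eligibility=false
```

Because `schema.registry.group.id` matches across both files, all instances join the same election group; only the `master.eligibility` flag differs between datacenters.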

Setup

Assuming you have Schema Registry running, here are the recommended steps to add Schema Registry instances in a new “secondary” datacenter (call it DC B):

  1. In DC B, make sure Kafka has unclean.leader.election.enable set to false.
  2. In DC B, run Replicator with Kafka in the “primary” datacenter (DC A) as the source and Kafka in DC B as the target.
  3. In Schema Registry config files in DC B, set the kafkastore.bootstrap.servers to point to Kafka cluster in DC A and set master.eligibility to false.
  4. Start your new Schema Registry instances with these configs.
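Steps 1 and 3 above amount to the following config fragments (the hostname `kafka-a` is a placeholder for DC A’s Kafka bootstrap servers):

```properties
# Step 1 - server.properties on each Kafka broker in DC B
unclean.leader.election.enable=false
```

```properties
# Step 3 - schema-registry.properties for each new instance in DC B
# Points at the "primary" Kafka cluster in DC A, not the local DC B cluster
kafkastore.bootstrap.servers=PLAINTEXT://kafka-a:9092
master.eligibility=false
```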

Run Book

Let’s say you have Schema Registry running in multiple datacenters, and you lose your “primary” datacenter; what do you do? First, note that the remaining Schema Registry instances running on the “secondary” can continue to serve any request that does not result in a write to Kafka. This includes GET requests on existing IDs and POST requests on schemas already in the registry. They will be unable to register new schemas.

  • If possible, revive the “primary” datacenter by starting Kafka and Schema Registry as before.
  • If you must designate a new datacenter (call it DC B) as “primary”, reconfigure the kafkastore.bootstrap.servers in DC B to point to its local Kafka cluster and update Schema Registry config files to set master.eligibility to true.
  • Restart your Schema Registry instances with these new configs in a rolling fashion.
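For example, promoting DC B would mean changing each DC B Schema Registry config roughly as follows before the rolling restart (`kafka-b:9092` is a placeholder for DC B’s local Kafka bootstrap servers):

```properties
# schema-registry.properties in DC B after promotion to "primary"
# Now points at the local DC B Kafka cluster instead of the lost DC A cluster
kafkastore.bootstrap.servers=PLAINTEXT://kafka-b:9092
master.eligibility=true
```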

ZooKeeper Election

Recommended Deployment

[Figure: Multi datacenter setup with ZooKeeper based primary election]

The image above shows two datacenters, DC A and DC B. Each datacenter has its own ZooKeeper cluster, Kafka cluster, and Schema Registry cluster. Both Schema Registry clusters link to Kafka and ZooKeeper in DC A, and the secondary datacenter (DC B) forwards Schema Registry writes to the primary (DC A). The Schema Registry nodes and hostnames must be addressable and routable across the two sites to support this configuration.

The Schema Registry instances in DC B have master.eligibility set to false, meaning that none can ever be elected primary.

In this active-passive setup, Replicator runs in one direction, copying Kafka data and configurations from the active DC A to the passive DC B.

To protect against complete loss of DC A, Kafka cluster A (the source) is replicated to Kafka cluster B (the target). This is achieved by running the Replicator local to the target cluster.

In the event of a partial or complete disaster in one datacenter, applications can failover to the secondary datacenter.

Important Settings

kafkastore.connection.url This setting should be identical across all Schema Registry nodes, so that every instance points to the same ZooKeeper cluster.

schema.registry.zk.namespace Namespace under which Schema Registry related metadata is stored in ZooKeeper. This setting should be identical across all nodes in the same Schema Registry cluster.

master.eligibility A Schema Registry server with master.eligibility set to false is guaranteed to remain a secondary during any primary election. Schema Registry instances in a “secondary” datacenter should have this set to false, and Schema Registry instances local to the shared Kafka cluster should have this set to true.

Hostnames must be reachable and resolve across datacenters to support forwarding of new schemas from DC B to DC A.
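A sketch of these settings (the ZooKeeper hostnames and the namespace value are placeholders):

```properties
# Identical on every Schema Registry node in BOTH datacenters:
# zk-a-1..3 stand in for the ZooKeeper ensemble in DC A
kafkastore.connection.url=zk-a-1:2181,zk-a-2:2181,zk-a-3:2181
schema.registry.zk.namespace=schema_registry
```

```properties
# Differs per datacenter:
master.eligibility=true   # on instances in DC A (primary)
```

```properties
master.eligibility=false  # on instances in DC B (secondary)
```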

Setup

Assuming you have Schema Registry running, here are the recommended steps to add Schema Registry instances in a new “secondary” datacenter (call it DC B):

  1. In DC B, make sure Kafka has unclean.leader.election.enable set to false.
  2. In DC B, run Replicator with Kafka in the “primary” datacenter (DC A) as the source and Kafka in DC B as the target.
  3. In Schema Registry config files in DC B, set kafkastore.connection.url and schema.registry.zk.namespace to match the instances already running, and set master.eligibility to false.
  4. Start your new Schema Registry instances with these configs.
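Steps 1 and 3 above amount to the following config fragments (ZooKeeper hostnames and the namespace are placeholders that must match the instances already running):

```properties
# Step 1 - server.properties on each Kafka broker in DC B
unclean.leader.election.enable=false
```

```properties
# Step 3 - schema-registry.properties for each new instance in DC B
# Same ZooKeeper connection and namespace as the existing DC A instances
kafkastore.connection.url=zk-a-1:2181,zk-a-2:2181,zk-a-3:2181
schema.registry.zk.namespace=schema_registry
master.eligibility=false
```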

Run Book

Let’s say you have Schema Registry running in multiple datacenters, and you have lost your “primary” datacenter; what do you do? First, note that the remaining Schema Registry instances can continue to serve any request that does not result in a write to Kafka. This includes GET requests on existing IDs and POST requests on schemas already in the registry. They will be unable to register new schemas.

  • If possible, revive the “primary” datacenter by starting Kafka and Schema Registry as before.
  • If you must designate a new datacenter (call it DC B) as “primary”, update Schema Registry config files so that kafkastore.connection.url points to the local ZooKeeper, and change master.eligibility to true. Then restart your Schema Registry instances with these new configs in a rolling fashion.
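For example, promoting DC B would mean changing each DC B Schema Registry config roughly as follows before the rolling restart (`zk-b-1..3` are placeholders for DC B’s local ZooKeeper ensemble):

```properties
# schema-registry.properties in DC B after promotion to "primary"
# Now points at the local DC B ZooKeeper instead of the lost DC A ensemble
kafkastore.connection.url=zk-b-1:2181,zk-b-2:2181,zk-b-3:2181
master.eligibility=true
```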

Suggested Reading