Multi Data-Center Deployment¶
Confluent Platform can be deployed in multiple data-centers. Multi Data-Center deployments enable use-cases such as:
- Active-active geo-localized deployments: allow users to access a nearby data center to optimize their architecture for low latency and high performance
- Active-passive disaster recovery (DR) deployments: in the event of a partial or complete data-center disaster, allow applications to fail over and use Confluent Platform in a different data center
- Centralized analytics: Aggregate data from multiple Kafka clusters into one location for organization-wide analytics
- Cloud migration: Use Kafka to synchronize data between on-prem applications and cloud deployments
Replication of events in Apache Kafka topics from one cluster to another is the foundation of Confluent’s multi data-center architecture. Replication can be done with Confluent Enterprise Replicator or using the open source MirrorMaker.
Confluent Replicator allows you to easily and reliably replicate topics from one Kafka cluster to another. In addition to copying the messages, Replicator creates topics as needed, preserving the topic configuration from the source cluster. This includes preserving the number of partitions, the replication factor, and any configuration overrides specified for individual topics. The diagram below shows the Replicator architecture: note how Replicator uses the Kafka Connect APIs and Workers to provide high availability, load balancing, and centralized management.
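Because Replicator runs as a Kafka Connect source connector, it is configured and started like any other connector, for example by submitting a JSON configuration to the Connect REST API. The sketch below illustrates the shape of such a configuration; the bootstrap server addresses, topic names, and connector name are placeholders you would replace with your own values:

```json
{
  "name": "replicator-dc1-to-dc2",
  "config": {
    "connector.class": "io.confluent.connect.replicator.ReplicatorSourceConnector",
    "key.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
    "value.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
    "src.kafka.bootstrap.servers": "dc1-broker1:9092",
    "dest.kafka.bootstrap.servers": "dc2-broker1:9092",
    "topic.whitelist": "orders,clicks",
    "tasks.max": "4"
  }
}
```

Here `src.kafka.bootstrap.servers` and `dest.kafka.bootstrap.servers` identify the origin and destination clusters, and `topic.whitelist` selects which topics to replicate. See the Configuration Options page linked below for the full set of supported settings.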
MirrorMaker is a stand-alone tool for copying data between two Apache Kafka clusters. Confluent Replicator is a more complete solution: it handles topic configuration as well as data, and integrates with Kafka Connect and Confluent Control Center to improve availability, scalability, and ease of use. See the section on comparing MirrorMaker to Confluent Replicator for more detail.
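For comparison, a typical legacy MirrorMaker invocation looks like the following sketch. The property file names and the topic pattern are placeholders; the consumer configuration points at the source cluster and the producer configuration at the destination:

```bash
# consumer.properties: bootstrap.servers of the source cluster
# producer.properties: bootstrap.servers of the destination cluster
bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist "orders|clicks"
```

Note that MirrorMaker only copies messages; unlike Replicator, it does not create destination topics with matching partition counts or configuration overrides.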
We recommend starting your multi data-center journey by following the quick start guide to set up replication between two Kafka clusters. You can then proceed to learn how to install and configure Replicator and other Confluent Platform components in multi data-center environments. Before running Replicator in production, make sure you read the monitoring and tuning guide. Refer to our Disaster Recovery white paper for a practical guide to designing and configuring multiple Apache Kafka clusters so that if a disaster scenario strikes, you have a plan for failover, failback, and ultimately successful recovery.
- Replicator Quick Start
- Installing and Configuring Replicator
- Tuning and Monitoring Replicator
- Configuration Options
- Apache Kafka’s MirrorMaker