.. _migrate-replicator:

############################################
Migrate from |ak| |mmaker| to |crep| in |cp|
############################################

`Kafka MirrorMaker `_ is a stand-alone tool for copying data between two
|ak-tm| clusters. It is little more than a |ak| consumer and producer hooked
together: data is read from topics in the origin cluster and written to a
topic with the same name in the destination cluster.

.. important:: |crep-full| is a more complete solution that copies topic
   configuration and data, and also integrates with |kconnect-long| and
   |c3-short| to improve availability, scalability, and ease of use.

This topic provides examples of how to migrate from an existing datacenter
that is using |ak-tm| |mmaker| to |crep|. In these examples, messages are
replicated from a specific point in time, not from the beginning. This is
helpful if you have a large number of legacy messages that you do not want to
migrate.

Assume there are two datacenters, DC1 (Active) and DC2 (Passive), that are
each running a |ak| cluster. There is a single topic in DC1, named
``inventory``, that has been replicated to DC2 with the same topic name.

.. _samepartition:

***************************************************
Example 1: Same number of partitions in DC1 and DC2
***************************************************

In this example, you migrate from |mmaker| to |crep| and keep the same number
of partitions for ``inventory`` in DC1 and DC2.

Prerequisites:

- |cp| 5.0.0 or later is :ref:`installed `.
- You must have the same number of partitions for ``inventory`` in DC1 and
  DC2 to use this method.
- The ``src.consumer.group.id`` in |crep| must match ``group.id`` in
  |mmaker|.

#. Stop the running |mmaker| instance in DC1, where ``<pid>`` is the
   |mmaker| process ID:

   ::

      kill <pid>

#. Configure and start |crep|. In this example, |crep| is run as an
   executable from the command line or from :ref:`a Docker image `.

#. Add these values to
   ``CONFLUENT_HOME/etc/kafka-connect-replicator/replicator_consumer.properties``.
   Replace ``localhost:9082`` with the ``bootstrap.servers`` of DC1, the
   source cluster:

   .. codewithvars:: bash

      bootstrap.servers=localhost:9082
      topic.preserve.partitions=true

#. Add this value to
   ``CONFLUENT_HOME/etc/kafka-connect-replicator/replicator_producer.properties``.
   Replace ``localhost:9092`` with the ``bootstrap.servers`` of DC2, the
   destination cluster:

   .. codewithvars:: bash

      bootstrap.servers=localhost:9092

#. Ensure the replication factors are set to ``2`` or ``3`` for production,
   if they are not already:

   .. codewithvars:: bash

      echo "confluent.topic.replication.factor=3" >> ./etc/kafka-connect-replicator/quickstart-replicator.properties
      echo "offset.storage.replication.factor=3" >> ./etc/kafka-connect-replicator/quickstart-replicator.properties
      echo "config.storage.replication.factor=3" >> ./etc/kafka-connect-replicator/quickstart-replicator.properties
      echo "status.storage.replication.factor=3" >> ./etc/kafka-connect-replicator/quickstart-replicator.properties

#. Start |crep|:

   .. codewithvars:: bash

      replicator --cluster.id <replicator-cluster-id> \
         --producer.config replicator_producer.properties \
         --consumer.config replicator_consumer.properties \
         --replication.config ./etc/kafka-connect-replicator/quickstart-replicator.properties

|crep| uses the offsets committed by |mmaker| in DC1 and starts replicating
messages from DC1 to DC2 based on those offsets.

.. _diffpartition:

********************************************************
Example 2: Different number of partitions in DC1 and DC2
********************************************************

In this example, you migrate from |mmaker| to |crep| and have a different
number of partitions for ``inventory`` in DC1 and DC2.

Prerequisites:

- |cp| 5.0.0 or later is :ref:`installed `.
- The ``src.consumer.group.id`` in |crep| must match ``group.id`` in
  |mmaker|.

#. Stop the running |mmaker| instance in DC1.

#. Configure and start |crep|. In this example, |crep| is run as an
   executable from the command line or from :ref:`a Docker image `.

#. Add this value to
   ``CONFLUENT_HOME/etc/kafka-connect-replicator/replicator_consumer.properties``.
   Replace ``localhost:9082`` with the ``bootstrap.servers`` of DC1, the
   source cluster:

   .. codewithvars:: bash

      bootstrap.servers=localhost:9082
      topic.preserve.partitions=false

#. Add this value to
   ``CONFLUENT_HOME/etc/kafka-connect-replicator/replicator_producer.properties``.
   Replace ``localhost:9092`` with the ``bootstrap.servers`` of DC2, the
   destination cluster:

   .. codewithvars:: bash

      bootstrap.servers=localhost:9092

#. Ensure the replication factors are set to ``2`` or ``3`` for production,
   if they are not already:

   .. codewithvars:: bash

      echo "confluent.topic.replication.factor=3" >> ./etc/kafka-connect-replicator/quickstart-replicator.properties
      echo "offset.storage.replication.factor=3" >> ./etc/kafka-connect-replicator/quickstart-replicator.properties
      echo "config.storage.replication.factor=3" >> ./etc/kafka-connect-replicator/quickstart-replicator.properties
      echo "status.storage.replication.factor=3" >> ./etc/kafka-connect-replicator/quickstart-replicator.properties

#. Start |crep|:

   .. codewithvars:: bash

      replicator --cluster.id <replicator-cluster-id> \
         --producer.config replicator_producer.properties \
         --consumer.config replicator_consumer.properties \
         --replication.config ./etc/kafka-connect-replicator/quickstart-replicator.properties

|crep| uses the offsets committed by |mmaker| in DC1 and starts replicating
messages from DC1 to DC2 based on those offsets.

**********
Next steps
**********

- Sign up for :ccloud-cta:`Confluent Cloud|` and use the
  :cloud:`Cloud quick start|get-started/index.html` to get started.
- Download :cp-download:`Confluent Platform|` and use the
  :ref:`Confluent Platform quick start ` to get started.
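Both migration examples depend on |crep| resuming from the consumer offsets
that |mmaker| committed in DC1. As an optional sanity check before starting
|crep|, you can inspect those committed offsets with the
``kafka-consumer-groups`` tool that ships with |ak|. This is a sketch, not
part of the migration itself: ``mirrormaker-group`` is a placeholder for the
actual |mmaker| ``group.id`` (the same value you set as
``src.consumer.group.id`` for |crep|), and ``localhost:9082`` stands in for
the ``bootstrap.servers`` of DC1:

.. codewithvars:: bash

   # List the committed offsets for the MirrorMaker consumer group in DC1.
   # Replace localhost:9082 with the bootstrap.servers of DC1 and
   # mirrormaker-group with the MirrorMaker group.id (a placeholder here).
   kafka-consumer-groups --bootstrap-server localhost:9082 \
      --describe --group mirrormaker-group

The ``CURRENT-OFFSET`` column in the output shows the last committed offset
for each partition of ``inventory``; after |crep| starts, replication to DC2
should continue from these positions rather than from offset ``0``.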