MapR DB Sink Connector

The MapR DB connector is used to write data to a MapR DB cluster.


This connector requires that the MapR Client is installed and working properly on the host running the Connect worker process. The Kafka Connect worker process must be started with -Dmapr.home.dir=/opt/mapr -Djava.library.path=/opt/mapr/lib/. You can do this by exporting the KAFKA_OPTS environment variable before starting Kafka Connect, for example: export KAFKA_OPTS="-Dmapr.home.dir=/opt/mapr -Djava.library.path=/opt/mapr/lib/".
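The steps above can be sketched as follows; the worker script name and properties file path in the comment are illustrative and depend on your installation:

```shell
# Point the Connect worker JVM at the MapR client libraries.
# Paths assume the default MapR client install under /opt/mapr.
export KAFKA_OPTS="-Dmapr.home.dir=/opt/mapr -Djava.library.path=/opt/mapr/lib/"

# Start the worker from the same shell so it inherits KAFKA_OPTS,
# for example:
#   connect-distributed /etc/kafka/connect-distributed.properties
```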


The table on the MapR DB cluster is selected based on the topic name. If you need to change this behavior, take a look at the RegexRouter transformation, which can be used to change the topic name before the record is sent to MapR DB.
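Such a renaming can be sketched with the RegexRouter single message transform; the transform alias (route) and the regex below are illustrative only:

```properties
# Route records from topics matching "mapr-(.*)" to a table named by the capture group,
# e.g. topic "mapr-orders" is written to table "orders"
transforms=route
transforms.route.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.route.regex=mapr-(.*)
transforms.route.replacement=$1
```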


Property based example

This configuration is typically used with standalone workers.

name=MapRDBSinkConnector1
connector.class=io.confluent.connect.mapr.db.MapRDBSinkConnector
tasks.max=1
topics=< Required Configuration >

Rest based example

This configuration is typically used with distributed workers. Write the following JSON to connector.json, configure all of the required values, and use the command below to post the configuration to one of the distributed Connect worker(s). See the Kafka Connect REST API documentation for more information.

Connect Distributed REST example
  "config" : {
    "name" : "MapRDBSinkConnector1",
    "connector.class" : "io.confluent.connect.mapr.db.MapRDBSinkConnector",
    "tasks.max" : "1",
    "topics" : "< Required Configuration >"

Use curl to post the configuration to one of the Kafka Connect workers. Change http://localhost:8083/ to the endpoint of one of your Kafka Connect worker(s).

Create a new connector
curl -s -X POST -H 'Content-Type: application/json' --data @connector.json http://localhost:8083/connectors
Update an existing connector
curl -s -X PUT -H 'Content-Type: application/json' --data @connector.json http://localhost:8083/connectors/MapRDBSinkConnector1/config