
InfluxDB Sink

The InfluxDB sink connector writes data from a Kafka topic to an InfluxDB host. When a batch contains more than one record with the same measurement, time, and tags, those records are combined into a single point and written to InfluxDB in one batch.
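The combining behavior described above can be sketched roughly as follows. This is an illustrative Python sketch, not the connector's actual code: records sharing a (measurement, time, tags) key have their fields merged into one point.

```python
from collections import OrderedDict

def combine_records(records):
    """Combine records with the same measurement, time, and tags
    into a single point by merging their fields (illustrative only)."""
    points = OrderedDict()
    for rec in records:
        # Key on measurement, time, and the (sorted) tag set.
        key = (rec["measurement"], rec["time"],
               tuple(sorted(rec["tags"].items())))
        if key not in points:
            points[key] = {
                "measurement": rec["measurement"],
                "time": rec["time"],
                "tags": dict(rec["tags"]),
                "fields": {},
            }
        # Merge field values into the existing point.
        points[key]["fields"].update(rec["fields"])
    return list(points.values())

records = [
    {"measurement": "cpu", "time": 1, "tags": {"host": "a"}, "fields": {"user": 0.5}},
    {"measurement": "cpu", "time": 1, "tags": {"host": "a"}, "fields": {"system": 0.2}},
    {"measurement": "cpu", "time": 2, "tags": {"host": "a"}, "fields": {"user": 0.4}},
]
print(len(combine_records(records)))  # the first two records collapse into one point
```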

Examples

Property based example

This configuration is typically used with standalone workers.

name=InfluxDBSinkConnector1
connector.class=io.confluent.influxdb.InfluxDBSinkConnector
tasks.max=1
topics=< Required Configuration >
influxdb.url=< Required Configuration >
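For example, the properties above can be saved to a file and loaded with a standalone worker. The topic name and InfluxDB URL below are example values standing in for your required settings, and the `connect-standalone` invocation assumes Confluent Platform's `bin` directory is on your `PATH` and that a worker configuration file already exists:

```shell
# Save the connector configuration to a properties file.
cat > influxdb-sink.properties <<'EOF'
name=InfluxDBSinkConnector1
connector.class=io.confluent.influxdb.InfluxDBSinkConnector
tasks.max=1
topics=orders
influxdb.url=http://localhost:8086
EOF

# Start a standalone worker with this connector:
# connect-standalone connect-standalone.properties influxdb-sink.properties
```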

REST based example

This configuration is typically used with distributed workers. Write the following JSON to connector.json, configure all of the required values, and use the commands below to post the configuration to one of the distributed Connect workers. See the Kafka Connect REST API documentation for more information.

Connect Distributed REST example
{
  "name" : "InfluxDBSinkConnector1",
  "config" : {
    "connector.class" : "io.confluent.influxdb.InfluxDBSinkConnector",
    "tasks.max" : "1",
    "topics" : "< Required Configuration >",
    "influxdb.url" : "< Required Configuration >"
  }
}

Use curl to post the configuration to one of the Kafka Connect workers. Change http://localhost:8083/ to the endpoint of one of your Kafka Connect workers.

Create a new connector
curl -s -X POST -H 'Content-Type: application/json' --data @connector.json http://localhost:8083/connectors
Update an existing connector (the PUT endpoint expects only the config map itself, not the surrounding wrapper, so extract it first, for example with jq)
jq .config connector.json | curl -s -X PUT -H 'Content-Type: application/json' --data @- http://localhost:8083/connectors/InfluxDBSinkConnector1/config
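The same calls can also be made programmatically. Below is a minimal Python sketch using only the standard library; the worker URL and connector name mirror the examples above, and the topic and InfluxDB URL are example values to be replaced with your required settings:

```python
import json
import urllib.request

WORKER_URL = "http://localhost:8083"  # adjust to one of your Connect workers

def connector_request(method, path, payload):
    """Build an HTTP request object for the Kafka Connect REST API."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        WORKER_URL + path,
        data=body,
        method=method,
        headers={"Content-Type": "application/json"},
    )

config = {
    "connector.class": "io.confluent.influxdb.InfluxDBSinkConnector",
    "tasks.max": "1",
    "topics": "orders",                       # example value
    "influxdb.url": "http://localhost:8086",  # example value
}

# Create: POST /connectors takes the name plus the config map.
create = connector_request(
    "POST", "/connectors",
    {"name": "InfluxDBSinkConnector1", "config": config},
)

# Update: PUT /connectors/<name>/config takes the config map alone.
update = connector_request(
    "PUT", "/connectors/InfluxDBSinkConnector1/config", config,
)

# To actually send a request (requires a running worker):
# with urllib.request.urlopen(create) as resp:
#     print(resp.status, resp.read().decode())
```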