
InfluxDB Sink Connector

You can use the InfluxDB sink connector to write data from a Kafka topic to an InfluxDB host. When multiple records in a batch share the same measurement, time, and tags, they are combined into a single point and written to InfluxDB together.
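The grouping behavior described above can be sketched as follows. This is an illustrative model, not the connector's actual implementation: records that share the key (measurement, time, tags) have their fields merged into one point.

```python
from collections import defaultdict

def merge_batch(records):
    """Combine records that share measurement, time, and tags into one point.

    Sketch of the batching behavior described above; the connector's real
    implementation differs, but the grouping key is the same.
    """
    points = defaultdict(dict)
    for rec in records:
        key = (rec["measurement"], rec["time"], tuple(sorted(rec["tags"].items())))
        points[key].update(rec["fields"])  # later fields win on conflict
    return [
        {"measurement": m, "time": t, "tags": dict(tags), "fields": f}
        for (m, t, tags), f in points.items()
    ]

batch = [
    {"measurement": "orders", "time": 1, "tags": {"region": "eu"}, "fields": {"price": 50.0}},
    {"measurement": "orders", "time": 1, "tags": {"region": "eu"}, "fields": {"quantity": 100}},
]
# Both records share measurement, time, and tags, so they merge into one point.
merged = merge_batch(batch)
```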

Quick Start

In this quick start, you copy data from a single Kafka topic to a measurement in a local InfluxDB database running in Docker.

This example assumes you are running Kafka and Schema Registry locally on the default ports. It also assumes you have Docker installed and running.


The InfluxDB Docker container can be replaced with any installed InfluxDB server.

First, bring up the Influx database by running the following Docker command:

docker run -d -p 8086:8086 --name influxdb-local influxdb:1.7.7

This starts the Influx database and maps it to port 8086 on localhost. By default, the username and password are blank. The database connection URL is http://localhost:8086.

Start the Confluent Platform using the Confluent CLI command below:

confluent start

Property-based example

Next, create a configuration file for the connector. This configuration is typically used with standalone workers. This file is included with the connector in ./etc/kafka-connect-influxdb/ and contains the following settings:
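The bundled file is not reproduced here; a minimal sketch with the same settings as the REST example later on this page would look like this (the exact bundled file may contain additional settings, such as converter configuration):

```properties
name=InfluxDBSinkConnector
connector.class=io.confluent.influxdb.InfluxDBSinkConnector
tasks.max=1
topics=orders
influxdb.url=http://localhost:8086
```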


The first few settings are common settings you specify for all connectors, except for topics, which is specific to sink connectors like this one.

The influxdb.url setting specifies the connection URL of the InfluxDB server. The influxdb.username and influxdb.password settings specify the username and password for the InfluxDB server. Because the username and password are blank by default for the InfluxDB server above, they are not added to the configuration.

Run the connector with this configuration.

confluent load InfluxDBSinkConnector -d etc/kafka-connect-influxdb/

REST-based example

This configuration is typically used with distributed workers. Write the following JSON to influxdb-sink-connector.json, configure all of the required values, and use the command below to post the configuration to one of the distributed Connect workers. See the Kafka Connect REST API for more information.

  "config" : {
    "name" : "InfluxDBSinkConnector",
    "connector.class" : "io.confluent.influxdb.InfluxDBSinkConnector",
    "tasks.max" : "1",
    "topics" : "orders",
    "influxdb.url" : "http://localhost:8086"

Use curl to post the configuration to one of the Kafka Connect workers. Change http://localhost:8083/ to the endpoint of one of your Kafka Connect workers if it differs.

curl -X POST -d @influxdb-sink-connector.json http://localhost:8083/connectors -H "Content-Type: application/json"
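If you prefer to register the connector programmatically, the curl command above is equivalent to the following Python sketch using only the standard library (the worker URL assumes the default Connect REST port):

```python
import json
import urllib.request

# Connector configuration, matching the JSON file above.
config = {
    "name": "InfluxDBSinkConnector",
    "config": {
        "connector.class": "io.confluent.influxdb.InfluxDBSinkConnector",
        "tasks.max": "1",
        "topics": "orders",
        "influxdb.url": "http://localhost:8086",
    },
}

# POST the configuration to a Kafka Connect worker's /connectors endpoint.
req = urllib.request.Request(
    "http://localhost:8083/connectors",
    data=json.dumps(config).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) submits it; on success the worker responds
# with 201 Created and echoes the stored configuration.
```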

Next, create a record in the orders topic:

bin/kafka-avro-console-producer \
--broker-list localhost:9092 --topic orders \
--property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"measurement","type":"string"},{"name":"id","type":"int"},{"name":"product","type":"string"},{"name":"quantity","type":"int"},{"name":"price","type":"float"}]}'

The console producer is waiting for input. Copy and paste the following record into the terminal:

{"measurement": "orders", "id": 999, "product": "foo", "quantity": 100, "price": 50}

To verify the data in InfluxDB, log in to the Docker container using the following command:

docker exec -it <containerid> bash


To find the container ID, use the docker ps command.

Once you are in the Docker container, log in to the InfluxDB shell:

influx

Your output should resemble:

Connected to http://localhost:8086 version 1.7.7
InfluxDB shell version: 1.7.7

Finally, run the following query to verify the records:

> USE orders;
  Using database orders

> SELECT * FROM orders;
  name: orders
  time                id  price product quantity
  ----                --  ----- ------- --------
  1567164248415000000 999 50    foo     100
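The same check can also be run from outside the container through InfluxDB's 1.x HTTP /query endpoint, without an interactive shell. A minimal sketch of building the query URL (the host and port match the Docker mapping above):

```python
from urllib.parse import urlencode

# Build a query against InfluxDB's 1.x HTTP API: GET /query with the
# database name and an InfluxQL statement as URL parameters.
params = urlencode({"db": "orders", "q": "SELECT * FROM orders"})
url = f"http://localhost:8086/query?{params}"
# urllib.request.urlopen(url) returns the matching rows as JSON.
```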

Additional Documentation