InfluxDB Sink Connector

You can use the InfluxDB sink connector to write data from a Kafka topic to an InfluxDB host. When a batch contains more than one record with the same measurement, time, and tags, the records are combined into a single point and written to InfluxDB in one batch.
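To illustrate the batching behavior described above, here is a hedged Python sketch (not the connector's actual code) of how records that share measurement, time, and tags collapse into a single point:

```python
def merge_points(records):
    # Group records by (measurement, time, tags); records in the same
    # group contribute their value fields to one combined point.
    merged = {}
    for rec in records:
        key = (rec["measurement"], rec["time"], tuple(sorted(rec["tags"].items())))
        merged.setdefault(key, {}).update(rec["fields"])
    return [
        {"measurement": m, "time": t, "tags": dict(tags), "fields": fields}
        for (m, t, tags), fields in merged.items()
    ]

# Two records with identical measurement, time, and tags...
batch = [
    {"measurement": "cpu", "time": 1567164248415000000,
     "tags": {"hostname": "test"}, "fields": {"cpu1": 10}},
    {"measurement": "cpu", "time": 1567164248415000000,
     "tags": {"hostname": "test"}, "fields": {"cpu2": 5}},
]
# ...become one point carrying both fields.
points = merge_points(batch)
```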

Quick Start

In this quick start, you copy data from a single Kafka topic to a measurement in a local InfluxDB database running in Docker.

This example assumes you are running Kafka and Schema Registry locally on the default ports. It also assumes you have Docker installed and running.


The Dockerized InfluxDB instance can be replaced with any installed InfluxDB server.

First, bring up the Influx database by running the following Docker command:

docker run -d -p 8086:8086 --name influxdb-local influxdb:1.7.7

This starts the Influx database and maps it to port 8086 on localhost. By default, the username and password are blank. The database connection URL is http://localhost:8086.

Start the Confluent Platform using the Confluent CLI command below.


The command syntax for the Confluent CLI development commands changed in 5.3.0. These commands have been moved to confluent local. For example, the syntax for confluent start is now confluent local start. For more information, see confluent local.

confluent local start

Property-based example

Next, create a configuration file for the connector. This configuration is typically used with standalone workers. This file is included with the connector in ./etc/kafka-connect-influxdb/ and contains the following settings:
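The packaged file is not reproduced here verbatim; a minimal sketch of these settings, assuming the same values used throughout this quick start, would look like:

```properties
name=InfluxDBSinkConnector
connector.class=io.confluent.influxdb.InfluxDBSinkConnector
tasks.max=1
topics=orders
influxdb.url=http://localhost:8086
influxdb.db=influxTestDB
```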


The first few settings are common settings you specify for all connectors, except for topics, which is specific to sink connectors like this one.

The influxdb.url property specifies the connection URL of the InfluxDB server. The influxdb.db, influxdb.username, and influxdb.password properties specify the database name, username, and password of the InfluxDB server, respectively. Because the username and password are blank by default for the InfluxDB server started above, they are not included in the configuration.

Run the connector with this configuration.

confluent local load InfluxDBSinkConnector -- -d etc/kafka-connect-influxdb/

REST-based example

This configuration is typically used with distributed workers. Write the following JSON to influxdb-sink-connector.json, configure all of the required values, and use the command below to post the configuration to one of the distributed Connect workers. See the Kafka Connect REST API for more information.

  "config" : {
    "name" : "InfluxDBSinkConnector",
    "connector.class" : "io.confluent.influxdb.InfluxDBSinkConnector",
    "tasks.max" : "1",
    "topics" : "orders",
    "influxdb.url" : "http://localhost:8086"
    "influxdb.db" : "influxTestDB"

Use curl to post the configuration to one of the Kafka Connect workers. Change http://localhost:8083/ to the endpoint of one of your Kafka Connect workers.

curl -X POST -d @influxdb-sink-connector.json http://localhost:8083/connectors -H "Content-Type: application/json"
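Hand-edited JSON is easy to break with a missing comma or brace. As an alternative, a sketch like the following generates the same payload programmatically, which guarantees valid syntax (file name and values match this quick start):

```python
import json

# Build the same connector configuration as influxdb-sink-connector.json.
config = {
    "name": "InfluxDBSinkConnector",
    "config": {
        "connector.class": "io.confluent.influxdb.InfluxDBSinkConnector",
        "tasks.max": "1",
        "topics": "orders",
        "influxdb.url": "http://localhost:8086",
        "influxdb.db": "influxTestDB",
    },
}

# Write the payload; post it with the curl command shown above.
with open("influxdb-sink-connector.json", "w") as f:
    json.dump(config, f, indent=2)
```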

Next, create a record in the orders topic:

bin/kafka-avro-console-producer \
--broker-list localhost:9092 --topic orders \
--property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"id","type":"int"},{"name":"product","type":"string"},{"name":"quantity","type":"int"},{"name":"price","type":"float"}]}'

The console producer is waiting for input. Copy and paste the following record into the terminal:

{"id": 999, "product": "foo", "quantity": 100, "price": 50}

To verify the data in InfluxDB, log in to the Docker container using the following command:

docker exec -it <containerid> bash


To find the container ID use the docker ps command.

Once you are in the Docker container, log in to the InfluxDB shell by running the influx command:

influx

Your output should resemble:

Connected to http://localhost:8086 version 1.7.7
InfluxDB shell version: 1.7.7

Finally, run the following query to verify the records:

> USE influxTestDB;
  Using database influxTestDB

> SELECT * FROM orders;
  name: orders
  time                id  price product quantity
  ----                --  ----- ------- --------
  1567164248415000000 999 50    foo     100
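InfluxDB stores timestamps as nanoseconds since the Unix epoch, so the time column above is not directly human-readable. A quick conversion sketch:

```python
from datetime import datetime, timezone

ns = 1567164248415000000  # value from the "time" column above
# Split into whole seconds and the sub-second remainder in nanoseconds.
seconds, remainder = divmod(ns, 1_000_000_000)
ts = datetime.fromtimestamp(seconds, tz=timezone.utc).replace(
    microsecond=remainder // 1000)
print(ts.isoformat())  # 2019-08-30T11:24:08.415000+00:00
```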

Record Structure

Each InfluxDB record consists of a measurement, optional tags, one or more value fields that you define, and a timestamp. For example:

  "measurement": "cpu",
  "tags": {
    "hostname": "test",
    "ip": ""
  "cpu1": 10,
  "cpu2": 5,
  "cpu3": 15
  • measurement is a required field and must be of type String. However, if the measurement is specified in the connector's configuration along with influxdb.db, then measurement is optional; that is, it is not required in the record.
  • tags is an optional field and must be of type map (represented as a record in Avro).
  • All other fields are considered value fields, and can be of type Float, Integer, String, or Boolean.
  • At least one value field is required in the record.
  • The timestamp in the header of the record is used as the timestamp in InfluxDB.
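As an illustration of how such a record maps onto InfluxDB's line protocol (measurement, comma-separated tag set, field set, timestamp), here is a simplified sketch; it is not the connector's actual serialization code and omits the escaping and type suffixes (such as i for integers) that the real protocol uses:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    # line protocol shape: measurement,tag1=v1 field1=v1,field2=v2 <ns timestamp>
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    head = f"{measurement},{tag_str}" if tag_str else measurement
    return f"{head} {field_str} {timestamp_ns}"

# Using values similar to the record structure example above.
line = to_line_protocol("cpu", {"hostname": "test"},
                        {"cpu1": 10, "cpu2": 5, "cpu3": 15},
                        1567164248415000000)
```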

To learn more, see the InfluxDB documentation.

Additional Documentation