InfluxDB 2 Sink Connector for Confluent Cloud¶
Note
If you are installing the connector locally for Confluent Platform, see InfluxDB Sink Connector for Confluent Platform.
The fully-managed Kafka Connect InfluxDB 2 Sink connector writes data from an Apache Kafka® topic to an InfluxDB bucket.
Features¶
The InfluxDB 2 Sink connector supports the following features:
- At least once delivery: This connector guarantees that records from the Kafka topic are delivered at least once.
- Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
Limitations¶
Be sure to review the following information.
- For connector limitations, see InfluxDB 2 Sink Connector limitations.
- If you plan to use one or more Single Message Transforms (SMTs), see SMT Limitations.
- If you plan to use Confluent Cloud Schema Registry, see Environment Limitations.
Record structure¶
Each record is in JSON format. It can contain a number of InfluxDB fields, a tags section ("tags"), and a measurement section ("measurement"). The following example shows the record structure required for the connector; a concrete sample record follows the notes below.
{
   "measurement": "measurement-name",
   "tags": {
      "tag1": "value1",
      "tag2": "value2"
   },
   "time-field": <timestamp-in-epochs>,
   "field1": <value>,
   "field2": <value>,
   ...
}
Note the following:

- The "tags" section is optional. This section provides the list of tags associated with the set of fields. Each tag must be a key-value pair of type string.
- The "measurement" field takes the name of the InfluxDB measurement. This field is optional. However, if you do not provide the measurement name here, you must specify the measurement name in the measurement.name.format configuration property. Also, specifying the measurement.name.format property overrides whatever is specified in the Kafka record.
- You can use multiple fields in a record. Fields can be of type int, float, boolean, or string.
- You can designate one of the fields to carry the record timestamp using the event.time.fieldname configuration property. If left unspecified, the timestamp used is the Kafka record timestamp.
- For AVRO, PROTOBUF, and JSON_SR, the structure remains the same. Note that the corresponding schema must be in Schema Registry.
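For example, a concrete record following this structure might look like the one below. The measurement, tag, and field names are illustrative only; the ts field would be designated as the event timestamp through the event.time.fieldname configuration property.

{
   "measurement": "cpu",
   "tags": {
      "host": "server-1",
      "region": "us-west"
   },
   "ts": 1672531200000,
   "usage_idle": 97.5,
   "usage_user": 1.8
}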
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud InfluxDB 2 Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to an InfluxDB bucket.
- Prerequisites
Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
Authorized access to write data to InfluxDB. For more information, see writing data to InfluxDB.
Note
The connector requires --read-bucket and --write-bucket permissions for the bucket where it sends data. For more information, see influx auth create. A minimal token-creation sketch follows this list.

Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
At least one source Kafka topic must exist in your Confluent Cloud cluster before creating the sink connector.
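As a sketch of the token setup referenced in the note above, you can create a token with the required bucket permissions using the influx CLI. The organization name and bucket ID below are placeholders:

# Hypothetical example: create a token that can read from and write to one bucket.
influx auth create \
  --org <org-name> \
  --read-bucket <bucket-id> \
  --write-bucket <bucket-id> \
  --description "InfluxDB 2 Sink connector token"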
Using the Confluent Cloud Console¶
Step 1: Launch your Confluent Cloud cluster.¶
See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.
Step 2: Add a connector.¶
In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.
Step 3: Select your connector.¶
Click the InfluxDB 2 Sink connector card.
Step 4: Set up the connection.¶
Note
- Make sure you have all your prerequisites completed.
- An asterisk ( * ) designates a required entry.
Select one or more topics.
Enter a connector Name.
Select an Input Kafka record value format (data coming from the Kafka topic): AVRO, PROTOBUF, JSON_SR (JSON Schema), or JSON (schemaless). A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
Select the way you want to provide Kafka Cluster credentials. You can either select a service account resource ID or you can enter an API key and secret (or generate these in the Cloud Console).
Enter the InfluxDB Connection details:
- InfluxDB API URL: Fully-qualified InfluxDB API URL used for establishing a connection. For example, http://influxdb-test.com:8086.
- InfluxDB Token: Token to authenticate with the InfluxDB host.
- InfluxDB Organization ID: The InfluxDB organization ID.

Note

The connector requires --read-bucket and --write-bucket permissions for the bucket where it sends data. For more information, see influx auth create.

For more information, see writing data to InfluxDB.
Enter the Write Configuration details:
- Bucket Name: The bucket where the connector sends data.
- Write Precision: The write precision of the InfluxDB timestamp. Valid values are microseconds, milliseconds, nanoseconds, and seconds. The default value is milliseconds.

Note

If the connector uses the Kafka record timestamp (the default) for the event time, instead of a timestamp field specified in the Event Time field name (event.time.fieldname) property, the Kafka record timestamp (in milliseconds) is converted using the precision defined here. If the event.time.fieldname property is used to get the timestamp from a field within the Kafka record, you must provide the correct time unit of that field here. See the sketch following this list.

- Event Time field name: The name of the field in the Kafka record that contains the event time that the connector uses when it writes to an InfluxDB data point. If nothing is entered, the default value used is the Kafka record timestamp, which identifies when the Kafka record was created and corresponds to the time that the event was processed.
- Measurement Name Format: A format string for the destination measurement name, which may contain ${topic} as a placeholder for the originating topic name. For example, kafka_${topic} for the topic orders maps to the measurement name kafka_orders. If the measurement name format is not provided, the connector uses the measurement field value present in the Kafka message. If such a field is not present in the message, the message is sent to the Dead Letter Queue.
- Enable compression: Determines whether gzip compression is enabled. Defaults to false.
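To illustrate how Write Precision and Event Time field name interact, consider a hypothetical record that carries its event time in an epoch-seconds field named event_ts. Setting Event Time field name to event_ts and Write Precision to seconds writes the point with the intended timestamp:

{
   "measurement": "orders",
   "tags": {
      "region": "us-west"
   },
   "event_ts": 1672531200,
   "order_total": 42.5
}

If Write Precision were left at the default milliseconds, the value 1672531200 would instead be interpreted as milliseconds since the epoch, producing a point timestamped in January 1970.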
Enter the Retries details:
- Backoff Time: Backoff time duration in milliseconds that the connector waits before retrying. Defaults to 1000 ms.
- Max retries: The maximum number of times to retry a task when errors occur, before the task fails. Defaults to 10.
Enter the number of tasks to use with the connector. More tasks may improve performance.
Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.
See Configuration Properties for all property values and descriptions.
Step 5: Launch the connector.¶
Verify the connection details and click Launch.
Step 6: Check the connector status.¶
The status for the connector should go from Provisioning to Running.
Step 7: Check for records.¶
Verify that data is being produced at the InfluxDB host.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.
Using the Confluent CLI¶
To set up and run the connector using the Confluent CLI, complete the following steps.
Note
Make sure you have all your prerequisites completed.
Step 1: List the available connectors.¶
Enter the following command to list available connectors:
confluent connect plugin list
Step 2: Show the required connector configuration properties.¶
Enter the following command to show the required connector properties:
confluent connect plugin describe <connector-catalog-name>
For example:
confluent connect plugin describe InfluxDB2Sink
Example output:
The following are required configs:
connector.class : InfluxDB2Sink
topics
input.data.format
name
kafka.api.key
kafka.api.secret
influxdb.url
influxdb.token
influxdb.org.id
influxdb.bucket
Step 3: Create the connector configuration file.¶
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
  "connector.class": "InfluxDB2Sink",
  "topics": "orders",
  "input.data.format": "JSON",
  "name": "InfluxDB2Sink_0",
  "kafka.api.key": "****************",
  "kafka.api.secret": "*********************************",
  "influxdb.url": "http://influxdb-test.com:8086",
  "influxdb.token": "***************************",
  "influxdb.org.id": "<organization-id>",
  "influxdb.bucket": "<bucket-name>",
  "tasks.max": "1"
}
Note the following property definitions:
"connector.class"
: Identifies the connector plugin name."topics"
: Enter the topic name or a comma-separated list of topic names."input.data.format"
(data coming from the Kafka topic): Supports AVRO, PROTOBUF, JSON_SR (JSON Schema), or JSON (schemaless). A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf)."name"
: Sets a name for your new connector.
"kafka.auth.mode"
: Identifies the connector authentication mode you want to use. There are two options:SERVICE_ACCOUNT
orKAFKA_API_KEY
(the default). To use an API key and secret, specify the configuration propertieskafka.api.key
andkafka.api.secret
, as shown in the example configuration (above). To use a service account, specify the Resource ID in the propertykafka.service.account.id=<service-account-resource-ID>
. To list the available service account resource IDs, use the following command:confluent iam service-account list
For example:
confluent iam service-account list Id | Resource ID | Name | Description +---------+-------------+-------------------+------------------- 123456 | sa-l1r23m | sa-1 | Service account 1 789101 | sa-l4d56p | sa-2 | Service account 2
"influxdb.url"
: Fully-qualified InfluxDB API URL used for establishing a connection. For example,http://influxdb-test.com:8086
"influxdb.token"
: Token to authenticate with the InfluxDB host."influxdb.org.id"
: The InfluxDB organization ID.Note
The connector requires
--read-bucket
andwrite-bucket
permissions for the bucket where it sends data. For more information, see influx auth create.For more information, see writing data to InfluxDB.
"influxdb.bucket"
: The bucket where the connector sends data."tasks.max"
: Enter the maximum number of tasks for the connector to use. More tasks may improve performance.
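For reference, the following is a minimal sketch of the same configuration using service account authentication instead of an API key and secret. The resource ID is a placeholder taken from the example output above:

{
  "connector.class": "InfluxDB2Sink",
  "topics": "orders",
  "input.data.format": "JSON",
  "name": "InfluxDB2Sink_0",
  "kafka.auth.mode": "SERVICE_ACCOUNT",
  "kafka.service.account.id": "sa-l1r23m",
  "influxdb.url": "http://influxdb-test.com:8086",
  "influxdb.token": "***************************",
  "influxdb.org.id": "<organization-id>",
  "influxdb.bucket": "<bucket-name>",
  "tasks.max": "1"
}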
Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
See Configuration Properties for all property values and descriptions.
Step 4: Load the properties file and create the connector.¶
Enter the following command to load the configuration and start the connector:
confluent connect create --config <file-name>.json
For example:
confluent connect create --config influxdb2-sink-config.json
Example output:
Created connector InfluxDB2Sink_0 lcc-do6vzd
Step 5: Check the connector status.¶
Enter the following command to check the connector status:
confluent connect list
Example output:
ID | Name | Status | Type | Trace
+------------+---------------------------+---------+------+-------+
lcc-do6vzd | InfluxDB2Sink_0 | RUNNING | sink | |
Step 6: Check for records.¶
Verify that data is being produced at the InfluxDB 2 host.
For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
Tip
When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.
Configuration Properties¶
Use the following configuration properties with this connector.
Which topics do you want to get data from?¶
topics
Identifies the topic name or a comma-separated list of topic names.
- Type: list
- Importance: high
Input messages¶
input.data.format
Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or plain JSON. Note that you must have Confluent Cloud Schema Registry configured when using a schema-based message format such as AVRO, JSON_SR, or PROTOBUF.
- Type: string
- Importance: high
How should we connect to your data?¶
name
Sets a name for your connector.
- Type: string
- Valid Values: A string at most 64 characters long
- Importance: high
Kafka Cluster credentials¶
kafka.auth.mode
Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.
- Type: string
- Default: KAFKA_API_KEY
- Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
- Importance: high
kafka.api.key
- Type: password
- Importance: high
kafka.service.account.id
The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.
- Type: string
- Importance: high
kafka.api.secret
- Type: password
- Importance: high
InfluxDB¶
influxdb.url
Fully-qualified InfluxDB API URL used for establishing a connection.
- Type: string
- Importance: high
influxdb.token
Token to authenticate with InfluxDB.
- Type: password
- Importance: high
influxdb.org.id
Organization ID.
- Type: string
- Importance: high
Write Configuration¶
influxdb.bucket
The bucket to which this connector sends data.
- Type: string
- Importance: high
write.precision
Write precision of the InfluxDB timestamp. Valid values are Seconds, Milliseconds, Microseconds, and Nanoseconds. Note that if the Kafka record timestamp is used, instead of a timestamp field specified using 'event.time.fieldname', the Kafka timestamp (in milliseconds) is converted to the precision defined here. Otherwise, you must provide the correct time unit of the 'event.time.fieldname' field here.
- Type: string
- Default: Milliseconds
- Importance: medium
event.time.fieldname
The name of the field in the Kafka record that contains the event time to be written to an InfluxDB data point. By default (if this config is unspecified), the timestamp written to InfluxDB is the Kafka record timestamp (when the Kafka record was created), which corresponds to the time that the event was processed.
- Type: string
- Importance: medium
measurement.name.format
A format string for the destination measurement name, which may contain ‘${topic}’ as a placeholder for the originating topic name.
For example, kafka_${topic} for the topic 'orders' maps to the measurement name 'kafka_orders'. If the measurement name format is not provided, the connector uses the 'measurement' field value present in the Kafka message. If such a field is not present in the message, the message is sent to the Dead Letter Queue (DLQ).
- Type: string
- Importance: medium
influxdb.gzip.enable
Flag to determine if gzip should be enabled.
- Type: boolean
- Default: false
- Importance: low
Retries¶
retry.backoff.ms
Backoff time duration to wait before retrying.
- Type: int
- Default: 1000 (1 second)
- Importance: medium
max.retries
The maximum number of times to retry on errors before failing the task.
- Type: int
- Default: 10
- Importance: medium
Number of tasks for this connector¶
tasks.max
Maximum number of tasks for the connector. More tasks may improve performance.
- Type: int
- Valid Values: [1,…]
- Importance: high
Next Steps¶
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.