Datadog Metrics Sink for Confluent Cloud

You can use the fully-managed Datadog Metrics Sink connector for Confluent Cloud to export data from Apache Kafka® to Datadog using the post time-series metrics API. The connector can be used to export Kafka records in Avro, JSON Schema (JSON-SR), Protobuf, JSON (schemaless), or Bytes format to a Datadog endpoint.

Note

This is a Quick Start for the fully-managed cloud connector. If you are installing the connector locally for Confluent Platform, see Datadog Metrics Sink Connector for Confluent Platform.

Features

The Datadog Metrics Sink connector supports the following features:

  • At least once delivery: This connector guarantees that records from the Kafka topic are delivered at least once.

  • Automatically creates topics: The following three topics are automatically created when the connector starts:

    • success-<connector-id>
    • error-<connector-id>
    • dlq-<connector-id>

    The suffix for each topic name is the connector’s logical ID. In the example below, there are the three connector topics and one pre-existing Kafka topic named pageviews.

    Datadog Metrics Sink Connector Topics

    If the records sent to the topic are not in the correct format, or if important fields are missing from the record, the errors are recorded in the error topic and the connector continues to run.

  • Supported data formats: The connector supports Avro, JSON Schema (JSON-SR), Protobuf, JSON (schemaless), and Bytes formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf). See Schema Registry Enabled Environments for additional information.

  • Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance (that is, consumer lag is reduced with multiple tasks running).

  • Batches multiple Datadog metrics: The connector tries to batch metrics in a single payload for each API request (maximum payload size 3.2 MB). For more information, see the post time-series metrics API docs.

  • Supported metrics types: The connector supports Gauge, Rate, and Count metric types. Each metric type has a different schema. Kafka topics that contain one of these metric types must have records that adhere to the metric type schema (see the sketch following this list). For additional information, see Metric types.
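For reference, a count metric record might look like the following. This is a sketch, and it assumes the record layout shown under Kafka record mapping below also applies to count metrics, with type set to count and an interval supplied in dimensions; the metric name and values are illustrative:

{
  "name": "requests.count",
  "type": "count",
  "timestamp": 1615466162,
  "dimensions": {
    "host": "metric.host",
    "interval": 10
  },
  "values": {
    "doubleValue": 42.0
  }
}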

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.

Limitations

Be sure to review the following information.

Kafka record mapping

The connector accepts a struct type as the Kafka record. Each record must contain a name field, a timestamp field, and a values field. The values entry holds the metric’s value, and the timestamp value must be in UNIX epoch format.

An optional dimensions entry provides support for metrics filtering. Metrics can be filtered by host (hostname), interval value, and tag key-value pairs. The connector accepts metrics defined by the Datadog custom metrics properties.

The following shows a Kafka record sample with optional fields noted:

{
  "name": string,
  "type": string,              -- optional (DEFAULT = gauge)
  "timestamp": long,
  "dimensions": {              -- optional
    "host": string,            -- optional
    "interval": int,           -- optional (DEFAULT = 0)
    <tag1-key>: <tag1-value>,  -- optional
    <tag2-key>: <tag2-value>,
    ....
  },
  "values": {
    "doubleValue": double
  }
}

The connector maps the submitted Kafka record to the metrics payload accepted by the Datadog post time-series metrics API. For example, the connector maps a Kafka record in this format:

{
  "name": "test.metric",
  "type": "gauge",
  "timestamp": 1615466162,
  "dimensions": {
    "host": "metric.host",
    "interval": 1,
    "tag1": "postman",
    "tag2": "linux"
  },
  "values": {
    "doubleValue": 0.966121580485208
  }
}

to this acceptable Datadog post time-series metrics API format:

{
  "series": [
    {
      "host": "metric.host",
      "metric": "test.metric",
      "points": [
        [
          "1615466162",
          "0.966121580485208"
        ]
      ],
      "tags": [
        "host:metric.host",
        "interval:1",
        "tag1:postman",
        "tag2:linux"
      ],
      "type": "gauge",
      "interval": 1
    }
  ]
}
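For reference, submitting the payload above directly to the Datadog API would look roughly like the following curl sketch; <YOUR_DD_API_KEY> is a placeholder for your Datadog API key, and the endpoint shown assumes the COM domain:

curl -X POST "https://api.datadoghq.com/api/v1/series" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: <YOUR_DD_API_KEY>" \
  -d '{"series": [{"host": "metric.host", "metric": "test.metric", "points": [["1615466162", "0.966121580485208"]], "tags": ["tag1:postman", "tag2:linux"], "type": "gauge", "interval": 1}]}'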

Quick Start

Use this quick start to get up and running with the Confluent Cloud Datadog Metrics Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to a Datadog project.

Prerequisites
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create (see the example command after this list) or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
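For example, to create an API key and secret with the Confluent CLI, where <cluster-id> is a placeholder for your Kafka cluster resource ID:

confluent api-key create --resource <cluster-id>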

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Datadog Metrics Sink connector card.

Datadog Metrics Sink Connector Card

Step 4: Enter the connector details

Note

  • Ensure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

At the Add Datadog Metrics Sink Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check for records

Verify that metrics are being produced. Go to the Metrics Explorer in your Datadog project and search for the graph with the name you used for the Kafka topic metric property (for example, "metric": "test.metric").

Datadog metric graph

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.
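For example:

confluent connect plugin describe DatadogMetricsSink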

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "DatadogMetricsSink",
  "input.data.format": "JSON",
  "name": "DatadogMetricsSinkConnector_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "****************",
  "kafka.api.secret": "****************************************************************",
  "datadog.domain": "COM",
  "datadog.api.key": "**************************************************",
  "tasks.max": "1",
  "topics": "<topic-1>, <topic-2>",
  "max.retry.time.ms": "5000"
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "datadog.domain": Use either COM or EU, depending on the domain where your Datadog project is located.

  • "datadog.api.key": This is the API key for your Datadog project. To create an API key, see Add an API key or client token.

  • "tasks.max": Enter the maximum number of tasks for the connector to use. More tasks may improve performance (that is, consumer lag is reduced with multiple tasks running).

  • "topics": Enter the topic name or a comma-separated list of topic names.

  • "max.retry.time.ms": When a post request error occurs, the connector will retry until the amount of time entered elapses. You should set this value to be at least 1000 milliseconds (ms). The default retry time is 5000 ms (5 seconds).

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI. See Unsupported transformations for a list of SMTs that are not supported with this connector.

See Configuration Properties for all property values and definitions.

Step 4: Load the properties file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file datadog-metrics-sink-config.json

Example output:

Created connector DatadogMetricsSinkConnector_0 lcc-do6vzd

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID           |             Name              | Status  | Type | Trace
+------------+-------------------------------+---------+------+-------+
lcc-do6vzd   | DatadogMetricsSinkConnector_0 | RUNNING | sink |
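To view the configuration and task details for the connector, you can also describe it by its ID. For example, using the ID from the output above:

confluent connect cluster describe lcc-do6vzd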

Step 6: Check for records

Verify that metrics are being produced. Go to the Metrics Explorer in your Datadog project and search for the graph with the name you used for the Kafka topic metric property (for example, "metric": "test.metric").

Datadog metric graph
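If the graph is empty, you can produce a test record from the Confluent CLI and watch for it to appear. The following is a sketch, assuming a connected topic named test-topic that uses the schemaless JSON format; use a recent UNIX epoch timestamp, since Datadog may not display points that are too far in the past:

confluent kafka topic produce test-topic

At the prompt, enter a record such as:

{"name": "test.metric", "type": "gauge", "timestamp": 1700000000, "dimensions": {"host": "metric.host"}, "values": {"doubleValue": 0.96}}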

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Managed and Custom Connectors section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Confluent Cloud Dead Letter Queue for details.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string
  • Default: default
  • Importance: medium

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, JSON, or BYTES. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format such as AVRO, JSON_SR, or PROTOBUF.

  • Type: string
  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high
kafka.service.account.id

The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high

How should we connect to Datadog?

datadog.api.key

The Datadog API key that is required to submit metrics and events to Datadog.

  • Type: password
  • Importance: high
datadog.site

The Datadog site that your Datadog account belongs to. There are five possible values: US1, US3, US5, EU1, or US1-FED. This setting determines the Datadog API endpoint that the connector posts metrics to. If this property is not configured, the endpoint is determined by the datadog.domain value.

  • Type: string
  • Default: “”
  • Importance: high
datadog.domain

The Datadog domain that your Datadog account belongs to. The two possible values are EU and COM. If datadog.site is not configured, this setting determines the Datadog API endpoint that the connector posts metrics to: EU maps to https://api.datadoghq.eu and COM maps to https://api.datadoghq.com. An example configuration fragment follows this entry.

  • Type: string
  • Default: COM
  • Importance: low
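For example, an account on the US3 site would select the endpoint by site rather than domain. A minimal sketch of the relevant fragment in the connector configuration file (when datadog.site is set, it takes precedence over datadog.domain, per the property descriptions above):

{
  ...
  "datadog.site": "US3"
}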

Datadog Details

max.retry.time.ms

If an error occurs while executing a post request, the connector retries until this amount of time (in ms) elapses. The default value is 5000 ms (5 seconds). Set this value to at least 1000 ms (1 second).

  • Type: int
  • Default: 5000 (5 seconds)
  • Valid Values: [1000,…]
  • Importance: low

How should we handle errors?

behavior.on.error

Error-handling behavior when an error occurs while extracting a metric from the Kafka record value. Valid options are log and fail. log writes the error message to the error-<connector-id> topic and continues processing; fail stops the connector when an error occurs.

  • Type: string
  • Default: log
  • Importance: low

Consumer configuration

max.poll.interval.ms

The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve connector performance if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).

  • Type: long
  • Default: 300000 (5 minutes)
  • Valid Values: [60000,…,1800000]
  • Importance: low
max.poll.records

The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve connector performance if the connector cannot send records to the sink system. Defaults to 500 records (see the configuration sketch after this section).

  • Type: long
  • Default: 500
  • Valid Values: [1,…,500]
  • Importance: low
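The error-handling and consumer properties above are optional and are added to the same connector configuration file as the required properties. A sketch of a fragment with illustrative values:

{
  ...
  "behavior.on.error": "fail",
  "max.poll.interval.ms": "300000",
  "max.poll.records": "500"
}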

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int
  • Valid Values: [1,…]
  • Importance: high

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
