.. _datadog_metrics_sink_connector:

Datadog Metrics Sink Connector for |cp|
========================================

The |kconnect-long| Datadog Metrics Sink connector exports data from |ak-tm|
topics to Datadog using the `Post timeseries API `__. The connector accepts a
Struct as the |ak| record value, which must contain ``name``, ``timestamp``,
and ``values`` fields. The ``values`` field holds the metric's values. The
input data should look like the following:

.. codewithvars:: bash

   {
     "name": string,
     "type": string,               -- optional (DEFAULT = GAUGE)
     "timestamp": long,
     "dimensions": {               -- optional
       "host": string,             -- optional
       "interval": int,            -- optional (DEFAULT = 0)
       <tag1-key>: <tag1-value>,   -- optional
       <tag2-key>: <tag2-value>,
       .....
     },
     "values": {
       "doubleValue": double
     }
   }

This connector can start with a single task that handles all data export, and
it can scale horizontally by adding more tasks. Overall performance, however,
is limited by Datadog. See `API rate limiting `__ for more information.

Prerequisites
-------------

The following are required to run the |kconnect-long| Datadog connector:

* |ak| Broker: |cp| 3.3.0 or above
* |kconnect|: |cp| 4.1.0 or above
* Java 1.8
* Datadog account with at least **reporting access** to send data through the
  ``Post Timeseries Metric API``. See the `Datadog documentation `__ for more
  information.

Features
--------

The Datadog Metrics Sink connector offers the following features:

* **Support for Kafka record values of type Struct, schemaless JSON, and JSON
  string**: The connector attempts to fit |ak| record values of type Struct,
  schemaless JSON, and JSON string into one of the three supported metric
  types (Gauge, Rate, or Count), depending on the ``type`` field. If the value
  of ``type`` is anything other than the three types mentioned above, Datadog
  treats the metric as ``GAUGE``. For a sketch of producing a schemaless JSON
  record, see the example after this list.

* **Batching multiple metrics**: The connector batches metrics into a single
  payload of at most 3.2 megabytes for each API request. For additional
  details, see `Post timeseries points `__.
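The following is a minimal sketch of the schemaless JSON case. It assumes the
broker address and topic name used in the Quick Start below, and a worker (or
connector) configured with ``org.apache.kafka.connect.json.JsonConverter`` and
``value.converter.schemas.enable=false``; the record body mirrors the input
structure shown above, and the ``env`` tag is an arbitrary example.

.. codewithvars:: bash

   # Produce one schemaless JSON metric record to the connector's topic
   echo '{"name":"system.load","type":"gauge","timestamp":1575366904,"dimensions":{"host":"host-1","interval":0,"env":"test"},"values":{"doubleValue":0.75}}' | \
   kafka-console-producer --broker-list localhost:9092 --topic datadog-metrics-topic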
Supported Metrics and Schemas
-----------------------------

The connector supports metrics of type Gauge, Rate, and Count. Each metric
type has a different schema. |ak| topics that contain these metrics must have
records that adhere to these schemas.

------------
Gauge Schema
------------

The ``GAUGE`` metric submission type represents the value of a system entity
or parameter, reported continuously over time.

.. codewithvars:: bash

   {
     "doubleValue": double
   }

-----------
Rate Schema
-----------

The ``RATE`` metric submission type represents the number of events over a
defined time interval (flush interval), normalized per second.

.. codewithvars:: bash

   {
     "doubleValue": double
   }

------------
Count Schema
------------

The ``COUNT`` metric submission type represents the number of events that
occur in a defined time interval, also known as the flush interval.

.. codewithvars:: bash

   {
     "doubleValue": double
   }

--------------
Record Mapping
--------------

Individual data in the provided |ak| record value is mapped to a Datadog Post
Timeseries Metric API request body. The following shows an example of the
mapping done by the connector:

.. codewithvars:: bash

   {
     "name": "metric-1",
     "type": "rate",
     "timestamp": 1575366904,
     "dimensions": {
       "host" : "host-1",
       "interval" : 1,
       "tag1" : "test",
       "tag2" : "linux"
     },
     "values": {
       "doubleValue": 10.832442530901606
     }
   }

The example record above is mapped to the ``Datadog TimeSeries Metric Post
API`` request body as shown below:

.. codewithvars:: bash

   {
     "series": [
       {
         "host": "host-1",
         "metric": "metric-1",
         "points": [
           [
             "1575366904",
             "10.832442530901606"
           ]
         ],
         "tags": ["host:host-1", "interval:1", "tag1:test", "tag2:linux"],
         "type": "rate",
         "interval": 1
       }
     ]
   }

Install the Datadog Metrics connector
-------------------------------------

.. include:: ../includes/connector-install.rst

.. include:: ../includes/connector-install-hub.rst

.. codewithvars:: bash

   confluent-hub install confluentinc/kafka-connect-datadog-metrics:latest

.. include:: ../includes/connector-install-version.rst

.. codewithvars:: bash

   confluent-hub install confluentinc/kafka-connect-datadog-metrics:1.1.2

------------------------------
Install the connector manually
------------------------------

`Download and extract the ZIP file `__ for your connector and then follow the
manual connector installation :ref:`instructions `.

License
-------

.. include:: ../includes/enterprise-license.rst

See :ref:`datadog_metrics_sink_connector_license_config` for license properties
and :ref:`datadog_metrics_sink_license_topic_configuration` for information
about the license topic.

Configuration Properties
------------------------

For a complete list of configuration properties for this connector, see
:ref:`datadog_metrics_sink_connector_config`.

.. include:: ../includes/connect-to-cloud-note.rst

.. _datadog_metrics_quickstart:

Quick Start
-----------

In this Quick Start, you configure the |kconnect-long| Datadog Metrics Sink
connector to read records from |ak| topics and export the data to Datadog.

Prerequisites:

* :ref:`Confluent Platform ` is installed.
* The :ref:`Confluent CLI ` is installed.
* `Get started with Datadog `__ is completed.
* The Datadog API key is available. Find the API key under
  ``Integration > APIs > API Keys``, accessible from the Datadog dashboard.

-----------------
Preliminary Setup
-----------------

To add a new connector plugin, you must restart |kconnect|. Use the
:ref:`Confluent CLI ` command to restart |kconnect|.

.. include:: ../../includes/cli-new.rst

.. codewithvars:: bash

   |confluent_stop| connect && |confluent_start| connect

Your output should resemble:

::

   Using CONFLUENT_CURRENT: /Users/username/Sandbox/confluent-snapshots/var/confluent.NuZHxXfq
   Starting zookeeper
   zookeeper is [UP]
   Starting kafka
   kafka is [UP]
   Starting schema-registry
   schema-registry is [UP]
   Starting kafka-rest
   kafka-rest is [UP]
   Starting connect
   connect is [UP]

Check that the Datadog plugin has been installed correctly and picked up by
the plugin loader:

::

   curl -sS localhost:8083/connector-plugins | jq '.[].class' | grep datadog

Your output should resemble:

::

   "io.confluent.connect.datadog.metrics.DatadogMetricsSinkConnector"
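Before configuring the connector, you can optionally confirm that Datadog
accepts your API key. The following is a sketch, assuming the ``COM`` domain
and a hypothetical ``DD_API_KEY`` environment variable that holds your key; it
calls Datadog's API key validation endpoint.

.. codewithvars:: bash

   # A valid key returns {"valid":true}
   curl -sS -H "DD-API-KEY: ${DD_API_KEY}" https://api.datadoghq.com/api/v1/validate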
----------------------------
Sink Connector Configuration
----------------------------

Start the services using the |confluent-cli|:

.. codewithvars:: bash

   |confluent_start|

Create a configuration file named ``datadog-metrics-sink-config.json`` with
the following contents. Note that this Quick Start produces Avro test data, so
the value converter is set to the Avro converter:

::

   {
     "name": "datadog-metrics-sink",
     "config": {
       "topics": "datadog-metrics-topic",
       "connector.class": "io.confluent.connect.datadog.metrics.DatadogMetricsSinkConnector",
       "tasks.max": "1",
       "key.converter": "org.apache.kafka.connect.storage.StringConverter",
       "value.converter": "io.confluent.connect.avro.AvroConverter",
       "value.converter.schema.registry.url": "http://localhost:8081",
       "datadog.api.key": "< your-api-key >",
       "datadog.domain": "COM",
       "behavior.on.error": "fail",
       "confluent.topic.bootstrap.servers": "localhost:9092",
       "confluent.topic.replication.factor": "1"
     }
   }

Run this command to start the Datadog Metrics Sink connector:

.. include:: ../../includes/confluent-local-consume-limit.rst

.. codewithvars:: bash

   |confluent_load| datadog-metrics-sink|dash| -d datadog-metrics-sink-config.json

To check that the connector started successfully, view the |kconnect| worker's
log by running:

.. codewithvars:: bash

   |confluent_log| connect

Produce test data to the ``datadog-metrics-topic`` topic in |ak| using the
:ref:`cli` |confluent_produce| command:

.. codewithvars:: bash

   kafka-avro-console-producer \
   --broker-list localhost:9092 --topic datadog-metrics-topic \
   --property value.schema='{"name": "metric","type": "record","fields": [{"name": "name","type": "string"},{"name": "type","type": "string"},{"name": "timestamp","type": "long"}, {"name": "dimensions", "type": {"name": "dimensions", "type": "record", "fields": [{"name": "host", "type":"string"}, {"name":"interval", "type":"int"}, {"name": "tag1", "type":"string"}]}},{"name": "values","type": {"name": "values","type": "record","fields": [{"name":"doubleValue", "type": "double"}]}}]}'

.. important:: The timestamp must be in Unix epoch second format and must be
   *current*, meaning not more than 10 minutes in the future and not more than
   one hour in the past.

For example, enter the following record at the producer prompt, replacing the
timestamp with a current value:

.. codewithvars:: bash

   {"name":"perf.metric", "type":"rate","timestamp": 1575875976, "dimensions": {"host": "metric.host1", "interval": 1, "tag1": "testing-data"},"values": {"doubleValue": 5.639623848362502}}

Using the Datadog dashboard, you can view the metrics being produced. You can
produce Avro and JSON data to a |ak| topic for this connector.
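You can also confirm that the connector and its task are healthy through the
|kconnect| REST API. The following is a quick check, assuming the default
worker port of 8083 and the connector name used above; both the connector and
its task should report a ``RUNNING`` state.

.. codewithvars:: bash

   # Shows the connector and task state
   curl -sS localhost:8083/connectors/datadog-metrics-sink/status | jq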
When completed, stop the Confluent services using the command:

.. codewithvars:: bash

   |confluent_stop|

.. _datadog_metrics_sink_connector_examples:

Examples
--------

----------------------
Property-based example
----------------------

Create a configuration file for the connector. This file is included with the
connector in
``etc/kafka-connect-datadog-metrics/datadog-metrics-sink-connector.properties``.
This configuration is typically used with :ref:`standalone workers `.

.. note:: For details about using this connector with |kconnect-long|
   Reporter, see :ref:`userguide-connect-reporter`.

.. codewithvars:: properties
   :name: connector.properties

   name=datadog-metrics-sink
   topics=datadog-metrics-topic
   connector.class=io.confluent.connect.datadog.metrics.DatadogMetricsSinkConnector
   tasks.max=1
   datadog.api.key=< Your Datadog API key >
   datadog.domain=< one of COM/EU >
   behavior.on.error=< optional configuration >
   reporter.bootstrap.servers=localhost:9092
   key.converter=io.confluent.connect.avro.AvroConverter
   key.converter.schema.registry.url=http://localhost:8081
   value.converter=io.confluent.connect.avro.AvroConverter
   value.converter.schema.registry.url=http://localhost:8081
   confluent.topic.bootstrap.servers=localhost:9092
   confluent.topic.replication.factor=1
   confluent.license=

Before starting the connector, make sure that the configuration properties in
``datadog-metrics-sink-connector.properties`` are set correctly.

.. note:: Provide ``datadog.api.key``, ``datadog.domain``, and
   ``behavior.on.error`` before starting the connector.

Then start the Datadog Metrics connector by loading its configuration with the
following command:

.. include:: ../../includes/confluent-local-connector-limit.rst

.. codewithvars:: bash

   |confluent_load| datadog-metrics-sink|dash| -d datadog-metrics-sink-connector.properties

Your output should resemble:

::

   {
     "name": "datadog-metrics-sink",
     "config": {
       "connector.class": "io.confluent.connect.datadog.metrics.DatadogMetricsSinkConnector",
       "tasks.max": "1",
       "topics": "datadog-metrics-topic",
       "datadog.api.key": "< your-api-key >",
       "datadog.domain": "COM",
       "behavior.on.error": "fail",
       "key.converter": "io.confluent.connect.avro.AvroConverter",
       "key.converter.schema.registry.url": "http://localhost:8081",
       "value.converter": "io.confluent.connect.avro.AvroConverter",
       "value.converter.schema.registry.url": "http://localhost:8081",
       "confluent.topic.bootstrap.servers": "localhost:9092",
       "confluent.topic.replication.factor": "1",
       "reporter.bootstrap.servers": "localhost:9092"
     },
     "tasks": []
   }

------------------
REST-based example
------------------

This configuration is typically used with :ref:`distributed workers `. Write
the following JSON to ``connector.json``, configure all of the required
values, and use the command below to post the configuration to one of the
distributed |kconnect| workers. See the |kconnect-long| :ref:`REST API ` for
more information.

.. note:: For details about using this connector with |kconnect-long|
   Reporter, see :ref:`userguide-connect-reporter`.

.. codewithvars:: bash
   :emphasize-lines: 8,9

   {
     "name" : "datadog-metrics-sink-connector",
     "config" : {
       "connector.class": "io.confluent.connect.datadog.metrics.DatadogMetricsSinkConnector",
       "tasks.max": "1",
       "topics": "datadog-metrics-topic",
       "datadog.domain": "COM",
       "datadog.api.key": "< your-api-key >",
       "behavior.on.error": "fail",
       "reporter.bootstrap.servers": "localhost:9092",
       "confluent.topic.bootstrap.servers": "localhost:9092",
       "confluent.topic.replication.factor": "1"
     }
   }

Use curl to post the configuration to one of the |kconnect-long| workers.
Change ``http://localhost:8083/`` to the endpoint of one of your
|kconnect-long| workers.

.. codewithvars:: bash

   curl -s -X POST -H 'Content-Type: application/json' --data @connector.json http://localhost:8083/connectors

To update the configuration of an existing connector, use a ``PUT`` request to
the connector's ``config`` endpoint. Note that the ``PUT`` body must contain
only the contents of the ``config`` object, shown here as a file named
``connector-config.json``:

.. codewithvars:: bash

   curl -s -X PUT -H 'Content-Type: application/json' --data @connector-config.json \
   http://localhost:8083/connectors/datadog-metrics-sink-connector/config

------------------------
Additional Documentation
------------------------

.. toctree::
   :maxdepth: 1

   datadog_metrics_sink_connector_config
   changelog