Amazon CloudWatch Metrics Sink Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see Amazon CloudWatch Metrics Sink Connector for Confluent Platform.

The Amazon CloudWatch Metrics Sink connector is used to export data from an Apache Kafka® topic to Amazon CloudWatch metrics. The connector accepts only Struct objects as the Kafka record value. The record must contain the fields name, type, timestamp, dimensions, and values. The values field holds the metric values, which are also expected to be Struct objects. For more details about values, see Defined schemas.

The following example shows a sample input Struct object record.

{
  "name": string,
  "type": string,
  "timestamp": long,
  "dimensions": {
    "<dimension-1>": string,
    ...
  },
  "values": {
    "<datapoint-1>": double,
    "<datapoint-2>": double,
    ...
  }
}
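Because the connector requires Struct records, schema-based input formats such as Avro map onto this structure directly. The following is a minimal sketch of an Avro value schema for a gauge record under these assumptions; the dimension field names (service, method) are illustrative, not required.

{
  "type": "record",
  "name": "Metric",
  "fields": [
    { "name": "name", "type": "string" },
    { "name": "type", "type": "string" },
    { "name": "timestamp", "type": "long" },
    { "name": "dimensions", "type": {
        "type": "record",
        "name": "Dimensions",
        "fields": [
          { "name": "service", "type": "string" },
          { "name": "method", "type": "string" }
        ]
    }},
    { "name": "values", "type": {
        "type": "record",
        "name": "Gauge",
        "fields": [
          { "name": "doubleValue", "type": "double" }
        ]
    }}
  ]
}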

The connector can start with one task and scale horizontally by adding more tasks. Note that even with multiple tasks, performance is limited by Amazon to 150 transactions per second. Contact Amazon to increase this transaction limit for your account.

Features

The Amazon CloudWatch Metrics Sink connector provides the following features:

  • At least once delivery: This connector guarantees that records from the Kafka topic are delivered at least once.
  • Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance. Note that performance is limited by Amazon to 150 transactions per second. Contact Amazon to increase this transaction limit for your account.
  • Supported data formats: The connector supports Avro, JSON Schema (JSON-SR), and Protobuf input formats. Schema Registry must be enabled to use these Schema Registry-based formats.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Limitations

Be sure to review the following information.

Defined schemas

The connector attempts to fit the values Struct into one of the four defined schemas (Gauge, Meter, Histogram, Timer) depending on the type field. The supported types are gauge, meter, histogram, timer, and custom.

Note

  • If the value for type is custom, a catchall mechanism accepts any shape of values schema.
  • Each value in the values Struct must be type double.

Gauge schema

{
  "doubleValue": double
}

Meter schema

{
  "count": double,
  "oneMinuteRate": double,
  "fiveMinuteRate": double,
  "fifteenMinuteRate": double,
  "meanRate": double
}

Histogram schema

{
  "count": double,
  "max": double,
  "min": double,
  "mean": double,
  "stdDev": double,
  "sum": double,
  "median": double,
  "percentile75th": double,
  "percentile95th": double,
  "percentile98th": double,
  "percentile99th": double,
  "percentile999th": double,
}

Timer schema

{
  "count": double,
  "oneMinuteRate": double,
  "fiveMinuteRate": double,
  "fifteenMinuteRate": double,
  "meanRate": double,
  "max": double,
  "min": double,
  "mean": double,
  "stdDev": double,
  "sum": double,
  "median": double,
  "percentile75th": double,
  "percentile95th": double,
  "percentile98th": double,
  "percentile99th": double,
  "percentile999th": double
}

Sample custom schema

{
  "posts": double,
  "puts": double,
  "patches": double,
  "deletes": double,
}
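For example, a record that uses this custom schema might look like the following; the metric name, timestamp, dimensions, and values are illustrative.

{
  "name": "http_requests",
  "type": "custom",
  "timestamp": 1626368400000,
  "dimensions": {
    "service": "ec2-2312"
  },
  "values": {
    "posts": 102.0,
    "puts": 34.0,
    "patches": 7.0,
    "deletes": 12.0
  }
}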

Record mapping

Each value in the values Struct is mapped to its own MetricDatum object using the same timestamp and dimensions fields, with the name field as a prefix. For example, the following will be mapped to five separate MetricDatum objects, since there are five values in the values Struct:

{
  "name": "sample_meter_metric",
  "type": "meter",
  "timestamp": 23480239402348234,
  "dimensions": {
    "service": "ec2-2312",
    "method": "update"
  },
  "values": {
    "count": 12,
    "oneMinuteRate": 5.2,
    "fiveMinuteRate": 4.7,
    "fifteenMinuteRate": 4.9,
    "meanRate": 5.1"
  }
}

The following is an example of how the oneMinuteRate field is mapped to a separate MetricDatum object:

{
  "name": "sample_meter_metric_oneMinuteRate",
  "timestamp": 23480239402348234,
  "dimensions": {
    "service": "ec2-2312",
    "method": "update"
  },
  "value": 5.2
}
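Following this naming scheme, the five MetricDatum objects produced from the sample record above carry these metric names (the name field plus each key in the values Struct):

sample_meter_metric_count
sample_meter_metric_oneMinuteRate
sample_meter_metric_fiveMinuteRate
sample_meter_metric_fifteenMinuteRate
sample_meter_metric_meanRate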

Quick Start

Use this quick start to get up and running with the Confluent Cloud Amazon CloudWatch Metrics Sink connector. The quick start provides the basics of selecting the connector and configuring it to send records to Amazon CloudWatch.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on AWS.
  • The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.
  • For networking considerations, see Networking and DNS Considerations. To use static egress IPs, see Static Egress IP Addresses.
  • An AWS account configured with Access Keys.
  • The Amazon CloudWatch Metrics region must be in the same region where your Confluent Cloud cluster is located (where you are running the connector). Note that the hard-coded endpoint URL for the connector is set to https://monitoring.{kafka-cluster-region}.amazonaws.com. This sets the Amazon CloudWatch region to your Kafka cluster region.
  • Kafka cluster credentials. The following lists the different ways you can provide credentials.
    • Enter an existing service account resource ID.
    • Create a Confluent Cloud service account for the connector. Make sure to review the ACL entries required in the service account documentation. Some connectors have specific ACL requirements.
    • Create a Confluent Cloud API key and secret. To create a key and secret, you can use confluent api-key create or you can autogenerate the API key and secret directly in the Cloud Console when setting up the connector.
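For example, an API key and secret scoped to a specific cluster can be created with the Confluent CLI; replace <cluster-id> with your Kafka cluster resource ID:

confluent api-key create --resource <cluster-id>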

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the Amazon CloudWatch Metrics Sink connector card.

Amazon CloudWatch Metrics Sink Connector Card

Step 4: Enter the connector details.

Note

  • Ensure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

At the Add Amazon CloudWatch Metrics Sink Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topic(s) you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check Amazon CloudWatch metrics.

Check for metrics in Amazon CloudWatch.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.


Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

  • The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.

Step 1: List the available connectors.

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

confluent connect plugin describe <connector-catalog-name>

For example:

confluent connect plugin describe CloudWatchMetricsSink

Example output:

Following are the required configs:
connector.class: CloudWatchMetricsSink
input.data.format
name
kafka.auth.mode
kafka.api.key
kafka.api.secret
aws.access.key.id
aws.secret.access.key
aws.cloudwatch.metrics.namespace
tasks.max
topics

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "CloudWatchMetricsSink",
  "name": "CloudWatchMetricsSink_0",
  "input.data.format": "AVRO"
  "topics": "<my_topic_0>"
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "aws.access.key.id": "****************",
  "aws.secret.access.key": "********************************************",
  "aws.cloudwatch.metrics.namespace": "<namespace>",
  "tasks.max": "1"
}

Note the following required property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR (JSON Schema), and PROTOBUF. You must have Confluent Cloud Schema Registry configured if using a schema-based message format.

  • "topics": Identifies the topic name or a comma-separated list of topic names.

  • "aws.access.key.id" and "aws.secret.access.key": Enter the AWS Access Key ID and Secret. For information about how to set these up, see Access Keys.

  • "aws.cloudwatch.metrics.namespace": Enter a valid namespace for your CloudWatch Metrics region.

  • "tasks.max": Enter the number of tasks for the connector to use. More tasks may improve performance.

    Note

    Performance is limited by Amazon to 150 transactions per second. Contact Amazon to increase this transaction limit for your account.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
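As a sketch, the following hypothetical fragment added to the connector configuration JSON applies the standard Kafka Connect ReplaceField SMT to rename a field in the record value before it is exported; the transform alias and the field mapping are illustrative.

"transforms": "renameField",
"transforms.renameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
"transforms.renameField.renames": "ts:timestamp"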

See Configuration Properties for all property values and definitions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

confluent connect create --config <file-name>.json

For example:

confluent connect create --config amazon-cloudwatch-metrics-sink-config.json

Example output:

Created connector CloudWatchMetricsSink_0 lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

confluent connect list

Example output:

ID          |       Name              | Status  | Type
+-----------+-------------------------+---------+------+
lcc-ix4dl   | CloudWatchMetricsSink_0 | RUNNING | sink

Step 6: Check Amazon CloudWatch metrics.

Check for metrics in Amazon CloudWatch.
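If you have the AWS CLI installed, one way to verify delivery is to list the metrics in the connector's namespace, where <namespace> is the value you set for aws.cloudwatch.metrics.namespace:

aws cloudwatch list-metrics --namespace <namespace>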

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

Configuration Properties

Use the following configuration properties with this connector.

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, or PROTOBUF. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string
  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key
  • Type: password
  • Importance: high
kafka.service.account.id

The service account that is used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret
  • Type: password
  • Importance: high
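For reference, when kafka.auth.mode is set to SERVICE_ACCOUNT, the credentials portion of the connector configuration reduces to a sketch like the following, and kafka.api.key and kafka.api.secret are omitted:

"kafka.auth.mode": "SERVICE_ACCOUNT",
"kafka.service.account.id": "<service-account-resource-ID>"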

AWS Credentials

aws.access.key.id

The AWS Access Key used to connect to Amazon CloudWatch.

  • Type: password
  • Importance: high
aws.secret.access.key

The AWS Secret Key used to connect to Amazon CloudWatch.

  • Type: password
  • Importance: high

How should we connect to Amazon CloudWatch Metrics?

aws.cloudwatch.metrics.namespace

The Amazon CloudWatch metrics namespace associated with the desired metrics.

  • Type: string
  • Importance: high

How should we handle errors?

behavior.on.malformed.metric

The connector’s behavior if the Kafka record does not contain an expected field. Valid options are LOG and FAIL. LOG logs and skips the malformed record; FAIL stops the connector.

  • Type: string
  • Default: FAIL
  • Importance: low
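For example, to log and skip malformed records instead of failing the connector, add the following to the connector configuration:

"behavior.on.malformed.metric": "LOG"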

Number of tasks for this connector

tasks.max
  • Type: int
  • Valid Values: [1,…]
  • Importance: high

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
