Confluent Cloud Metrics API

Note

This feature is in preview. The API is not yet stable, and breaking changes may occur. The feature is being rolled out to Confluent Cloud deployments in phases and may not be available for your cluster yet; the phased rollout is expected to complete in January 2020.

The Confluent Cloud Metrics API provides actionable operational metrics about your Confluent Cloud deployment. It is a queryable HTTP API: you POST a query written in JSON and receive the time series of metrics that the query specifies.

Metrics API Quick Start

Prerequisites

The following examples use HTTPie rather than cURL. HTTPie can be installed with most common package managers; see the HTTPie documentation for installation instructions.

List the available metrics

Get a description of the available metrics by using the descriptors endpoint of the API:

http -v https://api.telemetry.confluent.cloud/v1/metrics/cloud/descriptors --auth '<USER>:<PASSWORD>'

This returns a JSON document describing each of the metrics available to query.
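
If you prefer to explore the catalog from a script, the same request can be issued with any HTTP client. The following is a minimal sketch in Python using the requests library; it assumes the response body contains a data array of descriptor objects, each with a name field, so verify the exact shape against the response you receive.

    # Sketch: fetch the metric descriptors and print the metric names.
    # Assumes the `requests` library is installed and that the response body
    # contains a "data" array of descriptor objects with a "name" field
    # (verify this against the actual response in your environment).
    import requests

    DESCRIPTORS_URL = "https://api.telemetry.confluent.cloud/v1/metrics/cloud/descriptors"

    response = requests.get(DESCRIPTORS_URL, auth=("<USER>", "<PASSWORD>"))
    response.raise_for_status()

    for descriptor in response.json().get("data", []):
        print(descriptor.get("name"))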

Query for bytes consumed per minute grouped by topic

  1. Create a file named sent_bytes_query.json using the following template. Be sure to change lkc-XXXXX and the timestamp values to match your needs. (A sketch for constructing the interval strings programmatically follows this example.)

    {
        "aggregations": [
            {
                "agg": "SUM",
                "metric": "io.confluent.kafka.server/sent_bytes/delta"
            }
        ],
        "filter": {
            "filters": [
                {
                    "field": "cluster_id",
                    "op": "EQ",
                    "value": "lkc-XXXXX"
                }
            ],
            "op": "AND"
        },
        "granularity": "PT1M",
        "group_by": [
            "topic"
        ],
        "intervals": [
            "2019-12-19T11:00:00-05:00/2019-12-19T11:05:00-05:00"
        ],
        "limit": 25
    }
    
  2. Submit the query using the following command. Be sure to change USER and PASSWORD to match your environment.

    http -v https://api.telemetry.confluent.cloud/v1/metrics/cloud/query --auth '<USER>:<PASSWORD>' < sent_bytes_query.json
    

    Your output should resemble:

    Note

    If there was no activity for a topic during the queried time window, the dataset for that topic will be empty.

    {
         "data": [
             {
                 "timestamp": "2019-12-19T16:01:00Z",
                 "topic": "test-topic",
                 "value": 0.0
             },
             {
                 "timestamp": "2019-12-19T16:02:00Z",
                 "topic": "test-topic",
                 "value": 157.0
             },
             {
                 "timestamp": "2019-12-19T16:03:00Z",
                 "topic": "test-topic",
                 "value": 371.0
             },
             {
                 "timestamp": "2019-12-19T16:04:00Z",
                 "topic": "test-topic",
                 "value": 0.0
             }
         ]
     }
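
The intervals field in the query above uses ISO-8601 notation, either as start/end (as in this example) or as start/duration (as in the retained_bytes examples later in this section), and granularity is an ISO-8601 duration such as PT1M. If you generate queries from a script, the interval strings can be built from datetime objects; the following is a minimal sketch in Python with illustrative times, so substitute your own window.

    # Sketch: build ISO-8601 interval strings for the "intervals" field.
    # The times below are illustrative; substitute your own query window.
    from datetime import datetime, timedelta, timezone

    # Fixed UTC-05:00 offset, matching the timestamps in the examples.
    offset = timezone(timedelta(hours=-5))
    start = datetime(2019, 12, 19, 11, 0, 0, tzinfo=offset)
    end = start + timedelta(minutes=5)

    # start/end form, as used in this example:
    # "2019-12-19T11:00:00-05:00/2019-12-19T11:05:00-05:00"
    interval_start_end = f"{start.isoformat()}/{end.isoformat()}"

    # start/duration form, as used in the retained_bytes examples later in
    # this section: "2019-12-19T11:00:00-05:00/P0Y0M0DT2H0M0S"
    interval_start_duration = f"{start.isoformat()}/P0Y0M0DT2H0M0S"

    print(interval_start_end)
    print(interval_start_duration)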
    

Query for bytes produced per minute grouped by topic

  1. Create a file named received_bytes_query.json using the following template. Be sure to change lkc-XXXXX and the timestamp values to match your needs.

    {
        "aggregations": [
            {
                "agg": "SUM",
                "metric": "io.confluent.kafka.server/received_bytes/delta"
            }
        ],
        "filter": {
            "filters": [
                {
                    "field": "cluster_id",
                    "op": "EQ",
                    "value": "lkc-XXXXX"
                }
            ],
            "op": "AND"
        },
        "granularity": "PT1M",
        "group_by": [
            "topic"
        ],
        "intervals": [
            "2019-12-19T11:00:00-05:00/2019-12-19T11:05:00-05:00"
        ],
        "limit": 25
    }
    
  2. Submit the query using the following command. Be sure to change USER and PASSWORD to match your environment.

    http -v https://api.telemetry.confluent.cloud/v1/metrics/cloud/query --auth '<USER>:<PASSWORD>' < received_bytes_query.json
    

    Your output should resemble:

    Note

    If there was no activity for a topic during the queried time window, the dataset for that topic will be empty.

    {
        "data": [
            {
                "timestamp": "2019-12-19T16:00:00Z",
                "topic": "test-topic",
                "value": 72.0
            },
            {
                "timestamp": "2019-12-19T16:01:00Z",
                "topic": "test-topic",
                "value": 139.0
            },
            {
                "timestamp": "2019-12-19T16:02:00Z",
                "topic": "test-topic",
                "value": 232.0
            },
            {
                "timestamp": "2019-12-19T16:03:00Z",
                "topic": "test-topic",
                "value": 0.0
            },
            {
                "timestamp": "2019-12-19T16:04:00Z",
                "topic": "test-topic",
                "value": 0.0
            }
        ]
    }
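
The per-minute values returned for received_bytes/delta can be summed for each topic to get the total bytes over the queried window. The following is a minimal sketch in Python using the requests library that submits the query file from step 1 and totals a response shaped like the one above; <USER> and <PASSWORD> are placeholders for your credentials.

    # Sketch: submit received_bytes_query.json and total the per-minute
    # values for each topic over the queried window.
    # Assumes the `requests` library is installed.
    import json
    from collections import defaultdict

    import requests

    QUERY_URL = "https://api.telemetry.confluent.cloud/v1/metrics/cloud/query"

    with open("received_bytes_query.json") as f:
        query = json.load(f)

    response = requests.post(QUERY_URL, auth=("<USER>", "<PASSWORD>"), json=query)
    response.raise_for_status()

    # Each element of "data" is one time slice for one topic.
    totals = defaultdict(float)
    for point in response.json()["data"]:
        totals[point["topic"]] += point["value"]

    for topic, total in totals.items():
        print(f"{topic}: {total:.0f} bytes over the queried window")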
    

Query for max retained bytes per hour over 2 hours for the topic named test-topic

  1. Create a file named retained_bytes_query.json using the following template. Change lkc-XXXXX and the timestamp values to match your needs.

    {
        "aggregations": [
            {
                "agg": "SUM",
                "metric": "io.confluent.kafka.server/retained_bytes/delta"
            }
        ],
        "filter": {
            "filters": [
                {
                     "field": "topic",
                     "op": "EQ",
                     "value": "test-topic"
                },
                {
                    "field": "cluster_id",
                    "op": "EQ",
                    "value": "lkc-XXXXX"
                }
            ],
            "op": "AND"
        },
        "granularity": "PT1M",
        "group_by": [
            "topic"
        ],
        "intervals": [
            "2019-12-19T11:00:00-05:00/P0Y0M0DT2H0M0S"
        ],
        "limit": 25
    }
    
  2. Submit the query using the following command. Be sure to change USER and PASSWORD to match your environment.

    http -v https://api.telemetry.confluent.cloud/v1/metrics/cloud/query --auth '<USER>:<PASSWORD>' < retained_bytes_query.json
    

    Your output should resemble:

    {
        "data": [
            {
                "timestamp": "2019-12-19T16:00:00Z",
                "topic": "test-topic",
                "value": 406561.0
            },
            {
                "timestamp": "2019-12-19T17:00:00Z",
                "topic": "test-topic",
                "value": 406561.0
            }
        ]
    }
    

Query for max retained bytes per hour over 2 hours for the cluster lkc-XXXXX

  1. Create a file named cluster_retained_bytes_query.json using the following template. Be sure to change lkc-XXXXX and the timestamp values to match your needs.

    {
        "aggregations": [
            {
                "agg": "SUM",
                "metric": "io.confluent.kafka.server/retained_bytes"
            }
        ],
        "filter": {
            "filters": [
                {
                    "field": "cluster_id",
                    "op": "EQ",
                    "value": "lkc-xr36q"
                }
            ],
            "op": "AND"
        },
        "granularity": "PT1H",
        "group_by": [
            "cluster_id"
        ],
        "intervals": [
            "2019-12-19T11:00:00-05:00/P0Y0M0DT2H0M0S"
        ],
        "limit": 5
    }
    
  2. Submit the query using the following command. Be sure to change USER and PASSWORD to match your environment.

    http -v https://api.telemetry.confluent.cloud/v1/metrics/cloud/query --auth '<USER>:<PASSWORD>' < cluster_retained_bytes_query.json
    

    Your output should resemble:

    {
        "data": [
            {
                "timestamp": "2019-12-19T16:00:00Z",
                "value": 507350.0
            },
            {
                "timestamp": "2019-12-19T17:00:00Z",
                "value": 507350.0
            }
        ]
    }
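
Because this query groups by cluster_id and returns one value per hour, the most recent data point reflects the cluster's latest retained bytes. The following is a small sketch in Python that post-processes a response shaped like the one above (it operates only on the returned JSON, not on the API itself) and prints the latest value in MiB.

    # Sketch: pick the most recent data point from a cluster-level
    # retained_bytes response (shaped like the output above) and print it in MiB.
    response_body = {
        "data": [
            {"timestamp": "2019-12-19T16:00:00Z", "value": 507350.0},
            {"timestamp": "2019-12-19T17:00:00Z", "value": 507350.0},
        ]
    }

    # ISO-8601 UTC timestamps sort lexicographically, so max() finds the latest point.
    latest = max(response_body["data"], key=lambda point: point["timestamp"])
    print(f"{latest['timestamp']}: {latest['value'] / (1024 * 1024):.2f} MiB retained")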
    

FAQ

Why am I seeing empty data sets for topics that exist on queries other than for retained_bytes?

If there are only values of 0.0 in the queried time range, then the API returns an empty dataset. When there is non-zero data within the time range, time slices with values of 0.0 are returned.

Why didn’t retained_bytes decrease after I changed the retention policy for my topic?

The value of retained_bytes is the maximum over the time range returned. If data has been deleted during the current timeslice, you will not see the effect until the next time range window begins.