Integrate Confluent Cloud Metrics API with Third-party Monitoring Tools¶
Integrating directly with a third-party monitoring tool allows you to monitor Confluent Cloud alongside the rest of your applications.
Datadog¶
Datadog provides an integration where users can input a Confluent Cloud API key (resource-scoped for resource management) into the Datadog UI, select resources to monitor, and see metrics in minutes using an out-of-the-box dashboard. If you use Datadog, create your Confluent Cloud API key and follow the instructions from Datadog to get started. After configuring the integration, search the Datadog dashboards for “Confluent Cloud Overview,” the default Confluent Cloud dashboard at Datadog. Clone the default dashboard so that you can edit it to suit your needs.
Dynatrace¶
Dynatrace provides an extension where users can input a Confluent Cloud API key (resource-scoped for resource management) into the Dynatrace Monitoring Configuration, select resources to monitor, and see metrics in minutes in a prebuilt dashboard. If you use Dynatrace, create your Confluent Cloud API key and follow the instructions from Dynatrace to get started.
Grafana Cloud¶
Grafana Cloud provides an integration where users can input a Confluent Cloud API key (resource-scoped for resource management) into Grafana Cloud, select resources to monitor, and see metrics in minutes using an out-of-the-box dashboard. However, the native Grafana integration has not been updated to include the latest set of metrics launched by Confluent Cloud, so some metrics might be missing. For a workaround, see Troubleshoot Grafana.
Prometheus¶
Prometheus servers can scrape the Confluent Cloud Metrics API directly by using the export endpoint. This endpoint returns the single most recent data point for each metric, for each distinct combination of labels, in the Prometheus exposition or OpenMetrics format. For more information, see Export metric values.
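For example, a minimal scrape configuration might look like the following sketch. The cluster ID lkc-abc123 and the credential placeholders are assumptions; substitute your own resource IDs and an API key that is resource-scoped for resource management.

```yaml
# prometheus.yml (sketch): scrape the Metrics API export endpoint directly.
scrape_configs:
  - job_name: confluent-cloud
    scrape_interval: 1m        # the endpoint returns one point per metric per label set
    scrape_timeout: 1m
    honor_timestamps: true
    scheme: https
    metrics_path: /v2/metrics/cloud/export
    static_configs:
      - targets:
          - api.telemetry.confluent.cloud
    params:
      "resource.kafka.id":
        - lkc-abc123           # hypothetical Kafka cluster ID; use your own
    basic_auth:
      username: <CLOUD_API_KEY>
      password: <CLOUD_API_SECRET>
```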
New Relic OpenTelemetry¶
You can collect metrics about your Confluent Cloud-managed Kafka deployment with the New Relic OpenTelemetry collector. The collector is a component of OpenTelemetry that collects, processes, and exports telemetry data to New Relic, or any observability back-end. For more information, see Monitoring Confluent Cloud Kafka with OpenTelemetry Collector.
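As a sketch of how such a pipeline fits together, the collector can use a Prometheus receiver to pull from the export endpoint and an OTLP exporter to forward metrics to New Relic. The cluster ID, the credential placeholders, and the US OTLP endpoint are assumptions; consult the New Relic guide linked above for an authoritative configuration.

```yaml
# OpenTelemetry Collector config (sketch): Confluent Cloud metrics -> New Relic.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: confluent-cloud
          scrape_interval: 60s
          scheme: https
          metrics_path: /v2/metrics/cloud/export
          params:
            "resource.kafka.id": [lkc-abc123]  # hypothetical cluster ID
          static_configs:
            - targets: [api.telemetry.confluent.cloud]
          basic_auth:
            username: <CLOUD_API_KEY>
            password: <CLOUD_API_SECRET>

exporters:
  otlp:
    endpoint: https://otlp.nr-data.net:4317    # US region; varies by account
    headers:
      api-key: <NEW_RELIC_LICENSE_KEY>

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlp]
```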
Troubleshoot Grafana¶
Use this section to create a Grafana integration with the latest Confluent Cloud metrics.
Problem¶
You set up the native Grafana integration with Confluent Cloud, but you do not see some Confluent Cloud metrics in Grafana. This is because the native Grafana integration has not been updated to include the latest set of metrics launched by Confluent Cloud.
Solution¶
Use the Metrics Endpoint integration in Grafana Cloud to access the latest Confluent Cloud metrics.
A scrape job in Grafana is a configuration that tells Grafana Cloud’s Metrics Endpoint to collect, or scrape, metrics from a specified source, such as a Prometheus-compatible endpoint. These jobs define what to monitor, how often to collect data, and what to do with that data once it’s collected.
Once you create a scrape job, no further management is required. Grafana Cloud automatically handles the scraping of metrics from Confluent Cloud into Grafana Cloud.
Considerations:
- You must use an API key resource-scoped for resource management to communicate with the Metrics API. For more information, see Create an API key to authenticate to the Metrics API.
- API keys resource-scoped for Apache Kafka® clusters cause an authentication error.
- This topic contains steps for a third-party tool that might change without our knowledge.
Resource IDs¶
You must use Confluent Cloud resource IDs to create URLs for the scrape job. For more information, see Discover resources and metrics with the Metrics API and the Metrics API reference.
For example, to scrape metrics of compute pools, you would use the resource ID for compute pools, which looks like this: lfcp-examp2
Use the resource ID to create the scrape job URL:
https://api.telemetry.confluent.cloud/v2/metrics/cloud/export?resource.compute_pool.id=lfcp-examp2
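Before you configure the scrape job in Grafana, you can check that the URL and your credentials return data. A minimal sketch, reusing the example compute pool ID from above:

```bash
# The API key must be resource-scoped for resource management;
# a Kafka-scoped key causes an authentication error.
curl -s -u "<CLOUD_API_KEY>:<CLOUD_API_SECRET>" \
  "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export?resource.compute_pool.id=lfcp-examp2"
```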
Scrape jobs¶
Use this section to create scrape jobs.
After you create the scrape job, you can modify it to scrape other resources, or delete it entirely, by returning to Metrics Endpoint.
Grafana integrations for Confluent Cloud use the API key and secret as a form of basic authentication: use the Confluent Cloud API key as the username and the API secret as the password. To avoid transcription errors, copy the API key and secret from Confluent Cloud and paste them into the scrape job.
If you encounter an error when testing your connection, verify that you entered the correct API credentials from your Confluent Cloud account.

To create scrape jobs:
- In your Grafana instance, click Connections and then click Add new connection.
- In Search connections, enter Metrics Endpoint and select the Metrics Endpoint integration.
- In the Metrics Endpoint integration, click Configuration details. If you already have a scrape job, click Add new scrape job.
- In Scrape job name, enter a name for your scrape job.
- In Scrape job URL, enter a URL that contains the resource IDs to be scraped.
- In Scrape interval, select an appropriate interval for scrapes.
- In Types of authentication credentials, click Basic.
- In Basic username, enter your Confluent Cloud API key.
- In Basic password, enter your Confluent Cloud API secret.
- To test the connection, click Test Connection.
- To save your work, click Save scrape job.
Dashboards¶
Use this section to create dashboards.

To create dashboards:
- In your Grafana instance, click Dashboards, then click New and select New dashboard.
- Click Add visualization.
- To create a visualization, select a data source, metrics, and filters.
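For example, if your scrape job exports Kafka cluster metrics, a panel query might aggregate received bytes per topic. This is a hypothetical query; the metrics available to you depend on the resource IDs in your scrape job URL:

```promql
# Bytes received per topic across the cluster (hypothetical example).
sum by (topic) (confluent_kafka_server_received_bytes)
```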