PagerDuty Sink Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see PagerDuty Sink Connector for Confluent Platform.

The Kafka Connect PagerDuty Sink connector reads records from an Apache Kafka® topic and creates PagerDuty incidents.
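
For orientation, the following is a sketch of a Kafka record value that such a connector might consume, assuming a plain JSON value. The field names (fromEmail, serviceId, incidentTitle) are illustrative assumptions only and are not defined in this document; see the connector reference for the exact fields the connector expects.

{
  "fromEmail": "alerts@example.com",
  "serviceId": "PABC123",
  "incidentTitle": "Disk usage above 90% on host db-01"
}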

Features

The PagerDuty Sink connector supports the following features:

  • At least once delivery: This connector guarantees that records from the Kafka topic are delivered at least once.
  • Supports multiple tasks: The connector supports running one or more tasks. More tasks may improve performance.
  • Automatic retries: If the PagerDuty Sink connector fails to connect to the PagerDuty endpoint it automatically retries the connection using exponential backoff.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Limitations

Be sure to review the following information.

Quick Start

Use this quick start to get up and running with the Confluent Cloud PagerDuty Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to PagerDuty.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
  • The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
  • A PagerDuty API key.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • At least one source Kafka topic must exist in your Confluent Cloud cluster before creating the sink connector.
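
If you still need to create a source topic, one way to do it with the Confluent CLI is shown below. The topic name incidents is only an example and matches the configuration used later on this page.

confluent kafka topic create incidents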

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the PagerDuty Sink connector card.

PagerDuty Sink Connector Card

Step 4: Enter the connector details.

Note

  • Ensure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

At the Add PagerDuty Sink Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topic(s) you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check for PagerDuty incidents.

Verify that incidents are being created in your PagerDuty account.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

Using the Confluent CLI

To set up and run the connector using the Confluent CLI, complete the following steps.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors.

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

confluent connect plugin describe <connector-catalog-name>

For example:

confluent connect plugin describe PagerDutySink

Example output:

The following are required configs:
connector.class : PagerDutySink
topics
input.data.format
name
kafka.api.key
kafka.api.secret
pagerduty.api.key
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "PagerDutySink",
  "topics": "incidents",
  "input.data.format": "JSON",
  "name": "PagerDutySinkConnector_0",
  "kafka.api.key": "****************",
  "kafka.api.secret": "*********************************",
  "pagerduty.api.key": "a1b2CDe3...",
  "tasks.max": "1",
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "topics": Enter the topic name or a comma-separated list of topic names.
  • "input.data.format": Sets (data coming from the Kafka topic): AVRO, PROTOBUF, or JSON_SR. A valid schema must be available in Schema Registry.
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "pagerduty.api.key": PagerDuty API key with write permissions to create incidents. For more information, see the PagerDuty docs.

  • "tasks.max": Enter the maximum number of tasks for the connector to use. More tasks may improve performance.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.
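
As an illustration only, an SMT is added as extra properties in the same JSON configuration file. The example below uses the generic Apache Kafka InsertField transform to add a hypothetical static field to each record value; it is not specific to this connector.

"transforms": "addSource",
"transforms.addSource.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.addSource.static.field": "source",
"transforms.addSource.static.value": "kafka-connect"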

See Configuration Properties for all property values and descriptions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

confluent connect create --config <file-name>.json

For example:

confluent connect create --config pagerduty-sink-config.json

Example output:

Created connector PagerDutySinkConnector_0 lcc-do6vzd

Step 5: Check the connector status.

Enter the following command to check the connector status:

confluent connect list

Example output:

ID           |             Name              | Status  | Type | Trace
+------------+-------------------------------+---------+------+-------+
lcc-do6vzd   | PagerDutySinkConnector_0      | RUNNING | sink |       |

Step 6: Check for PagerDuty incidents.

Verify that incidents are being created in your PagerDuty account.
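
If you prefer to check from the command line rather than the PagerDuty UI, one option is to query the PagerDuty REST API directly. This call is independent of the connector, and the API key is a placeholder.

curl --request GET \
  --url 'https://api.pagerduty.com/incidents?statuses[]=triggered' \
  --header 'Accept: application/vnd.pagerduty+json;version=2' \
  --header 'Authorization: Token token=<your-pagerduty-api-key>'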

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.
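
To inspect records that were routed to the Dead Letter Queue, you can consume the DLQ topic with the Confluent CLI. The topic name below is a placeholder; use the DLQ topic name shown for your connector on the Topics page.

confluent kafka topic consume <dlq-topic-name> --from-beginning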

Configuration Properties

Use the following configuration properties with this connector.

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, and PROTOBUF. Note that you need to have Confluent Cloud Schema Registry configured.

  • Type: string
  • Importance: high

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key
  • Type: password
  • Importance: high
kafka.service.account.id

The service account that is used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret
  • Type: password
  • Importance: high

PagerDuty details

pagerduty.api.key

PagerDuty API key with write permissions to create incidents.

  • Type: password
  • Importance: high
pagerduty.max.retry.time.ms

If an error occurs while executing a POST request, the connector retries until this amount of time (in ms) elapses. The default value is 10000 ms (10 seconds). Setting this value to at least 1 second (1000 ms) is recommended.

  • Type: int
  • Default: 10000 (10 seconds)
  • Valid Values: [1000,…]
  • Importance: low
behavior.on.error

The connector’s behavior if the Kafka record does not contain an expected field. Valid options are ‘log’, ‘fail’, and ‘ignore’. ‘log’ logs and skips malformed records, ‘fail’ fails the connector, and ‘ignore’ ignores the error.

  • Type: string
  • Default: fail
  • Importance: low
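
For example, to retry longer on transient PagerDuty errors and to log and skip malformed records instead of failing, the following optional properties can be added to the connector configuration shown in the CLI quick start (the values are illustrative):

"pagerduty.max.retry.time.ms": "30000",
"behavior.on.error": "log"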

Number of tasks for this connector

tasks.max
  • Type: int
  • Valid Values: [1,…]
  • Importance: high

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.

Cloud ETL Demo Topology