ServiceNow Source Connector for Confluent Cloud

The Kafka Connect ServiceNow Source connector polls for additions and changes made in a ServiceNow table (see the ServiceNow documentation) and streams those changes to Apache Kafka® in real time. The connector consumes data from the ServiceNow table using range queries against the ServiceNow Table API and adds to or updates the Kafka topic accordingly.
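
To make the polling model concrete, a range query against the ServiceNow Table API is an authenticated HTTP GET that filters a table by its sys_updated_on timestamp and pages through the results. The following curl sketch is illustrative only; the instance URL, table name, credentials, and the exact encoded query the connector issues are assumptions:

# Hypothetical range query: rows in the "incident" table updated within a
# 30-second window, capped at 10000 rows (%20 encodes a space in the URL).
curl -u '<username>:<password>' \
  -H "Accept: application/json" \
  "https://dev1000.service-now.com/api/now/table/incident?sysparm_query=sys_updated_on>=2023-01-15%2000:00:00^sys_updated_on<2023-01-15%2000:00:30&sysparm_limit=10000"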

Features

The ServiceNow Source connector provides the following features:

  • Topics created automatically: The connector can automatically create Kafka topics.
  • At least once delivery: The connector guarantees that records are delivered at least once to the Kafka topic.
  • Automatic retries: When network failures occur, the connector automatically retries the request. The retry.max.times property controls how many retries are attempted, and an exponential backoff is added to each retry interval.
  • Elasticity: The connector allows you to configure two parameters that enforce a throughput limit: batch.max.rows and poll.interval.s. The connector defaults to 10000 records per batch and a 30-second polling interval. If a large number of updates occur within a given interval, the connector paginates records according to the configured batch size (see the configuration sketch after this list). Because ServiceNow timestamps have one-second precision, one second is the lowest supported poll.interval.s setting.
  • Supports one task: The connector supports running one task only. That is, one table is handled by one task.
  • Supported data formats: The connector supports Avro, JSON Schema (JSON-SR), Protobuf, and JSON (schemaless) output formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf). See Environment Limitations for additional information.
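
For example, the retry and throughput properties described above are plain connector configuration settings. A minimal sketch of the relevant fragment of a connector configuration JSON follows; the values are examples only (see Configuration Properties for the full definitions):

"retry.max.times": "3",
"poll.interval.s": "5",
"batch.max.rows": "2000"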

See Configuration Properties for configuration property descriptions.

See Cloud connector limitations for additional information.

Quick Start

Use this quick start to get up and running with the Confluent Cloud ServiceNow Source connector. The quick start provides the basics of selecting the connector and configuring it to stream events.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
  • The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
  • You must have the ServiceNow instance URL, table name, and connector authentication details. For details, see the ServiceNow docs.

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the ServiceNow Source connector icon.

ServiceNow Source Connector Icon

Step 4: Set up the connection.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Enter a connector Name.
  2. Select the way you want to provide Kafka Cluster credentials. You can either select a service account resource ID or you can enter an API key and secret (or generate these in the Cloud Console).
  3. Enter the Kafka topic name where you want data sent. The connector can create a topic automatically if no topics exist.
  4. Select the Output Kafka record value format (data going to the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
  5. Enter the ServiceNow details:
    • Enter the instance URL, table name, and connector (user) authentication details. For additional information, see the ServiceNow docs. An instance URL looks like this: https://dev1000.service-now.com/.
    • Enter the HTTP request timeout. The default value is 50000 milliseconds (50 seconds).
    • Enter the maximum number of times to retry requests. The default value is three retries.
    • Enter the poll interval. This is the interval at which the connector polls the ServiceNow table for new and updated data. The default value is 30 seconds. The minimum poll interval is 1 second and the maximum poll interval is 60 seconds.
  6. Enter the ServiceNow query details:
    • Enter the maximum rows per batch. This is the maximum number of rows to include in a single batch when polling for new data. The default is 10000 rows.
    • Enter the starting time in UTC. This is the time from which the connector starts fetching all additions and updates from the ServiceNow table. The required format is YYYY-MM-DD; fetching starts at 00:00 UTC on that date. If left blank, this property defaults to the date and time the connector launches.
  7. Enter the number of tasks to use with the connector. The connector supports running one task only. That is, one table is handled by one task.
  8. Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.

See Configuration Properties for configuration property values and descriptions.

Step 5: Launch the connector.

Verify the connection details and click Launch.

Launch the connector

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running.

Connector status

Step 7: Check for records.

Verify that records are being produced at the Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

  • Make sure you have all your prerequisites completed.
  • The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.

Step 1: List the available connectors.

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

confluent connect plugin describe <connector-catalog-name>

For example:

confluent connect plugin describe ServiceNowSource

Example output:

Following are the required configs:
connector.class: ServiceNowSource
name
kafka.auth.mode
kafka.api.key
kafka.api.secret
kafka.topic
output.data.format
servicenow.url
servicenow.table
servicenow.user
servicenow.password
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "ServiceNowSource",
  "name": "ServiceNowSource_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "****************",
  "kafka.api.secret": "************************************************",
  "kafka.topic": "<topic-name>",
  "output.data.format": "AVRO",
  "servicenow.url": "<instance-URL>",
  "servicenow.table": "<table-name>",
  "servicenow.user": "<username>",
  "servicenow.password": "<password>",
  "tasks.max": "1",
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "kafka.topic": Enter the topic name where data is sent.

  • output.data.format": Enter an output data format (data going to the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.

  • "servicenow.<>": Enter the instance URL, table name, and connector authentication details. For additional information, see the ServiceNow docs. An instance URL looks like this: https://dev1000.service-now.com/.

  • "tasks.max": Enter the number of tasks to use with the connector. The connector supports running one task only. That is, one table is handled by one task.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

See Configuration Properties for configuration property values and descriptions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

confluent connect create --config <file-name>.json

For example:

confluent connect create --config servicenow-source-config.json

Example output:

Created connector ServiceNowSource_0 lcc-do6vzd

Step 5: Check the connector status.

Enter the following command to check the connector status:

confluent connect list

Example output:

     ID     |        Name        | Status  |  Type  | Trace
+------------+--------------------+---------+--------+-------+
  lcc-do6vzd | ServiceNowSource_0 | RUNNING | source |

Step 6: Check for records.

Verify that records are being produced at the Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
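
As a rough sketch, the equivalent operation through that API is an authenticated POST of the connector name and configuration. The endpoint path and request shape below are assumptions based on Connect API conventions; treat the Confluent Cloud API for Connect section as the authoritative reference:

# Hypothetical REST call; <env-id>, <lkc-cluster-id>, and the Cloud API
# key and secret are placeholders you must supply.
curl -u '<cloud-api-key>:<cloud-api-secret>' \
  -H "Content-Type: application/json" \
  -X POST "https://api.confluent.cloud/connect/v1/environments/<env-id>/clusters/<lkc-cluster-id>/connectors" \
  -d '{
        "name": "ServiceNowSource_0",
        "config": {
          "connector.class": "ServiceNowSource",
          "kafka.auth.mode": "KAFKA_API_KEY",
          "kafka.api.key": "<api-key>",
          "kafka.api.secret": "<api-secret>",
          "kafka.topic": "<topic-name>",
          "output.data.format": "AVRO",
          "servicenow.url": "<instance-URL>",
          "servicenow.table": "<table-name>",
          "servicenow.user": "<username>",
          "servicenow.password": "<password>",
          "tasks.max": "1"
        }
      }'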

Configuration Properties

The following connector configuration properties are used with the ServiceNow Source connector for Confluent Cloud.

kafka.topic

The name of the Kafka topic to publish data to. You can specify only one topic.

  • Type: string
  • Importance: high
servicenow.url

This property specifies the ServiceNow instance URL: the hostname or IP address (and port, if required). For example, https://dev1000.service-now.com/.

  • Type: string
  • Importance: high
servicenow.table

The ServiceNow table name from which to poll data.

  • Type: string
  • Importance: high
servicenow.user

The username used when connecting to ServiceNow.

  • Type: string
  • Importance: high
servicenow.password

The password used when connecting to ServiceNow.

  • Type: password
  • Importance: high
connection.timeout.ms

HTTP request timeout for ServiceNow. The default value is 50000 milliseconds (50 seconds).

  • Type: int
  • Default: 50000
  • Importance: low
retry.max.times

This property specifies the maximum number of times to retry a data connection if the first connection attempt fails.

  • Type: int
  • Default: 3
  • Importance: medium
poll.interval.s

Polling interval in seconds at which the connector polls the ServiceNow table for new data.

  • Type: int
  • Default: 30
  • Valid Values: [1,…,60]
  • Importance: medium
batch.max.rows

Maximum number of rows to include in a single batch when polling for new data. This setting can be used to limit the amount of data buffered internally in the connector.

  • Type: int
  • Default: 10000
  • Importance: medium
servicenow.since

The date to begin consuming data from ServiceNow, starting at 00:00 UTC. The acceptable date format is YYYY-MM-DD.

  • Type: string
  • Default: Date and time (in UTC) when the connector is launched
  • Importance: medium

Note

The ServiceNow Source connector also produces records created before the servicenow.since date if they were updated on or after that date.
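
Put another way, the connector filters on the record's update timestamp (sys_updated_on), not its creation timestamp, which is why older records reappear once they are updated. A hypothetical Table API query equivalent to servicenow.since=2023-01-15 would look roughly like this; the field name and query syntax are assumptions:

# A record created in 2021 but updated on 2023-01-16 still matches this filter.
curl -u '<username>:<password>' \
  "https://<instance>.service-now.com/api/now/table/<table-name>?sysparm_query=sys_updated_on>=2023-01-15%2000:00:00"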

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
