ServiceNow Source Connector for Confluent Cloud

The Kafka Connect ServiceNow Source connector polls for additions and changes made to a ServiceNow table (see the ServiceNow documentation) and streams these changes to Apache Kafka® in real time. The connector consumes data from the ServiceNow table using range queries against the ServiceNow Table API and produces each added or updated row to the Kafka topic.
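The following is a minimal sketch of the kind of range query the connector issues against the Table API. The exact query the connector constructs is internal to the connector; the instance URL, table name, timestamp, and credentials shown are placeholders:

curl -u "<username>:<password>" \
  "https://dev1000.service-now.com/api/now/table/<table-name>?sysparm_query=sys_updated_on%3E%3D2021-01-01%2000:00:00&sysparm_limit=10000"

Each row returned by a query like this becomes a record in the Kafka topic.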

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Features

The ServiceNow Source connector provides the following features:

  • At least once delivery: The connector guarantees that records are delivered at least once to the Kafka topic.
  • Automatic retries: When network failures occur, the connector automatically retries the request. The property retry.max.times controls how many times retries are attempted. An exponential backoff is added to each retry interval.
  • Elasticity: The connector allows you to configure two parameters that limit throughput: batch.max.rows and poll.interval.s. The defaults are 10000 records per batch and a 30 second polling interval. If a large number of updates occur within a given interval, the connector paginates the records according to the configured batch size (see the example configuration after this list). Note that because ServiceNow timestamps have one-second precision, one second is the lowest supported poll.interval.s setting.
  • Supports one task: The connector supports running one task only. That is, one table is handled by one task.
  • Supported data formats: The connector supports Avro, JSON Schema (JSON-SR), Protobuf, and JSON (schemaless) output formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf).
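For example, to reduce latency at the cost of more frequent requests, you can override both throughput properties in the connector configuration. The following JSON fragment is illustrative only; the values are not recommendations:

  "poll.interval.s": "5",
  "batch.max.rows": "2000"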

See Configuration Properties for configuration property descriptions.

See Cloud connector limitations for additional information.

Quick Start

Use this quick start to get up and running with the Confluent Cloud ServiceNow Source connector. The quick start provides the basics of selecting the connector and configuring it to stream events.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
  • The Confluent Cloud CLI installed and configured for the cluster. See Install and Configure the Confluent Cloud CLI.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • You must have the ServiceNow instance URL, table name, and connector authentication details. For details, see the ServiceNow docs.
  • At least one Kafka topic must exist in your Confluent Cloud cluster before creating the source connector (an example command follows this list).
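For example, you can create a topic with the Confluent Cloud CLI. This assumes the CLI is already logged in and pointed at the target cluster; the topic name is a placeholder:

ccloud kafka topic create <topic-name>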

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

Click Connectors. If you already have connectors in your cluster, click Add connector.

Step 3: Select your connector.

Click the ServiceNow Source connector icon.

ServiceNow Source Connector Icon

Step 4: Set up the connection.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Enter a connector Name.
  2. Enter your Kafka Cluster credentials. The credentials are either the cluster API key and secret or the service account API key and secret.
  3. Select a topic where you want to send data.
  4. Select an Output message format (data going to the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  5. Enter the ServiceNow details:
    • Enter the instance URL, table name, and connector (user) authentication details. For additional information, see the ServiceNow docs. An instance URL looks like this: https://dev1000.service-now.com/.
    • Enter the HTTP request timeout. The default value is 50000 milliseconds (50 seconds).
    • Enter the maximum number of times to retry requests. The default value is three retries.
    • Enter the poll interval. This is the interval at which the connector polls the ServiceNow table for new and updated data. The default value is 30 seconds. The minimum poll interval is 1 second and the maximum poll interval is 60 seconds.
  6. Enter the ServiceNow query details:
    • Enter the maximum rows per batch. This is the maximum number of rows to include in a single batch when polling for new data. The default is 10000 rows.
    • Enter the starting date. This is the date from which the connector starts fetching all additions and updates from the ServiceNow table. The required format is YYYY-MM-DD (for example, 2021-01-15), and fetching starts at 00:00 UTC on that date. If left blank, this property defaults to the date and time (in UTC) when the connector launches.
  7. Enter the number of tasks to use with the connector. The connector supports running one task only. That is, one table is handled by one task.

Note

See Configuration Properties for configuration property descriptions.

Step 5: Launch the connector.

Verify the connection details and click Launch.

Launch the connector

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running.

Connector status

Step 7: Check for records.

Verify that records are being produced to the Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Using the Confluent Cloud CLI

Complete the following steps to set up and run the connector using the Confluent Cloud CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors.

Enter the following command to list available connectors:

ccloud connector-catalog list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

ccloud connector-catalog describe <connector-catalog-name>

For example:

ccloud connector-catalog describe ServiceNowSource

Example output:

Following are the required configs:
connector.class: ServiceNowSource
name
kafka.api.key
kafka.api.secret
kafka.topic
output.data.format
servicenow.url
servicenow.table
servicenow.user
servicenow.password
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "ServiceNowSource",
  "name": "ServiceNowSource_0",
  "kafka.api.key": "****************",
  "kafka.api.secret": "************************************************",
  "kafka.topic": "<topic-name>",
  "output.data.format": "AVRO",
  "servicenow.url": "<instance-URL>",
  "servicenow.table": "<table-name>",
  "servicenow.user": "<username>",
  "servicenow.password": "<password>",
  "tasks.max": "1",
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "kafka.api.key" and ""kafka.api.secret": These credentials are either the cluster API key and secret or the service account API key and secret.
  • "kafka.topic": Enter the topic name where data is sent.
  • output.data.format": Enter an output data format (data going to the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • "servicenow.<>": Enter the instance URL, table name, and connector authentication details. For additional information, see the ServiceNow docs. An instance URL looks like this: https://dev1000.service-now.com/.
  • "tasks.max": Enter the number of tasks to use with the connector. The connector supports running one task only. That is, one table is handled by one task.

Note

See Configuration Properties for configuration property descriptions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

ccloud connector create --config <file-name>.json

For example:

ccloud connector create --config servicenow-source-config.json

Example output:

Created connector ServiceNowSource_0 lcc-do6vzd

Step 5: Check the connector status.

Enter the following command to check the connector status:

ccloud connector list

Example output:

      ID     |        Name        | Status  |  Type  | Trace
-------------+--------------------+---------+--------+-------
  lcc-do6vzd | ServiceNowSource_0 | RUNNING | source |
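To inspect a single connector in more detail, you can also describe it by ID (using the connector ID from the example output above):

ccloud connector describe lcc-do6vzd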

Step 6: Check for records.

Verify that records are being produced to the Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
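As a sketch, the following request checks the connector status through the Confluent Cloud API for Connect. The environment ID, cluster ID, and Cloud API credentials are placeholders; see that section for the authoritative endpoints:

curl -u "<cloud-api-key>:<cloud-api-secret>" \
  "https://api.confluent.cloud/connect/v1/environments/<environment-id>/clusters/<cluster-id>/connectors/ServiceNowSource_0/status"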

Configuration Properties

The following connector configuration properties are used with the ServiceNow Source connector for Confluent Cloud.

kafka.topic

The name of the Kafka topic to publish data to. You can specify only one topic.

  • Type: string
  • Importance: high
servicenow.url

The ServiceNow instance URL (hostname and, if required, port). For example, https://dev1000.service-now.com/.

  • Type: string
  • Importance: high
servicenow.table

The ServiceNow table name from which to poll data.

  • Type: string
  • Importance: high
servicenow.user

The username used when connecting to ServiceNow.

  • Type: string
  • Importance: high
servicenow.password

The password used when connecting to ServiceNow.

  • Type: password
  • Importance: high
connection.timeout.ms

HTTP request timeout for ServiceNow. The default value is 50000 milliseconds (50 seconds).

  • Type: int
  • Default: 50000
  • Importance: low
retry.max.times

This property specifies the maximum number of times to retry a data connection if the first connection attempt fails.

  • Type: int
  • Default: 3
  • Importance: medium
poll.interval.s

The interval, in seconds, at which the connector polls the ServiceNow table for new data.

  • Type: int
  • Default: 30
  • Valid Values: [1,…,60]
  • Importance: medium
batch.max.rows

Maximum number of rows to include in a single batch when polling for new data. This setting can be used to limit the amount of data buffered internally in the connector.

  • Type: int
  • Default: 10000
  • Importance: medium
servicenow.since

The date to begin consuming data from ServiceNow, starting at 00:00 UTC. The acceptable date format is YYYY-MM-DD.

  • Type: string
  • Default: Date and time (in UTC) when the connector is launched
  • Importance: medium

Note

The ServiceNow Source connector also produces messages for records created before the servicenow.since date if those records are updated on or after the servicenow.since date. For example, with servicenow.since set to 2021-01-15, a record created on 2021-01-01 and then updated on 2021-02-01 is produced to the topic.

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent Cloud CLI to manage your resources in Confluent Cloud.
