Zendesk Source Connector for Confluent Cloud

Zendesk is a customer service system for tracking, prioritizing, and solving customer support tickets. The Kafka Connect Zendesk Source connector copies data into Apache Kafka® from Zendesk support tables such as tickets, ticket_audits, ticket_fields, groups, organizations, and satisfaction_ratings. The connector streams data from Zendesk using the Zendesk Support API. See Supported tables for more information.

Features

The Zendesk Source connector provides the following features:

  • Topics created automatically: The connector can automatically create Kafka topics.
  • At least once delivery: The connector guarantees that records are delivered at least once to the Kafka topic.
  • Supported data formats: The connector supports Avro, JSON Schema (JSON-SR), Protobuf, and JSON (schemaless) output formats. You must enable Schema Registry to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf).

See Configuration Properties for configuration property descriptions. See Cloud connector limitations for additional information.

Supported tables

The following tables from Zendesk are supported:

  • custom_roles
  • groups
  • group_memberships
  • organizations
  • organization_subscriptions
  • organization_memberships
  • satisfaction_ratings
  • tickets
  • ticket_audits
  • ticket_fields
  • ticket_metrics
  • users

Quick Start

Use this quick start to get up and running with the Confluent Cloud Zendesk Source connector. The quick start provides the basics of selecting the connector and configuring it to stream events.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
  • The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.
  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
  • Authorization and credentials to access the Zendesk service URL.
  • Zendesk API: Support APIs must be enabled for the Zendesk account.
  • Either the OAuth2 or password authentication mechanism must be enabled for the Zendesk account. For additional information, see Using the API dashboard: Enabling password or token access. To confirm that API access works before configuring the connector, see the sketch after this list.
  • Certain tables, such as custom_roles, can only be accessed if the Zendesk Account is an Enterprise account. For more information, see Custom Agent Roles.
  • A few Zendesk configuration settings may need to be enabled before certain tables can be exported. For example, satisfaction_ratings can only be exported if satisfaction ratings are enabled for the account. For more information, see Support API: Satisfaction Ratings.
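
Before you configure the connector, you can confirm that the account and credentials work against the Support API. A minimal sketch using curl (the sub-domain, email, password, and token values are placeholders; /api/v2/users/me.json is a standard Support API endpoint):

# Verify basic (password) access
curl -u "<email>:<password>" \
  "https://<sub-domain>.zendesk.com/api/v2/users/me.json"

# Verify OAuth bearer token access
curl -H "Authorization: Bearer <oauth-access-token>" \
  "https://<sub-domain>.zendesk.com/api/v2/users/me.json"

A 200 response containing your user record indicates the credentials should be usable by the connector.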

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

In the left navigation menu, click Data integration, and then click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector.

Click the Zendesk Source connector icon.

Zendesk Source Connector Icon

Step 4: Set up the connection.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Enter a connector Name.
  2. Select the way you want to provide Kafka Cluster credentials. You can either select a service account resource ID or you can enter an API key and secret (or generate these in the Cloud Console).
  3. Enter the Topic Name Pattern. The pattern to use for the topic name, where the ${entityName} literal is replaced with each entity name. If ${entityName} is not included in the pattern, the connector writes all records to a single topic. Defaults to ZD_${entityName}. A valid topic pattern should follow the regex [a-zA-Z0-9\\.\\-\\_]*(\\$\\{entityName\\})?[a-zA-Z0-9\\.\\-\\_]*. For a worked example, see the sketch after this list.
  4. Enter the Zendesk connection details:
    • Zendesk Service URL: The URL where the connector gets Zendesk source data. For example, https://<sub-domain>.zendesk.com
    • Endpoint Authentication type: Choose either basic or bearer for the authentication type. For more information, see OAuth tokens in the Zendesk docs.
    • Zendesk tables: The Zendesk tables that the connector exports and writes to Kafka. To balance the load between workers, order the tables by their expected size or throughput. See Supported tables for the list of tables.
    • Zendesk start time: Rows updated after the time entered are processed by the connector. The value should be formatted using the ISO 8601 format yyyy-MM-dd'T'HH:mm:SS. If left blank, the default time is set to the time the connector is launched minus one minute.
  5. If using the Basic endpoint authentication type, enter the credentials. If using the Bearer type, enter the token. The connector adds the token to the Authorization header in the HTTP request.
  6. Enter the connector details:
    • Maximum Batch Size: Defaults to 100 records.
    • Maximum In Flight requests: The maximum number of requests that may be in flight at one time. Defaults to 10 requests.
    • Maximum Poll interval (ms): The time in milliseconds (ms) between requests to fetch changed or updated resources. Defaults to 3000 ms (3 seconds).
    • Request Interval (ms): The time in milliseconds to wait before checking for updated records. Defaults to 15000 ms (15 seconds).
    • Maximum Retries: The maximum number of times the connector retries a task before the task fails. Defaults to 10 retries.
    • Retry Backoff (ms): The time in milliseconds to wait following an error before a retry attempt is made. Defaults to 3000 ms (3 seconds).
  7. Select the Output Kafka record value format (data going to the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.
  8. Enter the number of tasks to use with the connector. Only one task per connector is supported.
  9. Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.
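
As a worked example of the topic name pattern from step 3 (the pattern value support.${entityName} is hypothetical):

# Hypothetical pattern and table selection
topic.name.pattern = support.${entityName}
zendesk.tables     = tickets, users

# Resulting topics: support.tickets and support.users
# With the default pattern ZD_${entityName}: ZD_tickets and ZD_users
# A pattern that omits ${entityName}, for example zendesk-all, sends
# records from every table to the single topic zendesk-all.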

See Configuration Properties for configuration property values and descriptions.

Step 5: Launch the connector.

Verify the connection details and click Launch.

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running.

Step 7: Check for records.

Verify that records are being produced at the Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

  • Make sure you have all your prerequisites completed.
  • The example commands use Confluent CLI version 2. For more information, see Confluent CLI v2.

Step 1: List the available connectors.

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

confluent connect plugin describe <connector-catalog-name>

For example:

confluent connect plugin describe ZendeskSource

Example output:

Following are the required configs:
connector.class: ZendeskSource
name
kafka.auth.mode
kafka.api.key
kafka.api.secret
zendesk.url
zendesk.tables
zendesk.user
zendesk.password
output.data.format
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties. See Configuration Properties for additional configuration property values and descriptions.

{
  "connector.class": "ZendeskSource",
  "name": "ZendeskSource_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "zendesk.url": "https://<sub-domain>.zendesk.com",
  "zendesk.tables": "tickets, groups, users",
  "zendesk.user": "<username>",
  "zendesk.password": "*********************************",
  "output.data.format": "AVRO",
  "tasks.max": "1",
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • Enter the Zendesk connection details.

    • "zendesk.url": The URL where the connector gets Zendesk source data. For example, https://<sub-domain>.zendesk.com``.
    • "zendesk.tables": The Zendesk tables the connector exports and writes to Kafka. To balance the load between workers, order the tables by their expected size or throughput. See Supported tables for the list of tables.
  • Enter the authentication details. The example shows the default basic authentication properties "zendesk.user" and "zendesk.password". To authenticate with a bearer token instead, set "zendesk.auth.type": "bearer" and "bearer.token": "<token-string>". The token is a single string that the connector sends in the HTTP Authorization header. For a worked example, see the sketch after this list.

  • "output.data.format": Enter an output data format (data going to the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Environment Limitations for additional information.

  • "tasks.max": Enter the number of tasks to use with the connector. Only one task per connector is supported.

  • Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.
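
As an example of the bearer alternative described above, a minimal sketch of the same configuration with token authentication (the token value is a placeholder):

{
  "connector.class": "ZendeskSource",
  "name": "ZendeskSource_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "zendesk.url": "https://<sub-domain>.zendesk.com",
  "zendesk.tables": "tickets, groups, users",
  "zendesk.auth.type": "bearer",
  "bearer.token": "<token-string>",
  "output.data.format": "AVRO",
  "tasks.max": "1"
}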

See Configuration Properties for configuration property values and descriptions.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

confluent connect create --config <file-name>.json

For example:

confluent connect create --config zendesk-source-config.json

Example output:

Created connector ZendeskSource_0 lcc-do6vzd

Step 5: Check the connector status.

Enter the following command to check the connector status:

confluent connect list

Example output:

     ID     |      Name       | Status  |  Type  | Trace
+-----------+-----------------+---------+--------+-------+
 lcc-do6vzd | ZendeskSource_0 | RUNNING | source |       |

Step 6: Check for records.

Verify that records are being produced at the Kafka topic.
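
One way to spot-check the output is to consume a few records with the Confluent CLI. A sketch, assuming the default topic pattern ZD_${entityName}, the tickets table, and Avro output:

confluent kafka topic consume ZD_tickets --value-format avro --from-beginning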

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect section.
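
For instance, a sketch of listing connectors through the Connect REST API (the environment and cluster IDs are placeholders, and the request assumes a Cloud API key authorized for the environment):

curl -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  "https://api.confluent.cloud/connect/v1/environments/<env-id>/clusters/<cluster-id>/connectors"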

Configuration Properties

The following connector configuration properties are used with the Zendesk Source connector for Confluent Cloud.

zendesk.url

Zendesk Service URL. For example, https://<sub-domain>.zendesk.com.

  • Type: string
  • Valid Values: URL with one of these schemes: http, https
  • Importance: high
zendesk.tables

The Zendesk tables that the connector exports and writes to Kafka. To balance the load between workers, order the tables by their expected size or throughput.

  • Type: list (List of resources, separated by commas)
  • Valid Values: [custom_roles, groups, group_memberships, organizations, organization_subscriptions, organization_memberships, satisfaction_ratings, tickets, ticket_audits, ticket_fields, ticket_metrics, users]
  • Importance: high
topic.name.pattern

The pattern to use for the topic name, where the ${entityName} literal is replaced with each entity name. If ${entityName} is not specified, all records are written to a single topic. A valid topic pattern should follow the regex [a-zA-Z0-9\\.\\-\\_]*(\\$\\{entityName\\})?[a-zA-Z0-9\\.\\-\\_]*.

  • Type: string
  • Default: ZD_${entityName}
  • Importance: high
zendesk.auth.type

Authentication type for the endpoint. Valid values are basic and bearer.

  • Type: string
  • Valid Values: [basic, bearer]
  • Importance: high
zendesk.user

The username for authenticating with the endpoint.

  • Type: string
  • Importance: high
zendesk.password

The password for authenticating with the endpoint.

  • Type: password
  • Importance: high
bearer.token

The bearer authentication token to be used when zendesk.auth.type=bearer. The supplied token is used as the value of the Authorization header in HTTP requests.

  • Type: password
  • Importance: high
zendesk.since

Rows updated after this time are processed by the connector. If left blank, the default time is set to the time this connector is launched minus one minute. The value should use the ISO 8601 format yyyy-MM-dd'T'HH:mm:SS (for example, 2021-01-01T00:00:00).

  • Type: string
  • Valid Values: ISO 8601 formatted date/time
  • Importance: medium
max.batch.size

The maximum number of batched records that can be returned and written to Kafka at one time.

  • Type: int
  • Default: 100
  • Valid Values: [1,…,2147483647]
  • Importance: high
max.in.flight.requests

The maximum number of requests that may be in-flight at once.

  • Type: int
  • Default: 10
  • Valid Values: [1,…,200]
  • Importance: high
max.poll.interval.ms

The time in milliseconds to wait while polling for a full batch of records.

  • Type: long
  • Default: 3000
  • Valid Values: [1,…,300000]
  • Importance: medium
request.interval.ms

The time in milliseconds to wait before checking for updated records.

  • Type: long
  • Default: 15000
  • Valid Values: [1,…,86400000]
  • Importance: medium
max.retries

The maximum number of times to retry on errors before failing the task.

  • Type: int
  • Default: 10
  • Valid Values: [0,…,2147483647]
  • Importance: medium
retry.backoff.ms

The time in milliseconds to wait following an error before a retry attempt is made.

  • Type: int
  • Default: 3000
  • Valid Values: [0,…,2147483647]
  • Importance: medium
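
Tying these properties together, a sketch of a configuration that overrides several of the optional defaults (all values are illustrative and stay within the documented valid ranges):

{
  "connector.class": "ZendeskSource",
  "name": "ZendeskSource_1",
  "kafka.auth.mode": "SERVICE_ACCOUNT",
  "kafka.service.account.id": "<service-account-resource-ID>",
  "zendesk.url": "https://<sub-domain>.zendesk.com",
  "zendesk.tables": "tickets, ticket_audits, users",
  "zendesk.auth.type": "basic",
  "zendesk.user": "<username>",
  "zendesk.password": "<password>",
  "topic.name.pattern": "ZD_${entityName}",
  "zendesk.since": "2021-01-01T00:00:00",
  "max.batch.size": "200",
  "max.in.flight.requests": "8",
  "max.poll.interval.ms": "5000",
  "request.interval.ms": "30000",
  "max.retries": "5",
  "retry.backoff.ms": "5000",
  "output.data.format": "JSON_SR",
  "tasks.max": "1"
}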

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent CLI to manage your resources in Confluent Cloud.