Zendesk Source Connector for Confluent Platform

Zendesk Support is a system for tracking, prioritizing, and solving customer support tickets. The Kafka Connect Zendesk Source connector copies data into Apache Kafka® from various Zendesk support tables such as tickets, ticket_audits, ticket_fields, groups, organizations, satisfaction_ratings, and others, using the Zendesk Support API. See the Supported Tables section for the full list of supported Zendesk tables.

Features

The Zendesk Source Connector offers the following features:

  • Quick Turnaround: The Zendesk connector ensures that data is synced quickly between your Zendesk tables and the corresponding Kafka topics, without unnecessary lag. The poll frequency for each table is configured based on the size of the table, so that larger and more dynamic tables, like tickets, are polled more frequently than static tables like organizations.
  • At Least Once Delivery: The connector guarantees no loss of messages from Zendesk to Kafka. Messages may be reprocessed because of task failure or API limits, which may cause duplication.
  • Schema Detection and Evolution: The connector supports automatic schema detection and backward compatible schema evolution for all supported tables.
  • Real-time and Historical Lookup: The connector supports fetching all past historical records for all tables. It can also be configured to pull in data only from a specified time in the past (see the zendesk.since configuration property).
  • Automatic Retries: If there is a connection error between the API server and Kafka Connect, the connector may receive an error response from the API server or no response at all. In such cases, the connector can be made robust through an automatic retry mechanism with linear backoff, controlled by the max.retries and retry.backoff.ms configuration properties.
  • Intelligent backoffs: If there are too many requests because of support API rate limits, the connector intelligently spaces out the HTTP fetch operations to ensure a smooth balance between recency, API limits, and back pressure.
  • Resource Balancing and Throughput: Different Zendesk resources can have different rates of creation and update. Such resources can be balanced among the workers, with reduced hot-spotting, by keeping the resources in the zendesk.tables configuration sorted by their expected cardinality. The tasks.max, max.in.flight.requests, and max.batch.size configuration properties can also be used to improve overall throughput.
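As a sketch of how these tuning properties fit together, the fragment below combines the retry and throughput settings named above in a single connector configuration. The property names are taken from this document; the values are illustrative only, not recommendations:

```json
{
    "zendesk.tables": "tickets,ticket_audits,users,groups,organizations",
    "tasks.max": 2,
    "max.in.flight.requests": 8,
    "max.batch.size": 100,
    "max.retries": 10,
    "retry.backoff.ms": 3000
}
```

Listing the busiest tables (such as tickets) first in zendesk.tables follows the cardinality-ordering advice above.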

Supported Tables

The following tables from Zendesk are supported in this version of the Kafka Connect Zendesk Source connector: custom_roles, groups, group_memberships, organizations, organization_subscriptions, organization_memberships, satisfaction_ratings, tickets, ticket_audits, ticket_fields, ticket_metrics, and users.

Prerequisites

The following are required to run the Kafka Connect Zendesk Source Connector:

  • Kafka Broker: Confluent Platform 3.3.0 or above, or Kafka 0.11.0 or above
  • Kafka Connect: Confluent Platform 4.1.0 or above, or Kafka 1.1.0 or above
  • Java 1.8
  • Zendesk API: Support APIs should be enabled for the Zendesk account. In addition, either the oauth2 or the password mechanism should be enabled in the Zendesk account. For more information, see Using the API dashboard: Enabling password or token access.
  • Zendesk account type: Certain tables, such as custom_roles, can only be accessed if the Zendesk account is an Enterprise account. Refer to Custom Agent Roles.
  • Zendesk settings: Some settings may need to be enabled to ensure export is possible. For example, satisfaction_ratings can only be exported if the feature is enabled. Refer to Support API: Satisfaction Ratings.
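Before configuring the connector, you can verify that Support API access and your chosen authentication mechanism work by making a direct request to the same tickets endpoint the quick start uses. This is an illustrative template only; substitute your own subdomain and credentials:

```shell
# A successful request returns HTTP 200 and a JSON list of tickets
curl -u {email_address}:{password} \
    https://{subdomain}.zendesk.com/api/v2/tickets.json
```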

Install the Zendesk Source Connector

You can install this connector by using the Confluent Hub client (recommended) or you can manually download the ZIP file.

Install the connector using Confluent Hub

Prerequisite
Confluent Hub Client must be installed. This is installed by default with Confluent Enterprise.

Navigate to your Confluent Platform installation directory and run the following command to install the latest version of the connector. The connector must be installed on every machine where Connect will run.

confluent-hub install confluentinc/kafka-connect-zendesk-source:latest

You can install a specific version by replacing latest with a version number. For example:

confluent-hub install confluentinc/kafka-connect-zendesk-source:1.0.0-preview

Install the connector manually

Download and extract the ZIP file for your connector and then follow the manual connector installation instructions.

License

You can use this connector for a 30-day trial period without a license key.

After 30 days, this connector is available under a Confluent enterprise license. Confluent issues enterprise license keys to subscribers, along with providing enterprise-level support for Confluent Platform and your connectors. If you are a subscriber, please contact Confluent Support at support@confluent.io for more information.

See Confluent Platform license for license properties and License topic configuration for information about the license topic.

Quick Start

Prerequisite
Zendesk Developer Account

In this quick start guide, the Zendesk Connector is used to consume records from a Zendesk resource called tickets and send the records to a Kafka topic named ZD_tickets.

  1. Install the connector through the Confluent Hub Client.

    # run from your confluent platform installation directory
    confluent-hub install confluentinc/kafka-connect-zendesk-source:latest
    
  2. Start the Confluent Platform.

    Tip

    The command syntax for the Confluent CLI development commands changed in 5.3.0. These commands have been moved to confluent local. For example, the syntax for confluent start is now confluent local start. For more information, see confluent local.

    confluent local start
    
  3. Check the status of all services.

    confluent local status
    
  4. Configure your connector by first creating a JSON file named zendesk.json with the following properties. Substitute the angle-bracketed placeholders with your own values, and leave confluent.license empty to use the 30-day trial license.

    {
        "name": "ZendeskConnector",
        "config": {
            "connector.class": "io.confluent.connect.zendesk.ZendeskSourceConnector",
            "key.converter": "io.confluent.connect.avro.AvroConverter",
            "value.converter": "io.confluent.connect.avro.AvroConverter",
            "confluent.topic.bootstrap.servers": "127.0.0.1:9092",
            "confluent.topic.replication.factor": 1,
            "confluent.license": "<license>",
            "tasks.max": 1,
            "poll.interval.ms": 1000,
            "topic.name.pattern": "ZD_${entityName}",
            "curl.logging": "false",
            "zendesk.auth.type": "basic",
            "zendesk.url": "https://<sub-domain>.zendesk.com",
            "zendesk.user": "<username>",
            "zendesk.password": "<password>",
            "zendesk.tables": "tickets",
            "zendesk.since": "2019-08-01"
        }
    }
    
  5. Start the Zendesk Source connector by loading the connector’s configuration with the following command:

    Caution

    You must include a double dash (--) between the connector name and your flag. For more information, see this post.

    confluent local load zendesk -- -d zendesk.json
    
  6. Confirm that the connector is in a RUNNING state.

    confluent local status ZendeskConnector
    
  7. Create one ticket record using Zendesk API as follows.

    curl -X POST \
        https://{subdomain}.zendesk.com/api/v2/tickets.json \
        -H "Content-Type: application/json" \
        -d '{"ticket": {"subject": "My connector is working!", "comment": {"body": "Is it Kafkaesque?"}}}' \
        -v \
        -u {email_address}:{password}
    
  8. Confirm the messages were delivered to the ZD_tickets topic in Kafka. Note that it may take a minute for the record to appear in the topic.

    confluent local consume ZD_tickets -- --from-beginning
    
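The topic name ZD_tickets in this quick start comes from the topic.name.pattern setting, which substitutes each table name for ${entityName}. A minimal sketch of that substitution, assuming simple template expansion (the topic_for helper below is hypothetical, for illustration only; the connector performs the equivalent substitution internally):

```python
from string import Template

def topic_for(pattern: str, table: str) -> str:
    # Hypothetical helper: expand a topic.name.pattern-style template
    # for a given Zendesk table name.
    return Template(pattern).substitute(entityName=table)

print(topic_for("ZD_${entityName}", "tickets"))  # ZD_tickets
```

With zendesk.tables set to multiple tables, each table gets its own topic under the same pattern (for example, ticket_audits would map to ZD_ticket_audits).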

Additional Documentation