Zendesk Source Connector for Confluent Cloud

Zendesk is a customer service system for tracking, prioritizing, and solving customer support tickets. The fully-managed Zendesk Source connector for Confluent Cloud copies data into Apache Kafka® from various Zendesk support tables such as tickets, ticket_audits, ticket_fields, groups, organizations, and satisfaction_ratings, among others. The connector streams data from Zendesk using the Zendesk Support API. See Supported tables for more information.

Features

The Zendesk Source connector provides the following features:

  • Topics created automatically: The connector can automatically create Kafka topics.

  • At least once delivery: The connector guarantees that records are delivered at least once to the Kafka topic.

  • Supported data formats: The connector supports Avro, JSON Schema (JSON-SR), Protobuf, and JSON (schemaless) output formats. You must enable Schema Registry to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf).

  • Offset management capabilities: Supports offset management. For more information, see Manage custom offsets.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Limitations

Be sure to review the following information.

Supported tables

The connector supports the following Zendesk tables:
  • activities

  • apps

  • audit_logs

  • automations

  • bookmarks

  • brands

  • custom_roles

  • groups

  • group_memberships

  • locales

  • macros

  • organizations

  • organization_fields

  • organization_subscriptions

  • organization_memberships

  • recipient_addresses

  • requests

  • resource_collections

  • satisfaction_ratings

  • satisfaction_reasons

  • sharing_agreements

  • suspended_tickets

  • targets

  • target_failures

  • tickets

  • ticket_audits

  • ticket_fields

  • ticket_forms

  • ticket_metrics

  • triggers

  • trigger_categories

  • users

  • user_fields

  • views

  • workspaces

Manage custom offsets

You can manage the offsets for this connector. Offsets provide information on the point in the system from which the connector is accessing data. For more information, see Manage Offsets for Fully-Managed Connectors in Confluent Cloud.

To manage offsets:

To get the current offset, make a GET request that specifies the environment, Kafka cluster, and connector name.

GET /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets
Host: https://api.confluent.cloud
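
For example, you can issue the request with curl. This is a minimal sketch, assuming a Confluent Cloud API key and secret authorized for the Connect API; env-abc123, lkc-xyz789, and ZendeskSource_0 are placeholder values:

curl -s -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  "https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-xyz789/connectors/ZendeskSource_0/offsets"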

Response:

Successful calls return HTTP 200 with a JSON payload that describes the offset.

{
  "id": "lcc-example123",
  "name": "{connector_name}",
  "offsets": [
    {
      "partition": {
        "name": "tickets"
      },
      "offset": {
        "updated_at": 1712559408
      }
    },
    {
      "partition": {
        "name": "targets"
      },
      "offset": {
        "created_at": 1607376776000
      }
    },
    {
      "partition": {
        "name": "users"
      },
      "offset": {
        "updated_at": 1712639446
      }
    },
    {
      "partition": {
        "name": "ticket_audits"
      },
      "offset": {
        "created_at": 1607359500000
      }
    }
  ],
  "metadata": {
    "observed_at": "2024-03-28T17:57:48.139635200Z"
  }
}

Responses include the following information:

  • The position of the latest offset.

  • The observed time of the offset in the metadata portion of the payload. The observed_at time indicates a snapshot in time for when the API retrieved the offset. A running connector is always updating its offsets. Use observed_at to get a sense for the gap between real time and the time at which the request was made. By default, offsets are observed every minute. Calling GET repeatedly will fetch more recently observed offsets.

  • Information about the connector.

To update the offset, make a POST request that specifies the environment, Kafka cluster, and connector name. Include a JSON payload that specifies the new offsets and a patch type.

POST /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets/request
Host: https://api.confluent.cloud

{
  "type": "PATCH",
  "offsets": [
    {
      "partition": {
        "name": "tickets"
      },
      "offset": {
        "updated_at": 1554687029
      }
    },
    {
      "partition": {
        "name": "targets"
      },
      "offset": {
        "created_at": 1554687029
      }
    },
    {
      "partition": {
        "name": "users"
      },
      "offset": {
        "updated_at": 1554687029
      }
    },
    {
      "partition": {
        "name": "ticket_audits"
      },
      "offset": {
        "created_at": 1554687029
      }
    }
  ]
}
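
A minimal curl sketch for submitting this payload (same placeholder IDs as above; offsets-patch.json is a hypothetical file containing the JSON body shown):

curl -s -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  -H "Content-Type: application/json" \
  -d @offsets-patch.json \
  "https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-xyz789/connectors/ZendeskSource_0/offsets/request"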

Considerations:

  • You can only make one offset change at a time for a given connector.

  • This is an asynchronous request. To check the status of this request, you must use the check offset status API. For more information, see Get the status of an offset request.

  • For source connectors, the connector attempts to read from the position defined by the requested offsets.

Response:

Successful calls return HTTP 202 Accepted with a JSON payload that describes the offset.

{
  "id": "lcc-example123",
  "name": "{connector_name}",
  "offsets": [
    {
      "partition": {
        "name": "tickets"
      },
      "offset": {
        "updated_at": 1618184736
      }
    }
  ],
  "requested_at": "2024-03-28T17:58:45.606796307Z",
  "type": "PATCH"
}

Responses include the following information:

  • The requested position of the offsets in the source.

  • The time of the request to update the offset.

  • Information about the connector.

To delete the offset, make a POST request that specifies the environment, Kafka cluster, and connector name. Include a JSON payload that specifies the delete type.

POST /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets/request
Host: https://api.confluent.cloud

{
  "type": "DELETE"
}
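
As a sketch, the delete request can be issued inline with curl (placeholder IDs as above):

curl -s -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"type": "DELETE"}' \
  "https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-xyz789/connectors/ZendeskSource_0/offsets/request"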

Considerations:

  • Delete requests remove the stored offsets and reset the connector to the base state, as if you had created a fresh connector.

  • This is an asynchronous request. To check the status of this request, you must use the check offset status API. For more information, see Get the status of an offset request.

  • Do not issue delete and patch requests at the same time.

  • For source connectors, the connector attempts to read from the position defined in the base state.

Response:

Successful calls return HTTP 202 Accepted with a JSON payload that describes the result.

{
  "id": "lcc-example123",
  "name": "{connector_name}",
  "offsets": [],
  "requested_at": "2024-03-28T17:59:45.606796307Z",
  "type": "DELETE"
}

Responses include the following information:

  • Empty offsets.

  • The time of the request to delete the offset.

  • Information about Kafka cluster and connector.

  • The type of request.

To get the status of a previous offset request, make a GET request that specifies the environment, Kafka cluster, and connector name.

GET /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets/request/status
Host: https://api.confluent.cloud
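
A sketch for polling the status with curl and extracting the phase with jq (placeholder IDs as above):

curl -s -u "$CLOUD_API_KEY:$CLOUD_API_SECRET" \
  "https://api.confluent.cloud/connect/v1/environments/env-abc123/clusters/lkc-xyz789/connectors/ZendeskSource_0/offsets/request/status" \
  | jq -r '.status.phase'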

Considerations:

  • The status endpoint always shows the status of the most recent PATCH/DELETE operation.

Response:

Successful calls return HTTP 200 with a JSON payload that describes the result. The following is an example of an applied patch.

{
  "request": {
    "id": "lcc-example123",
    "name": "{connector_name}",
    "offsets": [
      {
        "partition": {
          "name": "tickets"
        },
        "offset": {
          "updated_at": 1618184736
        }
      }
    ],
    "requested_at": "2024-03-28T17:58:45.606796307Z",
    "type": "PATCH"
  },
  "status": {
    "phase": "APPLIED",
    "message": "The Connect framework-managed offsets for this connector have been altered successfully. However, if this connector manages offsets externally, they will need to be manually altered in the system that the connector uses."
  },
  "previous_offsets": [
    {
      "partition": {
        "name": "brands"
      },
      "offset": {
        "updated_at": 1666023665000
      }
    },
    {
      "partition": {
        "name": "apps"
      },
      "offset": {
        "updated_at": 1713982063000
      }
    }
  ],
  "applied_at": "2024-03-28T17:58:48.079141883Z"
}

Responses include the following information:

  • The original request, including the time it was made.

  • The status of the request: applied, pending, or failed.

  • The time you issued the status request.

  • The previous offsets. These are the offsets the connector last recorded before the update was applied. Use them to restore the connector's previous state if a patch update causes the connector to fail, or to roll back after a change.

JSON payload

The table below offers a description of the unique fields in the JSON payload for managing offsets of the Zendesk Source connector.

Field      | Definition                                         | Required/Optional
-----------+----------------------------------------------------+------------------
created_at | The UNIX timestamp when the table row was created. | Required
updated_at | The UNIX timestamp when the row was last updated.  | Required
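
Offset values are UNIX epoch timestamps. Note that the example responses above mix second precision (for example, 1712559408) and millisecond precision (for example, 1607376776000); divide millisecond values by 1000 before converting. A sketch for converting between epoch seconds and ISO 8601 with GNU date, using a timestamp from the example above:

date -u -d @1712559408 +"%Y-%m-%dT%H:%M:%SZ"   # 2024-04-08T06:56:48Z
date -u -d "2024-04-08T06:56:48Z" +%s          # 1712559408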

Quick Start

Use this quick start to get up and running with the Confluent Cloud Zendesk Source connector. The quick start provides the basics of selecting the connector and configuring it to stream events.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud.

  • The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.

  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.

  • Authorization and credentials to access the Zendesk service URL.

  • Zendesk API: Support APIs must be enabled for the Zendesk account.

  • Either the oauth2 or password mechanisms should be enabled for the Zendesk account. For additional information, see Using the API dashboard: Enabling password or token access.

  • Certain tables, such as custom_roles, can only be accessed if the Zendesk Account is an Enterprise account. For more information, see Custom Agent Roles.

  • A few Zendesk configuration settings may need to be enabled to ensure export is possible. For example, satisfaction_ratings can only be exported if this option is enabled. For more information, see Support API: Satisfaction Ratings.

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Zendesk Source connector card.

Zendesk Source Connector Card

Step 4: Enter the connector details

Note

  • Make sure you have all your prerequisites completed.

  • An asterisk ( * ) designates a required entry.

At the Add Zendesk Source Connector screen, complete the following:

  1. Select the way you want to provide Kafka Cluster credentials. You can choose one of the following options:

    • My account: This setting allows your connector to globally access everything that you have access to. With a user account, the connector uses an API key and secret to access the Kafka cluster. This option is not recommended for production.

    • Service account: This setting limits the access for your connector by using a service account. This option is recommended for production.

    • Use an existing API key: This setting allows you to specify an API key and a secret pair. You can use an existing pair or create a new one. This method is not recommended for production environments.

    Note

    Freight clusters support only service accounts for Kafka authentication.

  2. Click Continue.

  1. Add the Zendesk authentication details:

    • Zendesk Service URL: The URL where the connector gets Zendesk source data. For example, https://<sub-domain>.zendesk.com

    • Endpoint Authentication type: Choose either basic or bearer for the authentication type. For more information, see OAuth tokens in the Zendesk docs.

  2. Click Continue.

  • Zendesk tables: The Zendesk tables the connector exports and writes to Kafka. To balance the load between workers, order the tables by their expected size or throughput requirement. For the list of supported tables, see Supported tables.

  • Topic Name Pattern: The pattern to use for the topic name, where the ${entityName} literal is replaced with each entity name. If the pattern does not include ${entityName}, the connector writes all records to the single topic named by the pattern; the default pattern is ZD_${entityName}. A valid topic pattern should follow the regex [a-zA-Z0-9\\.\\-\\_]*(\\$\\{entityName\\})?[a-zA-Z0-9\\.\\-\\_]*. See the example after this list.

  • Zendesk start time (ISO 8601): Rows updated after the time entered are processed by the connector. The value should be formatted using the ISO 8601 format yyyy-MM-dd'T'HH:mm:SS. If left blank, the default time is set to the time the connector is launched minus one minute.
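
For example (illustrative values): if you select the tables tickets and users, the pattern prod.${entityName} produces the topics prod.tickets and prod.users, while the default pattern ZD_${entityName} produces ZD_tickets and ZD_users.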

Output messages

  • Select output record value format: Select the output record value format (data going to the Kafka topic). Valid values are AVRO, JSON, JSON_SR (JSON Schema), or PROTOBUF. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON Schema, or Protobuf). For additional information, see Schema Registry Enabled Environments.

Note

For Schema Registry-based output formats, the connector attempts to deduce the schema based on the source API response returned. The connector registers a new schema for every NULL and NOT NULL value of an optional field in the API response. For this reason, the connector may register schema versions at a much higher rate than expected.

Show advanced configurations
  • Schema context: Select a schema context to use for this connector, if using a schema-based data format. This property defaults to the Default context, which configures the connector to use the default schema set up for Schema Registry in your Confluent Cloud environment. A schema context allows you to use separate schemas (like schema sub-registries) tied to topics in different Kafka clusters that share the same Schema Registry environment. For example, if you select a non-default context, a Source connector uses only that schema context to register a schema and a Sink connector uses only that schema context to read from. For more information about setting up a schema context, see What are schema contexts and when should you use them?.

  • Maximum Batch Size: The maximum number of records to return and write to Kafka at one time.

  • Maximum In Flight Requests: The maximum number of requests that can be in-flight at once.

  • Maximum Poll Interval (ms): The time in milliseconds between requests to fetch changed or updated entities.

  • Request Interval (ms): The time in milliseconds to wait before checking for updated records.

  • Maximum Retries: The maximum number of times to retry on errors before failing the task.

  • Retry Backoff (ms): The time in milliseconds to wait after an error before a retry attempt is made.

Auto-restart policy

  • Enable Connector Auto-restart: Control the auto-restart behavior of the connector and its task in the event of user-actionable errors. Defaults to true, enabling the connector to automatically restart in case of user-actionable errors. Set this property to false to disable auto-restart for failed connectors. In such cases, you would need to manually restart the connector.

Additional Configs

  • Value Converter Decimal Format: Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals: BASE64 to serialize DECIMAL logical types as base64 encoded binary data and NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Key Converter Schema ID Serializer: The class name of the schema ID serializer for keys. This is used to serialize schema IDs in the message headers.

  • Value Converter Reference Subject Name Strategy: Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Value Converter Connect Meta Data: Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Value Converter Value Subject Name Strategy: Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Key Converter Key Subject Name Strategy: How to construct the subject name for key schema registration.

  • Value Converter Schema ID Serializer: The class name of the schema ID serializer for values. This is used to serialize schema IDs in the message headers.

Transforms

Processing position

  • Set offsets: Click Set offsets to define a specific offset for this connector to begin processing data from. For more information on managing offsets, see Manage custom offsets.

For all property values and definitions, see Configuration Properties.

  • Click Continue.

Based on the number of topic partitions you select, you will be provided with a recommended number of tasks.

  1. To change the number of tasks, use the Range Slider to select the desired number of tasks.

  2. Click Continue.

  1. Verify the connection details by previewing the running configuration.

  2. After you’ve validated that the properties are configured to your satisfaction, click Launch.

    The status for the connector should go from Provisioning to Running.

Step 5: Check for records

Verify that records are being produced at the Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties. See Configuration Properties for additional configuration property values and descriptions.

{
  "connector.class": "ZendeskSource",
  "name": "ZendeskSource_0",
  "kafka.auth.mode": "KAFKA_API_KEY",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "zendesk.url": "https://<sub-domain>.zendesk.com",
  "zendesk.tables": "tickets, groups, users",
  "zendesk.user": "<username>",
  "zendesk.password": "*********************************",
  "output.data.format": "AVRO",
  "tasks.max": "1",
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.

  • "name": Sets a name for your new connector.

  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • Enter the Zendesk connection details.

    • "zendesk.url": The URL where the connector gets Zendesk source data. For example, https://<sub-domain>.zendesk.com``.

    • "zendesk.tables": A comma-separated list of Zendesk tables the connector exports and writes to Kafka. To balance the load between workers, order the tables by their expected size or throughput requirement. For the list of supported tables, see Supported tables.

  • Enter the authentication details. The example shows the default basic authentication properties "zendesk.user" and "zendesk.password". Alternatively, you can use the properties "zendesk.auth.type": "bearer" and "bearer.token": "<token-string>" to authenticate. The bearer token is a single string sent in the HTTP Authorization header.

  • "output.data.format": Enter an output data format (data going to the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.

    Note

    For Schema Registry-based output formats, the connector attempts to deduce the schema based on the source API response returned. The connector registers a new schema for every NULL and NOT NULL value of an optional field in the API response. For this reason, the connector may register schema versions at a much higher rate than expected.

  • "tasks.max": Enter the number of tasks to use with the connector. Only one task per connector is supported.

  • Transforms and Predicates: See the Single Message Transforms (SMT) documentation for details.

See Configuration Properties for all property values and descriptions.

Step 4: Load the properties file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file zendesk-source-config.json

Example output:

Created connector ZendeskSource_0 lcc-do6vzd

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

     ID     |      Name       | Status  |  Type  | Trace
+-----------+-----------------+---------+--------+-------+
 lcc-do6vzd | ZendeskSource_0 | RUNNING | source |       |

Step 6: Check for records

Verify that records are being produced at the Kafka topic.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

Note

These are properties for the fully-managed cloud connector. If you are installing the connector locally for Confluent Platform, see Zendesk Source Connector for Confluent Platform.

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string

  • Valid Values: A string at most 64 characters long

  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string

  • Valid Values: SERVICE_ACCOUNT, KAFKA_API_KEY

  • Importance: high

kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

kafka.service.account.id

The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string

  • Importance: high

kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password

  • Importance: high

How do you want to name your topic(s)?

topic.name.pattern

The pattern to use for the topic name, where the ${entityName} literal will be replaced with each entity name. If ${entityName} is not specified, all the records will be written to a single topic. A valid topic pattern should follow the regex [a-zA-Z0-9\.\-\_]*(\$\{entityName\})?[a-zA-Z0-9\.\-\_]*.

  • Type: string

  • Default: ZD_${entityName}

  • Valid Values: Must match the regex [a-zA-Z0-9\.\-\_]*(\$\{entityName\})?[a-zA-Z0-9\.\-\_]*

  • Importance: high

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string

  • Default: default

  • Importance: medium

How should we connect to Zendesk?

zendesk.url

The Zendesk service URL that the connector will connect to.

  • Type: string

  • Importance: high

zendesk.auth.type

Authentication type of the endpoint. Valid values are basic and bearer.

  • Type: string

  • Default: basic

  • Valid Values: basic, bearer

  • Importance: high

zendesk.tables

The Zendesk tables to be exported and written to Kafka. To achieve a reasonable load balance between workers, order the tables by their expected size or throughput.

  • Type: list

  • Importance: high

zendesk.since

Rows updated after this time will be processed by the connector. If left blank, the default time is set to the time this connector is launched minus one minute. The value should be formatted as ISO 8601, for example yyyy-MM-dd'T'HH:mm:SS.

  • Type: string

  • Importance: medium
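
A sketch for generating a zendesk.since value with GNU date, starting one hour in the past (the one-hour offset is illustrative):

date -u -d "1 hour ago" +"%Y-%m-%dT%H:%M:%S"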

Authorization: Basic

zendesk.user

The username to be used with an endpoint requiring authentication.

  • Type: string

  • Importance: high

zendesk.password

The password to be used with an endpoint requiring authentication.

  • Type: password

  • Importance: high

Authorization: Bearer

bearer.token

The bearer authentication token to be used when zendesk.auth.type=bearer. The supplied token will be used as the value of the Authorization header in HTTP requests.

  • Type: password

  • Importance: high

Connection details

max.batch.size

The maximum number of records that should be returned and written to Kafka at one time.

  • Type: int

  • Default: 100

  • Importance: low

max.in.flight.requests

The maximum number of requests that may be in-flight at once.

  • Type: int

  • Default: 10

  • Importance: low

max.poll.interval.ms

The time in milliseconds between requests to fetch changed or updated entities.

  • Type: long

  • Default: 3000 (3 seconds)

  • Importance: low

request.interval.ms

The time in milliseconds to wait before checking for updated records.

  • Type: long

  • Default: 15000 (15 seconds)

  • Importance: low

max.retries

The maximum number of times to retry on errors before failing the task.

  • Type: int

  • Default: 10

  • Importance: low

retry.backoff.ms

The time in milliseconds to wait following an error before a retry attempt is made.

  • Type: long

  • Default: 3000 (3 seconds)

  • Importance: low

Output messages

output.data.format

Sets the output Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, or PROTOBUF.

  • Type: string

  • Default: JSON

  • Importance: high

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int

  • Valid Values: [1,…]

  • Importance: high

Auto-restart policy

auto.restart.on.user.error

Enable connector to automatically restart on user-actionable errors.

  • Type: boolean

  • Default: true

  • Importance: medium

Additional Configs

header.converter

The converter class for the headers. This is used to serialize and deserialize the headers of the messages.

  • Type: string

  • Importance: low

producer.override.compression.type

The compression type for all data generated by the producer. Valid values are none, gzip, snappy, lz4, and zstd.

  • Type: string

  • Importance: low

value.converter.allow.optional.map.keys

Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.auto.register.schemas

Specify if the Serializer should attempt to register the Schema.

  • Type: boolean

  • Importance: low

value.converter.connect.meta.data

Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.avro.schema.support

Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.

  • Type: boolean

  • Importance: low

value.converter.enhanced.protobuf.schema.support

Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.flatten.unions

Whether to flatten unions (oneofs). Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.index.for.unions

Whether to generate an index suffix for unions. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.generate.struct.for.nulls

Whether to generate a struct variable for null values. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.int.for.enums

Whether to represent enums as integers. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.latest.compatibility.strict

Verify latest subject version is backward compatible when use.latest.version is true.

  • Type: boolean

  • Importance: low

value.converter.object.additional.properties

Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.nullables

Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.optional.for.proto2

Whether proto2 optionals are supported. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.use.latest.version

Use latest version of schema in subject for serialization when auto.register.schemas is false.

  • Type: boolean

  • Importance: low

value.converter.use.optional.for.nonrequired

Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.

  • Type: boolean

  • Importance: low

value.converter.wrapper.for.nullables

Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

value.converter.wrapper.for.raw.primitives

Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.

  • Type: boolean

  • Importance: low

key.converter.key.schema.id.serializer

The class name of the schema ID serializer for keys. This is used to serialize schema IDs in the message headers.

  • Type: string

  • Default: io.confluent.kafka.serializers.schema.id.PrefixSchemaIdSerializer

  • Importance: low

key.converter.key.subject.name.strategy

How to construct the subject name for key schema registration.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

value.converter.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

  • BASE64 to serialize DECIMAL logical types as base64 encoded binary data

  • NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value

  • Type: string

  • Default: BASE64

  • Importance: low

value.converter.flatten.singleton.unions

Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.

  • Type: boolean

  • Default: false

  • Importance: low

value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string

  • Default: DefaultReferenceSubjectNameStrategy

  • Importance: low

value.converter.value.schema.id.serializer

The class name of the schema ID serializer for values. This is used to serialize schema IDs in the message headers.

  • Type: string

  • Default: io.confluent.kafka.serializers.schema.id.PrefixSchemaIdSerializer

  • Importance: low

value.converter.value.subject.name.strategy

Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Type: string

  • Default: TopicNameStrategy

  • Importance: low

Frequently asked questions

Find answers to frequently asked questions about the Zendesk Source connector for Confluent Cloud.

Authentication and OAuth

Why do I get 401 Unauthorized errors with OAuth authentication?

This error occurs when OAuth token authentication fails with the Zendesk API or Confluent Cloud Schema Registry. Common causes include:

  • Token invalidation: The OAuth/OIDC token expired or was invalidated.

  • Token size limits: The authorization header exceeds the maximum allowed size (approximately 8 KB for Confluent Cloud Schema Registry).

  • Rate limiting on token exchange: Concurrent token exchanges may interfere with each other on the STS token exchange endpoint.

Resolution:

  1. Verify token validity: Ensure your OAuth tokens are valid and not expired.

  2. Check header size: If using OIDC tokens, verify the Authorization header does not exceed size limits.

  3. Use API key authentication temporarily: For initial data imports, consider using API key authentication (basic authentication with zendesk.user and zendesk.password), then switch to OAuth (bearer authentication) for ongoing operations.

  4. Configure retry settings: Ensure the connector has appropriate retry configurations for transient authentication failures.

Example configuration for basic authentication:

{
  "zendesk.auth.type": "basic",
  "zendesk.user": "<username>",
  "zendesk.password": "<password>"
}

Example configuration for bearer authentication:

{
  "zendesk.auth.type": "bearer",
  "bearer.token": "<your-oauth-token>"
}
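
To check a token outside the connector, you can call the Zendesk Support API directly. A minimal curl sketch (replace the subdomain; /api/v2/users/me.json returns the authenticated user, so a valid token yields your user record rather than a 401):

curl -s -H "Authorization: Bearer $ZENDESK_OAUTH_TOKEN" \
  "https://<sub-domain>.zendesk.com/api/v2/users/me.json"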

Rate limiting and API throttling

Why am I seeing HTTP 429 Too Many Requests errors?

This error indicates that the connector has exceeded the Zendesk API rate limits. Rate limits are enforced by Zendesk to prevent API abuse and ensure fair usage.

Common causes:

  • High request volume: The connector is making too many API requests in a short time period.

  • Multiple connectors: Multiple connectors are accessing the same Zendesk account simultaneously.

  • Zendesk account tier limits: Different Zendesk account types have different rate limits.

Resolution:

  1. Reduce polling frequency: Increase the connector's polling and request intervals (max.poll.interval.ms and request.interval.ms) to reduce the number of API calls.

  2. Monitor Zendesk API usage: Check your Zendesk account’s API usage dashboard to understand current rate limit consumption.

  3. Implement backoff strategy: The connector automatically retries with exponential backoff. Ensure retry configurations are appropriate.

  4. Contact Zendesk support: If rate limits are consistently exceeded, contact Zendesk to discuss your account’s rate limit allocation.

  5. Stagger connector tasks: If running multiple connectors, consider staggering their polling schedules to distribute API load.

The connector resumes data syncing automatically once the API becomes available again.

Why does the connector fail with HTTP 429 errors for Schema Registry?

This error occurs when the connector exceeds Confluent Cloud Schema Registry rate limits, causing intermittent failures.

Resolution:

  1. Monitor Schema Registry usage: Check your Confluent Cloud Schema Registry usage in the Confluent Cloud Console.

  2. Upgrade cluster type: Consider upgrading to a Dedicated cluster, where certain limits (such as Schema Registry requests) scale automatically with the number of CKUs.

  3. Reduce schema operations: Minimize schema updates and registrations by ensuring schemas are stable before production use.

For more information about cluster types and limits, see Schema Registry Enabled Environments.

Data synchronization and table selection

Why are some Zendesk tables not syncing data to Kafka?

This can occur for several reasons related to table configuration and Zendesk account permissions.

Common causes:

  • Table not included in configuration: The table is not listed in the zendesk.tables property.

  • Zendesk account restrictions: Some tables (such as custom_roles) require an Enterprise Zendesk account.

  • Feature not enabled: Certain tables (such as satisfaction_ratings) can only be exported if the corresponding feature is enabled in your Zendesk account.

  • API permissions: The API user does not have permissions to access the table.

Resolution:

  1. Verify table configuration: Check that the table is included in the zendesk.tables comma-separated list. For supported tables, see Supported tables.

  2. Check Zendesk account type: Verify your Zendesk account tier supports the table. For example, custom_roles requires an Enterprise account. See Custom Agent Roles in the Zendesk documentation.

  3. Enable required features: For tables like satisfaction_ratings, ensure the feature is enabled in your Zendesk admin settings. See Support API: Satisfaction Ratings in the Zendesk documentation.

  4. Review API user permissions: Ensure the API user has appropriate read permissions for all requested tables.

How do I handle missing or incomplete data in Kafka topics?

Missing or incomplete data can result from configuration issues, API errors, or offset management problems.

Troubleshooting checklist:

  1. Check connector status: Verify the connector is in RUNNING state without errors.

  2. Review connector logs: Look for error messages, API failures, or authentication issues in the connector logs.

  3. Verify offset positions: Use the offset management API to check current offset positions and ensure they are progressing.

  4. Check for API errors: Review the error topic error-lcc-<connector-id> for failed records.

  5. Monitor Zendesk API health: Verify Zendesk API is operational and not experiencing outages.

  6. Validate table configuration: Ensure all required tables are listed in zendesk.tables.

If data is consistently missing, consider resetting offsets to re-sync from a specific point in time. See Manage custom offsets for details.

Offset management

How do I reset offsets to re-sync data from a specific time?

You can use the offset management API to reset offsets and re-sync data from a specific point in time.

Steps:

  1. Stop the connector: Pause or stop the connector to prevent offset updates during the reset.

  2. Determine the target timestamp: Identify the UNIX timestamp (in seconds) from which you want to re-sync data. For example, 1712559408 represents April 8, 2024.

  3. Update offsets: Use the PATCH operation to update offsets for the relevant tables. See Manage custom offsets for the complete API reference.

Example offset reset for the tickets table:

POST /connect/v1/environments/{environment_id}/clusters/{kafka_cluster_id}/connectors/{connector_name}/offsets/request
Host: https://api.confluent.cloud

{
  "type": "PATCH",
  "offsets": [
    {
      "partition": {
        "name": "tickets"
      },
      "offset": {
        "updated_at": 1712559408
      }
    }
  ]
}

  4. Verify the update: Use the status API to confirm the offset update was applied successfully.

  5. Resume the connector: Restart the connector to begin syncing from the new offset position.

Warning

Resetting offsets may cause duplicate records in Kafka if the data has already been synced. Ensure downstream applications can handle duplicates or use idempotent processing.

What do the created_at and updated_at offset fields mean?

The Zendesk Source connector uses these timestamp fields to track the last synced position for each table:

  • created_at: Tracks the creation timestamp of the last synced record. Used for tables that support filtering by creation time.

  • updated_at: Tracks the last update timestamp of the last synced record. Used for tables that support filtering by update time.

Different Zendesk tables use different offset fields based on the Zendesk API’s filtering capabilities:

  • Tables like tickets and users use updated_at because the Zendesk API supports incremental updates based on modification time.

  • Tables like ticket_audits use created_at because they are append-only and filter by creation time.

For the complete list of offset fields for each table, see Manage custom offsets.

Configuration and connectivity

What authentication types does the Zendesk connector support?

The Zendesk Source connector supports the following authentication types:

  • Basic authentication (basic): Uses username and password or username and API token. Configure with zendesk.user and zendesk.password.

  • Bearer authentication (bearer): Uses a bearer token. Configure with bearer.token.

  • OAuth2 authentication (oauth2): Uses OAuth 2.0 with Client Credentials grant type. Configure with oauth2.token.url, oauth2.client.id, and oauth2.client.secret.

  • No authentication (none): No authentication required.

Why does the connector fail with connection timeout or DNS errors?

Connection failures typically indicate network connectivity issues between Confluent Cloud and your Zendesk instance.

Common causes:

  • Incorrect Zendesk URL: The zendesk.url is malformed or incorrect.

  • DNS resolution failure: The Zendesk subdomain cannot be resolved.

  • Network restrictions: Firewall rules or network policies block Confluent Cloud egress traffic.

  • Private network configuration: For PRIVATE_LINK or PCC clusters, outbound traffic may not be configured.

Resolution:

  1. Verify Zendesk URL: Ensure the zendesk.url is correct and uses the format https://<subdomain>.zendesk.com.

  2. Test connectivity: Use external tools (such as curl or ping) to verify the Zendesk URL is accessible from the internet.

  3. Check cluster network type: Confirm your Confluent Cloud cluster’s network configuration. For PRIVATE_LINK or PCC clusters, ensure network connectivity to Zendesk is properly configured.

  4. Review firewall rules: If using IP allowlisting, ensure Confluent Cloud egress IPs are permitted.
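
For step 2, a quick reachability check from any internet-connected host might look like this (a sketch; replace the subdomain):

curl -sv -o /dev/null "https://<sub-domain>.zendesk.com/api/v2/users/me.json"

The verbose output shows whether DNS resolution and the TLS handshake succeed.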

For more information about network configuration, see Prerequisites.

Performance and tasks

Why is the connector slow or taking a long time to sync data?

Slow data synchronization can result from several factors related to Zendesk API performance, data volume, and connector configuration.

Common causes:

  • Large data volume: Syncing large Zendesk tables with millions of records takes time, especially on initial sync.

  • Zendesk API rate limits: The connector is being throttled by Zendesk API rate limits.

  • Single task limitation: The Zendesk Source connector supports only one task (tasks.max=1), limiting parallelism.

  • Network latency: High latency between Confluent Cloud and Zendesk API endpoints.

Resolution:

  1. Monitor API rate limits: Check if the connector is being rate-limited by reviewing connector logs for HTTP 429 errors.

  2. Optimize table selection: Only sync tables you need by configuring zendesk.tables with the minimal required set.

  3. Order tables by size: In zendesk.tables, list smaller tables first to balance load and see faster results.

  4. Set an appropriate start time: Configure zendesk.since so the connector syncs only the history you need (if left blank, it defaults to the connector launch time minus one minute).

  5. Be patient on initial sync: The first sync of large tables can take hours or days. Subsequent incremental syncs will be much faster.

Note

The Zendesk Source connector supports only one task per connector (tasks.max=1). Increasing this value will not improve performance.

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud for Apache Flink, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
