Jira Source Connector for Confluent Platform

The Kafka Connect Jira Source Connector is used to move data from Jira to Apache Kafka® topics. This connector polls data from Jira through Jira v2 APIs, converts data into Kafka records, and pushes the records into a Kafka topic. Each row from Jira tables is converted into exactly one Kafka record.

Features

The Jira Source connector offers the following features:

  • At Least Once Delivery: The connector guarantees that records from Jira are delivered to the Kafka topic at least once. If the connector restarts, there may be duplicate records in the Kafka topic.
  • Supports HTTPS Proxy: The connector can connect to Jira through an HTTPS proxy server. To configure the proxy, set http.proxy.host, http.proxy.port, http.proxy.user, and http.proxy.password in the connector configuration, as shown in the example below. The connector has been tested with an HTTPS proxy using basic authentication.
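
A minimal sketch of the proxy-related properties, assuming a proxy listening at proxy.example.com:8080 (host, port, and credentials are placeholders):

http.proxy.host=proxy.example.com
http.proxy.port=8080
http.proxy.user=<Your-Proxy-Username>
http.proxy.password=<Your-Proxy-Password>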

Limitations

  • Resources that do not support fetching records by datetime are fetched repeatedly, at the interval specified by the request.interval.ms configuration property, which results in duplicate records (see the example after this list).
  • The connector cannot detect data deletion in Jira.
  • The connector does not guarantee accurate record order in the Kafka topic.
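
For such resources you can reduce the amount of duplication by polling less often. A minimal sketch, where the value is a placeholder for a ten-minute interval in milliseconds:

request.interval.ms=600000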

Jira Tables

The connector supports fetching the following resources (refer to the corresponding schema for each):

  • changelogs: Changelogs for an issue.
  • issue_comments: Comments for an issue.
  • issue_transitions: All transitions for an issue.
  • issues: Issues in all states.
  • project_categories: All project categories.
  • project_types: All project types.
  • projects: All projects.
  • resolutions: Resolutions for issues.
  • roles: All project roles.
  • users: Users in active and inactive states.
  • versions: Project versions for a project.
  • worklogs: All worklogs for an issue.
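
The resources to fetch are selected with the jira.tables property used in the quick start below. A minimal sketch, assuming the property accepts a comma-separated list of resource names:

jira.tables=issues,issue_comments,projects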

Prerequisites

The following are required to run the Kafka Connect Jira Source Connector:

  • Kafka Broker: Confluent Platform 3.3.0 or above.
  • Connect: Confluent Platform 4.1.0 or above.
  • Java 1.8
  • Jira: No additional setup is required on the Jira account for this connector to work, other than an access token with user privileges.

Install the Jira Source Connector

You can install this connector by using the Confluent Hub client (recommended) or you can manually download the ZIP file.

Install the connector using Confluent Hub

Prerequisite
Confluent Hub Client must be installed. This is installed by default with Confluent Enterprise.

Navigate to your Confluent Platform installation directory and run the following command to install the latest (latest) connector version. The connector must be installed on every machine where Connect will run.

confluent-hub install confluentinc/kafka-connect-jira:latest

You can install a specific version by replacing latest with a version number. For example:

confluent-hub install confluentinc/kafka-connect-jira:1.0.0-preview

Install the connector manually

Download and extract the ZIP file for your connector and then follow the manual connector installation instructions.

License

You can use this connector for a 30-day trial period without a license key.

After 30 days, this connector is available under a Confluent enterprise license. Confluent issues enterprise license keys to subscribers, along with providing enterprise-level support for Confluent Platform and your connectors. If you are a subscriber, please contact Confluent Support at support@confluent.io for more information.

See Confluent Platform license for license properties and License topic configuration for information about the license topic.
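
For reference, a minimal sketch of how the license key is supplied once you have one (the value is a placeholder; during the 30-day trial the property can simply be omitted):

confluent.license=<your-license-key>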

Configuration Properties

For a complete list of configuration properties for this connector, see Jira Source Connector Configuration Properties.

Note

For an example of how to get Kafka Connect connected to Confluent Cloud, see Distributed Cluster in Connect Kafka Connect to Confluent Cloud.

Quick Start

In this quickstart, you will configure the Jira Source Connector to copy data from Jira to a Kafka topic.

Start Confluent

Start the Confluent services using the following Confluent CLI command:

confluent local start

Important

Do not use the Confluent CLI in production environments.

Property-based example

Configure the jira-source-quickstart.properties file with the following properties:

name=MyJiraConnector
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1
tasks.max=1
connector.class=io.confluent.connect.jira.JiraSourceConnector
jira.url=<Your-Jira-URL>
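# jira.since: records created or updated after this datetime are processed (format: yyyy-MM-dd HH:mm)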
jira.since=2019-10-17 23:50
jira.username=<Your-Jira-Username>
jira.api.token=<Your-Jira-Access-Token>
jira.tables=roles
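# ${entityName} in topic.name.pattern resolves to the resource name, so records for the roles table land in jira-topic-roles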
topic.name.pattern=jira-topic-${entityName}
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

Next, load the Source Connector.

Tip

Before starting the connector, verify that the properties in etc/kafka-connect-jira/jira-source-quickstart.properties are properly set.

Caution

You must include a double dash (--) between the connector name and your flag. For more information, see this post.

./bin/confluent local load MyJiraConnector -- -d ./etc/kafka-connect-jira/jira-source-quickstart.properties

Your output should resemble the following:

{
     "name": "MyJiraConnector",
     "config": {
        "confluent.topic.bootstrap.servers": "localhost:9092",
        "confluent.topic.replication.factor": "1",
        "tasks.max": "1",
        "connector.class": "io.confluent.connect.jira.JiraSourceConnector",
        "jira.url": "<Your-Jira-URL>",
        "jira.since": "2019-10-17 23:50",
        "jira.username": "< Your-Jira-Username >",
        "jira.api.token": "< Your-Jira-Access-Token >",
        "jira.tables": "roles",
        "topic.name.pattern":"jira-topic-${entityName}",
        "key.converter":"io.confluent.connect.avro.AvroConverter",
        "key.converter.schema.registry.url":"http://localhost:8081",
        "value.converter":"io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url":"http://localhost:8081"
        "name": "MyJiraConnector"
     },
     "tasks": [],
     "type": "source"
}

Enter the following command to confirm that the connector is in a RUNNING state:

confluent local status MyJiraConnector

The output should resemble the example below:

{
   "name":"MyJiraConnector",
   "connector":{
      "state":"RUNNING",
      "worker_id":"127.0.1.1:8083"
   },
   "tasks":[
      {
         "id":0,
         "state":"RUNNING",
         "worker_id":"127.0.1.1:8083"
      }
   ],
   "type":"source"
}

REST-based example

Use this configuration with distributed workers. Write the following JSON to config.json, configure all of the required values, and use the command below to post the configuration to one of the distributed Connect workers. See the Kafka Connect REST API documentation for more information.

{
    "name": "MyJiraConnector",
    "config":
    {
        "connector.class": "io.confluent.connect.jira.JiraSourceConnector",
        "confluent.topic.bootstrap.servers": "localhost:9092",
        "confluent.topic.replication.factor": "1",
        "tasks.max": "1",
        "jira.url":"< Your-Jira-URL >",
        "jira.since": "2019-12-26 12:36",
        "jira.username":"< Your-Jira-Username >",
        "jira.api.token":"< Your-Jira-Access-Token >",
        "jira.tables":"roles",
        "topic.name.pattern":"jira-topic-${entityName}",
        "key.converter":"io.confluent.connect.avro.AvroConverter",
        "key.converter.schema.registry.url":"http://localhost:8081",
        "value.converter":"io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url":"http://localhost:8081"
    }
}

Note

Change the confluent.topic.bootstrap.servers property to include your broker address(es), and change the confluent.topic.replication.factor to 3 for staging or production use.
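
For example, in config.json those two properties might look like the following for a three-broker staging or production cluster (broker addresses are placeholders):

        "confluent.topic.bootstrap.servers": "broker-1:9092,broker-2:9092,broker-3:9092",
        "confluent.topic.replication.factor": "3",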

Use curl to post a configuration to one of the Connect workers. Change http://localhost:8083/ to the endpoint of one of your Connect worker(s).

curl -sS -X POST -H 'Content-Type: application/json' --data @config.json http://localhost:8083/connectors

Enter the following command to confirm that the connector is in a RUNNING state:

curl http://localhost:8083/connectors/MyJiraConnector/status

The output should resemble the example below:

{
   "name":"MyJiraConnector",
   "connector":{
      "state":"RUNNING",
      "worker_id":"127.0.1.1:8083"
   },
   "tasks":[
      {
         "id":0,
         "state":"RUNNING",
         "worker_id":"127.0.1.1:8083"
      }
   ],
   "type":"source"
}

Enter the following command to consume records written by the connector to the Kafka topic:

./bin/kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic jira-topic-roles --from-beginning

The output should resemble the example below:

{
    "type":"roles",
    "data":{
       "self":"<Your-Jira-URL>/rest/api/2/role/10100",
       "name":"Project_Name",
       "id":10111,
       "description":"A test role added to the project",
       "scope":null,
       "actors":{
          "array":[
             {
                "id":10012,
                "displayName":"Jira_Actor_Name",
                "type":"user-role-actor",
                "actorUser":{
                   "accountId":"101"
                }
             }
          ]
       }
    }
}