Jira Source Connector for Confluent Platform¶
The Kafka Connect Jira Source connector is used to move data from Jira to an Apache Kafka® topic. This connector polls data from Jira through Jira v2 APIs, converts data into Kafka records, and pushes the records into a Kafka topic. Each row from Jira tables is converted into exactly one Kafka record. Note that the Jira Source connector does not support Jira on-premises deployments.
Features¶
The Jira Source connector offers the following features:
- At least once delivery
- Supports one task
- HTTPS proxy
- Jira resources
- CSFLE (Client-side Field level encryption)
At least once delivery¶
This connector guarantees that records are delivered to the Kafka topic at least once. If the connector restarts, there may be some duplicate records in the Kafka topic.
Supports one task¶
The Jira Source connector supports running one task; one table is covered by one task.
HTTPS proxy¶
The connector can connect to Jira using an HTTPS proxy server. To configure the proxy, set `http.proxy.host`, `http.proxy.port`, `http.proxy.user`, and `http.proxy.password` in the configuration file. The connector has been tested with an HTTPS proxy using basic authentication.
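For example, a proxy configuration fragment might look like the following (the host, port, and credentials are placeholders):

```
http.proxy.host=proxy.example.com
http.proxy.port=8080
http.proxy.user=proxy-user
http.proxy.password=proxy-password
```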
Jira resources¶
The connector supports fetching the following resources:
- `changelogs`: Changelogs for an issue; refer to the following schema.
- `issue_comments`: Comments for an issue; refer to the following schema.
- `issue_transitions`: All transitions for an issue; refer to the following schema.
- `issues`: Issues in all states; refer to the following schema.
- `project_categories`: All project categories; refer to the following schema.
- `project_types`: All project types; refer to the following schema.
- `projects`: All projects; refer to the following schema.
- `resolutions`: Resolutions for issues; refer to the following schema.
- `roles`: All project roles; refer to the following schema.
- `users`: Users in active and inactive states; refer to the following schema.
- `versions`: Project versions for a project; refer to the following schema.
- `worklogs`: All worklogs for an issue; refer to the following schema.
Task Distribution¶
The Jira Source connector groups resources to process related data within a single task. The `tasks.max` configuration defines the maximum number of tasks, but the connector might use fewer, depending on the resources configured in `jira.resources`.
The connector uses the following grouping logic:
- Issue Grouping: When you include `issues` in your configuration, it forms the basis of a task. Any of its dependent resources that are also specified in `jira.resources` will be processed in the same task. The dependent resources for `issues` are: `changelogs`, `issue_comments`, `issue_transitions`, `resolutions`, and `worklogs`.
- Project Grouping: When you include `projects`, it forms a separate task group. The dependent resource for `projects` is `versions`.
- Other Resources: Any other resources not part of these groups (for example, `users` or `project_categories`) are distributed among the remaining available tasks.
Example 1: Grouping with Dependents
Given the following configuration, where dependents are included:
"tasks.max": "4"
"jira.resources": "issues, resolutions, versions, worklogs, projects, project_categories"
Even though `tasks.max` is set to 4, the connector only creates three tasks because there are only three resource groups to process. The distribution is as follows:

- Task 1: Processes `projects` and its requested dependent, `versions`.
- Task 2: Processes `issues` and its requested dependents, `resolutions` and `worklogs`.
- Task 3: Processes the remaining resource, `project_categories`.
Example 2: Grouping without Dependents
Given a configuration where only primary resources are requested:
"tasks.max": "2"
"jira.resources": "issues, projects"
The dependents are not included because they were not specified in `jira.resources`. The distribution is:

- Task 1: Processes `issues`.
- Task 2: Processes `projects`.
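The grouping rules above can be sketched in Python. This is an illustrative model only, not the connector's actual implementation, and it omits the `tasks.max` cap on the number of task groups:

```python
# Illustrative model of the Jira Source connector's resource grouping.
# The dependent-resource sets come from the documentation above.
ISSUE_DEPENDENTS = {"changelogs", "issue_comments", "issue_transitions",
                    "resolutions", "worklogs"}
PROJECT_DEPENDENTS = {"versions"}


def group_resources(resources):
    """Group requested resources into task groups, one group per task."""
    remaining = set(resources)
    groups = []
    # Issue grouping: issues plus any of its requested dependents.
    if "issues" in remaining:
        group = {"issues"} | (remaining & ISSUE_DEPENDENTS)
        groups.append(sorted(group))
        remaining -= group
    # Project grouping: projects plus any of its requested dependents.
    if "projects" in remaining:
        group = {"projects"} | (remaining & PROJECT_DEPENDENTS)
        groups.append(sorted(group))
        remaining -= group
    # Other resources each go to a remaining task.
    groups.extend([r] for r in sorted(remaining))
    return groups


# Example 1 from the text: four tasks requested, but only three groups form.
requested = ["issues", "resolutions", "versions", "worklogs",
             "projects", "project_categories"]
print(group_resources(requested))
```

Running this on the Example 1 configuration yields three groups, matching the three tasks described above.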
CSFLE (Client-side Field level encryption)¶
This connector supports the CSFLE functionality. For more information, see Manage CSFLE.
Limitations¶
- Resources that do not support fetching records by date and time will have duplicate records and will be fetched repeatedly at the interval specified by the `request.interval.ms` configuration property.
- The connector cannot detect data deletion in Jira.
- The connector does not guarantee accurate record order in the Kafka topic.
- The timezone of the user (defined in the `jira.username` configuration property) must match the general Jira timezone set by the administrator for this connector.
For Schema Registry-based output formats, the connector tries to deduce the schema from the source API response. The connector registers a new schema for every NULL and NOT NULL value of an optional field in the API response, so it may register schema versions at a much higher rate than expected.
License¶
You can use this connector for a 30-day trial period without a license key.
After 30 days, you must purchase a connector subscription, which includes Confluent enterprise license keys along with enterprise-level support for Confluent Platform and your connectors. If you are a subscriber, you can contact Confluent Support at support@confluent.io for more information.
For license properties, see Confluent Platform license. For information about the license topic, see License topic configuration.
Configuration Properties¶
For a complete list of configuration properties for this connector, see Configuration Reference for Jira Source Connector for Confluent Platform.
For an example of how to get Kafka Connect connected to Confluent Cloud, see Connect Self-Managed Kafka Connect to Confluent Cloud.
Install the Jira Source Connector¶
You can install this connector by using the confluent connect plugin install command, or by manually downloading the ZIP file.
Prerequisites¶
You must install the connector on every machine where Connect will run.
- Kafka Broker: Confluent Platform 3.3.0 or later.
- Connect: Confluent Platform 4.1.0 or later.
- Java 1.8.
- Although no additional setup is required in your Jira account, you must have an access token with user privileges.
- An installation of the latest (`latest`) connector version. To install the `latest` connector version, navigate to your Confluent Platform installation directory and run the following command:

  confluent connect plugin install confluentinc/kafka-connect-jira:latest

  You can install a specific version by replacing `latest` with a version number, as shown in the following example:

  confluent connect plugin install confluentinc/kafka-connect-jira:1.3.0
Install the connector manually¶
Download and extract the ZIP file for your connector and then follow the manual connector installation instructions.
Quick Start¶
In this quick start, you will configure the Jira Source connector to copy data from Jira to a Kafka topic.
Start Confluent¶
Start the Confluent services using the following Confluent CLI command:
confluent local services start
Important
Do not use the Confluent CLI in production environments.
Property-based example¶
Configure the `jira-source-quickstart.properties` file with the following properties:
name=MyJiraConnector
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1
tasks.max=1
connector.class=io.confluent.connect.jira.JiraSourceConnector
jira.url=<Your-Jira-URL>
jira.since=2019-10-17 23:50
jira.username=<Your-Jira-Username>
jira.api.token=<Your-Jira-Access-Token>
jira.tables=roles
topic.name.pattern=jira-topic-${resourceName}
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
Next, load the Source connector.
Tip
Before starting the connector, verify that the properties in `etc/kafka-connect-jira/jira-source-quickstart.properties` are properly set.
Caution
You must include a double dash (`--`) between the topic name and your flag. For more information, see this post.
./bin/confluent local services connect connector load MyJiraConnector --config ./etc/kafka-connect-jira/jira-source-quickstart.properties
Your output should resemble the following:
{
"name": "MyJiraConnector",
"config": {
"confluent.topic.bootstrap.servers": "localhost:9092",
"confluent.topic.replication.factor": "1",
"tasks.max": "1",
"connector.class": "io.confluent.connect.jira.JiraSourceConnector",
"jira.url": "<Your-Jira-URL>",
"jira.since": "2019-10-17 23:50",
"jira.username": "<Your-Jira-Username>",
"jira.api.token": "<Your-Jira-Access-Token>",
"jira.tables": "roles",
"topic.name.pattern":"jira-topic-${resourceName}",
"key.converter":"io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url":"http://localhost:8081",
"value.converter":"io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://localhost:8081",
"name": "MyJiraConnector"
},
"tasks": [],
"type": "source"
}
Enter the following command to confirm that the connector is in a `RUNNING` state:
confluent local services connect connector status MyJiraConnector
The output should resemble the example below:
{
"name":"MyJiraConnector",
"connector":{
"state":"RUNNING",
"worker_id":"127.0.1.1:8083"
},
"tasks":[
{
"id":0,
"state":"RUNNING",
"worker_id":"127.0.1.1:8083"
}
],
"type":"source"
}
REST-based example¶
Use this setting with distributed workers. Write the following JSON to `config.json`, configure all of the required values, and use the following command to post the configuration to one of the distributed Connect workers. For more information, see the Kafka Connect REST API documentation.
{
"name": "MyJiraConnector",
"config":
{
"connector.class": "io.confluent.connect.jira.JiraSourceConnector",
"confluent.topic.bootstrap.servers": "localhost:9092",
"confluent.topic.replication.factor": "1",
"tasks.max": "1",
"jira.url": "<Your-Jira-URL>",
"jira.since": "2019-12-26 12:36",
"jira.username": "<Your-Jira-Username>",
"jira.api.token": "<Your-Jira-Access-Token>",
"jira.tables":"roles",
"topic.name.pattern":"jira-topic-${resourceName}",
"key.converter":"io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url":"http://localhost:8081",
"value.converter":"io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url":"http://localhost:8081"
}
}
Note
Change the `confluent.topic.bootstrap.servers` property to include your broker address(es), and change the `confluent.topic.replication.factor` to `3` for staging or production use.
Use curl to post a configuration to one of the Connect workers. Change `http://localhost:8083/` to the endpoint of one of your Connect workers.
curl -sS -X POST -H 'Content-Type: application/json' --data @config.json http://localhost:8083/connectors
Enter the following command to confirm that the connector is in a `RUNNING` state:
curl http://localhost:8083/connectors/MyJiraConnector/status
The output should resemble the example below:
{
"name":"MyJiraConnector",
"connector":{
"state":"RUNNING",
"worker_id":"127.0.1.1:8083"
},
"tasks":[
{
"id":0,
"state":"RUNNING",
"worker_id":"127.0.1.1:8083"
}
],
"type":"source"
}
Enter the following command to consume records written by the connector to the Kafka topic:
./bin/kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic jira-topic-roles --from-beginning
The output should resemble the example below:
{
"type":"roles",
"data":{
"self":"<Your-Jira-URL>/rest/api/2/role/10100",
"name":"Project_Name",
"id":10111,
"description":"A test role added to the project",
"scope":null,
"actors":{
"array":[
{
"id":10012,
"displayName":"Jira_Actor_Name",
"type":"user-role-actor",
"actorUser":{
"accountId":"101"
}
}
]
}
}
}