Jira Source Connector for Confluent Platform¶
The Kafka Connect Jira Source connector is used to move data from Jira to an Apache Kafka® topic. This connector polls data from Jira through Jira v2 APIs, converts data into Kafka records, and pushes the records into a Kafka topic. Each row from Jira tables is converted into exactly one Kafka record. Note that the Jira Source connector does not support Jira on-premises deployments.
Features¶
The Jira Source connector offers the following features:
At least once delivery¶
This connector guarantees that records are delivered to the Kafka topic at least once. If the connector restarts, there may be some duplicate records in the Kafka topic.
Supports one task¶
The Jira Source connector supports running one task; a single table is covered by one task.
HTTPS proxy¶
The connector can connect to Jira using an HTTPS proxy server. To configure the proxy, set http.proxy.host, http.proxy.port, http.proxy.user, and http.proxy.password in the configuration file. The connector has been tested with an HTTPS proxy using basic authentication.
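For example, a connector configuration that routes traffic through a proxy could include properties like the following (the host, port, and credentials below are placeholders, not real values):

```properties
# Hypothetical proxy settings; replace with your proxy's details
http.proxy.host=proxy.example.com
http.proxy.port=8080
http.proxy.user=proxy-user
http.proxy.password=proxy-password
```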
Jira resources¶
The connector supports fetching the following resources:
- changelogs : Changelogs for an issue; refer to the following schema.
- issue_comments : Comments for an issue; refer to the following schema.
- issue_transitions : All transitions for an issue; refer to the following schema.
- issues : Issues in all states; refer to the following schema.
- project_categories : All project categories; refer to the following schema.
- project_types : All project types; refer to the following schema.
- projects : All projects; refer to the following schema.
- resolutions : Resolutions for issues; refer to the following schema.
- roles : All project roles; refer to the following schema.
- users : Users in active and inactive states; refer to the following schema.
- versions : Project versions for a project; refer to the following schema.
- worklogs : All worklogs for an issue; refer to the following schema.
Limitations¶
- Resources that do not support fetching records by date and time will have duplicate records; they are fetched repeatedly at the interval specified by the request.interval.ms configuration property.
- The connector is not able to detect data deletion in Jira.
- The connector does not guarantee accurate record order in the Kafka topic.
- The timezone of the user (defined in the jira.username configuration property) must match the general Jira timezone set by the administrator for this connector.
- For Schema Registry-based output formats, the connector tries to deduce the schema based on the source API response returned. The connector registers a new schema for every NULL and NOT NULL value of an optional field in the API response. For this reason, the connector may register schema versions at a much higher rate than expected.
License¶
You can use this connector for a 30-day trial period without a license key.
After 30 days, you must purchase a connector subscription, which includes Confluent enterprise license keys along with enterprise-level support for Confluent Platform and your connectors. If you are a subscriber, you can contact Confluent Support at support@confluent.io for more information.
For license properties, see Confluent Platform license. For information about the license topic, see License topic configuration.
Configuration Properties¶
For a complete list of configuration properties for this connector, see Configuration Reference for Jira Source Connector for Confluent Platform.
For an example of how to get Kafka Connect connected to Confluent Cloud, see Connect Self-Managed Kafka Connect to Confluent Cloud.
Install the Jira Source Connector¶
You can install this connector by using the confluent connect plugin install command, or by manually downloading the ZIP file.
Prerequisites¶
You must install the connector on every machine where Connect will run.
Kafka Broker: Confluent Platform 3.3.0 or later.
Connect: Confluent Platform 4.1.0 or later.
Java 1.8.
Although no additional setup is required in your Jira account, you must have an access token with user privileges.
An installation of the latest (latest) connector version.
To install the latest connector version, navigate to your Confluent Platform installation directory and run the following command:
confluent connect plugin install confluentinc/kafka-connect-jira:latest
You can install a specific version by replacing latest with a version number, as shown in the following example:
confluent connect plugin install confluentinc/kafka-connect-jira:1.2.4
Install the connector manually¶
Download and extract the ZIP file for your connector and then follow the manual connector installation instructions.
Quick Start¶
In this quick start, you will configure the Jira Source connector to copy data from Jira to a Kafka topic.
Start Confluent¶
Start the Confluent services using the following Confluent CLI command:
confluent local services start
Important
Do not use the Confluent CLI in production environments.
Property-based example¶
Configure the jira-source-quickstart.properties file with the following properties:
name=MyJiraConnector
confluent.topic.bootstrap.servers=localhost:9092
confluent.topic.replication.factor=1
tasks.max=1
connector.class=io.confluent.connect.jira.JiraSourceConnector
jira.url=<Your-Jira-URL>
jira.since=2019-10-17 23:50
jira.username=<Your-Jira-Username>
jira.api.token=<Your-Jira-Access-Token>
jira.tables=roles
topic.name.pattern=jira-topic-${resourceName}
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
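With jira.tables=roles and topic.name.pattern=jira-topic-${resourceName}, records for the roles resource land in a topic named jira-topic-roles. The substitution can be sketched with Python's string.Template; this is an illustration of the naming behavior, not the connector's actual implementation:

```python
from string import Template

def resolve_topic(pattern: str, resource_name: str) -> str:
    """Substitute ${resourceName} in the configured topic name pattern."""
    return Template(pattern).substitute(resourceName=resource_name)

# With jira.tables=roles, records are written to jira-topic-roles
print(resolve_topic("jira-topic-${resourceName}", "roles"))  # → jira-topic-roles
```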
Next, load the Source connector.
Tip
Before starting the connector, verify that the properties in etc/kafka-connect-jira/jira-source-quickstart.properties
are properly set.
Caution
You must include a double dash (--) between the topic name and your flag. For more information, see this post.
./bin/confluent local services connect connector load MyJiraConnector --config ./etc/kafka-connect-jira/jira-source-quickstart.properties
Your output should resemble the following:
{
  "name": "MyJiraConnector",
  "config": {
    "confluent.topic.bootstrap.servers": "localhost:9092",
    "confluent.topic.replication.factor": "1",
    "tasks.max": "1",
    "connector.class": "io.confluent.connect.jira.JiraSourceConnector",
    "jira.url": "<Your-Jira-URL>",
    "jira.since": "2019-10-17 23:50",
    "jira.username": "<Your-Jira-Username>",
    "jira.api.token": "<Your-Jira-Access-Token>",
    "jira.tables": "roles",
    "topic.name.pattern": "jira-topic-${resourceName}",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://localhost:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "name": "MyJiraConnector"
  },
  "tasks": [],
  "type": "source"
}
Enter the following command to confirm that the connector is in a RUNNING state:
confluent local services connect connector status MyJiraConnector
The output should resemble the example below:
{
"name":"MyJiraConnector",
"connector":{
"state":"RUNNING",
"worker_id":"127.0.1.1:8083"
},
"tasks":[
{
"id":0,
"state":"RUNNING",
"worker_id":"127.0.1.1:8083"
}
],
"type":"source"
}
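A status payload like the one above can also be checked programmatically. The following is a minimal sketch that decides whether the connector and all of its tasks report RUNNING; the field names follow the Kafka Connect REST status format shown in the sample output:

```python
def is_healthy(status: dict) -> bool:
    """Return True when the connector and every task report state RUNNING."""
    if status.get("connector", {}).get("state") != "RUNNING":
        return False
    tasks = status.get("tasks", [])
    # At least one task must exist, and all tasks must be RUNNING
    return bool(tasks) and all(t.get("state") == "RUNNING" for t in tasks)

# Sample payload matching the status output above
status = {
    "name": "MyJiraConnector",
    "connector": {"state": "RUNNING", "worker_id": "127.0.1.1:8083"},
    "tasks": [{"id": 0, "state": "RUNNING", "worker_id": "127.0.1.1:8083"}],
    "type": "source",
}
print(is_healthy(status))  # → True
```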
REST-based example¶
Use this setting with distributed workers. Write the following JSON to config.json, configure all of the required values, and use the following command to post the configuration to one of the distributed Connect workers. Check here for more information about the Kafka Connect REST API.
{
"name": "MyJiraConnector",
"config":
{
"connector.class": "io.confluent.connect.jira.JiraSourceConnector",
"confluent.topic.bootstrap.servers": "localhost:9092",
"confluent.topic.replication.factor": "1",
"tasks.max": "1",
"jira.url": "<Your-Jira-URL>",
"jira.since": "2019-12-26 12:36",
"jira.username": "<Your-Jira-Username>",
"jira.api.token": "<Your-Jira-Access-Token>",
"jira.tables":"roles",
"topic.name.pattern":"jira-topic-${resourceName}",
"key.converter":"io.confluent.connect.avro.AvroConverter",
"key.converter.schema.registry.url":"http://localhost:8081",
"value.converter":"io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url":"http://localhost:8081"
}
}
Note
Change the confluent.topic.bootstrap.servers property to include your broker address(es), and change the confluent.topic.replication.factor to 3 for staging or production use.
Use curl to post a configuration to one of the Connect workers. Change http://localhost:8083/ to the endpoint of one of your Connect worker(s).
curl -sS -X POST -H 'Content-Type: application/json' --data @config.json http://localhost:8083/connectors
Enter the following command to confirm that the connector is in a RUNNING state:
curl http://localhost:8083/connectors/MyJiraConnector/status
The output should resemble the example below:
{
"name":"MyJiraConnector",
"connector":{
"state":"RUNNING",
"worker_id":"127.0.1.1:8083"
},
"tasks":[
{
"id":0,
"state":"RUNNING",
"worker_id":"127.0.1.1:8083"
}
],
"type":"source"
}
Enter the following command to consume records written by the connector to the Kafka topic:
./bin/kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic jira-topic-roles --from-beginning
The output should resemble the example below:
{
"type":"roles",
"data":{
"self":"<Your-Jira-URL>/rest/api/2/role/10100",
"name":"Project_Name",
"id":10111,
"description":"A test role added to the project",
"scope":null,
"actors":{
"array":[
{
"id":10012,
"displayName":"Jira_Actor_Name",
"type":"user-role-actor",
"actorUser":{
"accountId":"101"
}
}
]
}
}
}
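Each consumed record wraps the Jira API payload under a data field, with the resource name under type. A minimal sketch of extracting the role name and actor display names from a record shaped like the sample above (field names are taken from that sample; real payloads may carry additional fields):

```python
import json

# A record shaped like the console-consumer output above
record_json = """
{
  "type": "roles",
  "data": {
    "self": "<Your-Jira-URL>/rest/api/2/role/10100",
    "name": "Project_Name",
    "id": 10111,
    "description": "A test role added to the project",
    "scope": null,
    "actors": {
      "array": [
        {
          "id": 10012,
          "displayName": "Jira_Actor_Name",
          "type": "user-role-actor",
          "actorUser": {"accountId": "101"}
        }
      ]
    }
  }
}
"""

record = json.loads(record_json)
# Collect the display names of all actors attached to the role
actors = [a["displayName"] for a in record["data"]["actors"]["array"]]
print(record["type"], actors)  # → roles ['Jira_Actor_Name']
```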