Jira Time Tracking (Part 1)

Adding Visibility Through Scripting

Written by Ben Carle, CIO

Measuring both team and individual efficiency across a sprint is key to managing sprint commitments. Jira work logs allow individuals to log time directly to stories, providing product managers with broad visibility into the sprint progress as a whole. Gauging individual performance with work logs is difficult, however, because more than one team member may work on a single issue, and individual team member progress is not aggregated across a sprint.


In this article, we will leverage Jira’s API to gather individual performance data without the use of paid integrations. In the next article, we will analyze and chart the data in Google Sheets.



Jira Work Log Primer


Time tracking can be added to a Jira story in the issue view.


JIRA Issue View

A user can now manually record time spent on an issue. The time remaining is automatically recalculated after work is logged unless the user manually overrides the original estimate by providing a custom value for it.


The list of worklogs for a story can be viewed by selecting the “Work Logs” item in the “Activity” section.


Activity View

Jira API


Jira may not offer built-in reporting for aggregate user time logs, but the Jira REST API provides issue and work log data in JSON format. To prepare this data for use in Google Sheets, we need to:

  1. Fetch Jira issues (with work logs) for a specific sprint through the REST API.
  2. Parse the JSON response to CSV and copy to the clipboard.
  3. Paste the CSV data into a spreadsheet.
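
These three steps can be sketched as a small pipeline. (The function names below are placeholders for illustration, not definitions from the article’s repository; each step is fleshed out in the sections that follow.)

```javascript
// Orchestration sketch: fetchIssues, toCsv, and copy are placeholder
// parameters standing in for the concrete functions built later on.
const run = async (fetchIssues, toCsv, copy) => {
  const issues = await fetchIssues(); // 1. fetch issues via the REST API
  const csv = toCsv(issues);          // 2. parse the JSON response to CSV
  copy(csv);                          // 3. copy the CSV to the clipboard
  return csv;
};
```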

Getting Started


To access the REST API, we will create a small JavaScript application running on Node.js. Additionally, we will use the following npm packages:

  • jira-connector
    • Wraps the Jira API to simplify REST calls.
  • json2csv
    • Parses JSON API responses to CSV data.
  • dotenv
    • Loads environment variables from a local file.
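
Assuming npm as the package manager, the dependencies can be installed in one step:

```shell
npm install jira-connector json2csv dotenv
```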

Next, we need to create an API token to interact with the Jira REST API. (To learn more about creating an API token, reference the Jira documentation.)


Finally, we need to create a .env file with the following values:


    
HOST=
EMAIL=
API_TOKEN=
PROJECT_KEY=
SPRINT_ID=
    
  

Here are some dummy values for reference:


    
HOST=my-jira-instance.atlassian.net
[email protected]
API_TOKEN=ABCDefghIJKLmnopQRSTuvwx
PROJECT_KEY=AB
SPRINT_ID=1234
    
  

Fetching the Data


To begin, we need to instantiate a JiraClient from jira-connector:


    
import JiraClient from 'jira-connector';
import dotenv from 'dotenv';

// Load the .env values into process.env before reading them below.
dotenv.config();

const jira = new JiraClient({
  host: process.env.HOST,
  basic_auth: {
    email: process.env.EMAIL,
    api_token: process.env.API_TOKEN,
  },
});
    
  

Assign the PROJECT_KEY and SPRINT_ID environment values to the variables project and sprint:


    
const project = process.env.PROJECT_KEY;
const sprint = process.env.SPRINT_ID;
    
  

Next, use the Jira search function to fetch all stories with worklogs. Pass a JQL query that describes the issues we want to fetch (using the project and sprint variables), as well as an array of fields we would like to receive for those issues.


    
const response = await jira.search.search({
  jql: `project = ${project} AND Sprint = ${sprint}`,
  fields: ['id', 'key', 'issuetype', 'parent', 'worklog'],
});
    
  

We can now access the issues in response.issues.


We need to paginate our search into multiple requests, because the API caps the number of results returned by a single call to the search function. (Note that the embedded worklog field is paginated as well; an issue with a very large number of work log entries may require a follow-up request to its /worklog endpoint to retrieve them all.)


    
const project = process.env.PROJECT_KEY;
const sprint = process.env.SPRINT_ID;
const maxResults = 50;
let startAt = 0;
let total = 1;
let issues = [];

for (let response; startAt < total; startAt += maxResults) {
  response = await jira.search.search({
    jql: `project = ${project} AND Sprint = ${sprint}`,
    fields: ['id', 'key', 'issuetype', 'parent', 'worklog'],
    maxResults,
    startAt,
  });

  total = response.total;
  issues = [...issues, ...response.issues];
}
    
  

Processing the Data


Worklogs contain a lot of data that we won’t need for our analysis.


    
{
  "self": "http://www.example.com/jira/rest/api/2/issue/10010/worklog/10000",
  "author": {
    "self": "http://www.example.com/jira/rest/api/2/user?username=fred",
    "name": "fred",
    "displayName": "Fred F. User",
    "active": false
  },
  "updateAuthor": {
    "self": "http://www.example.com/jira/rest/api/2/user?username=fred",
    "name": "fred",
    "displayName": "Fred F. User",
    "active": false
  },
  "comment": "I did some work here.",
  "updated": "2016-03-16T04:22:37.471+0000",
  "visibility": {
    "type": "group",
    "value": "jira-developers"
  },
  "started": "2016-03-16T04:22:37.471+0000",
  "timeSpent": "3h 20m",
  "timeSpentSeconds": 12000,
  "id": "100028",
  "issueId": "10002"
}
    
  

This is a good time to clear away these unnecessary properties. The following block maps the work logs into a flat array, with issue-level data merged into each work log entry:


    
const worklogs = issues.reduce(
  (worklogs, { id, key, fields: { issuetype, parent, worklog } }) => [
    ...worklogs,
    ...worklog.worklogs.map(
      ({
        author: { key: authorKey, displayName },
        started,
        timeSpentSeconds,
      }) => ({
        authorKey,
        name: displayName,
        key,
        issuetype,
        parent,
        started: startedBreakout(started),
        timeSpentSeconds,
        timeSpentHours: secondsToHours(timeSpentSeconds),
      })
    ),
  ],
  []
);
    
  

See the GitHub repository for the full definition of data prep functions such as startedBreakout and secondsToHours.
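
As a rough illustration, the helpers might look something like this. (These are hypothetical reimplementations, not the repository’s actual code.)

```javascript
// Hypothetical sketches of the data prep helpers referenced above.
// Convert logged seconds to (fractional) hours.
const secondsToHours = (seconds) => seconds / 3600;

// Split an ISO-8601 timestamp such as "2016-03-16T04:22:37.471+0000"
// into the { date, time } shape the CSV field list below expects.
const startedBreakout = (started) => {
  const [date, time] = started.split('T');
  return { date, time: time.slice(0, 8) };
};
```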


Parsing the Data


To parse the JSON into CSV, we need to define the relevant values and labels.


    
import { Parser } from 'json2csv';

const fields = [
  { value: 'authorKey', label: 'Author Key' },
  { value: 'name', label: 'Author' },
  { value: 'key', label: 'Issue Key' },
  { value: 'issuetype.name', label: 'Issue Type' },
  { value: 'parent.key', label: 'Parent Key' },
  { value: 'parent.fields.issuetype.name', label: 'Parent Type' },
  { value: 'started.date', label: 'Date' },
  { value: 'started.time', label: 'Time' },
  { value: 'timeSpentSeconds', label: 'Seconds' },
  { value: 'timeSpentHours', label: 'Hours' },
];

export const parser = new Parser({ fields, delimiter: '\t' });
    
  

Then we can run the parser to convert the work log JSON into CSV. (The tab delimiter ensures each value lands in its own cell when the output is pasted into a spreadsheet.)


    
const csv = parser.parse(worklogs);
    
  

Copying the Data


We are finally ready to add our CSV data to our spreadsheet. The following function and statement will copy the CSV to our clipboard.


NOTE: This has been tested on macOS but may not work on all systems.


    
const pbcopy = data => {
  const proc = require('child_process').spawn('pbcopy');
  proc.stdin.write(data);
  proc.stdin.end();
};

pbcopy(csv);
    
  

You can now paste the clipboard data into a spreadsheet, such as Google Sheets. The data will take the following format:


Copied Data in Table

Next Steps


For tips on analyzing and charting this data in Google Sheets, see the next article, coming soon.


Find the full source code for this article in the GitHub repository.


Special thanks to Adam Burdette for contributing to this script by sharing his work with jira-connector.


---
At FullStack Labs, we pride ourselves on our ability to push the capabilities of cutting-edge frameworks like React. Interested in learning more about speeding up development time on your next project? Contact us.
