On the unreliability of bug severity data

Severity levels (e.g., critical and minor) of bug reports are often used to prioritize development effort. Prior research has proposed approaches that automatically assign a severity label to a bug report, and all of these efforts verify their accuracy against the human-assigned labels stored in software repositories. In doing so, they assume that the human-assigned data is reliable, so a perfect automated approach would assign the same severity label as the repository, achieving 100% accuracy. Looking at duplicate bug reports (i.e., reports referring to the same problem) from three open-source software systems (OpenOffice, Mozilla, and Eclipse), we find that around 51% of duplicate bug reports carry inconsistent human-assigned severity labels even though they refer to the same software problem. While our results directly show only that duplicate bug reports have unreliable severity labels, we believe they send warning signals about the reliability of bug severity data as a whole (i.e., including non-duplicate reports). Future research should explore whether our findings generalize to the full dataset, and should factor in the unreliable nature of bug severity data. Given this unreliability, classical metrics for assessing the accuracy of models/learners should not be used to assess approaches that automatically assign severity labels. Hence, we propose a new approach to assess the performance of such models. Our new assessment approach shows that current automated approaches perform well, reaching 77–86% agreement with human-assigned severity labels.
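The headline 51% figure corresponds to a simple metric: the fraction of duplicate-report groups whose human-assigned severity labels disagree. The sketch below illustrates that computation on hypothetical data; it is not the authors' actual tooling, and the group keys and label names are invented for the example.

```python
from typing import Dict, List

def inconsistency_rate(duplicate_groups: Dict[str, List[str]]) -> float:
    """Fraction of duplicate-report groups whose human-assigned severity
    labels disagree (illustrative metric, not the paper's tooling)."""
    if not duplicate_groups:
        return 0.0
    # A group is inconsistent if its duplicates carry more than one
    # distinct severity label.
    inconsistent = sum(
        1 for labels in duplicate_groups.values() if len(set(labels)) > 1
    )
    return inconsistent / len(duplicate_groups)

# Hypothetical data: each key identifies a master bug; the list holds the
# severity labels humans assigned to its duplicate reports.
groups = {
    "bug-1": ["critical", "critical"],          # consistent
    "bug-2": ["minor", "major"],                # inconsistent
    "bug-3": ["normal", "normal", "critical"],  # inconsistent
    "bug-4": ["trivial"],                       # single report, consistent
}
print(inconsistency_rate(groups))  # 0.5
```

Under this reading, a reported rate of about 0.51 means roughly half of all duplicate groups disagree on severity.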

Bibliographic Details
Main Authors: TIAN, Yuan; ALI, Nasir; LO, David; HASSAN, Ahmed E.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2015
Collection: Research Collection School Of Computing and Information Systems
DOI: 10.1007/s10664-015-9409-1
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Subjects: Bug report management; Data quality; Noise prediction; Performance evaluation; Severity prediction; Computer Sciences; Information Security; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/2859
https://ink.library.smu.edu.sg/context/sis_research/article/3859/viewcontent/UnreliabilityBugSeverityData_2016.pdf