Detecting false alarms from automatic static analysis tools: how far are we?

Automatic static analysis tools (ASATs), such as FindBugs, have a high false alarm rate, and the large number of false alarms poses a barrier to adoption. Researchers have proposed using machine learning to prune false alarms and present only actionable warnings to developers. The state-of-the-art study identified a set of “Golden Features” based on metrics computed over the characteristics and history of the file, code, and warning. Recent studies report that machine learning using these features is extremely effective, achieving almost perfect performance. We perform a detailed analysis to better understand this strong performance. We find that several studies used an experimental procedure that results in data leakage and data duplication, subtle issues with significant implications. First, the ground-truth labels leak into features that measure the proportion of actionable warnings in a given context. Second, many warnings in the testing dataset also appear in the training dataset. Next, we demonstrate limitations of the warning oracle that determines the ground-truth labels: a heuristic that compares warnings in a given revision against a reference revision in the future. We show that the choice of reference revision influences the warning distribution, and that the heuristic produces labels that disagree with human oracles. Hence, the strong performance previously reported for these techniques is an overoptimistic estimate of their true performance in practice. Our results convey several lessons and provide guidelines for evaluating false alarm detectors.
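The two pitfalls described in the abstract, label leakage through features aggregated over ground-truth labels and verbatim duplication between training and test splits, can be checked mechanically. Below is a minimal sketch in Python; it is not the paper's artifact, and the column names (file, rule, line, actionable) are assumptions about how a warning dataset might be laid out.

```python
import pandas as pd

def duplication_rate(train: pd.DataFrame, test: pd.DataFrame,
                     keys=("file", "rule", "line")) -> float:
    """Fraction of test warnings that also occur verbatim in the training set."""
    train_keys = set(train[list(keys)].itertuples(index=False, name=None))
    test_rows = test[list(keys)].itertuples(index=False, name=None)
    return sum(row in train_keys for row in test_rows) / len(test)

def leaky_actionable_ratio(df: pd.DataFrame, group: str = "file") -> pd.Series:
    """A ratio feature in the style of the 'Golden Features' that leaks labels:
    the proportion of actionable warnings per group is computed over the whole
    dataset, including the very test rows it will later help predict."""
    return df.groupby(group)["actionable"].transform("mean")
```

A non-zero duplication rate, or a ratio feature computed before the train/test split, is a red flag under this sketch's assumptions; computing such ratios on the training split only, and deduplicating warnings before splitting, avoids the overoptimistic results the paper reports.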


Bibliographic Details
Main Authors: KANG, Hong Jin, AW, Khai Loong, LO, David
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Subjects: Static analysis; False alarms; Data leakage; Data duplication; Databases and Information Systems
Online Access:https://ink.library.smu.edu.sg/sis_research/7686
https://ink.library.smu.edu.sg/context/sis_research/article/8689/viewcontent/detecting.pdf
Institution: Singapore Management University
DOI: 10.1145/3510003.3510214
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
Publication Date: 2022-05-01