The future can't help fix the past: Assessing program repair in the wild

Automated program repair (APR) has been gaining ground, with substantial effort devoted to the area, opening up many challenges and opportunities. One such challenge is that state-of-the-art repair techniques often resort to incomplete specifications, e.g., test cases that witness buggy behavior, to generate repairs. In practice, bug-exposing test cases are often available when: (1) developers, at the same time as (or after) submitting bug fixes, create the tests to assure the correctness of the fixes, or (2) regression errors occur. The former case – a scenario commonly used for creating popular bug datasets – may not be suitable for assessing how APR performs in the wild. Since developers already know where and how to fix the bugs, tests created in this case may encapsulate knowledge gained only after the bugs are fixed. Thus, more effort is needed to create datasets that evaluate APR more realistically. We address this challenge by creating a dataset focusing on bugs identified via continuous integration (CI) failures – a special case of regression errors – wherein bugs arise when the changed program is re-executed on the existing test suite. We argue that CI failures, for which bug-exposing tests are created before bug fixes and thus assume no prior developer knowledge of the bugs involved, are more realistic for evaluating APR. Toward this end, we curated 102 CI failures from 40 popular real-world software projects on GitHub. We demonstrate various features and the usefulness of the dataset via an evaluation of five well-known APR techniques, namely GenProg, Kali, Cardumen, RSRepair and Arja. We subsequently discuss several findings and implications for future APR studies. Overall, experimental results show that our dataset is complementary to existing datasets such as Defects4J in realistic evaluations of APR.

Bibliographic Details
Main Authors: KABADI, Vinay, KONG, Dezhen, XIE, Siyu, BAO, Lingfeng, PRANA, Gede Artha Azriadi, LE, Tien Duy B., LE, Xuan Bach D., LO, David
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
DOI: 10.1109/ICSME58846.2023.00017
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects: Benchmark; Program Repair; Software Engineering
Online Access:https://ink.library.smu.edu.sg/sis_research/8610
https://ink.library.smu.edu.sg/context/sis_research/article/9613/viewcontent/Future_Fix_Program_Wild_ICSME23_av.pdf
Institution: Singapore Management University