The future can't help fix the past: Assessing program repair in the wild

Bibliographic Details
Main Authors: KABADI, Vinay, KONG, Dezhen, XIE, Siyu, BAO, Lingfeng, PRANA, Gede Artha Azriadi, LE, Tien Duy B., LE, Xuan Bach D., LO, David
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Online Access:https://ink.library.smu.edu.sg/sis_research/8610
https://ink.library.smu.edu.sg/context/sis_research/article/9613/viewcontent/Future_Fix_Program_Wild_ICSME23_av.pdf
Institution: Singapore Management University
Description
Summary: Automated program repair (APR) has been gaining ground, with substantial effort devoted to the area, opening up many challenges and opportunities. One such challenge is that state-of-the-art repair techniques often rely on incomplete specifications, e.g., test cases that witness buggy behavior, to generate repairs. In practice, bug-exposing test cases are typically available when: (1) developers create tests at the same time as (or after) submitting bug fixes to assure the correctness of those fixes, or (2) regression errors occur. The former case, a scenario commonly used to create popular bug datasets, may not be suitable for assessing how APR performs in the wild: since developers already know where and how to fix the bugs, tests created in this setting may encapsulate knowledge gained only after the bugs are fixed. Thus, more effort is needed to build datasets that evaluate APR more realistically. We address this challenge by creating a dataset that focuses on bugs identified via continuous integration (CI) failures, a special case of regression errors in which a bug surfaces when the changed program is re-executed on the existing test suite. We argue that CI failures, in which the bug-exposing tests are created before the bug fixes and therefore embody no developer knowledge of the eventual fixes, are more realistic for evaluating APR. To this end, we curated 102 CI failures from 40 popular real-world software projects on GitHub. We demonstrate various features and the usefulness of the dataset via an evaluation of five well-known APR techniques, namely GenProg, Kali, Cardumen, RsRepair, and Arja, and we discuss several findings and implications for future APR studies. Overall, the experimental results show that our dataset complements existing datasets such as Defects4J in realistic evaluations of APR.
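
To make the CI-failure scenario concrete, below is a minimal Java/JUnit 4 sketch; the class, method, and test names are hypothetical and not drawn from the paper's dataset. The test already exists in the suite before the offending change; when a later commit alters the method's behavior, CI re-runs the existing suite and the pre-existing test fails, exposing the bug without any knowledge of how it will eventually be fixed.

    // Hypothetical example: a pre-existing regression test in a project's suite.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {
        // Written long before the bug; it encodes expected behavior,
        // not any knowledge of a future fix.
        @Test
        public void totalAppliesDiscount() {
            assertEquals(90.0, new PriceCalculator().total(100.0, 0.10), 1e-6);
        }
    }

    class PriceCalculator {
        // Suppose a later commit accidentally changes the return value to
        // amount * discount. CI re-executes the existing suite, the test
        // above fails, and that failure (a CI failure) is what flags the bug.
        double total(double amount, double discount) {
            return amount * (1 - discount);
        }
    }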