Invalidator: Automated patch correctness assessment via semantic and syntactic reasoning

Bibliographic Details
Main Authors: LE-CONG, Thanh, LUONG, Duc Minh, LE, Xuan Bach D., LO, David, TRAN, Nhat-Hoa, QUANG-HUY, Bui, HUYNH, Quyet-Thang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/7800
https://ink.library.smu.edu.sg/context/sis_research/article/8803/viewcontent/Invalidator_av.pdf
Institution: Singapore Management University
Description
Summary: Automated program repair (APR) has been gaining ground recently. However, a significant challenge remains: test overfitting, in which APR-generated patches plausibly pass the validation test suite but fail to generalize. A common practice to assess the correctness of APR-generated patches is to judge whether they are equivalent to the ground truth, i.e., developer-written patches, either by generating additional test cases or by employing manual human inspection. The former often requires generating at least one test that exposes behavioral differences between the APR-patched and developer-patched programs; searching for such a test, however, can be difficult because the search space can be enormous. The latter is prone to human bias and requires repetitive, expensive manual effort. In this paper, we propose a novel technique, INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR leverages program invariants to reason about program semantics, while also capturing program syntax through language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. INVALIDATOR then determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. If our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is threefold. First, INVALIDATOR leverages both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; instead, it relies only on the current test suite and uses invariant inference to generalize program behaviors. Third, INVALIDATOR is fully automated. We conducted our experiments on a dataset of 885 patches generated for real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
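
To make the invariant-based overfitting criteria in the summary concrete, the sketch below (in Python) shows one way the two semantic checks could be expressed once likely invariants have been inferred, for example by a Daikon-style tool. The function name, the set-of-predicate-strings representation, and the three inputs are illustrative assumptions, not the paper's implementation.

    # Illustrative sketch only: invariants are modeled as sets of predicate
    # strings inferred from test executions (e.g., by a Daikon-style tool).
    # Names and data representation are assumptions made for exposition.

    def is_overfitting_by_semantics(apr_patched_invs: set[str],
                                    correct_specs: set[str],
                                    error_behaviors: set[str]) -> bool:
        """Return True if the APR-generated patch is deemed overfitting.

        correct_specs    -- invariants inferred on the developer-patched program,
                            treated as correct specifications
        error_behaviors  -- invariants capturing buggy behavior, inferred on the
                            original buggy program
        apr_patched_invs -- invariants inferred on the APR-patched program
        """
        # (1) the patch violates correct specifications
        violates_correct_spec = not correct_specs.issubset(apr_patched_invs)
        # (2) the patch maintains erroneous behaviors of the buggy program
        keeps_erroneous_behavior = bool(error_behaviors & apr_patched_invs)
        return violates_correct_spec or keeps_erroneous_behavior

Under these assumptions, a patch survives the semantic stage only when it preserves every inferred correct specification and shares no invariant with the erroneous behaviors of the buggy program; otherwise the decision is made by the semantic stage alone.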
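
The syntax-based fallback described in the summary (a model trained on labeled patches, using language semantics learned from a large code corpus) could plausibly be realized by embedding code with a pre-trained code model and fitting an off-the-shelf classifier. The sketch below is one such reading; the model choice (microsoft/codebert-base), the concatenated-embedding features, and the logistic-regression classifier are assumptions for illustration and are not taken from the paper.

    # Illustrative sketch only: embed buggy and patched code with a pre-trained
    # model and train a binary classifier on labeled (correct/overfitting) patches.
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    encoder = AutoModel.from_pretrained("microsoft/codebert-base")

    def embed(code: str) -> torch.Tensor:
        """Return the first-token ([CLS]) embedding of a code fragment."""
        inputs = tokenizer(code, return_tensors="pt",
                           truncation=True, max_length=512)
        with torch.no_grad():
            output = encoder(**inputs)
        return output.last_hidden_state[:, 0, :].squeeze(0)

    def patch_features(buggy_code: str, patched_code: str) -> torch.Tensor:
        """Concatenate buggy and patched embeddings as the patch representation."""
        return torch.cat([embed(buggy_code), embed(patched_code)])

    def train_syntactic_classifier(train_pairs, train_labels):
        """train_pairs: (buggy_code, patched_code) tuples; labels: 1 = overfitting."""
        features = torch.stack([patch_features(b, p) for b, p in train_pairs])
        clf = LogisticRegression(max_iter=1000)
        clf.fit(features.numpy(), train_labels)
        return clf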