Leveraging Large Language Model for automatic patch correctness assessment

Automated Program Repair (APR) techniques have shown increasingly promising results in fixing real-world bugs. Despite their effectiveness, APR techniques still face an overfitting problem: a generated patch can be incorrect even though it passes all tests. Manually evaluating the correctness of generated patches that pass all available test cases is time-consuming. To address this problem, many approaches have been proposed to automatically assess the correctness of patches generated by APR techniques. These approaches are mainly evaluated in a cross-validation setting. However, for patches generated by a new or unseen APR tool, users are implicitly required to manually label a significant portion of these patches (e.g., 90% in 10-fold cross-validation) before the remaining ones (e.g., the other 10%) can be inferred. To mitigate this issue, we propose LLM4PatchCorrect, a patch correctness assessment technique that adopts a large language model for code. For patches generated by a new or unseen APR tool, LLM4PatchCorrect needs no labeled patches from that tool for training; instead, it directly queries the large language model for code to predict correctness labels. In this way, LLM4PatchCorrect reduces the manual labeling effort required to build a model that automatically assesses the correctness of patches generated by new APR tools. To provide the large language model for code with knowledge of the automatic patch correctness assessment (APCA) task, LLM4PatchCorrect supplies bug descriptions, execution traces, failing test cases, test coverage, and labeled patches generated by existing APR tools before deciding the correctness of the unlabeled patches of a new or unseen APR tool. Additionally, LLM4PatchCorrect prioritizes labeled patches from existing APR tools that are semantically similar to those generated by the new tool, which improves its accuracy on patches from new tools. Our experimental results show that LLM4PatchCorrect achieves an accuracy of 84.4% and an F1-score of 86.5% on average, even though no labeled patch of the new or unseen APR tool is available. In addition, our proposed technique significantly outperforms the prior state-of-the-art.
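
To make the mechanism concrete, here is a minimal sketch of the zero-training assessment the abstract describes: the guiding information and labeled example patches are packed into a single prompt, and a large language model for code completes the correctness label. This is an illustration under stated assumptions, not the paper's implementation; the helper names build_prompt and assess_patch are hypothetical, and any LLM for code exposed as a text-completion callable could serve as llm.

    # Minimal sketch of in-context patch correctness assessment.
    # Helper names are hypothetical, not the paper's API.
    def build_prompt(bug_description, failing_test, trace, coverage,
                     labeled_examples, candidate_patch):
        """Assemble the guiding information and labeled patches from
        existing APR tools into one prompt, ending with the unlabeled patch."""
        parts = [
            "Bug description:\n" + bug_description,
            "Failing test:\n" + failing_test,
            "Execution trace:\n" + trace,
            "Test coverage:\n" + coverage,
        ]
        # In-context examples: patches from existing APR tools with known labels.
        for patch, is_correct in labeled_examples:
            parts.append("Patch:\n" + patch +
                         "\nCorrect: " + ("yes" if is_correct else "no"))
        # The query: a patch from the new or unseen APR tool, label left blank.
        parts.append("Patch:\n" + candidate_patch + "\nCorrect:")
        return "\n\n".join(parts)

    def assess_patch(llm, bug_context, labeled_examples, candidate_patch):
        # bug_context = (bug_description, failing_test, trace, coverage)
        prompt = build_prompt(*bug_context, labeled_examples, candidate_patch)
        answer = llm(prompt)  # assumed to return a short completion, e.g. "yes"
        return answer.strip().lower().startswith("yes")

No model parameters are updated: all task knowledge enters through the prompt, which is why no labeled patches from the new or unseen APR tool are needed.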

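The prioritization step, selecting labeled patches from existing APR tools that are semantically similar to the new tool's patch, is essentially embedding-based retrieval. The sketch below assumes the sentence-transformers package with an off-the-shelf encoder; the model name and the value of k are illustrative choices, not the configuration reported in the paper.

    # Sketch: choose the k labeled patches most similar to the query patch,
    # for use as in-context examples. Encoder and k are illustrative only.
    from sentence_transformers import SentenceTransformer, util

    def select_similar_patches(query_patch, labeled_patches, k=5):
        model = SentenceTransformer("all-MiniLM-L6-v2")
        texts = [patch for patch, _label in labeled_patches]
        patch_vecs = model.encode(texts, convert_to_tensor=True)
        query_vec = model.encode(query_patch, convert_to_tensor=True)
        scores = util.cos_sim(query_vec, patch_vecs)[0]  # cosine similarities
        top = scores.topk(min(k, len(texts))).indices.tolist()
        return [labeled_patches[i] for i in top]

The returned (patch, label) pairs would then serve as the labeled_examples argument in the prompt-building sketch above.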
Bibliographic Details
Main Authors: ZHOU, Xin; XU, Bowen; KIM, Kisub; HAN, DongGyun; NGUYEN, Hung Huu; LE-CONG, Thanh; HE, Junda; LE, Bach; LO, David
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Automatic patch correctness assessment; Large Language Models of code; In-context learning; Automated program repair; Artificial Intelligence and Robotics; Computer Sciences
Online Access: https://ink.library.smu.edu.sg/sis_research/9917
Institution: Singapore Management University
id sg-smu-ink.sis_research-10917
record_format dspace
doi 10.1109/TSE.2024.3452252
collection Research Collection School Of Computing and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Automatic patch correctness assessment
Large Language Models of code
In-context learning
Automated program repair
Artificial Intelligence and Robotics
Computer Sciences
description Automated Program Repair (APR) techniques have shown increasingly promising results in fixing real-world bugs. Despite their effectiveness, APR techniques still face an overfitting problem: a generated patch can be incorrect even though it passes all tests. Manually evaluating the correctness of generated patches that pass all available test cases is time-consuming. To address this problem, many approaches have been proposed to automatically assess the correctness of patches generated by APR techniques. These approaches are mainly evaluated in a cross-validation setting. However, for patches generated by a new or unseen APR tool, users are implicitly required to manually label a significant portion of these patches (e.g., 90% in 10-fold cross-validation) before the remaining ones (e.g., the other 10%) can be inferred. To mitigate this issue, we propose LLM4PatchCorrect, a patch correctness assessment technique that adopts a large language model for code. For patches generated by a new or unseen APR tool, LLM4PatchCorrect needs no labeled patches from that tool for training; instead, it directly queries the large language model for code to predict correctness labels. In this way, LLM4PatchCorrect reduces the manual labeling effort required to build a model that automatically assesses the correctness of patches generated by new APR tools. To provide the large language model for code with knowledge of the automatic patch correctness assessment (APCA) task, LLM4PatchCorrect supplies bug descriptions, execution traces, failing test cases, test coverage, and labeled patches generated by existing APR tools before deciding the correctness of the unlabeled patches of a new or unseen APR tool. Additionally, LLM4PatchCorrect prioritizes labeled patches from existing APR tools that are semantically similar to those generated by the new tool, which improves its accuracy on patches from new tools. Our experimental results show that LLM4PatchCorrect achieves an accuracy of 84.4% and an F1-score of 86.5% on average, even though no labeled patch of the new or unseen APR tool is available. In addition, our proposed technique significantly outperforms the prior state-of-the-art.
format text
author ZHOU, Xin
XU, Bowen
KIM, Kisub
HAN, DongGyun
NGUYEN, Hung Huu
LE-CONG, Thanh
HE, Junda
LE, Bach
LO, David
author_sort ZHOU, Xin
title Leveraging Large Language Model for automatic patch correctness assessment
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9917