ContraBERT: Enhancing code pre-trained models via contrastive learning

Large-scale pre-trained models such as CodeBERT and GraphCodeBERT have earned widespread attention from both academia and industry. Owing to their superior ability in code representation, they have been further applied to multiple downstream tasks such as clone detection, code search, and code translation. However, it has also been observed that these state-of-the-art pre-trained models are susceptible to adversarial attacks: their performance drops significantly under simple perturbations such as renaming variables. This weakness may be inherited by their downstream models and thereby amplified at an unprecedented scale. To this end, we propose an approach named ContraBERT that aims to improve the robustness of pre-trained models via contrastive learning. Specifically, we design nine kinds of simple and complex data augmentation operators on programming language (PL) and natural language (NL) data to construct different variants. Furthermore, we continue to train the existing pre-trained models with masked language modeling (MLM) and a contrastive pre-training task on the original samples and their augmented variants to enhance model robustness. Extensive experiments demonstrate that ContraBERT can effectively improve the robustness of existing pre-trained models. A further study also confirms that these robustness-enhanced models provide improvements over the original models on four popular downstream tasks.
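To illustrate the contrastive pre-training idea summarized above (pulling an original sample toward its augmented variant and away from other samples), here is a minimal, hypothetical sketch of an InfoNCE-style objective in PyTorch. The function name, the temperature value, the use of pooled encoder embeddings, and the toy inputs are illustrative assumptions and are not taken from the paper; the paper's actual augmentation operators and loss formulation are described in the full text.

```python
# Illustrative sketch only: a generic InfoNCE-style contrastive loss over
# batches of (original, augmented) representation pairs. All names and
# hyper-parameters here are assumptions for demonstration, not the paper's.
import torch
import torch.nn.functional as F


def contrastive_loss(orig_emb: torch.Tensor,
                     aug_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Each original sample should be most similar to its own augmented
    variant (the diagonal) and dissimilar to the other samples in the batch."""
    orig = F.normalize(orig_emb, dim=-1)      # (batch, dim), unit-length rows
    aug = F.normalize(aug_emb, dim=-1)        # (batch, dim)
    logits = orig @ aug.t() / temperature     # pairwise cosine similarities
    targets = torch.arange(orig.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Random tensors stand in for pooled encoder outputs of code samples
    # and their augmented variants (e.g., after variable renaming).
    batch, dim = 8, 768
    orig_emb = torch.randn(batch, dim)
    aug_emb = orig_emb + 0.1 * torch.randn(batch, dim)  # pretend augmentation
    print(contrastive_loss(orig_emb, aug_emb).item())
```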

Bibliographic Details
Main Authors: LIU, Shangqing; WU, Bozhi; XIE, Xiaofei; MENG, Guozhu; LIU, Yang
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects: industries; computer languages; codes; perturbation methods; natural languages; cloning; data augmentation; Artificial Intelligence and Robotics
DOI: 10.1109/ICSE48619.2023.00207
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
Online Access: https://ink.library.smu.edu.sg/sis_research/8228
https://ink.library.smu.edu.sg/context/sis_research/article/9231/viewcontent/2301.09072.pdf
Institution: Singapore Management University