On the usage of continual learning for out-of-distribution generalization in pre-trained language models of code

Pre-trained language models (PLMs) have become a prevalent technique in deep learning for code, utilizing a two-stage pre-training and fine-tuning procedure to acquire general knowledge about code and specialize in a variety of downstream tasks. However, the dynamic nature of software codebases poses a challenge to the effectiveness and robustness of PLMs. In particular, real-world scenarios can lead to significant differences between the distribution of the pre-training and test data, i.e., distribution shift, resulting in degraded PLM performance on downstream tasks. In this paper, we stress the need to adapt PLMs of code to software data whose distribution changes over time, a crucial problem that has been overlooked in previous work. The motivation of this work is to consider the PLM in a non-stationary environment, where the fine-tuning data evolves over time according to a software evolution scenario. Specifically, we design a scenario in which the model must learn from a stream of programs containing new, unseen APIs over time. We study two widely used PLM architectures, i.e., a GPT2 decoder and a RoBERTa encoder, on two downstream tasks: API call and API usage prediction. We demonstrate that the most commonly used fine-tuning technique from prior work is not robust enough to handle the dynamic nature of APIs, leading to the loss of previously acquired knowledge, i.e., catastrophic forgetting. To address these issues, we implement five continual learning approaches, including replay-based and regularization-based methods. Our findings demonstrate that these straightforward methods effectively mitigate catastrophic forgetting in PLMs across both downstream tasks while achieving comparable or superior performance.
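
For readers unfamiliar with the replay-based continual learning methods mentioned in the abstract, the sketch below illustrates the general idea in PyTorch: keep a small buffer of batches from earlier API tasks and mix them into fine-tuning on each new task, so the model is never updated on new APIs alone. This is only an illustrative sketch with assumed names (ReplayBuffer, finetune_on_stream, replay_per_step are placeholders, not identifiers from the paper); the authors' actual five methods, models, and hyperparameters are described in the linked PDF.

# Minimal sketch of replay-based continual fine-tuning (illustrative only).
# A small buffer of batches from past API "tasks" is mixed into training on
# each new task, which is the core mechanism replay methods use to limit
# catastrophic forgetting.
import random
import torch
from torch import nn

class ReplayBuffer:
    """Fixed-size reservoir of previously seen training batches."""
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.batches = []
        self.seen = 0

    def add(self, batch):
        # Reservoir sampling keeps a uniform sample of everything seen so far.
        self.seen += 1
        if len(self.batches) < self.capacity:
            self.batches.append(batch)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.batches[j] = batch

    def sample(self, k: int):
        k = min(k, len(self.batches))
        return random.sample(self.batches, k) if k > 0 else []

def finetune_on_stream(model: nn.Module, task_streams, loss_fn, replay_per_step: int = 2):
    """task_streams: iterable of iterables of (inputs, targets), one per group of new APIs."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    buffer = ReplayBuffer()
    for task_data in task_streams:            # tasks arrive over time (software evolution)
        for batch in task_data:
            # Train on the new batch plus a few replayed batches from older tasks.
            for inputs, targets in [batch] + buffer.sample(replay_per_step):
                optimizer.zero_grad()
                loss = loss_fn(model(inputs), targets)
                loss.backward()
                optimizer.step()
            buffer.add(batch)                  # store the new batch for future replay
    return model

Regularization-based alternatives such as EWC take a different route: instead of replaying old data, they penalize large changes to parameters estimated to be important for earlier tasks. The paper compares methods from both families on API call and API usage prediction.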

Bibliographic Details
Main Authors: WEYSSOW, Martin, ZHOU, Xin, KIM, Kisub, LO, David, SAHRAOUI, Houari A.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Continual learning; Deep learning for code; Down-stream; Dynamic nature; Fine tuning; Generalisation; Language model; Out-of-distribution generalization; Pre-trained language model; Pre-training; Databases and Information Systems; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/8574
https://ink.library.smu.edu.sg/context/sis_research/article/9577/viewcontent/On_the_Usage_of_Continual_Learning_for_Out_of_Distribution_Generalization_in_Pre_trained_Language_Models_of_Code.pdf
id sg-smu-ink.sis_research-9577
record_format dspace
date_issued 2023-12-01T08:00:00Z
mime application/pdf
doi info:doi/10.1145/3611643.3616244
rights http://creativecommons.org/licenses/by-nc-nd/4.0/
series Research Collection School Of Computing and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Continual learning
Deep learning for code
Down-stream
Dynamic nature
Fine tuning
Generalisation
Language model
Out-of-distribution generalization
Pre-trained language model
Pre-training
Databases and Information Systems
Software Engineering
description Pre-trained language models (PLMs) have become a prevalent technique in deep learning for code, utilizing a two-stage pre-training and fine-tuning procedure to acquire general knowledge about code and specialize in a variety of downstream tasks. However, the dynamic nature of software codebases poses a challenge to the effectiveness and robustness of PLMs. In particular, real-world scenarios can lead to significant differences between the distribution of the pre-training and test data, i.e., distribution shift, resulting in degraded PLM performance on downstream tasks. In this paper, we stress the need to adapt PLMs of code to software data whose distribution changes over time, a crucial problem that has been overlooked in previous work. The motivation of this work is to consider the PLM in a non-stationary environment, where the fine-tuning data evolves over time according to a software evolution scenario. Specifically, we design a scenario in which the model must learn from a stream of programs containing new, unseen APIs over time. We study two widely used PLM architectures, i.e., a GPT2 decoder and a RoBERTa encoder, on two downstream tasks: API call and API usage prediction. We demonstrate that the most commonly used fine-tuning technique from prior work is not robust enough to handle the dynamic nature of APIs, leading to the loss of previously acquired knowledge, i.e., catastrophic forgetting. To address these issues, we implement five continual learning approaches, including replay-based and regularization-based methods. Our findings demonstrate that these straightforward methods effectively mitigate catastrophic forgetting in PLMs across both downstream tasks while achieving comparable or superior performance.
format text
author WEYSSOW, Martin
ZHOU, Xin
KIM, Kisub
LO, David
SAHRAOUI, Houari A.
title On the usage of continual learning for out-of-distribution generalization in pre-trained language models of code
publisher Institutional Knowledge at Singapore Management University
publishDate 2023
url https://ink.library.smu.edu.sg/sis_research/8574
https://ink.library.smu.edu.sg/context/sis_research/article/9577/viewcontent/On_the_Usage_of_Continual_Learning_for_Out_of_Distribution_Generalization_in_Pre_trained_Language_Models_of_Code.pdf