On the usage of continual learning for out-of-distribution generalization in pre-trained language models of code
Pre-trained language models (PLMs) have become a prevalent technique in deep learning for code, utilizing a two-stage pre-training and fine-tuning procedure to acquire general knowledge about code and specialize in a variety of downstream tasks. However, the dynamic nature of software codebases poses...
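The two-stage paradigm mentioned in the abstract can be illustrated with a minimal sketch: load a model already pre-trained on code, then fine-tune it on a downstream task. The checkpoint name `microsoft/codebert-base` and the toy two-example classification dataset below are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal sketch of the pre-train/fine-tune paradigm (not the paper's implementation).
# Assumptions: the "microsoft/codebert-base" checkpoint and a toy binary
# classification task stand in for a real downstream dataset.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "microsoft/codebert-base"  # stage 1: a model already pre-trained on code
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Toy downstream data: code snippets with binary labels (e.g., buggy vs. clean).
train = Dataset.from_dict({
    "code": ["def add(a, b): return a - b", "def add(a, b): return a + b"],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["code"], truncation=True, padding="max_length", max_length=64)

train = train.map(tokenize, batched=True)

# Stage 2: fine-tune the pre-trained model on the downstream task.
args = TrainingArguments(output_dir="ft-out", num_train_epochs=1, per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=train).train()
```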
Main Authors: WEYSSOW, Martin; ZHOU, Xin; KIM, Kisub; LO, David; SAHRAOUI, Houari A.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/8574
https://ink.library.smu.edu.sg/context/sis_research/article/9577/viewcontent/On_the_Usage_of_Continual_Learning_for_Out_of_Distribution_Generalization_in_Pre_trained_Language_Models_of_Code.pdf
Institution: Singapore Management University
Similar Items
- Retrieval based code summarisation using code pre-trained models, by: Gupta, Sahaj (2024)
- On the transferability of pre-trained language models for low-resource programming languages, by: CHEN, Fuxiang, et al. (2022)
- Sentiment analysis for software engineering: How far can pre-trained transformer models go? by: ZHANG, Ting, et al. (2020)
- An exploratory study on code attention in BERT, by: SHARMA, Rishab, et al. (2022)
- Compressing pre-trained models of code into 3 MB, by: SHI, Jieke, et al. (2022)