Assessing generalizability of CodeBERT
Pre-trained models like BERT have achieved strong improvements on many natural language processing (NLP) tasks, demonstrating their great generalizability. The success of pre-trained models in NLP has inspired pre-trained models for programming languages. Recently, CodeBERT, a model for both natural language (...
| Main Authors: | ZHOU, Xin; HAN, DongGyun; LO, David |
|---|---|
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2021 |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/6854 https://ink.library.smu.edu.sg/context/sis_research/article/7857/viewcontent/288200a425.pdf |
| Institution: | Singapore Management University |
Similar Items
- An exploratory study on code attention in BERT
  by: SHARMA, Rishab, et al.
  Published: (2022)
- Interpreting CodeBERT for semantic code clone detection
  by: ABID, Shamsa, et al.
  Published: (2023)
- On the generalizability of Neural Program Models with respect to semantic-preserving program transformations
  by: RABIN, Md Rafiqul Islam, et al.
  Published: (2021)
- Using CodeBERT model for vulnerability detection
  by: ZHOU, ZhiWei
  Published: (2022)
- The generalizability of leadership across activity domains and time periods
  by: PARK, K.W., et al.
  Published: (2014)