Assessing generalizability of CodeBERT
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Online Access: https://ink.library.smu.edu.sg/sis_research/6854
https://ink.library.smu.edu.sg/context/sis_research/article/7857/viewcontent/288200a425.pdf
Institution: Singapore Management University
Summary: Pre-trained models like BERT have achieved strong improvements on many natural language processing (NLP) tasks, demonstrating their generalizability. This success has inspired pre-trained models for programming languages. Recently, CodeBERT, a model for both natural language (NL) and programming language (PL) pre-trained on a code search dataset, was proposed. Although promising, CodeBERT has not been evaluated beyond its pre-training dataset for NL-PL tasks, and it has only been shown effective on two tasks close in nature to its pre-training data. This raises two questions: Can CodeBERT generalize beyond its pre-training data? Can it generalize to various software engineering tasks involving NL and PL? Our work answers these questions through an empirical investigation into the generalizability of CodeBERT. First, we assess the generalizability of CodeBERT to datasets other than its pre-training data. Specifically, for the code search task, we conduct experiments on another dataset containing Python code snippets and their corresponding documentation, as well as a dataset of questions and answers about Python programming collected from Stack Overflow. Second, to assess the generalizability of CodeBERT to various software engineering tasks, we apply it to the just-in-time defect prediction task. Our empirical results support the generalizability of CodeBERT on the additional data and task: CodeBERT-based solutions achieve performance higher than or comparable to specialized solutions designed for the code search and just-in-time defect prediction tasks. However, the superior performance of CodeBERT comes with a tradeoff; for example, it requires far more computational resources than specialized code search approaches.
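To make the code search setup concrete, the sketch below scores a natural-language query against a code snippet using off-the-shelf CodeBERT embeddings. The `microsoft/codebert-base` checkpoint is the publicly released CodeBERT, but the mean-pooling and cosine-similarity scoring here are illustrative assumptions; the paper's actual fine-tuned pipelines may differ.

```python
# Minimal sketch: rank a code snippet against an NL query with CodeBERT
# embeddings. Assumes the Hugging Face checkpoint microsoft/codebert-base;
# the pooling/scoring scheme is an illustrative choice, not the paper's.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")
model.eval()

def embed(text: str) -> torch.Tensor:
    # Mean-pool the last hidden states into a single fixed-size vector.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

query = "read a file line by line"
snippet = "with open(path) as f:\n    for line in f:\n        process(line)"

score = torch.cosine_similarity(embed(query), embed(snippet), dim=0)
print(f"query-snippet similarity: {score.item():.3f}")
```

In a full code search pipeline, every candidate snippet would be embedded once and the query embedding compared against all of them, with the highest-similarity snippets returned; the abstract's note on computational cost reflects running such a transformer forward pass per text, which is far heavier than specialized lightweight retrieval approaches.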