Using CodeBERT model for vulnerability detection


Bibliographic Details
Main Author: Zhou, ZhiWei
Other Authors: Liu Yang
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Online Access:https://hdl.handle.net/10356/156815
Institution: Nanyang Technological University
Description
Summary: This report presents an experimental study aimed at gaining a deeper understanding of the parameters used when fine-tuning a pre-trained model, while also attempting to match or exceed the accuracy reported in the repository by fine-tuning the model under varying parameter settings. Existing research shows a clear and growing need for models that detect vulnerabilities in code intelligence tasks with decent accuracy, in order to increase programmer productivity and reduce the risks of reusing code that is already available on code-sharing platforms. CodeBERT is a BERT-style (Bidirectional Encoder Representations from Transformers) pre-trained model for natural language (NL) and programming language (PL) that learns general-purpose representations supporting downstream NL-PL applications such as natural language code search and code documentation generation. It is built on a Transformer-based neural architecture and trained with a hybrid objective function that makes use of both “bimodal” and “unimodal” data. CodeBERT is evaluated by fine-tuning the model’s parameters; results show that fine-tuning achieves state-of-the-art performance on both NL code search and code documentation generation. Furthermore, CodeBERT is evaluated in a zero-shot setting, where the parameters of the pre-trained model are fixed, to probe what kind of knowledge it has learnt; results show that CodeBERT consistently outperforms previous pre-trained models on NL-PL probing. With the CodeBERT benchmarks already available in the repository, the purpose of this experimental study is to reach and possibly exceed those benchmarks by researching the parameters to gain a better understanding, changing the various parameters one at a time, graphing the results, and studying their effects on the fine-tuning process and, in turn, on the final accuracy of the model.
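A minimal sketch of such a parameter sweep is shown below. It is an illustration only, assuming the Hugging Face Transformers and Datasets APIs, the microsoft/codebert-base checkpoint, and the CodeXGLUE defect-detection dataset (with its "func" code column and boolean "target" label), rather than the exact scripts in the CodeBERT repository; here the learning rate is varied while the other settings are held fixed.

```python
# Illustrative sketch only: fine-tune microsoft/codebert-base as a binary
# vulnerability classifier, varying one hyperparameter (learning rate) at a time.
# Dataset id and column names are assumptions based on the CodeXGLUE
# defect-detection task, not taken from the report.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def preprocess(batch):
    # Tokenize the source code and cast the boolean label to an integer class id.
    enc = tokenizer(batch["func"], truncation=True, max_length=512,
                    padding="max_length")
    enc["labels"] = [int(t) for t in batch["target"]]
    return enc

raw = load_dataset("code_x_glue_cc_defect_detection")  # assumed dataset id
data = raw.map(preprocess, batched=True,
               remove_columns=raw["train"].column_names)

results = {}
for lr in (1e-5, 2e-5, 5e-5):  # vary one parameter at a time, keep the rest fixed
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
                                                               num_labels=2)
    args = TrainingArguments(output_dir=f"codebert-defect-lr{lr}",
                             learning_rate=lr,
                             per_device_train_batch_size=16,
                             num_train_epochs=5)
    trainer = Trainer(model=model, args=args,
                      train_dataset=data["train"],
                      eval_dataset=data["validation"])
    trainer.train()
    # Evaluation loss per setting (supply compute_metrics to record accuracy).
    results[lr] = trainer.evaluate()

print(results)
```

Each setting trains a fresh copy of the model so the runs remain comparable; the recorded evaluation results can then be plotted against the varied parameter to study its effect on the final accuracy.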