Compressing pre-trained models of code into 3 MB

Bibliographic Details
Main Authors: SHI, Jieke; YANG, Zhou; XU, Bowen; KANG, Hong Jin; LO, David
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Subjects: Model compression; Genetic algorithm; Pre-trained models; Software Engineering
Online Access:https://ink.library.smu.edu.sg/sis_research/7725
https://ink.library.smu.edu.sg/context/sis_research/article/8728/viewcontent/3551349.3556964_pvoa.pdf
Institution: Singapore Management University
Collection: Research Collection School Of Computing and Information Systems (InK@SMU)
DOI: 10.1145/3551349.3556964
License: http://creativecommons.org/licenses/by/4.0/
Date: 2022-10-01
Description:
Although large pre-trained models of code have delivered significant advancements in various code processing tasks, there is an impediment to their wide and fluent adoption in software developers’ daily workflow: these large models consume hundreds of megabytes of memory and run slowly on personal devices, which causes problems in model deployment and greatly degrades the user experience. This motivates us to propose Compressor, a novel approach that compresses pre-trained models of code into extremely small models with negligible performance sacrifice. Our method formulates the design of tiny models as a simplification of the pre-trained model architecture: searching for a significantly smaller model that follows an architectural design similar to the original pre-trained model. Compressor uses a genetic algorithm (GA)-based strategy to guide the simplification process. Prior studies found that a model with higher computational cost tends to be more powerful; inspired by this insight, the GA is designed to maximize a model’s giga floating-point operations (GFLOPs), an indicator of computational cost, while satisfying the constraint on the target model size. We then use knowledge distillation to train the small model: unlabelled data is fed into the large model, and its outputs are used as labels to train the small model. We evaluate Compressor with two state-of-the-art pre-trained models, i.e., CodeBERT and GraphCodeBERT, on two important tasks, i.e., vulnerability prediction and clone detection. Our method compresses the pre-trained models to 3 MB, which is 160× smaller than their original size. The results show that the compressed CodeBERT and GraphCodeBERT are 4.31× and 4.15× faster than the original models at inference, respectively. More importantly, they maintain 96.15% and 97.74% of the original performance on the vulnerability prediction task, and even higher ratios (99.20% and 97.52%) of the original performance on the clone detection task.
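The search step described above (a GA that maximizes GFLOPs under a size budget) can be illustrated with a minimal sketch. The code below is a hypothetical, simplified rendering only: the search space, the parameter-count formula, and the FLOPs estimate are assumptions made for illustration and are not taken from the paper.

import random

TARGET_SIZE_MB = 3.0   # size budget for the compressed model
SEQ_LEN = 256          # assumed input sequence length for the FLOPs estimate
VOCAB = 1000           # assumed (reduced) vocabulary size of the student

# Candidate architecture = (num_layers, hidden_size, num_heads, ffn_size)
LAYERS = [1, 2, 3, 4, 6]
HIDDEN = [16, 32, 64, 96, 128, 256]
HEADS = [1, 2, 4, 8]
FFN = [32, 64, 128, 256, 512]

def param_count(l, h, heads, ffn):
    """Rough BERT-style parameter count: embeddings + l transformer layers."""
    per_layer = 4 * h * h + 2 * h * ffn   # attention projections + feed-forward
    return VOCAB * h + l * per_layer

def size_mb(l, h, heads, ffn):
    return param_count(l, h, heads, ffn) * 4 / 2**20   # float32 weights

def gflops(l, h, heads, ffn):
    """Very rough per-sequence FLOPs estimate (matrix multiplications only)."""
    per_layer = 2 * SEQ_LEN * (4 * h * h + 2 * h * ffn) + 2 * SEQ_LEN * SEQ_LEN * h
    return l * per_layer / 1e9

def fitness(ind):
    """Maximize GFLOPs, but reject candidates that exceed the size budget."""
    l, h, heads, ffn = ind
    if h % heads != 0 or size_mb(l, h, heads, ffn) > TARGET_SIZE_MB:
        return -1.0
    return gflops(l, h, heads, ffn)

def random_individual():
    return (random.choice(LAYERS), random.choice(HIDDEN),
            random.choice(HEADS), random.choice(FFN))

def crossover(a, b):
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(ind, rate=0.2):
    pools = (LAYERS, HIDDEN, HEADS, FFN)
    return tuple(random.choice(p) if random.random() < rate else g
                 for g, p in zip(ind, pools))

def search(pop_size=50, generations=100):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]             # keep the fitter half
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = search()
print("best (layers, hidden, heads, ffn):", best,
      f"~{size_mb(*best):.2f} MB, ~{gflops(*best):.3f} GFLOPs/seq")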
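The training step (unlabelled data fed to the large model, its outputs used as labels for the small model) is standard knowledge distillation. The PyTorch-style sketch below assumes that `teacher` and `student` are classification models returning logits and that `unlabeled_loader` yields batches of token IDs and attention masks; the temperature-scaled KL-divergence loss shown is one common formulation and not necessarily the exact loss used by Compressor.

import torch
import torch.nn.functional as F

def distill(teacher, student, unlabeled_loader, epochs=3, temperature=2.0, lr=5e-4):
    """Train the small student on the teacher's predictions over unlabelled data."""
    teacher.eval()                                   # the teacher stays frozen
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for input_ids, attention_mask in unlabeled_loader:
            with torch.no_grad():                    # teacher outputs act as soft labels
                t_logits = teacher(input_ids, attention_mask)
            s_logits = student(input_ids, attention_mask)
            # Soften both distributions and match them with KL divergence.
            loss = F.kl_div(
                F.log_softmax(s_logits / temperature, dim=-1),
                F.softmax(t_logits / temperature, dim=-1),
                reduction="batchmean",
            ) * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student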