Greening large language models of code

Large language models of code have shown remarkable effectiveness across various software engineering tasks. Despite the availability of many cloud services built upon these powerful models, there remain several scenarios where developers cannot take full advantage of them, stemming from factors such as restricted or unreliable internet access, institutional privacy policies that prohibit external transmission of code to third-party vendors, and more.

Full description

Saved in:
Bibliographic Details
Main Authors: SHI, Jieke, YANG, Zhou, KANG, Hong Jin, XU, Bowen, HE, Junda, LO, David
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Language Models of Code; Configuration Tuning; Multi-Objective Optimization; Programming Languages and Compilers; Software Engineering
Online Access:https://ink.library.smu.edu.sg/sis_research/9249
https://ink.library.smu.edu.sg/context/sis_research/article/10249/viewcontent/3639475.3640097.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10249
record_format dspace
spelling sg-smu-ink.sis_research-10249 2024-09-02T06:40:47Z Greening large language models of code SHI, Jieke YANG, Zhou KANG, Hong Jin XU, Bowen HE, Junda LO, David 2024-04-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9249 info:doi/10.1145/3639475.3640097 https://ink.library.smu.edu.sg/context/sis_research/article/10249/viewcontent/3639475.3640097.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Language Models of Code Configuration Tuning Multi-Objective Optimization Programming Languages and Compilers Software Engineering
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Language Models of Code
Configuration Tuning
Multi-Objective Optimization
Programming Languages and Compilers
Software Engineering
description Large language models of code have shown remarkable effectiveness across various software engineering tasks. Despite the availability of many cloud services built upon these powerful models, there remain several scenarios where developers cannot take full advantage of them, stemming from factors such as restricted or unreliable internet access, institutional privacy policies that prohibit external transmission of code to third-party vendors, and more. Therefore, developing a compact, efficient, and yet energy-saving model for deployment on developers' devices becomes essential. To this aim, we propose Avatar, a novel approach that crafts a deployable model from a large language model of code by optimizing it in terms of model size, inference latency, energy consumption, and carbon footprint while maintaining a comparable level of effectiveness (e.g., prediction accuracy on downstream tasks). The key idea of Avatar is to formulate the optimization of language models as a multi-objective configuration tuning problem and solve it with the help of a Satisfiability Modulo Theories (SMT) solver and a tailored optimization algorithm. The SMT solver is used to form an appropriate configuration space, while the optimization algorithm identifies the Pareto-optimal set of configurations for training the optimized models using knowledge distillation. We evaluate Avatar with two popular language models of code, i.e., CodeBERT and GraphCodeBERT, on two popular tasks, i.e., vulnerability prediction and clone detection. We use Avatar to produce optimized models with a small size (3 MB), which is 160× smaller than the original large models. On the two tasks, the optimized models significantly reduce the energy consumption (up to 184× less), carbon footprint (up to 157× less), and inference latency (up to 76× faster), with only a negligible loss in effectiveness (1.67%).
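The description above outlines a two-stage idea: an SMT solver forms a valid configuration space under deployment constraints, and an optimization algorithm keeps the Pareto-optimal configurations before knowledge distillation. As a rough illustration of those two building blocks only, and not the paper's implementation, the Python sketch below uses the Z3 solver to find a configuration that fits an assumed 3 MB size budget and then filters evaluated configurations to their Pareto front. The hyperparameter names, value ranges, size formula, and objective keys are all illustrative assumptions.

```python
# Illustrative sketch only: hyperparameter names, ranges, the crude size
# formula, and the 3 MB budget are assumptions for demonstration, not the
# configuration space actually used by Avatar.
from z3 import And, Int, Solver, sat


def sample_valid_config(budget_bytes=3 * 1024 * 1024):
    """Ask an SMT solver for one configuration that satisfies the size budget."""
    h = Int("hidden_size")
    layers = Int("num_layers")
    v = Int("vocab_size")
    s = Solver()
    # Domain constraints on the (hypothetical) tunable hyperparameters.
    s.add(And(h >= 16, h <= 256),
          And(layers >= 1, layers <= 12),
          And(v >= 1000, v <= 50000))
    # Crude size model: embedding table plus ~12*h*h weights per layer, 4 bytes each.
    s.add(4 * (v * h + layers * 12 * h * h) <= budget_bytes)
    if s.check() == sat:
        m = s.model()
        return {"hidden_size": m[h].as_long(),
                "num_layers": m[layers].as_long(),
                "vocab_size": m[v].as_long()}
    return None  # no admissible configuration found (or the solver gave up)


def dominates(a, b, keys=("error", "latency_ms", "energy_j")):
    """a dominates b if it is no worse on every objective and strictly better on one."""
    return all(a[k] <= b[k] for k in keys) and any(a[k] < b[k] for k in keys)


def pareto_front(evaluated_configs):
    """Keep only the non-dominated configurations (lower is better on all objectives)."""
    return [c for c in evaluated_configs
            if not any(dominates(o, c) for o in evaluated_configs)]
```

In the approach described above, configurations admitted by the solver would then be used to train compact models via knowledge distillation and compared on effectiveness, latency, energy, and carbon objectives; the Pareto filter here only sketches how non-dominated configurations could be kept.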
format text
author SHI, Jieke
YANG, Zhou
KANG, Hong Jin
XU, Bowen
HE, Junda
LO, David
author_sort SHI, Jieke
title Greening large language models of code
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9249
https://ink.library.smu.edu.sg/context/sis_research/article/10249/viewcontent/3639475.3640097.pdf