Compressing pre-trained models of code into 3 MB
Although large pre-trained models of code have delivered significant advancements in various code processing tasks, there is an impediment to the wide and fluent adoption of these powerful models in software developers’ daily workflow: these large models consume hundreds of megabytes of memory and r...
| Main Authors: | SHI, Jieke; YANG, Zhou; XU, Bowen; KANG, Hong Jin; LO, David |
| --- | --- |
| Format: | text |
| Language: | English |
| Published: | Institutional Knowledge at Singapore Management University, 2022 |
| Online Access: | https://ink.library.smu.edu.sg/sis_research/7725 https://ink.library.smu.edu.sg/context/sis_research/article/8728/viewcontent/3551349.3556964_pvoa.pdf |
| Institution: | Singapore Management University |
Similar Items
- Natural attack for pre-trained models of code, by: YANG, Zhou, et al. Published: (2022)
- PTM4Tag: sharpening tag recommendation of stack overflow posts with pre-trained models, by: HE, Junda, et al. Published: (2022)
- Retrieval based code summarisation using code pre-trained models, by: Gupta, Sahaj. Published: (2024)
- Sentiment analysis for software engineering: How far can pre-trained transformer models go?, by: ZHANG, Ting, et al. Published: (2020)
- Stealthy backdoor attack for code models, by: YANG, Zhou, et al. Published: (2024)