FT2Ra: A fine-tuning-inspired approach to retrieval-augmented code completion
The rise of code pre-trained models has significantly enhanced various coding tasks, such as code completion, and underpins tools like GitHub Copilot. However, the substantial size of these models, especially large models, poses a significant challenge when it comes to fine-tuning them for specific downstream...
Main Authors: GUO, Qi; LIU, Shangqing; XIE, Xiaofei; TANG, Ze
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9444
https://ink.library.smu.edu.sg/context/sis_research/article/10444/viewcontent/FT2Ra__A_Fine_Tuning_Inspired_Approach_to_Retrieval_Augmented_Code_Completion.pdf
Institution: Singapore Management University
Similar Items
- SIBO: A simple booster for parameter-efficient fine-tuning
  by: WEN, Zhihao, et al.
  Published: (2024)
- MCQGen: a large language model-driven MCQ generator for personalized learning
  by: Hang, Ching Nam, et al.
  Published: (2024)
- On the usage of continual learning for out-of-distribution generalization in pre-trained language models of code
  by: WEYSSOW, Martin, et al.
  Published: (2023)
- Evaluating the carbon footprint of code implementation
  by: Tar, Sreeja
  Published: (2024)
- Mitigating fine-grained hallucination by fine-tuning large vision-language models with caption rewrites
  by: WANG, Lei, et al.
  Published: (2024)