SIBO: A simple booster for parameter-efficient fine-tuning
Fine-tuning all parameters of large language models (LLMs) necessitates substantial computational power and extended time. Latest advancements in parameter-efficient fine-tuning (PEFT) techniques, such as Adapter tuning and LoRA, allow for adjustments to only a minor fraction of the parameters of th...
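To make the abstract's point concrete, below is a minimal sketch of LoRA-style low-rank adaptation, one of the PEFT techniques the abstract names. The class name, rank, and scaling values are illustrative assumptions for a single linear layer; this is not the SIBO method itself, only an example of tuning a minor fraction of the parameters.

```python
# Minimal sketch of LoRA-style parameter-efficient fine-tuning.
# Names, rank, and scaling are illustrative assumptions, not details from the SIBO paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the low-rank correction; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Only the low-rank factors are trainable, i.e. a minor fraction of all parameters.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")
```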
Main Authors: WEN, Zhihao; ZHANG, Jie; FANG, Yuan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9624
https://ink.library.smu.edu.sg/context/sis_research/article/10624/viewcontent/ACL24Findings_SIBO__1_.pdf
Institution: Singapore Management University
Similar Items
- Automated Parameter Tuning Framework for Heterogeneous and Large Instances: Case study in Quadratic Assignment Problem
  by: LINDAWATI, Linda, et al.
  Published: (2013)
- Fine-tuning algorithm parameters using the design of experiments approach
  by: GUNAWAN, Aldy, et al.
  Published: (2011)
- Thoughts to target: enhance planning for target-driven conversation
  by: ZHENG, Zhonghua, et al.
  Published: (2024)
- A survey of ontology expansion for conversational understanding
  by: LIANG, Jinggui, et al.
  Published: (2024)
- Ask-before-plan: proactive language agents for real-world planning
  by: ZHANG, Xuan, et al.
  Published: (2024)