LLM-Adapters: An adapter family for parameter-efficient fine-tuning of large language models
The success of large language models (LLMs), like GPT-4 and ChatGPT, has led to the development of numerous cost-effective and accessible alternatives that are created by fine-tuning open-access LLMs with task-specific data (e.g., ChatDoctor) or instruction data (e.g., Alpaca). Among the various fine...
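The abstract is truncated in this record, but the paper concerns adapter-based parameter-efficient fine-tuning of open-access LLMs. As a minimal illustrative sketch of that general technique (not code from the paper), assuming the HuggingFace transformers and peft libraries with an arbitrary stand-in base model and made-up hyperparameters:

    # Minimal LoRA adapter setup; the model name and hyperparameters are
    # illustrative assumptions, not values taken from the paper.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base LLM

    # LoRA injects small trainable low-rank matrices; base weights stay frozen.
    config = LoraConfig(
        r=8,                        # rank of the low-rank update
        lora_alpha=16,              # scaling factor applied to the update
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # only a tiny fraction is trainable

Training then proceeds as usual (e.g., with the standard transformers Trainer); only the adapter parameters receive gradient updates.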
Main Authors: HU, Zhiqiang; WANG, Lei; LAN, Yihuai; XU, Wanyu; LIM, Ee-peng; BING, Lidong; XU, Xing; PORIA, Soujanya; LEE, Roy Ka-Wei
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access: https://ink.library.smu.edu.sg/sis_research/8324 https://ink.library.smu.edu.sg/context/sis_research/article/9327/viewcontent/2304.01933.pdf
Institution: Singapore Management University
Similar Items
- Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models, by WANG, Lei, et al. Published: (2023)
- Towards LLM-based fact verification on news claims with a hierarchical step-by-step prompting method, by ZHANG, Xuan, et al. Published: (2023)
- Scaling human activity recognition via deep learning-based domain adaptation, by KHAN, Md Abdullah Hafiz, et al. Published: (2018)
- MolCA: Molecular graph-language modeling with cross-modal projector and uni-modal adapter, by LIU, Zhiyuan, et al. Published: (2023)
- PrivacyCanary: Privacy-aware recommenders with adaptive input obfuscation, by KANDAPPU, Thivya, et al. Published: (2015)