LLM-adapters: An adapter family for parameter-efficient fine-tuning of large language models

The success of large language models (LLMs), such as GPT-4 and ChatGPT, has led to the development of numerous cost-effective and accessible alternatives created by fine-tuning open-access LLMs with task-specific data (e.g., ChatDoctor) or instruction data (e.g., Alpaca). Among the various fine...
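The record's abstract concerns adapter-based parameter-efficient fine-tuning, in which a small number of trainable parameters are added alongside frozen pretrained weights. As a rough illustration of the general idea (a minimal sketch of a LoRA-style low-rank adapter, not the paper's own code; the class name and hyperparameters here are hypothetical), a frozen linear layer can be augmented with a trainable low-rank update:

```python
import numpy as np

class LoRALinear:
    """Hypothetical sketch: a frozen linear layer plus a low-rank adapter.

    The adapted layer computes y = x W^T + (alpha / r) * x A^T B^T,
    where W is frozen and only the small matrices A and B are trained.
    """

    def __init__(self, in_features, out_features, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stand-in for an LLM layer).
        self.W = rng.standard_normal((out_features, in_features))
        # Trainable low-rank factors: A is small random, B starts at zero,
        # so the adapter initially leaves the layer's output unchanged.
        self.A = rng.standard_normal((r, in_features)) * 0.01
        self.B = np.zeros((out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        base = x @ self.W.T
        update = (x @ self.A.T) @ self.B.T * self.scaling
        return base + update

layer = LoRALinear(16, 8)
x = np.ones((2, 16))
y = layer.forward(x)
```

Because `B` is zero-initialized, the adapted layer reproduces the frozen layer's output before any training, a common design choice that keeps fine-tuning stable at the start.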

Bibliographic Details
Main Authors: HU, Zhiqiang, WANG, Lei, LAN, Yihuai, XU, Wanyu, LIM, Ee-peng, BING, Lidong, XU, Xing, PORIA, Soujanya, LEE, Roy Ka-Wei
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8324
https://ink.library.smu.edu.sg/context/sis_research/article/9327/viewcontent/2304.01933.pdf
Institution: Singapore Management University