Zero-to-strong generalization: eliciting strong capabilities of large language models iteratively without gold labels

Large Language Models (LLMs) have demonstrated remarkable performance through supervised fine-tuning or in-context learning using gold labels. However, this paradigm is limited by the availability of gold labels, while in certain scenarios, LLMs may need to perform tasks that are too complex for hum...


Bibliographic Details
Main Authors: Liu, Chaoqun; Chao, Qin; Zhang, Wenxuan; Wu, Xiaobao; Li, Boyang; Luu, Anh Tuan; Bing, Lidong
Other Authors: Interdisciplinary Graduate School (IGS)
Format: Conference or Workshop Item
Language: English
Published: 2024
Online Access:https://hdl.handle.net/10356/181455
https://coling2025.org/
Institution: Nanyang Technological University