Large language model is not a good few-shot information extractor, but a good reranker for hard samples!
Large Language Models (LLMs) have made remarkable strides in various tasks. However, whether they are competitive few-shot solvers for information extraction (IE) tasks and surpass fine-tuned small Pre-trained Language Models (SLMs) remains an open problem. This paper aims to provide a thorough answ...
Main Authors: MA, Yubo; CAO, Yixin; HONG, YongChin; SUN, Aixin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Online Access:
  https://ink.library.smu.edu.sg/sis_research/8388
  https://ink.library.smu.edu.sg/context/sis_research/article/9391/viewcontent/llmIE_emnlp_tbu.pdf
Institution: Singapore Management University
Similar Items
- Large language models for qualitative research in software engineering: exploring opportunities and challenges
  by: BANO, Muneera, et al.
  Published: (2024)
- Invariant training 2D-3D joint hard samples for few-shot point cloud recognition
  by: YI, Xuanyu, et al.
  Published: (2023)
- Few-shot event detection: An empirical study and a unified view
  by: MA, Yubo, et al.
  Published: (2023)
- Virtual prompt pre-training for prototype-based few-shot relation extraction
  by: He, Kai, et al.
  Published: (2023)
- A comprehensive evaluation of large language models on legal judgment prediction
  by: SHUI, Ruihao, et al.
  Published: (2023)