Large language model is not a good few-shot information extractor, but a good reranker for hard samples!

Large Language Models (LLMs) have made remarkable strides in various tasks. However, whether they are competitive few-shot solvers for information extraction (IE) tasks and whether they surpass fine-tuned small Pre-trained Language Models (SLMs) remain open questions. This paper aims to provide a thorough answer to these questions and, moreover, to explore an approach towards effective and economical IE systems that combine the strengths of LLMs and SLMs. Through extensive experiments on nine datasets across four IE tasks, we show that LLMs are not effective few-shot information extractors in general, given their unsatisfactory performance in most settings and their high latency and budget requirements. However, we demonstrate that LLMs can complement SLMs well and effectively solve the hard samples that SLMs struggle with. Building on these findings, we propose an adaptive filter-then-rerank paradigm, in which SLMs act as filters and LLMs act as rerankers. By using LLMs to rerank only the small portion of difficult samples identified by the SLMs, our preliminary system consistently achieves promising improvements (a 2.4% F1 gain on average) on various IE tasks, at an acceptable cost in time and money.
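The filter-then-rerank paradigm described in the abstract can be sketched in a few lines. The following is a minimal illustration under assumed interfaces, not the authors' implementation: an SLM scores candidate labels for each sample, confident predictions are accepted directly, and only low-confidence ("hard") samples, together with the SLM's top-k candidates, are passed to an LLM for reranking. The names `slm_predict` and `llm_rerank`, the threshold, and the value of `top_k` are hypothetical placeholders.

```python
# Minimal sketch of an adaptive filter-then-rerank pipeline (illustrative only;
# slm_predict, llm_rerank, threshold and top_k are assumed placeholders, not
# the paper's actual implementation).
from typing import Callable

def filter_then_rerank(
    samples: list[str],
    slm_predict: Callable[[str], list[tuple[str, float]]],  # SLM: (label, score) candidates
    llm_rerank: Callable[[str, list[str]], str],            # LLM: pick best label from a shortlist
    threshold: float = 0.9,   # confidence cut-off separating easy from hard samples
    top_k: int = 3,           # number of SLM candidates handed to the LLM
) -> list[str]:
    predictions: list[str] = []
    for text in samples:
        # SLM scores every candidate label for this sample.
        candidates = sorted(slm_predict(text), key=lambda c: c[1], reverse=True)
        best_label, best_score = candidates[0]
        if best_score >= threshold:
            # Easy sample: keep the SLM's confident prediction (SLM as filter).
            predictions.append(best_label)
        else:
            # Hard sample: the LLM reranks the SLM's top-k candidates (LLM as reranker).
            shortlist = [label for label, _ in candidates[:top_k]]
            predictions.append(llm_rerank(text, shortlist))
    return predictions
```

In a setup like this, the threshold would be tuned so that only a small fraction of samples ever reaches the LLM, which is what keeps the latency and monetary cost of the combined system acceptable.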

Bibliographic Details
Main Authors: MA, Yubo; CAO, Yixin; HONG, YongChin; SUN, Aixin
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Collection: Research Collection School Of Computing and Information Systems
Subjects: LLMs; Information extraction; Databases and Information Systems; Programming Languages and Compilers
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Online Access: https://ink.library.smu.edu.sg/sis_research/8388
https://ink.library.smu.edu.sg/context/sis_research/article/9391/viewcontent/llmIE_emnlp_tbu.pdf
Institution: Singapore Management University