Prompt to be consistent is better than self-consistent? Few-shot and zero-shot fact verification with pre-trained language models

Few-shot or zero-shot fact verification relies only on a few or no labeled training examples. In this paper, we propose a novel method called ProToCo, to Prompt pre-trained language models (PLMs) To be Consistent, for improving the factuality assessment capability of PLMs in the few-shot and zero-shot settings. Given a claim-evidence pair, ProToCo generates multiple variants of the claim with different relations and frames a simple consistency mechanism as constraints for making compatible predictions across these variants. We update PLMs by using parameter-efficient fine-tuning (PEFT), leading to more accurate predictions in few-shot and zero-shot fact verification tasks. Our experiments on three public verification datasets show that ProToCo significantly outperforms state-of-the-art few-shot fact verification baselines. With a small number of unlabeled instances, ProToCo also outperforms the strong zero-shot learner T0 on zero-shot verification. Compared to large PLMs using the in-context learning (ICL) method, ProToCo outperforms OPT-30B and the Self-Consistency-enabled OPT-6.7B model in both few- and zero-shot settings.
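To make the consistency idea in the abstract concrete, below is a minimal Python sketch of how predictions over claim variants (a confirmation and a negation of the original claim) can be checked for logical compatibility. The label set, variant templates, and constraint table are assumptions made for this illustration only; the authors' actual method turns such constraints into a training signal for parameter-efficient fine-tuning rather than a hard filter.

```python
# Hypothetical illustration of the consistency idea described in the abstract.
# Label names, variant templates, and the constraint table are assumptions
# for this sketch, not the authors' implementation.

LABELS = ("SUPPORTED", "REFUTED", "NOT_ENOUGH_INFO")


def make_variants(claim: str) -> dict:
    """Build simple claim variants with different logical relations.

    ProToCo generates such variants from the original claim; the exact
    templates used here are placeholders for illustration.
    """
    body = claim.rstrip(".")
    return {
        "original": claim,
        "confirmation": f"It is true that {body}.",
        "negation": f"It is not true that {body}.",
    }


def consistent(pred: dict) -> bool:
    """Check whether predictions on the variants are logically compatible.

    For example, if the original claim is SUPPORTED, its confirmation should
    also be SUPPORTED and its negation REFUTED.
    """
    o, c, n = pred["original"], pred["confirmation"], pred["negation"]
    if o == "SUPPORTED":
        return c == "SUPPORTED" and n == "REFUTED"
    if o == "REFUTED":
        return c == "REFUTED" and n == "SUPPORTED"
    # NOT_ENOUGH_INFO on the original should carry over to the variants.
    return c == "NOT_ENOUGH_INFO" and n == "NOT_ENOUGH_INFO"


if __name__ == "__main__":
    variants = make_variants("The Eiffel Tower is located in Paris.")
    # In the actual method, a PLM prompted with each variant plus the
    # evidence would produce these labels; here they are hard-coded.
    preds_ok = {"original": "SUPPORTED", "confirmation": "SUPPORTED", "negation": "REFUTED"}
    preds_bad = {"original": "SUPPORTED", "confirmation": "REFUTED", "negation": "REFUTED"}
    print(variants)
    print(consistent(preds_ok))   # True
    print(consistent(preds_bad))  # False
```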

Bibliographic Details
Main Authors: ZENG, Fengzhu, GAO, Wei
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects: few-shot fact verification; zero-shot fact verification; ProToCo; pre-trained language models; factuality assessment; claim-evidence pair; consistency mechanism; parameter-efficient fine-tuning (PEFT); in-context learning (ICL); Computer Sciences
Online Access:https://ink.library.smu.edu.sg/sis_research/8452
https://ink.library.smu.edu.sg/context/sis_research/article/9455/viewcontent/2023.findings_acl.278.pdf
Institution: Singapore Management University
DOI: 10.18653/v1/2023.findings-acl.278
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems