Collective prompt tuning with relation inference for document-level relation extraction

Document-level relation extraction (RE) aims to extract relations between entities that may span multiple sentences. Existing methods mainly rely on two types of techniques: pre-trained language models (PLMs) and reasoning skills. Although various reasoning methods have been proposed, how to elicit learned...

Bibliographic Details
Main Authors: YUAN, Changsen, CAO, Yixin, HUANG, Heyan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/8298
Institution: Singapore Management University
id sg-smu-ink.sis_research-9301
record_format dspace
spelling sg-smu-ink.sis_research-9301 2023-11-10T01:48:03Z Collective prompt tuning with relation inference for document-level relation extraction YUAN, Changsen CAO, Yixin HUANG, Heyan Document-level relation extraction (RE) aims to extract relations between entities that may span multiple sentences. Existing methods mainly rely on two types of techniques: pre-trained language models (PLMs) and reasoning skills. Although various reasoning methods have been proposed, how to elicit learned factual knowledge from PLMs for better reasoning has not yet been explored. In this paper, we propose a novel Collective Prompt Tuning with Relation Inference (CPT-RI) method for document-level RE, which improves upon existing models in two aspects. First, considering the long input and the variety of templates, we adopt a collective prompt tuning method with an update-and-reuse strategy: a generic prompt is first encoded and then updated with exact entity pairs to form relation-specific prompts. Second, we introduce a relation inference module that conducts global reasoning over all relation prompts via constrained semantic segmentation. Extensive experiments on three publicly available benchmark datasets demonstrate the effectiveness of our proposed CPT-RI compared to the baseline model ATLOP (Zhou et al., 2021), improving F1 by 0.57% on DocRED, 2.20% on CDR, and 2.30% on GDA. Further ablation studies verify the effects of the collective prompt tuning and relation inference modules. 2023-09-30T07:00:00Z text https://ink.library.smu.edu.sg/sis_research/8298 info:doi/10.1016/j.ipm.2023.103451 Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Natural language processing Document-level relation extraction Prompt-tuning Various templates Global reasoning Artificial Intelligence and Robotics Numerical Analysis and Scientific Computing
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Natural language processing
Document-level relation extraction
Prompt-tuning
Various templates
Global reasoning
Artificial Intelligence and Robotics
Numerical Analysis and Scientific Computing
spellingShingle Natural language processing
Document-level relation extraction
Prompt-tuning
Various templates
Global reasoning
Artificial Intelligence and Robotics
Numerical Analysis and Scientific Computing
YUAN, Changsen
CAO, Yixin
HUANG, Heyan
Collective prompt tuning with relation inference for document-level relation extraction
description Document-level relation extraction (RE) aims to extract relations between entities that may span multiple sentences. Existing methods mainly rely on two types of techniques: pre-trained language models (PLMs) and reasoning skills. Although various reasoning methods have been proposed, how to elicit learned factual knowledge from PLMs for better reasoning has not yet been explored. In this paper, we propose a novel Collective Prompt Tuning with Relation Inference (CPT-RI) method for document-level RE, which improves upon existing models in two aspects. First, considering the long input and the variety of templates, we adopt a collective prompt tuning method with an update-and-reuse strategy: a generic prompt is first encoded and then updated with exact entity pairs to form relation-specific prompts. Second, we introduce a relation inference module that conducts global reasoning over all relation prompts via constrained semantic segmentation. Extensive experiments on three publicly available benchmark datasets demonstrate the effectiveness of our proposed CPT-RI compared to the baseline model ATLOP (Zhou et al., 2021), improving F1 by 0.57% on DocRED, 2.20% on CDR, and 2.30% on GDA. Further ablation studies verify the effects of the collective prompt tuning and relation inference modules.
format text
author YUAN, Changsen
CAO, Yixin
HUANG, Heyan
author_facet YUAN, Changsen
CAO, Yixin
HUANG, Heyan
author_sort YUAN, Changsen
title Collective prompt tuning with relation inference for document-level relation extraction
title_short Collective prompt tuning with relation inference for document-level relation extraction
title_full Collective prompt tuning with relation inference for document-level relation extraction
title_fullStr Collective prompt tuning with relation inference for document-level relation extraction
title_full_unstemmed Collective prompt tuning with relation inference for document-level relation extraction
title_sort collective prompt tuning with relation inference for document-level relation extraction
publisher Institutional Knowledge at Singapore Management University
publishDate 2023
url https://ink.library.smu.edu.sg/sis_research/8298
_version_ 1783955685037309952