Discriminative reasoning with sparse event representation for document-level event-event relation extraction
Main Authors: | , , , |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2023 |
Online Access: | https://ink.library.smu.edu.sg/sis_research/8288 https://ink.library.smu.edu.sg/context/sis_research/article/9291/viewcontent/Discriminative_reasoning_with_sparse_event_representation_for_document_level_event_event_relation_extraction.pdf |
Institution: | Singapore Management University |
Summary: | Document-level Event-Event Relation Extraction (DERE) aims to extract relations between events in a document. Unlike the conventional sentence-level task (SERE), it demands difficult long-text understanding. In this paper, we propose a novel DERE model (SENDIR) for better document-level reasoning. Unlike existing works that build an event graph via linguistic tools, SENDIR requires no prior knowledge. The basic idea is to discriminate between event pairs in the same sentence and those spanning multiple sentences by assuming their different information densities: 1) Low density in the document calls for sparse attention to skip irrelevant information; Module 1 designs various types of attention for event representation learning to capture long-distance dependencies. 2) High density within a sentence makes SERE relatively easy; Module 2 uses different weights to highlight the roles and contributions of intra- and inter-sentential reasoning, introducing supportive event pairs for joint modeling. Extensive experiments demonstrate the great improvements of SENDIR and the effectiveness of various sparse attention mechanisms for document-level representations. Code will be released later. © 2023 Association for Computational Linguistics. |
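The summary's core idea, sparse attention that keeps dense same-sentence links while allowing only selective long-distance links between event mentions, can be illustrated with a minimal sketch. This is a hypothetical toy mask builder, not the authors' released code; the function name, inputs, and the specific attention pattern are illustrative assumptions.

```python
def sparse_event_attention_mask(sent_ids, event_positions):
    """Build a boolean token-to-token attention mask (hypothetical sketch).

    sent_ids: sentence index for each token in the document.
    event_positions: token positions of event mentions.

    Tokens in the same sentence attend to each other (high local density);
    event-mention tokens additionally attend to all other event mentions,
    giving sparse long-distance links across sentences.
    """
    n = len(sent_ids)
    events = set(event_positions)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Local attention: tokens within the same sentence see each other.
            same_sentence = sent_ids[i] == sent_ids[j]
            # Sparse global attention: event mentions see each other
            # even when they lie in different sentences.
            event_link = i in events and j in events
            mask[i][j] = same_sentence or event_link
    return mask

# Toy document: 6 tokens in 2 sentences; event mentions at tokens 1 and 4.
mask = sparse_event_attention_mask([0, 0, 0, 1, 1, 1], [1, 4])
assert mask[1][4] and mask[4][1]   # cross-sentence event-event link kept
assert not mask[0][5]              # non-event cross-sentence attention skipped
```

In practice such a mask would gate a transformer's attention scores, so inter-sentential event pairs still exchange information while most irrelevant long-range token pairs are pruned.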