Learning relation prototype from unlabeled texts for long-tail relation extraction

Relation Extraction (RE) is a vital step in completing Knowledge Graphs (KGs) by extracting entity relations from texts. However, it usually suffers from the long-tail issue: the training data concentrates on a few types of relations, leaving insufficient annotations for the remaining types. In this paper, we propose a general approach that learns relation prototypes from unlabeled texts to facilitate long-tail relation extraction by transferring knowledge from relation types with sufficient training data.


Bibliographic Details
Main Authors: CAO, Yixin, KUANG, Jun, GAO, Ming, ZHOU, Aoying, WEN, Yonggang, CHUA, Tat-Seng
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2023
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/7319
https://ink.library.smu.edu.sg/context/sis_research/article/8322/viewcontent/2011.13574.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-8322
record_format dspace
spelling sg-smu-ink.sis_research-8322 2024-02-28T00:41:49Z Learning relation prototype from unlabeled texts for long-tail relation extraction
CAO, Yixin; KUANG, Jun; GAO, Ming; ZHOU, Aoying; WEN, Yonggang; CHUA, Tat-Seng
Relation Extraction (RE) is a vital step in completing Knowledge Graphs (KGs) by extracting entity relations from texts. However, it usually suffers from the long-tail issue: the training data concentrates on a few types of relations, leaving insufficient annotations for the remaining types. In this paper, we propose a general approach that learns relation prototypes from unlabeled texts to facilitate long-tail relation extraction by transferring knowledge from relation types with sufficient training data. We learn relation prototypes as an implicit factor between entities, which reflects both the meanings of relations and their proximities for transfer learning. Specifically, we construct a co-occurrence graph from texts and capture both first-order and second-order entity proximities for embedding learning. Based on this, we further optimize the distance from entity pairs to the corresponding prototypes, which can be easily adapted to almost arbitrary RE frameworks. Thus, the learning of infrequent or even unseen relation types benefits from semantically proximate relations through pairs of entities and large-scale textual information. We have conducted extensive experiments on two publicly available datasets: New York Times and Google Distant Supervision. Compared with eight state-of-the-art baselines, our proposed model achieves significant improvements (4.1% F1 on average). Further results on long-tail relations demonstrate the effectiveness of the learned relation prototypes. We also conduct an ablation study to investigate the impact of each component, and apply the approach to four basic relation extraction models to verify its generalization ability. Finally, we analyze several example cases as a qualitative analysis. Our code will be released later.
2023-02-01T08:00:00Z text application/pdf
https://ink.library.smu.edu.sg/sis_research/7319
info:doi/10.1109/TKDE.2021.3096200
https://ink.library.smu.edu.sg/context/sis_research/article/8322/viewcontent/2011.13574.pdf
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research Collection School Of Computing and Information Systems
eng
Institutional Knowledge at Singapore Management University
Annotations; Data mining; Knowledge Graph; long-tail; Prototype Learning; Prototypes; Relation Extraction; Training; Training data; Transfer learning; Urban areas; Databases and Information Systems; Data Storage Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Annotations
Data mining
Knowledge Graph
long-tail
Prototype Learning
Prototypes
Relation Extraction
Training
Training data
Transfer learning
Urban areas
Databases and Information Systems
Data Storage Systems
description Relation Extraction (RE) is a vital step in completing Knowledge Graphs (KGs) by extracting entity relations from texts. However, it usually suffers from the long-tail issue: the training data concentrates on a few types of relations, leaving insufficient annotations for the remaining types. In this paper, we propose a general approach that learns relation prototypes from unlabeled texts to facilitate long-tail relation extraction by transferring knowledge from relation types with sufficient training data. We learn relation prototypes as an implicit factor between entities, which reflects both the meanings of relations and their proximities for transfer learning. Specifically, we construct a co-occurrence graph from texts and capture both first-order and second-order entity proximities for embedding learning. Based on this, we further optimize the distance from entity pairs to the corresponding prototypes, which can be easily adapted to almost arbitrary RE frameworks. Thus, the learning of infrequent or even unseen relation types benefits from semantically proximate relations through pairs of entities and large-scale textual information. We have conducted extensive experiments on two publicly available datasets: New York Times and Google Distant Supervision. Compared with eight state-of-the-art baselines, our proposed model achieves significant improvements (4.1% F1 on average). Further results on long-tail relations demonstrate the effectiveness of the learned relation prototypes. We also conduct an ablation study to investigate the impact of each component, and apply the approach to four basic relation extraction models to verify its generalization ability. Finally, we analyze several example cases as a qualitative analysis. Our code will be released later.
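The core idea in the abstract — represent each relation by a prototype vector and optimize the distance from entity-pair embeddings to their relation's prototype — can be sketched minimally. The toy 2-D embeddings, the difference-vector pair encoding, and the mean-pooled prototype below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Toy 2-D entity embeddings, standing in for embeddings learned from a
# co-occurrence graph with first- and second-order proximities (assumed values).
entities = {
    "Paris": np.array([0.0, 0.0]), "France": np.array([1.0, 0.0]),
    "Tokyo": np.array([0.0, 1.0]), "Japan": np.array([1.0, 1.0]),
    "Einstein": np.array([5.0, 5.0]), "Ulm": np.array([5.0, 9.0]),
}

# Labeled (head, tail, relation) triples; "born_in" has a single
# example, standing in for a long-tail relation.
train = [
    ("Paris", "France", "capital_of"),
    ("Tokyo", "Japan", "capital_of"),
    ("Einstein", "Ulm", "born_in"),
]

def pair_vec(head, tail):
    # Encode an entity pair as the difference of its embeddings
    # (a TransE-style simplification, assumed for illustration).
    return entities[tail] - entities[head]

# A relation prototype: the mean of its entity-pair vectors.
prototypes = {
    rel: np.mean([pair_vec(h, t) for h, t, r in train if r == rel], axis=0)
    for rel in {r for _, _, r in train}
}

def prototype_distance(head, tail, rel):
    # The quantity an RE model would minimize for the gold relation.
    return float(np.linalg.norm(pair_vec(head, tail) - prototypes[rel]))

# A pair sits closer to its own relation's prototype than to another's.
print(prototype_distance("Paris", "France", "capital_of"))          # 0.0
print(prototype_distance("Paris", "France", "born_in") > 1.0)       # True
```

In the paper this distance is an auxiliary objective added to an existing RE model, which is why infrequent relations can borrow signal from semantically proximate, well-annotated ones.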
format text
author CAO, Yixin
KUANG, Jun
GAO, Ming
ZHOU, Aoying
WEN, Yonggang
CHUA, Tat-Seng
author_sort CAO, Yixin
title Learning relation prototype from unlabeled texts for long-tail relation extraction
title_sort learning relation prototype from unlabeled texts for long-tail relation extraction
publisher Institutional Knowledge at Singapore Management University
publishDate 2023
url https://ink.library.smu.edu.sg/sis_research/7319
https://ink.library.smu.edu.sg/context/sis_research/article/8322/viewcontent/2011.13574.pdf
_version_ 1794549717391114240