VEM2L: An easy but effective framework for fusing text and structure knowledge on sparse knowledge graph completion
The task of Knowledge Graph Completion (KGC) is to infer missing links for Knowledge Graphs (KGs) by analyzing graph structures. However, with increasing sparsity in KGs, this task becomes increasingly challenging. In this paper, we propose VEM2L, a joint learning framework that incorporates structure and relevant text information to supplement insufficient features for sparse KGs. We begin by training two pre-existing KGC models: one based on structure and the other based on text. Our ultimate goal is to fuse knowledge acquired by these models. To achieve this, we divide knowledge within the models into two non-overlapping parts: expressive power and generalization ability. We then propose two different joint learning methods that co-distill these two kinds of knowledge respectively. For expressive power, we allow each model to learn from and exchange knowledge mutually on training examples. For the generalization ability, we propose a novel co-distillation strategy using the Variational EM algorithm on unobserved queries. Our proposed joint learning framework is supported by both detailed theoretical evidence and qualitative experiments, demonstrating its effectiveness.
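The mutual learning on training examples described in the abstract is commonly realized as a symmetric distillation penalty between the two models' predicted distributions over candidate entities. The sketch below is a generic illustration of that idea only; the function names and the NumPy formulation are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two discrete distributions."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def co_distillation_loss(logits_struct, logits_text):
    """Symmetric KL term letting the structure-based and text-based
    models learn from each other's predictions on the same example."""
    p = softmax(np.asarray(logits_struct, dtype=float))
    q = softmax(np.asarray(logits_text, dtype=float))
    return 0.5 * (kl(p, q) + kl(q, p))
```

In a joint training loop, a term like this would be added to each model's task loss, so that agreement between the two views is rewarded without either model's predictions being treated as ground truth.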
Main Authors: HE, Tao; LIU, Ming; CAO, Yixin; QU, Meng; ZHENG, Zihao; QIN, Bing
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Centralized optimization; data-driven optimization; distributed optimization; evolutionary computation; privacy protection; Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/8664
Institution: Singapore Management University
id: sg-smu-ink.sis_research-9667
record_format: dspace
Date posted: 2024-01-01
DOI: 10.1007/s10618-023-01001-y
Collection: Research Collection School Of Computing and Information Systems
institution: Singapore Management University
building: SMU Libraries
continent: Asia
country: Singapore
content_provider: SMU Libraries
collection: InK@SMU
language: English
topic: Centralized optimization; data-driven optimization; distributed optimization; evolutionary computation; privacy protection; Databases and Information Systems
format: text
author: HE, Tao; LIU, Ming; CAO, Yixin; QU, Meng; ZHENG, Zihao; QIN, Bing
title: VEM2L: An easy but effective framework for fusing text and structure knowledge on sparse knowledge graph completion
publisher: Institutional Knowledge at Singapore Management University
publishDate: 2024
url: https://ink.library.smu.edu.sg/sis_research/8664
_version_: 1794549707683397632