MuLAN: multi-level attention-enhanced matching network for few-shot knowledge graph completion
Recent years have witnessed increasing interest in the few-shot knowledge graph completion due to its potential to augment the coverage of few-shot relations in knowledge graphs. Existing methods often use the one-hop neighbors of the entity to enhance its embedding and match the query instance and...
Main Authors: Li, Qianyu; Feng, Bozheng; Tang, Xiaoli; Yu, Han; Song, Hengjie
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2024
Subjects: Computer and Information Science; Knowledge graphs; Few-shot learning
Online Access: https://hdl.handle.net/10356/175818
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-175818
record_format: dspace
spelling:
Record ID: sg-ntu-dr.10356-175818 (record timestamp 2024-05-07T04:49:25Z)
Title: MuLAN: multi-level attention-enhanced matching network for few-shot knowledge graph completion
Authors: Li, Qianyu; Feng, Bozheng; Tang, Xiaoli; Yu, Han; Song, Hengjie
Affiliation: School of Computer Science and Engineering
Subjects: Computer and Information Science; Knowledge graphs; Few-shot learning
Funding: National Research Foundation (NRF). This work was supported, in part, by the National Natural Science Foundation of China [grant number 71671069]; the Pre-Research Foundation of China [grant number 61400010205]; the National Key Research and Development Program of China [grant number 2018YFC0830900]; National Research Foundation, Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-019); and the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund (No. A20G8b0102), Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the funding agencies.
Dates: deposited 2024-05-07T04:49:25Z; published 2024
Type: Journal Article
Citation: Li, Q., Feng, B., Tang, X., Yu, H. & Song, H. (2024). MuLAN: multi-level attention-enhanced matching network for few-shot knowledge graph completion. Neural Networks, 174, 106222. https://dx.doi.org/10.1016/j.neunet.2024.106222
ISSN: 0893-6080
Handle: https://hdl.handle.net/10356/175818
DOI: 10.1016/j.neunet.2024.106222
PubMed ID: 38442490
Scopus ID: 2-s2.0-85186650381
Volume / article number: 174 / 106222
Language: en
Grant numbers: AISG2-RP-2020-019; A20G8b0102
Journal: Neural Networks
Rights: © 2024 Elsevier Ltd. All rights reserved.
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Computer and Information Science; Knowledge graphs; Few-shot learning
description:
Recent years have witnessed increasing interest in the few-shot knowledge graph completion due to its potential to augment the coverage of few-shot relations in knowledge graphs. Existing methods often use the one-hop neighbors of the entity to enhance its embedding and match the query instance and support set at the instance level. However, such methods cannot handle inter-neighbor interaction, local entity matching and the varying significance of feature dimensions. To bridge this gap, we propose the Multi-Level Attention-enhanced matching Network (MuLAN) for few-shot knowledge graph completion. In MuLAN, a multi-head self-attention neighbor encoder is designed to capture the inter-neighbor interaction and learn the entity embeddings. Then, entity-level attention and instance-level attention are responsible for matching the query instance and support set from the local and global perspectives, respectively, while feature-level attention is utilized to calculate the weights of the feature dimensions. Furthermore, we design a consistency constraint to ensure the support instance embeddings are close to each other. Extensive experiments based on two well-known datasets (i.e., NELL-One and Wiki-One) demonstrate significant advantages of MuLAN over 11 state-of-the-art competitors. Compared to the best-performing baseline, MuLAN achieves 14.5% higher MRR and 13.3% higher Hits@K on average.
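The abstract describes two concrete mechanisms: a multi-head self-attention neighbor encoder that lets an entity's one-hop neighbors interact with each other before being aggregated into the entity embedding, and a consistency constraint that pulls the support-instance embeddings toward one another. The sketch below illustrates how such components could look in PyTorch. It is not the authors' implementation: the class and function names, embedding dimension, mean-pooling aggregation, and squared-distance-to-centroid penalty are all assumptions made for illustration, and the entity-, feature-, and instance-level attention matching modules are omitted.

```python
# Illustrative sketch only -- not the MuLAN authors' code. All names,
# dimensions, and design choices below are assumptions for exposition.
import torch
import torch.nn as nn


class NeighborEncoder(nn.Module):
    """Hypothetical neighbor encoder: enrich an entity embedding with
    multi-head self-attention over its one-hop neighbor embeddings."""

    def __init__(self, dim: int = 100, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, entity: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # neighbors: (batch, n_neighbors, dim). Self-attention lets every
        # neighbor attend to every other neighbor (inter-neighbor interaction).
        ctx, _ = self.attn(neighbors, neighbors, neighbors)
        pooled = ctx.mean(dim=1)  # assumed aggregation: mean over neighbors
        # Fuse the original entity embedding with its neighbor context.
        return self.proj(torch.cat([entity, pooled], dim=-1))


def consistency_loss(support_embs: torch.Tensor) -> torch.Tensor:
    """One way to keep support-instance embeddings close to each other:
    penalize their mean squared distance to the batch centroid. The exact
    formulation used in the paper may differ."""
    center = support_embs.mean(dim=0, keepdim=True)  # (1, dim)
    return ((support_embs - center) ** 2).sum(dim=-1).mean()


if __name__ == "__main__":
    encoder = NeighborEncoder(dim=100, num_heads=4)
    entity = torch.randn(5, 100)           # e.g., 5 support entities
    neighbors = torch.randn(5, 10, 100)    # 10 sampled one-hop neighbors each
    enriched = encoder(entity, neighbors)  # (5, 100) neighbor-enhanced embeddings
    # Treat the enriched embeddings as support-instance embeddings for the demo.
    print(enriched.shape, consistency_loss(enriched).item())
```

In a full few-shot completion pipeline, the neighbor-enhanced head and tail embeddings of the support and query triples would then be compared through the entity-, feature-, and instance-level attention matching described in the abstract.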
author2: School of Computer Science and Engineering
format: Article
author: Li, Qianyu; Feng, Bozheng; Tang, Xiaoli; Yu, Han; Song, Hengjie
title: MuLAN: multi-level attention-enhanced matching network for few-shot knowledge graph completion
publishDate: 2024
url: https://hdl.handle.net/10356/175818