Attention-based network embedding with higher-order weights and node attributes
Network embedding aims to learn a low-dimensional vector for each node in a network, which can be applied to diverse data mining tasks. In real life, many networks include rich attributes and temporal information. However, most existing embedding approaches ignore either temporal information or network attributes. This article presents a self-attention-based architecture that uses higher-order weights and node attributes for both static and temporal attributed network embedding.
Main Authors: | Mo, Xian; Wan, Binyuan; Tang, Rui; Ding, Junkai; Liu, Guangdi |
Format: | Article |
Published: | Wiley, 2024 |
Subjects: | QA75 Electronic computers. Computer science |
Online Access: | http://eprints.um.edu.my/46062/ |
Institution: | Universiti Malaya |
id |
my.um.eprints.46062 |
record_format |
eprints |
spelling |
my.um.eprints.46062 2024-07-17T02:00:12Z http://eprints.um.edu.my/46062/ Attention-based network embedding with higher-order weights and node attributes Mo, Xian; Wan, Binyuan; Tang, Rui; Ding, Junkai; Liu, Guangdi QA75 Electronic computers. Computer science Wiley 2024-04 Article PeerReviewed Mo, Xian and Wan, Binyuan and Tang, Rui and Ding, Junkai and Liu, Guangdi (2024) Attention-based network embedding with higher-order weights and node attributes. CAAI Transactions on Intelligence Technology, 9 (2). pp. 440-451. ISSN 2468-6557. DOI: 10.1049/cit2.12215 <https://doi.org/10.1049/cit2.12215> |
institution |
Universiti Malaya |
building |
UM Library |
collection |
Institutional Repository |
continent |
Asia |
country |
Malaysia |
content_provider |
Universiti Malaya |
content_source |
UM Research Repository |
url_provider |
http://eprints.um.edu.my/ |
topic |
QA75 Electronic computers. Computer science |
description |
Network embedding aims to learn a low-dimensional vector for each node in a network, which can be applied to diverse data mining tasks. In real life, many networks include rich attributes and temporal information. However, most existing embedding approaches ignore either temporal information or network attributes. This article presents a self-attention-based architecture that uses higher-order weights and node attributes for both static and temporal attributed network embedding. A random walk sampling algorithm based on higher-order weights and node attributes is presented to capture network topological features. For static attributed networks, the algorithm incorporates first-order to k-order weights and node attribute similarities into one weighted graph to preserve the topological features of the network. For temporal attributed networks, the algorithm incorporates previous snapshots of the network, containing first-order to k-order weights, together with node attribute similarities into one weighted graph. In addition, the algorithm utilises a damping factor to ensure that more recent snapshots are allocated greater weight. Attribute features are then incorporated into topological features. Next, the authors adopt Self-Attention Networks to learn node representations. Experimental results on node classification for static attributed networks and link prediction for temporal attributed networks reveal that the proposed approach is competitive against diverse state-of-the-art baseline approaches. |
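The description mentions folding each snapshot's first- to k-order weights into one weighted graph, with a damping factor so that more recent snapshots receive greater weight. A minimal sketch of that idea follows; the function name, the `theta ** age` damping schedule, and the use of adjacency-matrix powers for k-order weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def combine_snapshots(adjs, k=2, theta=0.5):
    """Merge temporal network snapshots into one weighted graph.

    `adjs` is a list of adjacency matrices, oldest first. Each snapshot
    contributes its first- to k-order weights (here approximated by sums
    of matrix powers), scaled by a damping factor so that more recent
    snapshots are allocated greater weight.
    """
    n = adjs[0].shape[0]
    merged = np.zeros((n, n))
    latest = len(adjs) - 1
    for t, A in enumerate(adjs):
        # First- to k-order weights: paths of length 1..k between nodes.
        higher = sum(np.linalg.matrix_power(A, p) for p in range(1, k + 1))
        # Damping: the newest snapshot gets weight 1, older ones decay.
        merged += theta ** (latest - t) * higher
    return merged
```

In this sketch the merged graph could then feed the random-walk sampler; attribute-similarity edges would be added to the same weighted graph before sampling.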
format |
Article |
author |
Mo, Xian Wan, Binyuan Tang, Rui Ding, Junkai Liu, Guangdi |
author_sort |
Mo, Xian |
title |
Attention-based network embedding with higher-order weights and node attributes |
publisher |
Wiley |
publishDate |
2024 |
url |
http://eprints.um.edu.my/46062/ |
_version_ |
1805881183939067904 |