Low-rank and global-representation-key-based attention for graph transformer
Transformer architectures have been applied to graph-structured data such as protein structures and shopper lists, and they perform well on graph/node classification and prediction tasks. Researchers have shown that the attention matrix in Transformers has low-rank properties, and the self-attention…
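The abstract snippet cuts off, but it alludes to the general idea of exploiting the low rank of the attention matrix. The following is a minimal sketch of that generic idea only, assuming a Linformer-style projection of keys and values down to rank k before the softmax; it is not the paper's specific global-representation-key construction, and the function and parameter names (low_rank_attention, P, k) are illustrative inventions.

```python
import torch

def low_rank_attention(Q, K, V, k=32):
    """Toy low-rank self-attention: compress the n keys/values to
    k << n rows before the softmax, so the usual n x n attention
    matrix is replaced by an n x k one (Linformer-style sketch)."""
    n, d = K.shape
    # Hypothetical fixed random projection; in practice this would be
    # a learned linear map (e.g. torch.nn.Linear(n, k, bias=False)).
    P = torch.randn(k, n) / n ** 0.5        # (k, n) projection
    K_low, V_low = P @ K, P @ V             # (k, d) compressed keys/values
    scores = Q @ K_low.T / d ** 0.5         # (n, k) instead of (n, n)
    return torch.softmax(scores, dim=-1) @ V_low   # (n, d) output

# Example: 1024 tokens, 64-dim head, rank-32 attention
Q, K, V = (torch.randn(1024, 64) for _ in range(3))
out = low_rank_attention(Q, K, V)           # shape (1024, 64)
```

The point of the sketch is the shape of the score matrix: attention cost drops from O(n^2 d) to O(nkd) once the keys and values are projected to k rows.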
Main Authors: Kong, Lingping; Ojha, Varun; Gao, Ruobin; Suganthan, Ponnuthurai Nagaratnam; Snášel, Václav
Other Authors: School of Civil and Environmental Engineering
Format: Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/170863
Institution: Nanyang Technological University
Similar Items
- On Point-Color-Symmetric Graphs and Groups with Point-Color-Symmetric Picture Representation
  by: Fernandez, Patrick John
  Published: (2020)
- ADVANCING GRAPH NEURAL NETWORKS WITH HL-HGAT: A HODGE-LAPLACIAN AND ATTENTION MECHANISM APPROACH FOR HETEROGENEOUS GRAPH-STRUCTURED DATA
  by: HUANG JINGHAN
  Published: (2024)
- On the feasibility of Simple Transformer for dynamic graph modeling
  by: WU, Yuxia, et al.
  Published: (2024)
- A class-aware representation refinement framework for graph classification
  by: Xu, Jiaxing, et al.
  Published: (2024)
- Towards effective graph representations by leveraging geometric concepts
  by: Lee, See Hian
  Published: (2024)