Rethinking the message passing for graph-level classification tasks in a category-based view


Bibliographic Details
Main Authors: Lei, Han; Xu, Jiaxing; Ni, Jinjie; Ke, Yiping
Other Authors: College of Computing and Data Science
Format: Article
Language: English
Published: 2025
Subjects:
Online Access:https://hdl.handle.net/10356/182539
Institution: Nanyang Technological University
Description
Summary: Message-Passing Neural Networks (MPNNs) have emerged as a popular framework for graph representation in recent years. However, the graph readout function in MPNNs often leads to significant information loss, resulting in performance degradation and computational waste for graph-level classification tasks. Despite the common explanation of “local information loss,” the underlying essence of this phenomenon and the information that the MPNN framework can capture in graph-level tasks have not been thoroughly analyzed. In this paper, we present a novel analysis of the MPNN framework in graph-level classification tasks from a node category-based perspective. Our analysis reveals that the graph-level embeddings learned by MPNNs essentially correspond to category-based contribution measurements. Building upon this insight, we propose a groundbreaking Category-Based Non-Message-Passing (CANON) paradigm for graph-level representation learning. By leveraging a novel numerical encoding mechanism, CANON achieves superior performance even without incorporating structural information, surpassing state-of-the-art MPNN methods. CANON also offers substantial computational advantages, including a model size that is hundreds of times smaller and reduced time complexity, enabling faster inference and reduced costs for real-world applications, particularly in domains such as chemistry, biology, and computer vision. To further enhance the method's effectiveness, we introduce domain-specific structural incorporation. Our extensive experiments across multiple datasets demonstrate CANON's efficacy and its potential to serve as a highly efficient alternative to MPNNs, opening up new possibilities for graph representation learning and its downstream applications.