DeepIS: Susceptibility estimation on social networks
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/6204
https://ink.library.smu.edu.sg/context/sis_research/article/7207/viewcontent/3437963.3441829.pdf
Institution: Singapore Management University
Summary: Influence diffusion estimation is a crucial problem in social network analysis. Most prior works mainly focus on predicting the total influence spread, i.e., the expected number of influenced nodes given an initial set of active nodes (aka. seeds). However, accurate estimation of susceptibility, i.e., the probability of being influenced for each individual, is more appealing and valuable in real-world applications. Previous methods generally adopt Monte Carlo simulation or heuristic rules to estimate influence, resulting in high computational cost or unsatisfactory estimation error when applied to susceptibility estimation. In this work, we propose to leverage graph neural networks (GNNs) for predicting susceptibility. Because GNNs aggregate multi-hop neighbor information and can generate over-smoothed representations, their prediction quality for susceptibility is unsatisfactory. To address these shortcomings, we propose a novel DeepIS model with a two-step approach: (1) a coarse-grained step where we estimate each node's susceptibility coarsely; (2) a fine-grained step where we aggregate neighbors' coarse-grained susceptibility estimations to compute the fine-grained estimate for each node. The two modules are trained in an end-to-end manner. We conduct extensive experiments and show that, on average, DeepIS achieves estimation error five times smaller than state-of-the-art GNN approaches and runs two orders of magnitude faster than Monte Carlo simulation.
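The abstract above outlines DeepIS's two-step design: a coarse per-node susceptibility estimate followed by a neighbor-aggregation refinement, trained end to end. The snippet below is a minimal sketch of that idea only, not the authors' implementation: the module names (`DeepISSketch`, `coarse`, `refine`), layer sizes, and the exact aggregation rule (an influence-weighted average of neighbors' coarse estimates) are illustrative assumptions layered on the description in the abstract.

```python
# Minimal sketch (not the authors' code) of the two-step idea described in the
# abstract: a coarse per-node susceptibility estimate followed by a
# neighbor-aggregation refinement, trained end to end. Module names,
# dimensions, and the aggregation rule are illustrative assumptions.
import torch
import torch.nn as nn


class DeepISSketch(nn.Module):
    def __init__(self, num_features: int, hidden: int = 64):
        super().__init__()
        # Coarse-grained step: an MLP maps each node's features
        # (e.g., seed indicator plus structural features) to a rough
        # susceptibility score in [0, 1].
        self.coarse = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )
        # Fine-grained step: combine a node's own coarse estimate with an
        # influence-weighted average of its neighbors' coarse estimates.
        self.refine = nn.Sequential(
            nn.Linear(2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   [num_nodes, num_features] node features (seed flag, degree, ...)
        # adj: [num_nodes, num_nodes] edge influence probabilities
        coarse = self.coarse(x)                          # [N, 1] coarse estimates
        neighbor = adj @ coarse                          # aggregate neighbors' estimates
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        neighbor = neighbor / deg                        # influence-weighted average
        fine = self.refine(torch.cat([coarse, neighbor], dim=1))
        return fine.squeeze(-1)                          # [N] susceptibility per node


if __name__ == "__main__":
    # Toy usage: 5 nodes, 3 features each, random sparse influence probabilities.
    torch.manual_seed(0)
    x = torch.rand(5, 3)
    adj = torch.rand(5, 5) * (torch.rand(5, 5) > 0.5).float()
    model = DeepISSketch(num_features=3)
    print(model(x, adj))  # per-node susceptibility estimates in [0, 1]
```

Because both steps are ordinary differentiable modules, the coarse and fine predictors can be trained jointly against observed susceptibility labels (e.g., with a regression loss), mirroring the end-to-end training the abstract describes.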