ThunderRW: An in-memory graph random walk engine
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/6132
https://ink.library.smu.edu.sg/context/sis_research/article/7135/viewcontent/2107.11983.pdf
Institution: Singapore Management University
Summary: As random walk is a powerful tool in many graph processing, mining, and learning applications, this paper proposes an efficient in-memory random walk engine named ThunderRW. Unlike existing parallel systems that focus on improving the performance of a single graph operation, ThunderRW supports massively parallel random walks. The core design of ThunderRW is motivated by our profiling results: common RW algorithms have as many as 73.1% of CPU pipeline slots stalled due to irregular memory access, which is significantly more memory stalling than conventional graph workloads such as BFS and SSSP incur. To improve memory efficiency, we first design a generic step-centric programming model named Gather-Move-Update to abstract different RW algorithms. Based on this programming model, we develop a step interleaving technique that hides memory access latency by switching execution among different random walk queries. In our experiments, we use four representative RW algorithms, PPR, DeepWalk, Node2Vec, and MetaPath, to demonstrate the efficiency and programming flexibility of ThunderRW. Experimental results show that ThunderRW outperforms state-of-the-art approaches by an order of magnitude, and the step interleaving technique significantly reduces CPU pipeline stalls from 73.1% to 15.0%.
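The step-centric Gather-Move-Update model described in the summary can be pictured as a per-step loop over active walk queries. The sketch below is an illustrative reconstruction only, not ThunderRW's actual API: the `Graph`, `Walker`, and uniform edge sampling are assumptions, and the round-robin pass over walkers merely hints at the step interleaving idea (the real engine switches queries to overlap memory accesses and uses weighted sampling for algorithms such as PPR and Node2Vec).

```cpp
// Minimal sketch of a step-centric Gather-Move-Update random walk loop.
// All names here are hypothetical and do not correspond to ThunderRW's API.
#include <cstdint>
#include <random>
#include <vector>

struct Graph {                        // CSR adjacency: offsets[v]..offsets[v+1] index into neighbors
    std::vector<uint32_t> offsets;
    std::vector<uint32_t> neighbors;
};

struct Walker {                       // one random walk query
    std::vector<uint32_t> path;       // vertices visited so far
    bool done = false;
};

int main() {
    Graph g{{0, 2, 3, 4}, {1, 2, 0, 0}};   // toy 3-vertex graph
    std::vector<Walker> walkers(3);
    for (uint32_t v = 0; v < 3; ++v) walkers[v].path.push_back(v);  // one walker per start vertex

    std::mt19937 rng(42);
    const size_t target_len = 5;

    bool active = true;
    while (active) {
        active = false;
        for (auto& w : walkers) {           // interleave: advance each active walker by one step
            if (w.done) continue;
            uint32_t cur = w.path.back();

            // Gather: locate the candidate edges of the current vertex.
            uint32_t begin = g.offsets[cur], end = g.offsets[cur + 1];
            if (begin == end) { w.done = true; continue; }

            // Move: sample one neighbor (uniform here; weighted for PPR/Node2Vec-style walks).
            std::uniform_int_distribution<uint32_t> pick(begin, end - 1);
            uint32_t next = g.neighbors[pick(rng)];

            // Update: advance the walker's state.
            w.path.push_back(next);
            if (w.path.size() >= target_len) w.done = true;
            else active = true;
        }
    }
    return 0;
}
```

In this sketch, processing one step of every walker before returning to the first is what creates the opportunity to hide memory latency: the gather of one query can be overlapped with the move or update of others, which is the effect the paper's step interleaving technique exploits.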