Implementation of high-performance graph neural network distributed learning framework

Graph Neural Networks (GNNs), which use neural network architectures to learn effectively from information organized as graphs of nodes and edges, have been a popular topic in deep learning research in recent years. Distributed deep learning generally uses multiple devices to collaboratively train a global model at relatively low cost and high efficiency. Applying distributed learning approaches to train GNNs is a promising but challenging task: compared with traditional distributed learning, training GNNs in a distributed manner requires the topology of the graph to be taken into account, using graph algorithms such as graph clustering and partitioning. The goal of this project is to build a distributed framework for training GNNs and to apply graph algorithms to improve learning performance, that is, to make the learning process more efficient and scalable in distributed environments. The project comprises research on current algorithms for high-performance deep learning and development of the framework on top of currently available tools for distributed learning and GNN training.
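
The abstract describes partitioning a graph so that GNN training can be spread across multiple workers. The sketch below illustrates that general idea in plain PyTorch; it is not the project's actual framework, and the names (partition_nodes, mean_aggregate) and the round-robin partitioning scheme are illustrative assumptions only (a real system would typically use a partitioner such as METIS and a GNN library such as DGL or PyG).

```python
import torch

def partition_nodes(num_nodes, num_parts):
    """Assign nodes to partitions round-robin (a simple stand-in for METIS-style partitioning)."""
    parts = [[] for _ in range(num_parts)]
    for v in range(num_nodes):
        parts[v % num_parts].append(v)
    return parts

def mean_aggregate(owned_nodes, edges, feats):
    """One GNN-style neighbour-averaging step for the nodes owned by a single partition.

    Neighbours that live in other partitions ("halo" nodes) still have to be read,
    which is why distributed GNN training tries to minimise edge cuts between partitions.
    """
    out = torch.zeros(len(owned_nodes), feats.size(1))
    for i, v in enumerate(owned_nodes):
        neighbours = [u for u, w in edges if w == v] + [w for u, w in edges if u == v]
        out[i] = feats[neighbours].mean(dim=0) if neighbours else feats[v]
    return out

if __name__ == "__main__":
    # Toy undirected graph: 6 nodes on a ring, random 4-dimensional node features.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
    feats = torch.randn(6, 4)
    for part in partition_nodes(num_nodes=6, num_parts=2):
        # In a real distributed setup each partition would be handled by a separate worker.
        print(part, mean_aggregate(part, edges, feats).shape)
```

Under these assumptions, each partition only needs its own nodes plus their immediate neighbours, which is why the quality of the partitioning (the number of cut edges) is central to the efficiency and scalability the abstract refers to.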

Bibliographic Details
Main Author: Lee, Cheng Han
Other Authors: Luo Siqiang (School of Computer Science and Engineering, siqiang.luo@ntu.edu.sg)
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Degree: Bachelor of Engineering (Computer Science)
Project Code: SCSE22-0413
Citation: Lee, C. H. (2023). Implementation of high-performance graph neural network distributed learning framework. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/166564
Online Access: https://hdl.handle.net/10356/166564
Institution: Nanyang Technological University
Collection: DR-NTU (NTU Library)