Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans

In recent years, deep learning models have been applied to neuroimaging data for early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) images provide structural and functional information about the brain, respectively. Combining these features leads to better performance than using a single modality alone when building predictive models for AD diagnosis. However, current multi-modal approaches in deep learning, based on sMRI and PET, are mostly limited to convolutional neural networks, which do not facilitate the integration of both image and phenotypic information of subjects. We propose to use graph neural networks (GNNs), which are designed to deal with problems in non-Euclidean domains. In this study, we demonstrate how brain networks are created from sMRI or PET images and can be used in a population graph framework that combines phenotypic information with imaging features of the brain networks. We then present a multi-modal GNN framework in which each modality has its own GNN branch, together with a technique that combines the multi-modal data at the level of both node vectors and adjacency matrices. Finally, we perform late fusion to combine the preliminary decisions made in each branch and produce a final prediction. As multi-modal data become increasingly available, multi-source and multi-modal approaches are becoming the trend in AD diagnosis. We conducted exploratory experiments on multi-modal imaging data combined with non-imaging phenotypic information for AD diagnosis and analyzed the impact of phenotypic information on diagnostic performance. Experimental results demonstrated that our proposed multi-modal approach improves performance for AD diagnosis. Our study also provides a technical reference and supports the need for multivariate multi-modal diagnosis methods.

Bibliographic Details
Main Authors: Zhang, Yanteng, He, Xiaohai, Chan, Yi Hao, Teng, Qizhi, Rajapakse, Jagath Chandana
Other Authors: School of Computer Science and Engineering
Format: Article
Language:English
Published: 2023
Subjects:
Online Access:https://hdl.handle.net/10356/170902
Institution: Nanyang Technological University
id sg-ntu-dr.10356-170902
record_format dspace
spelling sg-ntu-dr.10356-1709022023-10-06T15:36:03Z Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans Zhang, Yanteng He, Xiaohai Chan, Yi Hao Teng, Qizhi Rajapakse, Jagath Chandana School of Computer Science and Engineering Engineering::Computer science and engineering Alzheimer’s Disease Diagnosis Brain Networks In recent years, deep learning models have been applied to neuroimaging data for early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) images provide structural and functional information about the brain, respectively. Combining these features leads to better performance than using a single modality alone when building predictive models for AD diagnosis. However, current multi-modal approaches in deep learning, based on sMRI and PET, are mostly limited to convolutional neural networks, which do not facilitate the integration of both image and phenotypic information of subjects. We propose to use graph neural networks (GNNs), which are designed to deal with problems in non-Euclidean domains. In this study, we demonstrate how brain networks are created from sMRI or PET images and can be used in a population graph framework that combines phenotypic information with imaging features of the brain networks. We then present a multi-modal GNN framework in which each modality has its own GNN branch, together with a technique that combines the multi-modal data at the level of both node vectors and adjacency matrices. Finally, we perform late fusion to combine the preliminary decisions made in each branch and produce a final prediction. As multi-modal data become increasingly available, multi-source and multi-modal approaches are becoming the trend in AD diagnosis. We conducted exploratory experiments on multi-modal imaging data combined with non-imaging phenotypic information for AD diagnosis and analyzed the impact of phenotypic information on diagnostic performance.
Experimental results demonstrated that our proposed multi-modal approach improves performance for AD diagnosis. Our study also provides a technical reference and supports the need for multivariate multi-modal diagnosis methods. Ministry of Education (MOE) Submitted/Accepted version This work was partly supported by the Chengdu Major Technology Application Demonstration Project (Grant No. 2019-YF09-00120-SN), the Key Research and Development Program of Sichuan Province (Grant No. 2022YFS0098), the China Scholarship Council (Grant No. 202106240177). This work was supported by AcRF Tier-2 grant 2EP20121-003 by Ministry of Education, Singapore. 2023-10-06T04:37:58Z 2023-10-06T04:37:58Z 2023 Journal Article Zhang, Y., He, X., Chan, Y. H., Teng, Q. & Rajapakse, J. C. (2023). Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans. Computers in Biology and Medicine, 164, 107328-. https://dx.doi.org/10.1016/j.compbiomed.2023.107328 0010-4825 https://hdl.handle.net/10356/170902 10.1016/j.compbiomed.2023.107328 37573721 2-s2.0-85167566319 164 107328 en MOE-2EP20121-003 Computers in Biology and Medicine © 2023 Elsevier Ltd. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1016/j.compbiomed.2023.107328. application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering::Computer science and engineering
Alzheimer’s Disease Diagnosis
Brain Networks
spellingShingle Engineering::Computer science and engineering
Alzheimer’s Disease Diagnosis
Brain Networks
Zhang, Yanteng
He, Xiaohai
Chan, Yi Hao
Teng, Qizhi
Rajapakse, Jagath Chandana
Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans
description In recent years, deep learning models have been applied to neuroimaging data for early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) images provide structural and functional information about the brain, respectively. Combining these features leads to better performance than using a single modality alone when building predictive models for AD diagnosis. However, current multi-modal approaches in deep learning, based on sMRI and PET, are mostly limited to convolutional neural networks, which do not facilitate the integration of both image and phenotypic information of subjects. We propose to use graph neural networks (GNNs), which are designed to deal with problems in non-Euclidean domains. In this study, we demonstrate how brain networks are created from sMRI or PET images and can be used in a population graph framework that combines phenotypic information with imaging features of the brain networks. We then present a multi-modal GNN framework in which each modality has its own GNN branch, together with a technique that combines the multi-modal data at the level of both node vectors and adjacency matrices. Finally, we perform late fusion to combine the preliminary decisions made in each branch and produce a final prediction. As multi-modal data become increasingly available, multi-source and multi-modal approaches are becoming the trend in AD diagnosis. We conducted exploratory experiments on multi-modal imaging data combined with non-imaging phenotypic information for AD diagnosis and analyzed the impact of phenotypic information on diagnostic performance. Experimental results demonstrated that our proposed multi-modal approach improves performance for AD diagnosis. Our study also provides a technical reference and supports the need for multivariate multi-modal diagnosis methods.
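The description above outlines a population-graph GNN with one branch per modality, fusion at both the node-vector level (combining feature vectors) and the adjacency-matrix level (combining population graphs), followed by late fusion of each branch's preliminary decision. The following is a minimal NumPy sketch of that fusion scheme only, not the authors' implementation: the graphs, features, and weights here are random placeholders, and the GCN layer is a generic symmetric-normalized graph convolution.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric normalization D^-1/2 (A + I) D^-1/2, as in standard GCNs
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_branch(A, X, W1, W2):
    # One-hidden-layer GCN branch: returns per-subject class logits
    A_n = normalize_adj(A)
    H = np.maximum(A_n @ X @ W1, 0.0)   # ReLU
    return A_n @ H @ W2

n, f, h, c = 8, 16, 8, 2  # subjects, features per modality, hidden units, classes

# Hypothetical population graphs: in the paper, edge weights would come from
# phenotypic similarity combined with imaging-feature similarity per modality.
A_mri = rng.random((n, n)); A_mri = (A_mri + A_mri.T) / 2
A_pet = rng.random((n, n)); A_pet = (A_pet + A_pet.T) / 2
X_mri = rng.standard_normal((n, f))
X_pet = rng.standard_normal((n, f))

# Fusion at the adjacency-matrix level: combine the two population graphs
A_fused = (A_mri + A_pet) / 2
# Fusion at the node-vector level: concatenate per-modality features
X_fused = np.concatenate([X_mri, X_pet], axis=1)

W1_m, W2_m = rng.standard_normal((f, h)), rng.standard_normal((h, c))
W1_p, W2_p = rng.standard_normal((f, h)), rng.standard_normal((h, c))
W1_f, W2_f = rng.standard_normal((2 * f, h)), rng.standard_normal((h, c))

logits_mri = gcn_branch(A_mri, X_mri, W1_m, W2_m)
logits_pet = gcn_branch(A_pet, X_pet, W1_p, W2_p)
logits_fused = gcn_branch(A_fused, X_fused, W1_f, W2_f)

# Late fusion: average the preliminary decisions from each branch
final_logits = (logits_mri + logits_pet + logits_fused) / 3
pred = final_logits.argmax(axis=1)  # one predicted label per subject
print(pred.shape)
```

In a trained model, the branch weights would be learned jointly and the late-fusion step could be a weighted vote rather than a plain average; the sketch only shows where each of the three fusion points sits in the pipeline.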
author2 School of Computer Science and Engineering
author_facet School of Computer Science and Engineering
Zhang, Yanteng
He, Xiaohai
Chan, Yi Hao
Teng, Qizhi
Rajapakse, Jagath Chandana
format Article
author Zhang, Yanteng
He, Xiaohai
Chan, Yi Hao
Teng, Qizhi
Rajapakse, Jagath Chandana
author_sort Zhang, Yanteng
title Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans
title_short Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans
title_full Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans
title_fullStr Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans
title_full_unstemmed Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans
title_sort multi-modal graph neural network for early diagnosis of alzheimer's disease from smri and pet scans
publishDate 2023
url https://hdl.handle.net/10356/170902
_version_ 1779171100002877440