Synthesising missing modalities for multimodal MRI segmentation


Bibliographic Details
Main Author: Rajasekara Pandian Akshaya Muthu
Other Authors: Jagath C Rajapakse
Format: Final Year Project
Language:English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/148195
Institution: Nanyang Technological University
Description
Summary:Multiple MRI image modalities are extensively utilised in medical imaging tasks such as tumour segmentation because they capture complementary information and image diversity. In practice, however, some modalities are often missing from patients' data sources, and data imbalance arises from varying imaging protocols and image corruption. Rather than re-acquiring a complete set of modality images for each patient, it is more feasible to use the patients' existing modalities to synthesise the missing ones and use this completed data for tumour segmentation. We therefore propose a generative adversarial network (GAN) that carries out missing-modality synthesis for data completion and tumour segmentation. Our experiments on the Brain Tumour Image Segmentation Benchmark 2019 (BraTS '19) dataset show that synthesising the missing modality benefitted tumour segmentation and produced better results than the corresponding experiments with the modality missing. Because of an impending technical disclosure, some details about the proposed model have been omitted from this report.