Combining multiple image modalities for better image segmentation
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2009
Subjects:
Online Access: http://hdl.handle.net/10356/17003
Institution: Nanyang Technological University
Summary: Accurate and robust brain/non-brain segmentation is crucial in brain imaging applications. Formerly, brain extraction relied on a single image modality, which limited its performance and accuracy. Nowadays, high-resolution T1- and T2-weighted images can be acquired during the same scanning session. This creates a promising possibility of combining images to improve the delineation of brain structures. In this report, we present a novel skull stripping algorithm that aims to extract the brain region more accurately and robustly. The idea is to incorporate information from the T2-weighted image into the skull stripping decision process. To achieve this, the pair of images must be brought into strict correspondence; perfect alignment is required. We also introduce a fresh approach to multi-modal image alignment that makes use of existing intra-modality alignment methods. This requires converting the multi-modal image pair into images of a similar modality, and a thresholding approach is proposed to accomplish this conversion.
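As a rough illustration of the thresholding idea described in the summary (not the report's actual implementation, which is not detailed here), the sketch below binarizes two toy images with opposite contrast, so that both modalities map onto a comparable "brain mask" representation, and then applies a simple intra-modality technique (an exhaustive integer translation search) to align them. The isodata-style threshold, the toy disc images, and the overlap-based shift search are all illustrative assumptions.

```python
import numpy as np

def binarize(img, n_iter=20):
    """Iterative (isodata-style) threshold: repeatedly split at the midpoint
    of the two class means until the threshold stabilizes, then binarize."""
    t = img.mean()
    for _ in range(n_iter):
        lo, hi = img[img <= t], img[img > t]
        if lo.size == 0 or hi.size == 0:
            break
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < 1e-6:
            break
        t = new_t
    return (img > t).astype(np.uint8)

def best_shift(fixed, moving, max_shift=5):
    """Exhaustive search for the integer (dy, dx) translation that maximizes
    binary overlap between two thresholded images (a minimal stand-in for an
    intra-modality registration step)."""
    best, best_score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = int(np.sum(fixed & shifted))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Toy demo: a bright disc (the "brain") in a T1-like image, and the same
# disc with inverted contrast, shifted by (2, 3), in a T2-like image.
yy, xx = np.mgrid[0:64, 0:64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2
t1 = np.where(disc, 200.0, 20.0)
t2 = np.roll(np.roll(np.where(disc, 30.0, 180.0), 2, axis=0), 3, axis=1)

# Thresholding maps both modalities to binary masks; the T2 mask is
# inverted so that "brain" is foreground in both images.
b1 = binarize(t1)
b2 = 1 - binarize(t2)
print(best_shift(b1, b2))  # prints (-2, -3): the translation undoing the shift
```

Once the two modalities are reduced to comparable binary images, any existing intra-modality alignment method can be applied, which is the conversion step the summary attributes to thresholding.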