Multi-task learning approach for volumetric segmentation and reconstruction in 3D OCT images
Main Authors: Cahyo, Dheo A. Y.; Yow, Ai Ping; Saw, Seang-Mei; Ang, Marcus; Girard, Michael; Schmetterer, Leopold; Wong, Damon Wing Kee
Other Authors: School of Chemical and Biomedical Engineering
Format: Article
Language: English
Published: 2022
Subjects: Engineering::Chemical engineering; Engineering::Electrical and electronic engineering::Optics, optoelectronics, photonics; Image Reconstruction; Optical Tomography
Online Access: https://hdl.handle.net/10356/156185
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-156185
Affiliations: School of Chemical and Biomedical Engineering; Singapore National Eye Centre; Institute for Digital Molecular Analytics and Science (IDMxS); Singapore Centre for Environmental Life Sciences and Engineering (SCELSE); SERI-NTU Advanced Ocular Engineering (STANCE)
Type: Journal Article (Published version)
Citation: Cahyo, D. A. Y., Yow, A. P., Saw, S., Ang, M., Girard, M., Schmetterer, L. & Wong, D. W. K. (2021). Multi-task learning approach for volumetric segmentation and reconstruction in 3D OCT images. Biomedical Optics Express, 12(12), 7348-7360.
DOI: 10.1364/BOE.428140 (https://dx.doi.org/10.1364/BOE.428140)
ISSN: 2156-7085
PubMed ID: 35003838
Scopus ID: 2-s2.0-85120003267
Date Deposited: 2022-04-11
Funding Agencies: Agency for Science, Technology and Research (A*STAR); Nanyang Technological University; National Medical Research Council (NMRC); National Research Foundation (NRF)
Funding Details: National Medical Research Council (CG/C010A/2017_SERI, MOH-000249-00, MOH-OFIRG20nov-0014, OFIRG/0048/2017, OFLCG/004c/2018, TA/MOH-000249-00/2018); Singapore Eye Research Institute & Nanyang Technological University (SERI-NTU Advanced Ocular Engineering Program); National Research Foundation Singapore (NRF2019-THE002-0006, NRF-CRP24-2020-0001); Agency for Science, Technology and Research (A20H4b0141); SERI-Lee Foundation (LF1019-1); Duke-NUS Medical School (Duke-NUS-KP(Coll)/2018/0009A)
Rights: © 2021 Optical Society of America under the terms of the Open Access Publishing Agreement. Users may use, reuse, and build upon the article, or use the article for text or data mining, so long as such uses are for noncommercial purposes and appropriate attribution is maintained. All other rights are reserved.
File Format: application/pdf
Collection: DR-NTU (NTU Library, Singapore)
Description:
The choroid is the vascular layer of the eye that supplies photoreceptors with oxygen. Changes in the choroid are associated with many pathologies, including myopia, where the choroid progressively thins due to axial elongation. To quantify these changes, there is a need to automatically and accurately segment the choroidal layer from optical coherence tomography (OCT) images. In this paper, we propose a multi-task learning approach to segment the choroid from three-dimensional OCT images. Our proposed architecture aggregates the spatial context from adjacent cross-sectional slices to reconstruct the central slice. The spatial context learned by this reconstruction mechanism is then fused with a U-Net-based architecture for segmentation. The proposed approach was evaluated on volumetric OCT scans of 166 myopic eyes acquired with a commercial OCT system, and achieved a cross-validation Intersection over Union (IoU) score of 94.69%, significantly outperforming (p < 0.001) other state-of-the-art methods on the same data set. Choroidal thickness maps generated by our approach also achieved a better structural similarity index (SSIM) of 72.11% with respect to the ground truth. In particular, our approach performs well for highly challenging eyes with thinner choroids. Compared to other methods, our proposed approach also requires less processing time and has lower computational requirements. The results suggest that our proposed approach could potentially be used as a fast and reliable method for automated choroidal segmentation.
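The abstract describes the architecture only at a high level. As a rough illustration, the sketch below shows the general multi-task idea in PyTorch: a shared encoder over a stack of adjacent B-scans, one head that reconstructs the central slice, and a U-Net-like segmentation head trained jointly. The class and parameter names (MultiTaskChoroidNet, n_slices, the base channel width, the 0.5 loss weight, the single down/up-sampling stage) are hypothetical assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' published code): joint reconstruction + segmentation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in a typical U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MultiTaskChoroidNet(nn.Module):
    """Toy multi-task model: the input is a stack of adjacent B-scans (as channels);
    one head reconstructs the central slice, the other predicts a choroid mask."""
    def __init__(self, n_slices=3, base=16):
        super().__init__()
        self.enc1 = conv_block(n_slices, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base * 2, base)       # concatenated skip connection from enc1
        self.recon_head = nn.Conv2d(base, 1, 1)     # central-slice reconstruction
        self.seg_head = nn.Conv2d(base, 1, 1)       # choroid probability map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return self.recon_head(d), torch.sigmoid(self.seg_head(d))

# Joint training objective: segmentation loss plus a weighted reconstruction term.
model = MultiTaskChoroidNet(n_slices=3)
slices = torch.randn(2, 3, 64, 64)                  # batch of 3 adjacent B-scans
central = slices[:, 1:2]                            # reconstruction target: the middle slice
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()     # dummy choroid mask for illustration
recon, seg = model(slices)
loss = nn.functional.binary_cross_entropy(seg, mask) + 0.5 * nn.functional.mse_loss(recon, central)
loss.backward()
```

In this sketch the two tasks are fused simply by sharing the encoder-decoder features; the paper's specific mechanism for fusing the reconstruction-learned spatial context into the U-Net, and its loss weighting, may differ.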