Generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images
Purpose: To develop and validate a deep learning system (DLS) for estimation of vertical cup-to-disc ratio (vCDR) in ultra-widefield (UWF) and smartphone-based fundus images. Methods: A DLS consisting of two sequential convolutional neural networks (CNNs) to delineate optic disc (OD) and optic cup (OC) boundaries was developed using 800 standard fundus images from the public REFUGE data set. The CNNs were tested on 400 test images from the REFUGE data set and 296 UWF and 300 smartphone-based images from a teleophthalmology clinic. vCDRs derived from the delineated OD/OC boundaries were compared with optometrists’ annotations using mean absolute error (MAE). Subgroup analysis was conducted to study the impact of peripapillary atrophy (PPA), and correlation study was performed to investigate potential correlations between sectoral CDR (sCDR) and retinal nerve fiber layer (RNFL) thickness. Results: The system achieved MAEs of 0.040 (95% CI, 0.037–0.043) in the REFUGE test images, 0.068 (95% CI, 0.061–0.075) in the UWF images, and 0.084 (95% CI, 0.075–0.092) in the smartphone-based images. There was no statistical significance in differences between PPA and non-PPA images. Weak correlation (r = −0.4046, P < 0.05) between sCDR and RNFL thickness was found only in the superior sector. Conclusions: We developed a deep learning system that estimates vCDR from standard, UWF, and smartphone-based images. We also described anatomic peripapillary adversarial lesion and its potential impact on OD/OC delineation. Translational Relevance: Artificial intelligence can estimate vCDR from different types of fundus images and may be used as a general and interpretable screening tool to improve community reach for diagnosis and management of glaucoma.
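The abstract derives vCDR from the delineated OD/OC boundaries and scores it against optometrists' annotations with mean absolute error. A minimal sketch of that post-segmentation computation is shown below; the function names, the binary-mask input format, and the row-span definition of vertical diameter are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks.

    Vertical diameter is taken as the number of image rows the
    structure spans (one common convention; the paper's exact
    boundary-to-diameter conversion may differ).
    """
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    if disc_rows.size == 0:
        raise ValueError("empty optic disc mask")
    disc_height = disc_rows[-1] - disc_rows[0] + 1
    cup_height = 0 if cup_rows.size == 0 else cup_rows[-1] - cup_rows[0] + 1
    return cup_height / disc_height

def mean_absolute_error(predicted, reference) -> float:
    """MAE between predicted and annotated vCDR values."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(predicted - reference)))
```

For example, a cup spanning 6 rows inside a disc spanning 10 rows yields a vCDR of 0.6, and the reported MAEs (e.g., 0.040 on REFUGE) would be computed by `mean_absolute_error` over the per-image vCDR pairs.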
Saved in:
Main Authors: Yap, Boon Peng; Li, Kelvin Zhenghao; Toh, En Qi; Low, Kok Yao; Rani, Sumaya Khan; Goh, Eunice Jin Hui; Hui, Vivien Yip Cherng; Ng, Beng Koon; Lim, Tock Han
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2024
Subjects: Engineering; Deep learning; Optic disc
Online Access: https://hdl.handle.net/10356/179838
Institution: Nanyang Technological University
id |
sg-ntu-dr.10356-179838 |
record_format |
dspace |
spelling |
sg-ntu-dr.10356-1798382024-08-30T15:40:22Z Generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images Yap, Boon Peng Li, Kelvin Zhenghao Toh, En Qi Low, Kok Yao Rani, Sumaya Khan Goh, Eunice Jin Hui Hui, Vivien Yip Cherng Ng, Beng Koon Lim, Tock Han School of Electrical and Electronic Engineering Lee Kong Chian School of Medicine (LKCMedicine) Tan Tock Seng Hospital National Healthcare Group Eye Institute Engineering Deep learning Optic disc Purpose: To develop and validate a deep learning system (DLS) for estimation of vertical cup-to-disc ratio (vCDR) in ultra-widefield (UWF) and smartphone-based fundus images. Methods: A DLS consisting of two sequential convolutional neural networks (CNNs) to delineate optic disc (OD) and optic cup (OC) boundaries was developed using 800 standard fundus images from the public REFUGE data set. The CNNs were tested on 400 test images from the REFUGE data set and 296 UWF and 300 smartphone-based images from a teleophthalmology clinic. vCDRs derived from the delineated OD/OC boundaries were compared with optometrists’ annotations using mean absolute error (MAE). Subgroup analysis was conducted to study the impact of peripapillary atrophy (PPA), and correlation study was performed to investigate potential correlations between sectoral CDR (sCDR) and retinal nerve fiber layer (RNFL) thickness. Results: The system achieved MAEs of 0.040 (95% CI, 0.037–0.043) in the REFUGE test images, 0.068 (95% CI, 0.061–0.075) in the UWF images, and 0.084 (95% CI, 0.075–0.092) in the smartphone-based images. There was no statistical significance in differences between PPA and non-PPA images. Weak correlation (r = −0.4046, P < 0.05) between sCDR and RNFL thickness was found only in the superior sector. Conclusions: We developed a deep learning system that estimates vCDR from standard, UWF, and smartphone-based images. 
We also described anatomic peripapillary adversarial lesion and its potential impact on OD/OC delineation. Translational Relevance: Artificial intelligence can estimate vCDR from different types of fundus images and may be used as a general and interpretable screening tool to improve community reach for diagnosis and management of glaucoma. Published version Supported by the Ng Teng Fong Healthcare Innovation Programme (Innovation Track), project code NTF_DEC2021_1_C1_D_03. 2024-08-27T03:11:31Z 2024-08-27T03:11:31Z 2024 Journal Article Yap, B. P., Li, K. Z., Toh, E. Q., Low, K. Y., Rani, S. K., Goh, E. J. H., Hui, V. Y. C., Ng, B. K. & Lim, T. H. (2024). Generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images. Translational Vision Science & Technology, 13(4), 6-. https://dx.doi.org/10.1167/tvst.13.4.6 2164-2591 https://hdl.handle.net/10356/179838 10.1167/tvst.13.4.6 38568608 2-s2.0-85190086013 4 13 6 en NTF_DEC2021_1_C1_D_03 Translational Vision Science & Technology © 2024 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. application/pdf |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
Engineering Deep learning Optic disc |
spellingShingle |
Engineering Deep learning Optic disc Yap, Boon Peng Li, Kelvin Zhenghao Toh, En Qi Low, Kok Yao Rani, Sumaya Khan Goh, Eunice Jin Hui Hui, Vivien Yip Cherng Ng, Beng Koon Lim, Tock Han Generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images |
description |
Purpose: To develop and validate a deep learning system (DLS) for estimation of vertical cup-to-disc ratio (vCDR) in ultra-widefield (UWF) and smartphone-based fundus images. Methods: A DLS consisting of two sequential convolutional neural networks (CNNs) to delineate optic disc (OD) and optic cup (OC) boundaries was developed using 800 standard fundus images from the public REFUGE data set. The CNNs were tested on 400 test images from the REFUGE data set and 296 UWF and 300 smartphone-based images from a teleophthalmology clinic. vCDRs derived from the delineated OD/OC boundaries were compared with optometrists’ annotations using mean absolute error (MAE). Subgroup analysis was conducted to study the impact of peripapillary atrophy (PPA), and correlation study was performed to investigate potential correlations between sectoral CDR (sCDR) and retinal nerve fiber layer (RNFL) thickness. Results: The system achieved MAEs of 0.040 (95% CI, 0.037–0.043) in the REFUGE test images, 0.068 (95% CI, 0.061–0.075) in the UWF images, and 0.084 (95% CI, 0.075–0.092) in the smartphone-based images. There was no statistical significance in differences between PPA and non-PPA images. Weak correlation (r = −0.4046, P < 0.05) between sCDR and RNFL thickness was found only in the superior sector. Conclusions: We developed a deep learning system that estimates vCDR from standard, UWF, and smartphone-based images. We also described anatomic peripapillary adversarial lesion and its potential impact on OD/OC delineation. Translational Relevance: Artificial intelligence can estimate vCDR from different types of fundus images and may be used as a general and interpretable screening tool to improve community reach for diagnosis and management of glaucoma. |
author2 |
School of Electrical and Electronic Engineering |
author_facet |
School of Electrical and Electronic Engineering Yap, Boon Peng Li, Kelvin Zhenghao Toh, En Qi Low, Kok Yao Rani, Sumaya Khan Goh, Eunice Jin Hui Hui, Vivien Yip Cherng Ng, Beng Koon Lim, Tock Han |
format |
Article |
author |
Yap, Boon Peng Li, Kelvin Zhenghao Toh, En Qi Low, Kok Yao Rani, Sumaya Khan Goh, Eunice Jin Hui Hui, Vivien Yip Cherng Ng, Beng Koon Lim, Tock Han |
author_sort |
Yap, Boon Peng |
title |
Generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images |
title_short |
Generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images |
title_full |
Generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images |
title_fullStr |
Generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images |
title_full_unstemmed |
Generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images |
title_sort |
generalizability of deep neural networks for vertical cup-to-disc ratio estimation in ultra-widefield and smartphone-based fundus images |
publishDate |
2024 |
url |
https://hdl.handle.net/10356/179838 |
_version_ |
1814047352440225792 |