Explainable AI for medical over-investigation identification
As machine learning applications spread across diverse sectors, explainability has become essential for providing insight into models often deemed "black boxes". This paper delves into a relatively unexplored domain within healthcare: the identification of over-investigation in disease diagnosis. Over-investigation poses significant risks to both patients and the efficacy of healthcare systems, and effectively identifying and mitigating it promises to streamline patient care while improving efficiency and cost-effectiveness. Despite these implications, the literature remains sparse on machine learning solutions for identifying instances of over-investigation, particularly through eXplainable Artificial Intelligence (XAI) methods. Our study therefore applies feature attribution and selection techniques from XAI, modelling medical investigations as a "feature-finding" problem. Using these XAI-based methods, we pinpoint the most pertinent investigations for each patient in the context of ophthalmology: investigations for diagnosing various eye conditions and for determining follow-up schedules tailored to individual patients. Our findings show that the algorithm accurately selects recommended investigations that align with clinical judgment and established diagnostic guidelines.
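The record contains only the abstract, so the project's actual implementation is not available here. As a rough illustration of the general approach the abstract describes (per-patient feature attribution used to rank candidate investigations), here is a minimal sketch assuming a SHAP-based attribution over a tree model; the data, feature names, and model choice are all hypothetical and not drawn from the thesis.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical setup: each row is a patient, each column a candidate
# investigation result; y is a binary diagnostic label. None of this
# reflects the thesis's actual data or features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)
investigations = [f"investigation_{i}" for i in range(8)]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Per-patient attributions from TreeExplainer; the return type varies by
# shap version (a list of per-class arrays, or one 3-D array).
sv = shap.TreeExplainer(model).shap_values(X)
attr = np.abs(sv[1]) if isinstance(sv, list) else np.abs(sv[..., 1])

# Rank investigations per patient and keep the k most influential ones.
k = 3
for i in range(3):  # show a few patients
    top = np.argsort(attr[i])[::-1][:k]
    print(f"patient {i}:", ", ".join(investigations[j] for j in top))
```

In this "feature-finding" framing, each candidate investigation is a feature, and the per-patient attribution scores give a ranking from which the most pertinent investigations can be selected.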
Main Author: Suresh Kumar Rathika
Other Authors: Fan Xiuyi (School of Computer Science and Engineering, xyfan@ntu.edu.sg)
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science; Explainable AI
Online Access: https://hdl.handle.net/10356/175038
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-175038
Collection: DR-NTU (NTU Library)
Degree: Bachelor's degree
Project Code: SCSE23-0706
Date Available: 2024-04-18
File Format: application/pdf
Citation: Suresh Kumar Rathika (2024). Explainable AI for medical over-investigation identification. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175038