Explainable AI for medical over-investigation identification


Bibliographic Details
Main Author: Suresh Kumar Rathika
Other Authors: Fan Xiuyi
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/175038
Institution: Nanyang Technological University
Description
Summary: As machine learning applications spread across diverse sectors, explainability has become essential for providing insight into models often regarded as "black boxes". This paper examines a relatively unexplored problem in healthcare: identifying over-investigation in disease diagnosis. Over-investigation poses significant risks to patients and to the efficacy of healthcare systems; effectively identifying and mitigating it promises to streamline patient care and to improve both efficiency and cost-effectiveness. Despite these implications, the literature on machine learning solutions for detecting over-investigation, particularly via eXplainable Artificial Intelligence (XAI) methods, remains sparse. Our study therefore applies feature attribution and selection techniques from XAI, modelling medical investigation as a "feature-finding" problem. Using XAI-based methods, we aim to pinpoint the most pertinent investigations for each patient in the context of ophthalmology: investigations are identified for diagnosing various eye conditions and for determining optimal follow-up schedules tailored to individual patients. Our findings show that the algorithm accurately selects recommended investigations that align with clinical judgment and established diagnostic guidelines.
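The abstract's "feature-finding" framing, in which each candidate investigation is treated as a feature and XAI attribution scores decide which investigations matter for a patient, can be illustrated with a minimal sketch. The sketch below uses permutation importance (one common model-agnostic attribution technique; the project's actual XAI methods and data are not specified here) on synthetic "patients" where only two of three hypothetical investigations carry diagnostic signal:

```python
import random

random.seed(0)

# Hypothetical data: three candidate "investigations" per patient.
# By construction, only investigations 0 and 1 determine the diagnosis.
N = 200
X = [[random.random() for _ in range(3)] for _ in range(N)]
y = [1 if x[0] + x[1] > 1.0 else 0 for x in X]

def predict(x):
    # Stand-in diagnostic model; in practice this would be a trained classifier.
    return 1 if x[0] + x[1] > 1.0 else 0

def accuracy(X, y):
    return sum(predict(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, j):
    # Attribution for investigation j: the drop in accuracy when its values
    # are shuffled across patients, severing its link to the diagnosis.
    col = [x[j] for x in X]
    random.shuffle(col)
    Xp = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
    return accuracy(X, y) - accuracy(Xp, y)

scores = [permutation_importance(X, y, j) for j in range(3)]

# Rank investigations by attribution; uninformative ones score near zero,
# so a "most pertinent investigations" shortlist falls out of the ranking.
ranking = sorted(range(3), key=lambda j: -scores[j])
print(scores, ranking)
```

Here the uninformative investigation (index 2) receives zero importance and drops to the bottom of the ranking, mirroring how an attribution-based selector could flag investigations that contribute nothing to a diagnosis as candidates for over-investigation.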