EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) TO PREDICT SEPSIS IN ICU USING ELECTRONIC HEALTH RECORDS
Early identification of sepsis is pivotal for improving its management and outcomes. Machine learning (ML) has the potential to predict sepsis automatically, but clinicians do not fully trust ML predictions. Explainable Artificial Intelligence (XAI) serves as a bridge to build clinicians' trust. This work aims to build an XAI system for sepsis prediction and validate its explainability with clinicians. Based on previous studies of XAI for sepsis, two XAI architectures (MGP-AttTCN and MGP-LogReg) are replicated. Both architectures are trained and tested on three parameter sets (reference, baseline, and local existing practice) and two labels (SOFA and qSOFA-septic shock) derived from the MIMIC-IV electronic health record. Three model-specific explainability cases are built from the trained weights of each architecture: local explainability from MGP-LogReg, local explainability from MGP-AttTCN, and global explainability from MGP-LogReg. In an explainability study with clinicians, these methods are assessed against a control condition (no explainability), using questionnaires to compare the diagnostic process with and without XAI explanations. Test results for all combinations show that the SOFA label yields better inference than the qSOFA-septic shock label; the SOFA label has a larger case cohort, giving more diverse sepsis cases. Comparing architecture performance across parameter sets shows that MGP-AttTCN still performs well with limited information. A review of each parameter's contribution in the SOFA-labelled combinations shows that MGP-AttTCN infers vital and laboratory signs better than MGP-LogReg. Despite this, the best-performing MGP-AttTCN combination remains suboptimal (AUROC: 0.745, AUPRC: 0.525). The explainability study showed that the global explanation achieved the best diagnostic consensus across all variations; according to clinicians, the other variations do not give clinically consistent explanations. Further studies are therefore needed to develop a better-performing XAI with clinically relevant explainability methods.
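As background for the qSOFA-septic shock label mentioned in the abstract: quick SOFA (qSOFA) assigns one point each for respiratory rate ≥ 22/min, systolic blood pressure ≤ 100 mmHg, and altered mentation (Glasgow Coma Scale < 15), with a score of 2 or more flagging elevated risk of poor sepsis outcomes. The sketch below illustrates the standard qSOFA criteria only; the function name is illustrative and not taken from the thesis.

```python
def qsofa_score(resp_rate: float, systolic_bp: float, gcs: int) -> int:
    """Quick SOFA: one point per criterion met (standard Sepsis-3 thresholds)."""
    score = 0
    if resp_rate >= 22:     # respiratory rate >= 22 breaths/min
        score += 1
    if systolic_bp <= 100:  # systolic blood pressure <= 100 mmHg
        score += 1
    if gcs < 15:            # altered mentation (Glasgow Coma Scale < 15)
        score += 1
    return score

print(qsofa_score(24, 95, 14))   # all three criteria met -> 3
print(qsofa_score(16, 120, 15))  # no criteria met -> 0
```

A score of ≥ 2 is the conventional positivity cut-off; how the thesis combines qSOFA with septic-shock criteria into its second label is described in the full text.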
Saved in:
Main Author: | Ariyo Kresnadhi, Gregorius |
---|---|
Format: | Final Project |
Language: | Indonesian |
Online Access: | https://digilib.itb.ac.id/gdl/view/80978 |
Institution: | Institut Teknologi Bandung |
id |
id-itb.:80978 |
---|---|
spelling |
id-itb.:80978 2024-03-17T15:38:21Z EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) TO PREDICT SEPSIS IN ICU USING ELECTRONIC HEALTH RECORDS Ariyo Kresnadhi, Gregorius Indonesian Final Project sepsis, explainable artificial intelligence, and explainability INSTITUT TEKNOLOGI BANDUNG https://digilib.itb.ac.id/gdl/view/80978 text |
institution |
Institut Teknologi Bandung |
building |
Institut Teknologi Bandung Library |
continent |
Asia |
country |
Indonesia |
content_provider |
Institut Teknologi Bandung |
collection |
Digital ITB |
language |
Indonesian |
description |
Early identification of sepsis is pivotal for improving its management and outcomes. Machine learning (ML) has the potential to predict sepsis automatically, but clinicians do not fully trust ML predictions. Explainable Artificial Intelligence (XAI) serves as a bridge to build clinicians' trust. This work aims to build an XAI system for sepsis prediction and validate its explainability with clinicians. Based on previous studies of XAI for sepsis, two XAI architectures (MGP-AttTCN and MGP-LogReg) are replicated. Both architectures are trained and tested on three parameter sets (reference, baseline, and local existing practice) and two labels (SOFA and qSOFA-septic shock) derived from the MIMIC-IV electronic health record. Three model-specific explainability cases are built from the trained weights of each architecture: local explainability from MGP-LogReg, local explainability from MGP-AttTCN, and global explainability from MGP-LogReg. In an explainability study with clinicians, these methods are assessed against a control condition (no explainability), using questionnaires to compare the diagnostic process with and without XAI explanations. Test results for all combinations show that the SOFA label yields better inference than the qSOFA-septic shock label; the SOFA label has a larger case cohort, giving more diverse sepsis cases. Comparing architecture performance across parameter sets shows that MGP-AttTCN still performs well with limited information. A review of each parameter's contribution in the SOFA-labelled combinations shows that MGP-AttTCN infers vital and laboratory signs better than MGP-LogReg. Despite this, the best-performing MGP-AttTCN combination remains suboptimal (AUROC: 0.745, AUPRC: 0.525). The explainability study showed that the global explanation achieved the best diagnostic consensus across all variations; according to clinicians, the other variations do not give clinically consistent explanations. Further studies are therefore needed to develop a better-performing XAI with clinically relevant explainability methods. |
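For context on the AUROC figure reported in the abstract (0.745): AUROC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counted as half. A dependency-free sketch of this rank-statistic definition; the example scores are illustrative and not taken from the thesis.

```python
def auroc(y_true, y_score):
    """AUROC via the rank statistic: fraction of (positive, negative)
    score pairs ranked correctly, counting ties as 0.5."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative risk scores for six hypothetical ICU stays (1 = septic):
print(auroc([0, 0, 1, 1, 0, 1], [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]))
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why the reported 0.745 is described as suboptimal.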
format |
Final Project |
author |
Ariyo Kresnadhi, Gregorius |
spellingShingle |
Ariyo Kresnadhi, Gregorius EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) TO PREDICT SEPSIS IN ICU USING ELECTRONIC HEALTH RECORDS |
author_facet |
Ariyo Kresnadhi, Gregorius |
author_sort |
Ariyo Kresnadhi, Gregorius |
title |
EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) TO PREDICT SEPSIS IN ICU USING ELECTRONIC HEALTH RECORDS |
title_short |
EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) TO PREDICT SEPSIS IN ICU USING ELECTRONIC HEALTH RECORDS |
title_full |
EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) TO PREDICT SEPSIS IN ICU USING ELECTRONIC HEALTH RECORDS |
title_fullStr |
EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) TO PREDICT SEPSIS IN ICU USING ELECTRONIC HEALTH RECORDS |
title_full_unstemmed |
EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) TO PREDICT SEPSIS IN ICU USING ELECTRONIC HEALTH RECORDS |
title_sort |
explainable artificial intelligence (xai) to predict sepsis in icu using electronic health records |
url |
https://digilib.itb.ac.id/gdl/view/80978 |
_version_ |
1822281778941919232 |