Explainable AI for hypertension (HTN) development prediction
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2023
Subjects:
Online Access: https://hdl.handle.net/10356/166079
Institution: Nanyang Technological University
Summary: Developing trust in Artificial Intelligence (AI) has long been challenging because of the lack of transparency behind black-box machine learning models. To address this issue, eXplainable Artificial Intelligence (XAI) has been proposed as a path toward more transparent AI. This report presents a study on the application of XAI methods to explaining heart disease outcomes. The study compares the explanations of two XAI methods, SHAP and LIME, to identify the features most significant in predicting the presence of heart disease. The findings of the XAI methods are also compared with those of a traditional feature selection method, LASSO. The global explanations provided by SHAP and LIME are found to be consistent with each other and supported by LASSO's selected features. We hope the insights gained can help clinicians make better decisions and provide better patient care. Additionally, further user studies could investigate the satisfaction with and trustworthiness of these models for deployment in the medical field.
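The summary refers to SHAP, which attributes a prediction to individual features via Shapley values. As an illustration of that idea only (not the report's actual pipeline, dataset, or model), the sketch below estimates Shapley values by Monte Carlo sampling of feature orderings in pure NumPy; the weights and the instance are invented for the example.

```python
import numpy as np

def shapley_values(f, x, background, n_samples=200, rng=None):
    """Monte Carlo Shapley estimate for a single instance x.

    For each sampled feature ordering, features are switched one by one
    from their background value to their value in x, and each feature is
    credited with the resulting change in the model output f.
    """
    rng = np.random.default_rng(rng)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)
        z = background.astype(float).copy()
        prev = f(z)
        for j in order:
            z[j] = x[j]                # add feature j to the coalition
            cur = f(z)
            phi[j] += cur - prev       # marginal contribution of j
            prev = cur
    return phi / n_samples

# Hypothetical linear "risk score" (weights are assumptions, not the
# report's model). For a linear f, each feature's marginal contribution
# is the same in every ordering, so the estimate equals the exact
# Shapley value w_i * (x_i - background_i).
w = np.array([0.8, -0.5, 0.0, 1.2])
f = lambda z: float(w @ z)
x = np.array([1.0, 2.0, 3.0, 0.5])
bg = np.zeros(4)

phi = shapley_values(f, x, bg, n_samples=50, rng=0)
```

For a linear model the attributions reduce to `w * (x - bg)`, which makes this a convenient sanity check; for nonlinear models the same loop yields the averaged marginal contributions that SHAP approximates more efficiently.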