EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) ON MELANOMA DETECTION MODEL
Main Author:
Format: Final Project
Language: Indonesian
Online Access: https://digilib.itb.ac.id/gdl/view/78173
Institution: Institut Teknologi Bandung
Summary: The growing use of Artificial Intelligence (AI) has made many people dependent on it. This dependence can be dangerous when AI is used in critical domains such as medicine: AI can produce answers that humans cannot understand, and when those answers are wrong the consequences can be fatal. Explainable Artificial Intelligence (XAI) research aims to overcome this problem. In the medical field, melanoma detection is a task that requires XAI so that model predictions can be trusted. Melanoma is a skin cancer that is visually difficult to distinguish from benign lesions. Applying XAI to melanoma detection is important for increasing dermatologists' confidence in such models. It is therefore necessary to identify the XAI method that best explains a melanoma detection model to experts.
The work follows the CRISP-DM methodology, modified to fit the problem. The XAI methods used are SHAP, LIME, RISE, and Grad-CAM, which offer advantages over the methods used in previous studies. They are applied to the best melanoma detection model, based on Inception-V3 and trained on ISIC data from 2019 and 2020. Model evaluation was carried out through modeling experiments to select the best model.
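To illustrate the kind of perturbation-based explanation the abstract refers to, the sketch below implements a minimal RISE-style saliency map in plain NumPy. It is not the thesis code: the `rise_saliency` function, the mask parameters, and the `toy_model` stand-in for the Inception-V3 classifier are all illustrative assumptions.

```python
import numpy as np

def rise_saliency(model, image, n_masks=500, grid=4, p=0.5, seed=0):
    """RISE-style saliency: average random binary masks, each weighted by
    the model's score on the correspondingly masked image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    cell = h // grid  # toy setup: square image, side divisible by grid
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        # Coarse random mask, upsampled to image resolution.
        coarse = (rng.random((grid, grid)) < p).astype(float)
        mask = np.kron(coarse, np.ones((cell, cell)))
        saliency += model(image * mask) * mask
    # Normalize by the expected number of times each pixel is kept.
    return saliency / (n_masks * p)

# Hypothetical "detector" that only responds to the top-left quadrant,
# standing in for a trained classifier's melanoma probability.
def toy_model(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
sal = rise_saliency(toy_model, img)
print(sal[:8, :8].mean(), sal[8:, 8:].mean())
```

On this toy model the saliency should concentrate in the quadrant the score depends on, which is the behavior a dermatologist would inspect when judging whether an explanation is trustworthy or a spurious correlation.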
The XAI implementations, evaluated with dermatologists, show that SHAP and LIME are the best methods. However, the data used needs better cleaning so that the explanations do not reflect spurious correlations. Such spurious correlations occur with all of the XAI methods and are closely related to the large amount of noise in the training data.