Software testing and explainable AI: A study for evaluating XAI methods on software testing datasets

Bibliographic Details
Main Author: Tay, Glenn
Other Authors: Fan Xiuyi
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2023
Subjects:
Online Access: https://hdl.handle.net/10356/166087
Institution: Nanyang Technological University
Description
Summary: Explainable AI (XAI) is a set of tools and frameworks that help humans understand machine learning models, which are often opaque. Two XAI techniques, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), are state-of-the-art, model-agnostic tools that can be used to explain any machine learning model. This project compares the performance of SHAP and LIME in four aspects: local interpretability on the test set, global interpretability on the test set, local interpretability of misclassified observations, and global interpretability of misclassified versus correctly classified observations. The project trains a Decision Tree Classifier for Software Defect Prediction on publicly available datasets, uses SHAP and LIME to explain the model's predictions, and compares SHAP and LIME across the four aspects above.
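
The workflow the abstract describes (train a Decision Tree Classifier, then explain it with SHAP and LIME) can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the synthetic data, feature names, and class names are hypothetical stand-ins for a public defect-prediction dataset.

# Minimal sketch, assuming scikit-learn, shap, and lime are installed.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a defect dataset: rows are software modules,
# the label marks whether a module is defective.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
feature_names = [f"metric_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# SHAP: per-feature attributions for every test observation;
# aggregating them over the test set gives a global view.
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# LIME: fits a local surrogate model around one observation
# (e.g. a misclassified one) and reports its top features.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["clean", "defective"], mode="classification")
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())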