Towards explainable artificial intelligence in the banking sector
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175085
Institution: Nanyang Technological University
Summary: This research addresses the imperative need for Explainable AI (XAI) tools tailored to the banking industry, where widespread adoption of AI has led to escalating demands for explainable and transparent models. The methodology involves the implementation of a user-centric system using the DASH framework and introduces a novel approach of integrating two XAI techniques, SP-LIME and SHAP, with the aim of capitalizing on the strengths of each tool.
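The record does not publish the thesis's integration code, but the idea of combining SP-LIME and SHAP can be sketched minimally. One plausible approach (an assumption, not the thesis's actual method) is to normalise each method's per-feature attributions to unit L1 norm before averaging, so that neither method dominates purely because of its output scale. Feature names and scores below are illustrative.

```python
# Hedged sketch: combine two per-feature attribution vectors (e.g. one
# from SP-LIME, one from SHAP) by L1-normalising each and averaging.
# The normalisation step prevents the method with larger raw magnitudes
# from dominating the combined explanation.

def combine_attributions(lime_scores, shap_scores):
    """Average two attribution dicts (feature -> score) after L1 normalisation."""
    def l1_normalise(scores):
        total = sum(abs(v) for v in scores.values()) or 1.0
        return {k: v / total for k, v in scores.items()}
    a = l1_normalise(lime_scores)
    b = l1_normalise(shap_scores)
    return {k: (a[k] + b[k]) / 2 for k in a}

# Illustrative values for a toy credit-scoring model (hypothetical):
lime = {"income": 0.6, "age": -0.2, "debt": 0.2}
shap = {"income": 1.2, "age": -0.6, "debt": 0.2}
print(combine_attributions(lime, shap))
```

Other combination rules (rank aggregation, weighted averaging) would fit the same interface; the normalise-then-average form is just the simplest to state.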
Notably, the system is designed to reduce the time and effort users need to understand XAI outputs, a persistent challenge in the current XAI landscape. The research emphasizes a user-friendly approach, with a meticulous design that strives to ensure understandability and avoid information overload.
Drawing upon the capabilities of language generation models, the system goes beyond current XAI outputs by generating text-readable explanations, establishing a crucial link for users that was previously absent. Preliminary survey findings indicate a positive response from respondents, affirming the viability of incorporating such technologies in the system. Several prompt engineering techniques were explored in the development of prompts to optimize the effectiveness of the generated explanations.
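The thesis's actual prompts are not reproduced in this record, but the general pattern of turning XAI output into a prompt for a language model can be sketched. The template, feature names, and scores below are assumptions for illustration only: attributions are ranked by magnitude, the top features are formatted as plain-language bullet points, and the result is embedded in an instruction aimed at a non-technical reader.

```python
# Hedged sketch: format feature attributions as a prompt for a language
# model, so the model can produce a plain-English explanation. This is
# one possible template, not the thesis's actual prompt.

def build_explanation_prompt(prediction, attributions, top_k=2):
    """Rank features by |attribution| and embed the top-k in an LLM prompt."""
    top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    feature_lines = "\n".join(
        f"- {name}: {'supports' if score > 0 else 'opposes'} the prediction "
        f"(weight {score:+.2f})"
        for name, score in top
    )
    return (
        f"The model predicted '{prediction}'. Explain in plain English, "
        f"for a non-technical bank customer, why, given these feature "
        f"attributions:\n{feature_lines}"
    )

# Hypothetical loan-decision example:
print(build_explanation_prompt("loan rejected",
                               {"income": -0.6, "age": 0.1, "debt": -0.3}))
```

Restricting the prompt to the top-k features is one simple way to keep the generated explanation short, in line with the abstract's emphasis on avoiding information overload.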
Another distinctive feature of this research lies in the usage of perturbed samples to compute quantitative metrics, enhancing the reliability of XAI output and thereby fostering trust among users. This addresses a significant concern in current practical implementations of XAI, where the absence of such measures leads users to question the reliability of the output, undermining the effectiveness of these tools.
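The record does not specify which perturbation-based metrics the thesis computes. A common choice in the XAI literature, shown here as an assumption rather than the thesis's implementation, is local fidelity: how closely a surrogate built from the attributions reproduces the model's output on samples perturbed around the explained instance. The toy model and function names below are illustrative.

```python
import random

# Hedged sketch: a perturbation-based fidelity metric. Gaussian noise is
# added to the instance, and the mean absolute error between the real
# model and the explanation's surrogate is measured over those samples.
# A lower score means the explanation is more faithful locally.

def local_fidelity(model, surrogate, x, n_samples=200, scale=0.1, seed=0):
    """Mean |model(z) - surrogate(z)| over perturbations z of instance x."""
    rng = random.Random(seed)
    err = 0.0
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, scale) for xi in x]
        err += abs(model(z) - surrogate(z))
    return err / n_samples

# Toy check: a linear model explained by an exact linear surrogate, i.e.
# attributions [2.0, -1.0], gives perfect (zero-error) local fidelity.
model = lambda z: 2.0 * z[0] - 1.0 * z[1]
surrogate = lambda z: 2.0 * z[0] - 1.0 * z[1]
print(local_fidelity(model, surrogate, [0.5, 0.3]))  # → 0.0
```

Reporting such a score alongside each explanation gives users a concrete basis for the trust the abstract describes, rather than asking them to take the XAI output on faith.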
Overall, this research contributes to the ongoing efforts to bridge the gap between complex models and stakeholders in the banking sector by enhancing the comprehensibility and usability of XAI tools.