Demystifying AI: bridging the explainability gap in LLMs

This project explores the use of Retrieval-Augmented Generation (RAG) with large language models (LLMs) to improve the explainability of AI systems in specialised domains such as the auditing of sustainability reports. It centres on a Proof of Concept (PoC) web application that combines RAG with LLMs to produce more explainable and understandable AI output. The application ingests sustainability reports, processes them to answer audit-related queries, and highlights the relevant passages in the documents to show the source of each response. The implementation uses a technology stack of Python, LlamaIndex, Streamlit, and PDF-processing libraries. The project demonstrates the application's ability to ingest, process, and derive responses from a sustainability report, illustrating how RAG and LLMs can enhance the explainability and reliability of AI systems in specialised domains. The PoC lays a foundation for further research and development toward more explainable, and therefore more trustworthy, AI applications.
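The abstract describes a retrieve-then-cite pipeline: ingest a report, answer a query, and highlight the source passage behind the answer. The record itself contains no code, so the sketch below illustrates that pattern in plain Python only. It is not the author's implementation (which uses LlamaIndex, Streamlit, and an LLM rather than this keyword-overlap scorer), and every name in it is hypothetical.

```python
# Toy sketch of the retrieve-then-cite pattern: split a report into
# chunks, retrieve the chunk that best matches a query, and return the
# passage together with its character offsets so a UI could highlight
# where the response came from.

from dataclasses import dataclass


@dataclass
class Retrieved:
    text: str     # the retrieved passage
    start: int    # character offset of the passage in the document
    end: int      # end offset, usable for source highlighting
    score: float  # crude relevance score in [0, 1]


def chunk(document: str, size: int = 80) -> list[tuple[int, str]]:
    """Split a document into fixed-size character chunks with offsets."""
    return [(i, document[i:i + size]) for i in range(0, len(document), size)]


def retrieve(document: str, query: str) -> Retrieved:
    """Return the chunk with the highest keyword overlap with the query."""
    terms = set(query.lower().replace("?", "").split())

    def overlap(c: tuple[int, str]) -> int:
        return len(terms & set(c[1].lower().split()))

    start, text = max(chunk(document), key=overlap)
    score = overlap((start, text)) / max(len(terms), 1)
    return Retrieved(text, start, start + len(text), score)


report = (
    "Scope 1 emissions fell 12% year on year. "
    "The company audited its supply chain for labour practices. "
    "Water usage per unit of production was reduced by 8%."
)
hit = retrieve(report, "What happened to scope 1 emissions?")
print(hit.start, hit.end, hit.text)
```

A production version of this idea replaces the keyword scorer with embedding-based retrieval (as LlamaIndex does) and feeds the retrieved passage to an LLM, but the source offsets serve the same purpose: they let the application point back at the exact material that grounded the answer.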

Bibliographic Details
Main Author: Chan, Darren Inn Siew
Other Authors: Erik Cambria
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Computer and Information Science
Retrieval augmented generation
Large language models
Explainability of AI
RAG
LLM
XAI
Sustainability reports auditing
Explainable AI
Online Access:https://hdl.handle.net/10356/175340
Record ID: sg-ntu-dr.10356-175340
School: School of Computer Science and Engineering
Contact: cambria@ntu.edu.sg
Degree: Bachelor's degree
Project code: SCSE23-0150
Deposited: 2024-04-23
Citation: Chan, D. I. S. (2024). Demystifying AI: bridging the explainability gap in LLMs. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175340
File format: application/pdf
Content provider: NTU Library
Collection: DR-NTU