Demystifying AI: bridging the explainability gap in LLMs
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175340
Institution: Nanyang Technological University
Language: English
Summary: This project explores Retrieval-Augmented Generation (RAG) with large language models (LLMs) to improve the explainability of AI systems in specialized domains, such as auditing sustainability reports. It focuses on the development of a Proof of Concept (PoC) web application that combines RAG with LLMs to produce more explainable and understandable AI output. The web application ingests sustainability reports, processes them to answer audit-related queries, and highlights the relevant material in the documents to show the source of each response.
The implementation uses a technology stack of Python, LlamaIndex, Streamlit and PDF-processing libraries. The project demonstrates the web application's ability to ingest, process, and derive responses from a sustainability report, effectively illustrating how RAG and LLMs can enhance the explainability and reliability of AI systems in specialized domains.
This PoC lays the foundation for further research and development toward better explainability of AI systems, pointing to the possibility of more explainable and, therefore, more trustworthy AI applications.
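For illustration, below is a minimal sketch of how such a pipeline could be wired together with the stack named in the abstract (Python, LlamaIndex, Streamlit). It is not the project's actual code: the app title, file handling, and display choices are assumptions, the import paths follow recent LlamaIndex releases, and an LLM backend (e.g. an OpenAI API key) plus a PDF reader such as pypdf are assumed to be configured.

```python
# Sketch only: a Streamlit app that ingests a PDF sustainability report with
# LlamaIndex, answers audit-related queries, and shows the source passages
# used for each answer (the explainability aspect described in the abstract).
import tempfile

import streamlit as st
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

st.title("Sustainability Report Q&A (RAG demo)")

uploaded = st.file_uploader("Upload a sustainability report (PDF)", type="pdf")
question = st.text_input("Ask an audit-related question")

if uploaded and question:
    # Persist the upload so LlamaIndex's directory reader can ingest it from disk.
    with tempfile.TemporaryDirectory() as tmpdir:
        pdf_path = f"{tmpdir}/report.pdf"
        with open(pdf_path, "wb") as f:
            f.write(uploaded.getvalue())

        # Ingest and chunk the report, then build an in-memory vector index.
        documents = SimpleDirectoryReader(tmpdir).load_data()
        index = VectorStoreIndex.from_documents(documents)

        # Retrieve relevant chunks and generate an answer grounded in them.
        query_engine = index.as_query_engine()
        response = query_engine.query(question)

        st.subheader("Answer")
        st.write(str(response))

        # Surface the retrieved passages so the user can trace the answer
        # back to the source document.
        st.subheader("Source passages")
        for node_with_score in response.source_nodes:
            if node_with_score.score is not None:
                st.markdown(f"**Relevance score:** {node_with_score.score:.2f}")
            st.write(node_with_score.node.get_content())
```

Saved as, say, `app.py`, a sketch like this would be launched with `streamlit run app.py`; the retrieved source passages shown under each answer correspond to the "highlights relevant material" behaviour described in the summary.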