Demystifying AI: bridging the explainability gap in LLMs
This project explores Retrieval-Augmented Generation (RAG) with large language models (LLMs) to improve the explainability of AI systems in specialized domains, such as auditing sustainability reports. The project focuses on the development of a Proof of Concept (...
| Main Author: | Chan, Darren Inn Siew |
|---|---|
| Other Authors: | Erik Cambria |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2024 |
| Online Access: | https://hdl.handle.net/10356/175340 |
| Institution: | Nanyang Technological University |
Similar Items
- TeLLMe what you see: using LLMs to explain neurons in vision models
  by: Guertler, Leon
  Published: (2024)
- May I ask a follow-up question? Understanding the benefits of conversations in neural network explainability
  by: Zhang, Tong, et al.
  Published: (2024)
- Towards explainable artificial intelligence in the banking sector
  by: Jew, Clarissa Bella
  Published: (2024)
- Explainable AI for medical over-investigation identification
  by: Suresh Kumar Rathika
  Published: (2024)
- Toward conversational interpretations of neural networks: data collection
  by: Yeow, Ming Xuan
  Published: (2024)