Building more explainable artificial intelligence with argumentation

Bibliographic Details
Main Authors: Zeng, Zhiwei, Miao, Chunyan, Leung, Cyril, Chin, Jing Jih
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects:
Online Access:https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16762
https://hdl.handle.net/10356/139223
Institution: Nanyang Technological University
Description
Summary: Currently, much of machine learning is opaque, functioning much like a "black box". However, for humans to understand, trust and effectively manage emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.
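
To make the general idea concrete, the sketch below shows a minimal Dung-style abstract argumentation framework and the computation of its grounded extension, the kind of formal machinery argumentation-based explanation typically builds on. This is an illustrative assumption for the reader, not the specific formalism proposed in the paper; the argument names and example attacks are hypothetical.

# Minimal sketch of Dung-style abstract argumentation (illustrative only,
# not the authors' specific method from the paper).

def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation framework.

    arguments: set of argument labels
    attacks:   set of (attacker, attacked) pairs
    """
    def attackers_of(a):
        return {x for (x, y) in attacks if y == a}

    def defended_by(s):
        # An argument is defended by s if every one of its attackers
        # is itself attacked by some member of s.
        return {
            a for a in arguments
            if all(any((d, b) in attacks for d in s) for b in attackers_of(a))
        }

    # Iterate the characteristic function from the empty set to a fixed point.
    extension = set()
    while True:
        next_extension = defended_by(extension)
        if next_extension == extension:
            return extension
        extension = next_extension


if __name__ == "__main__":
    # Hypothetical example: conclusion "a" is attacked by "b", which is in
    # turn attacked by the unattacked argument "c"; "c" therefore defends "a".
    args = {"a", "b", "c"}
    atts = {("b", "a"), ("c", "b")}
    print(grounded_extension(args, atts))  # -> {'a', 'c'}

Under this reading, the accepted arguments and the attack relations that defend them could be traced back to the user as an explanation of why a conclusion is (or is not) accepted.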