Building more explainable artificial intelligence with argumentation
Currently, much of machine learning is opaque, operating as a “black box”. However, for humans to understand, trust, and effectively manage emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to expla...
Main Authors: Zeng, Zhiwei; Miao, Chunyan; Leung, Cyril; Chin, Jing Jih
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2020
Online Access: https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16762
https://hdl.handle.net/10356/139223
Institution: Nanyang Technological University
Similar Items
- Computing argumentative explanations in bipolar argumentation frameworks
  by: Miao, Chunyan, et al.
  Published: (2019)
- Towards explainable artificial intelligence in the banking sector
  by: Jew, Clarissa Bella
  Published: (2024)
- Explainable AI for medical over-investigation identification
  by: Suresh Kumar Rathika
  Published: (2024)
- Context-based and explainable decision making with argumentation
  by: Zeng, Zhiwei, et al.
  Published: (2019)
- The knowledge argument, the open question argument, and the moral problem
  by: Pelczar, M.
  Published: (2011)