User-centric explanations: A breakthrough for explainable models
Saved in:
Main Authors:
Format: Conference or Workshop Item
Published: 2021
Subjects:
Online Access: http://eprints.um.edu.my/35950/ https://www.scopus.com/inward/record.uri?eid=2-s2.0-85112177882&doi=10.1109%2fICIT52682.2021.9491641&partnerID=40&md5=7b9459571ec0379673b4765a55181b63
Institution: Universiti Malaya
Summary: Thanks to recent developments in explainable Deep Learning models, researchers have shown that these models can be highly successful and provide encouraging results. However, a lack of model interpretability can hinder the effective deployment of Deep Learning models in real-world applications. This has encouraged researchers to design a large number of algorithms that support transparency. Although studies have raised awareness of the importance of explainable artificial intelligence, the question of how to meet real users' needs to understand artificial intelligence remains unanswered. In this paper, we provide an overview of the current state of research in Human-Centered Machine Learning and of new methods for user-centric explanations of deep learning models. Furthermore, we outline future directions for interpretable machine learning and discuss the challenges facing this research field, as well as the importance of and motivation behind developing user-centric explanations for Deep Learning models. © 2021 IEEE.