Affective design analysis of explainable artificial intelligence (XAI): A user-centric perspective

Explainable Artificial Intelligence (XAI) has successfully solved the black box paradox of Artificial Intelligence (AI). By providing human-level insights into AI, it allows users to understand its inner workings even with limited knowledge of the underlying machine learning algorithms. As a result, the field grew and development flourished. However, concerns have been raised that the techniques are limited in terms of whom they apply to and how their effects can be leveraged. Currently, most XAI techniques have been designed by developers. Though needed and valuable, XAI is more critical for end-users, considering that transparency is closely tied to trust and adoption. This study aims to understand and conceptualize an end-user-centric XAI to fill this gap in end-user understanding. Drawing on recent findings from related studies, it focuses on design conceptualization and affective analysis. Data from 202 participants were collected through an online survey, to identify the vital XAI design components, and through testbed experimentation, to explore changes in affect and trust across design configurations. The results show that affect is a viable trust-calibration route for XAI. In terms of design, explanation form, communication style, and the presence of supplementary information are the components users look for in an effective XAI. Lastly, anxiety about AI, incidental emotion, perceived AI reliability, and experience using the system are significant moderators of the trust-calibration process for end-users.

Bibliographic Details
Main Authors: Bernardo, Ezekiel; Seva, Rosemary R.
Format: text
Published: Animo Repository 2023
Subjects: Artificial intelligence; Deep learning (Machine learning); Artificial Intelligence and Robotics
Online Access: https://animorepository.dlsu.edu.ph/faculty_research/12563
Institution: De La Salle University