Effects of automation transparency on trust and diagnostic decision making with an automated decision aid during medical emergencies

Bibliographic Details
Main Author: Mohamed Syahid Hassan
Other Authors: Park Taezoon
Format: Theses and Dissertations
Language: English
Published: 2015
Online Access: https://hdl.handle.net/10356/62933
Institution: Nanyang Technological University
Description
Summary: Automated decision aids can improve decision-making performance and reduce errors in complex decision making, such as during a medical emergency. However, because such automated systems are imperfectly reliable, users' trust in the system can become miscalibrated, leading to problems of automation disuse and misuse. Furthermore, when an automation error does occur, trust in and acceptance of the system are reduced and recover slowly. A transparent automation provides the user with insight into the internal processes that produce its outcomes, which is expected to improve trust calibration. This research investigated whether automation transparency features could increase diagnostic performance by improving trust calibration, reducing the trust drop after failure, and aiding trust recovery. Novice doctors performed a simulated emergency diagnosis task using automated decision aids with different transparency configurations, manipulated by the presence of two transparency features: a list of Key Diagnostic Cues that explains the aid's recommendation, and a Likelihood Rating that displays how likely the aid considers its recommendation to be correct. The study found no significant evidence that the different transparency configurations improved appropriate trust in the aid's recommendations. Instead, Likelihood Ratings reduced diagnostic confidence during appropriate reliance. The transparency features also intensified the trust drop, and the corresponding decrease in confidence, caused by the experience of an error. However, Key Diagnostic Cues decreased confidence during distrust, while Likelihood Ratings improved trust recovery to the point where trust exceeded its initial level. Although transparency features can intensify mistrust immediately after the automation commits an error, they are still recommended because of their potential to improve overall trust in the system over the long term.