Autonomy vs. artificial intelligence: studies on healthcare work and analytics

Bibliographic Details
Main Author: Wang, Le
Other Authors: Goh Kim Huat
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/146910
Institution: Nanyang Technological University
Description
Summary: With the advance and prevalence of artificial intelligence (AI), many believe that the healthcare industry is ripe for AI disruption, and a wide variety of AI technologies have been piloted within healthcare. While AI researchers, computer scientists, and medical informatics researchers have extensively studied the roles of AI technologies in assisting clinical judgement and decision making, most initiatives remain at the research and development stage, and others face numerous implementation challenges. This dissertation aims to provide insights into how the application of AI technologies in healthcare can be improved.

In the first essay, I develop an artificial intelligence algorithm, the SERA algorithm, which uses both structured data and unstructured clinical notes to predict and diagnose sepsis. I test the algorithm on independent clinical notes and achieve high predictive accuracy 12 hours before the onset of sepsis (AUC 0.94, sensitivity 0.87, and specificity 0.87). I compare the SERA algorithm against physician predictions and show the algorithm's potential to increase the early detection of sepsis by up to 32% and reduce false positives by up to 17%. Mining unstructured clinical notes is shown to improve the algorithm's accuracy compared to using only clinical measures for early warning 12 to 48 hours before the onset of sepsis. In addition, I demonstrate the role human experts play in an increasingly algorithmic world of artificial intelligence.

Beyond improving AI algorithms and artifacts themselves, researchers are also trying to improve the actual adoption and usage of these AI tools so as to increase the effectiveness and efficiency of healthcare AI. I address this aspect by examining physicians' behaviour towards AI-enabled clinical alerts, one of the most common AI applications in current clinical settings. One of the benefits of electronic medical record (EMR) systems is their ability to leverage AI to provide automated clinical alerts, which has led to the ubiquitous use of such alerts in clinical settings. The excessive use of automated clinical alerts, however, leads to the excessive dismissal of these alerts by physicians, a phenomenon described as alert fatigue. In the second essay, I track the actions of 1,152 physicians as they encounter automated clinical alerts in a hospital over a period of 22 months, collecting a total of 66,320 instances of automated clinical alerts and examining the physicians' behaviour towards them. This essay posits that physicians' dismissal of the alerts is due to more than alert fatigue: I argue that the psychological distance of an alert encounter shapes physicians' construal of the alert, i.e., the way in which they perceive, comprehend, and interpret it, and that a high-level construal results in excessive dismissal of the alert. My findings suggest that the context in which AI-enabled alerts appear influences physicians' adherence to them, and I examine the boundary conditions that mitigate the biases behind the excessive dismissal of these alerts.
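
For illustration only, the following is a minimal sketch of the general technique described in the first essay: combining structured clinical measures with unstructured note text in a single classifier and reporting AUC, sensitivity, and specificity. It is not the SERA algorithm itself; the feature names, synthetic data, and logistic regression model are assumptions made purely for demonstration.

    # Illustrative sketch: fuse structured vitals with free-text notes for a
    # sepsis-onset classifier and report AUC, sensitivity, and specificity.
    # All column names, notes, and labels below are synthetic placeholders,
    # not the thesis's actual data or model.
    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix, roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical structured measures plus an unstructured note field.
    df = pd.DataFrame({
        "heart_rate": rng.normal(90, 15, n),
        "temperature": rng.normal(37.5, 0.8, n),
        "wbc_count": rng.normal(9, 3, n),
        "note": rng.choice(
            ["patient febrile and tachycardic", "stable, no acute distress",
             "suspected infection, started antibiotics", "routine post-op check"], n),
    })
    y = rng.integers(0, 2, n)  # placeholder labels: 1 = sepsis onset within 12 hours

    # Structured columns are scaled; the note column is vectorised with TF-IDF,
    # so both kinds of data feed one classifier.
    preprocess = ColumnTransformer([
        ("vitals", StandardScaler(), ["heart_rate", "temperature", "wbc_count"]),
        ("text", TfidfVectorizer(), "note"),
    ])
    model = Pipeline([("features", preprocess),
                      ("clf", LogisticRegression(max_iter=1000))])

    X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.3,
                                                        random_state=0)
    model.fit(X_train, y_train)

    probs = model.predict_proba(X_test)[:, 1]
    preds = (probs >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
    print("AUC:", round(roc_auc_score(y_test, probs), 3))
    print("Sensitivity:", round(tp / (tp + fn), 3))
    print("Specificity:", round(tn / (tn + fp), 3))

With synthetic random labels the printed metrics are of course meaningless; the sketch only shows how structured and unstructured inputs can be evaluated together with the same metrics the abstract reports.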