Developing the lay theory of artificial intelligence scale


Bibliographic Details
Main Author: Ong, Aaron Wei Jie
Other Authors: Ho, Moon-Ho Ringo
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2021
Subjects:
Online Access:https://hdl.handle.net/10356/148148
Institution: Nanyang Technological University
Description
Summary: Artificial Intelligence (AI) continues to permeate many aspects of our lives. Organizations whose bottom-line results rely heavily on AI are at risk of AI aversion, in which individuals underutilize AI. While prior studies have developed scales to measure people’s attitudes towards AI, these measures tend to assume that such attitudes, or lay beliefs about AI, are fixed. Drawing on the extensive research on lay theories and on relevant constructs in the field of AI, we developed a new scale to measure the lay theory of AI. To test the scale’s psychometric properties, we sampled 360 US participants from Amazon Mechanical Turk. As hypothesized, our results yielded a psychometrically robust scale measuring two factors: AI entity theory and AI incremental theory. The lay theory of AI scale demonstrated good internal consistency reliability as well as convergent and divergent validity with relevant constructs. The scale also demonstrated incremental validity, uniquely predicting an individual’s propensity to trust AI beyond well-established predictors. Under the broad umbrella of lay theories, we speculate that the lay theory of AI is malleable, suggesting a range of interventions that could help organizations overcome AI aversion. Future research could replicate and cross-validate the lay theory of AI scale and examine its antecedents and consequences.