Evaluating conversational agents for mental health: scoping review of outcomes and outcome measurement instruments
Main Authors: Jabir, Ahmad Ishqi; Martinengo, Laura; Lin, Xiaowen; Torous, John; Subramaniam, Mythily; Car, Lorainne Tudor
Other Authors: Lee Kong Chian School of Medicine (LKCMedicine); Singapore-ETH Centre, Campus for Research Excellence And Technological Enterprise
Format: Article
Language: English
Published: 2023
Subjects: Science::Medicine; Conversational Agent; Chatbot
Online Access: https://hdl.handle.net/10356/169167
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-169167
Content Provider: NTU Library
Collection: DR-NTU
Description:
Background: The rapid proliferation of mental health interventions delivered through conversational agents (CAs) calls for high-quality evidence to support their implementation and adoption. Selecting appropriate outcomes, outcome measurement instruments, and assessment methods is crucial for ensuring that interventions are evaluated effectively and rigorously.
Objective: We aimed to identify the types of outcomes, outcome measurement instruments, and assessment methods used to assess the clinical, user experience, and technical outcomes in studies that evaluated the effectiveness of CA interventions for mental health.
Methods: We undertook a scoping review of the relevant literature to examine the types of outcomes, outcome measurement instruments, and assessment methods used in studies that evaluated the effectiveness of CA interventions for mental health. We performed a comprehensive search of electronic databases, including PubMed, Cochrane Central Register of Controlled Trials, Embase (Ovid), PsycINFO, and Web of Science, as well as Google Scholar and Google. We included experimental studies evaluating CA interventions for mental health. Screening and data extraction were performed independently and in parallel by 2 review authors. Descriptive and thematic analyses of the findings were performed.
Results: We included 32 studies that targeted the promotion of mental well-being (17/32, 53%) and the treatment and monitoring of mental health symptoms (21/32, 66%). The studies reported 203 outcome measurement instruments used to measure clinical outcomes (123/203, 60.6%), user experience outcomes (75/203, 36.9%), technical outcomes (2/203, 1.0%), and other outcomes (3/203, 1.5%). Most of the outcome measurement instruments were used in only 1 study (150/203, 73.9%) and were self-reported questionnaires (170/203, 83.7%), and they were most commonly delivered electronically via survey platforms (61/203, 30.0%). No validity evidence was cited for more than half of the outcome measurement instruments (107/203, 52.7%), which were largely created or adapted for the study in which they were used (95/107, 88.8%).
Conclusions: The diversity of outcomes and outcome measurement instruments employed in studies of CAs for mental health points to the need for an established minimum core outcome set and greater use of validated instruments. Future studies should also capitalize on the affordances of CAs and smartphones to streamline evaluation and reduce the self-reporting burden on participants.
Citation: Jabir, A. I., Martinengo, L., Lin, X., Torous, J., Subramaniam, M. & Car, L. T. (2023). Evaluating conversational agents for mental health: scoping review of outcomes and outcome measurement instruments. Journal of Medical Internet Research, 25, e44548. https://dx.doi.org/10.2196/44548
ISSN: 1438-8871
DOI: 10.2196/44548
PMID: 37074762
Scopus ID: 2-s2.0-85153121663
Version: Published version (application/pdf)
Funding: Ministry of Education (MOE); National Research Foundation (NRF). This research is supported by the Singapore Ministry of Education under its Academic Research Fund Tier 1 (RG36/20). The research was conducted as part of the Future Health Technologies program, which was established collaboratively between ETH Zurich and the National Research Foundation, Singapore, and is supported by the National Research Foundation, Prime Minister's Office, Singapore, under its Campus for Research Excellence and Technological Enterprise program.
Rights: © Ahmad Ishqi Jabir, Laura Martinengo, Xiaowen Lin, John Torous, Mythily Subramaniam, Lorainne Tudor Car. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 19.04.2023. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, and this copyright and license information must be included.