The "loci" of misinformation and its correction in peer- and expert-led online communities for mental health: content analysis
Format: Article
Language: English
Published: 2023
Online Access: https://hdl.handle.net/10356/171588
Institution: Nanyang Technological University
Summary:

Background: Mental health problems are recognized as a pressing public health issue, and an increasing number of individuals are turning to online mental health communities to search for information and support. Although these virtual platforms have the potential to provide emotional support and access to anecdotal experiences, they can also expose users to large amounts of potentially inaccurate information. Despite the importance of this issue, limited research has been conducted, especially on the differences that might emerge depending on the type of content moderation in online communities: peer-led or expert-led.

Objective: We aim to fill this gap by examining the prevalence, the communicative context, and the persistence of mental health misinformation in Facebook online communities for mental health, with a focus on understanding the mechanisms that enable effective correction of inaccurate information and the differences between expert-led and peer-led groups.

Methods: We conducted a content analysis of 1534 statements (from 144 threads) in 2 Italian-speaking Facebook groups.

Results: The study found that an alarming proportion of comments (26.1%) contained medically inaccurate information. Furthermore, nearly 60% of the threads presented at least one misinformation statement without any correction attempt. Moderators were more likely than members to correct misinformation; however, they were not immune to posting content containing misinformation, which was an unexpected finding. Discussions about aspects of treatment (including side effects or treatment interruption) significantly increased the probability of encountering misinformation. Additionally, misinformation produced in the comments of a thread, rather than in the first post, had a lower probability of being corrected, particularly in peer-led communities.

Conclusions: The high prevalence of misinformation in online communities, particularly when left uncorrected, underscores the importance of conducting additional research to identify effective mechanisms to prevent its spread. This is especially important given the study’s finding that misinformation tends to be more prevalent around specific “loci” of discussion that, once identified, can serve as starting points for developing strategies to prevent and correct misinformation within them.