A prompt-based topic-modeling method for depression detection on low-resource data
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects:
Online Access: https://ink.library.smu.edu.sg/lkcsb_research/7469
Institution: Singapore Management University
Summary: Depression has a large impact on one's personal life, especially during the COVID-19 pandemic, and researchers have long sought reliable methods for depression detection. Recently, deep-learning-based methods have attracted much attention from the research community, but they still face the challenge that data collection and annotation are difficult and expensive; in many real-world applications, only a small amount of training data, or even none, is available. In this context, we propose a Prompt-based Topic-modeling method for Depression Detection (PTDD) on low-resource data, aiming to establish an effective way of detecting depression under this challenging setting. Instead of learning discriminative features from a small amount of labeled data, the proposed framework leverages the generalization power of pretrained language models. Specifically, based on the question-and-answer routine of the interview, we first reorganize the text data according to predefined topics for each interviewee. Via the prompt-based framework, we then predict whether the next-sentence prompt is emotionally positive or not. Finally, depression detection is performed on the resulting topic-wise predictions through a simple voting process. In the experiments, we validate the effectiveness of our model under several low-resource settings. The results and analysis demonstrate that PTDD achieves acceptable performance when only a few, or even no, training samples are available.
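The abstract describes a three-step pipeline: reorganize interview text by predefined topics, predict per topic whether a next-sentence prompt is emotionally positive, and vote over the topic-wise predictions. Below is a minimal Python sketch of that final voting step only; the scorer `score_topic_prompt`, the voting threshold, and the data layout are hypothetical placeholders, since the paper obtains the topic-wise labels from a pretrained language model rather than from anything shown here.

```python
# Minimal sketch of the topic-wise voting step described in the abstract.
# The prompt scorer is a hypothetical placeholder: in the paper, each
# topic's "emotionally positive or not" decision comes from a pretrained
# language model via a next-sentence-style prompt.

from typing import Dict, List


def score_topic_prompt(topic: str, answers: List[str]) -> int:
    """Hypothetical placeholder: return 1 if the next-sentence prompt for
    this topic is judged emotionally positive, else 0."""
    raise NotImplementedError("plug in a pretrained-LM prompt scorer here")


def detect_depression(interview: Dict[str, List[str]],
                      threshold: float = 0.5) -> bool:
    """Vote over topic-wise predictions: flag depression when the fraction
    of topics judged emotionally negative exceeds the (assumed) threshold."""
    negatives = [1 - score_topic_prompt(topic, answers)  # 1 = negative topic
                 for topic, answers in interview.items()]
    return sum(negatives) / max(len(negatives), 1) > threshold
```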