Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository
Data from 255 Thai patients with chronic pain were collected at Chiang Mai Medical School Hospital. After the patients self-rated their pain level, a smartphone camera captured their faces for 10 s at a one-meter distance. For those unable to self-rate, a video was recorded immediately after...
Saved in:

| Main Author: | Gomutbutra P. |
|---|---|
| Other Authors: | Mahidol University |
| Format: | Article |
| Published: | 2023 |
| Subjects: | Computer Science |
| Online Access: | https://repository.li.mahidol.ac.th/handle/123456789/84248 |
| Institution: | Mahidol University |
id: th-mahidol.84248
record_format: dspace
spelling: th-mahidol.84248 2023-06-19T00:01:16Z Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository Gomutbutra P. Mahidol University Computer Science Data from 255 Thai patients with chronic pain were collected at Chiang Mai Medical School Hospital. After the patients self-rated their pain level, a smartphone camera captured their faces for 10 s at a one-meter distance. For those unable to self-rate, a video was recorded immediately after the movement that caused the pain. A trained assistant rated each video clip using the Pain Assessment in Advanced Dementia (PAINAD) scale, and pain was classified into three levels: mild, moderate, and severe. OpenFace was used to convert the video clips into 18 facial action units (FAUs). Six classification models were evaluated: logistic regression, multilayer perceptron, naïve Bayes, decision tree, k-nearest neighbors (KNN), and support vector machine (SVM). Among the models restricted to FAUs described in the literature (FAUs 4, 6, 7, 9, 10, 25, 26, 27, and 45), the multilayer perceptron was the most accurate, at 50%. Among models using machine-learning feature selection, an SVM using FAUs 1, 2, 4, 7, 9, 10, 12, 20, 25, and 45 plus gender achieved the best accuracy, 58%. Our open-source pipeline for automatically extracting FAUs from video clips is not yet robust enough to classify pain in the elderly. A consensus method for transforming facial recognition algorithm outputs into values comparable to human ratings, together with international good practice for reciprocal data sharing, may improve the accuracy and feasibility of machine learning facial pain raters. 2023-06-18T17:01:16Z 2023-06-18T17:01:16Z 2022-10-06 Article Frontiers in Artificial Intelligence Vol.5 (2022) 10.3389/frai.2022.942248 26248212 2-s2.0-85144019487 https://repository.li.mahidol.ac.th/handle/123456789/84248 SCOPUS
institution: Mahidol University
building: Mahidol University Library
continent: Asia
country: Thailand
content_provider: Mahidol University Library
collection: Mahidol University Institutional Repository
topic: Computer Science
description: Data from 255 Thai patients with chronic pain were collected at Chiang Mai Medical School Hospital. After the patients self-rated their pain level, a smartphone camera captured their faces for 10 s at a one-meter distance. For those unable to self-rate, a video was recorded immediately after the movement that caused the pain. A trained assistant rated each video clip using the Pain Assessment in Advanced Dementia (PAINAD) scale, and pain was classified into three levels: mild, moderate, and severe. OpenFace was used to convert the video clips into 18 facial action units (FAUs). Six classification models were evaluated: logistic regression, multilayer perceptron, naïve Bayes, decision tree, k-nearest neighbors (KNN), and support vector machine (SVM). Among the models restricted to FAUs described in the literature (FAUs 4, 6, 7, 9, 10, 25, 26, 27, and 45), the multilayer perceptron was the most accurate, at 50%. Among models using machine-learning feature selection, an SVM using FAUs 1, 2, 4, 7, 9, 10, 12, 20, 25, and 45 plus gender achieved the best accuracy, 58%. Our open-source pipeline for automatically extracting FAUs from video clips is not yet robust enough to classify pain in the elderly. A consensus method for transforming facial recognition algorithm outputs into values comparable to human ratings, together with international good practice for reciprocal data sharing, may improve the accuracy and feasibility of machine learning facial pain raters.
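The description above outlines the pipeline of interest: OpenFace converts each video clip into per-frame facial action unit (FAU) intensities, which are then fed, together with gender, into standard classifiers. The sketch below is a minimal illustration of that idea, not the authors' published code: it assumes each clip has already been processed by the OpenFace FeatureExtraction tool into one CSV whose AU intensity columns are named like AU04_r, and that a hypothetical labels.csv holds each clip's gender and PAINAD-derived pain level; the openface_out/ layout, the RBF-kernel SVM, and the 5-fold cross-validation are illustrative assumptions.

```python
from pathlib import Path

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# FAUs used by the best-performing (feature-selected) SVM model in the abstract.
SELECTED_AUS = ["AU01_r", "AU02_r", "AU04_r", "AU07_r", "AU09_r",
                "AU10_r", "AU12_r", "AU20_r", "AU25_r", "AU45_r"]

def clip_features(csv_path: Path) -> pd.Series:
    """Average each AU intensity over all frames of one OpenFace output CSV."""
    frames = pd.read_csv(csv_path)
    frames.columns = frames.columns.str.strip()  # some OpenFace versions pad headers with spaces
    return frames[SELECTED_AUS].mean()

# labels.csv is hypothetical: one row per clip with clip_id, gender (0/1), pain_level.
labels = pd.read_csv("labels.csv", index_col="clip_id")

# Build one feature row per clip from per-frame AU intensities, then append gender.
features = pd.DataFrame(
    {cid: clip_features(Path("openface_out") / f"{cid}.csv") for cid in labels.index}
).T
X = features.join(labels["gender"])
y = labels["pain_level"]  # mild / moderate / severe

# Three-class SVM with feature scaling, scored by 5-fold cross-validated accuracy.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("mean accuracy: %.2f" % cross_val_score(model, X, y, cv=5).mean())
```

Swapping SVC for scikit-learn's LogisticRegression, MLPClassifier, GaussianNB, DecisionTreeClassifier, or KNeighborsClassifier reproduces the same kind of model comparison the abstract describes, though the exact preprocessing and hyperparameters used in the study are not given here.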
author2: Mahidol University
author_facet: Mahidol University Gomutbutra P.
format: Article
author: Gomutbutra P.
author_sort: Gomutbutra P.
title: Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository
publishDate: 2023
url: https://repository.li.mahidol.ac.th/handle/123456789/84248
_version_: 1781414106749730816