Gender-specific classifiers in phoneme recognition and academic emotion detection
Gender-specific classifiers are shown to outperform general classifiers. In calibrated experiments designed to demonstrate this, two datasets were used to build male-specific and female-specific classifiers. The first dataset is used to predict vowel phonemes from speech signals, and the second is used to predict negative emotions from brainwave (EEG) signals. A Multi-Layer Perceptron (MLP) is first trained as a general classifier on the combined data from both male and female users. This general classifier recognizes vowel phonemes with a baseline accuracy of 91.09%, while the EEG classifier has an average baseline accuracy of 58.70%. The experiments show that performance improves significantly when the classifiers are trained to be gender-specific; that is, when there is a separate classifier for male users and a separate classifier for female users. For the vowel phoneme recognition dataset, the average accuracy increases to 94.20% for male-only users and 95.60% for female-only users. For the EEG dataset, the accuracy increases to 65.33% for male-only users and 70.50% for female-only users. Performance rates using recall and precision show the same trend. A further probe using a Self-Organizing Map (SOM) visualizes the distribution of sub-clusters among male and female users.
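The general-versus-gender-specific comparison described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the synthetic features stand in for the speech/EEG data, and scikit-learn's `MLPClassifier` plays the role of the Multi-Layer Perceptron.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 400 samples, 10 features, binary label,
# plus a gender attribute (0 = male, 1 = female) that shifts feature means.
n = 400
gender = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 10)) + y[:, None] + 0.5 * gender[:, None]

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, gender, test_size=0.25, random_state=0)

def train_mlp(Xs, ys):
    """Train a small MLP; the architecture here is illustrative only."""
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                         random_state=0).fit(Xs, ys)

# General classifier: trained on all users combined.
general = train_mlp(X_tr, y_tr)
acc_general = accuracy_score(y_te, general.predict(X_te))

# Gender-specific classifiers: one model per gender, each trained and
# evaluated only on that gender's samples.
accs = {}
for g in (0, 1):
    model = train_mlp(X_tr[g_tr == g], y_tr[g_tr == g])
    accs[g] = accuracy_score(y_te[g_te == g], model.predict(X_te[g_te == g]))

print(f"general: {acc_general:.3f}  male-only: {accs[0]:.3f}  "
      f"female-only: {accs[1]:.3f}")
```

On real data, splitting by gender halves the training set for each model, so the reported gains imply that within-gender homogeneity outweighs the cost of less data.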
Saved in:
Main Authors: | Azcarraga, Arnulfo P.; Talavera, Arces; Azcarraga, Judith |
---|---|
Format: | text |
Published: | Animo Repository, 2016 |
Subjects: | Phonemic awareness; Emotion recognition; Sex differences; Electroencephalography; Classifiers (Linguistics); Computer Sciences |
Online Access: | https://animorepository.dlsu.edu.ph/faculty_research/1280 |
Institution: | De La Salle University |
id |
oai:animorepository.dlsu.edu.ph:faculty_research-2279 |
---|---|
record_format |
eprints |
institution |
De La Salle University |
building |
De La Salle University Library |
continent |
Asia |
country |
Philippines |
content_provider |
De La Salle University Library |
collection |
DLSU Institutional Repository |
topic |
Phonemic awareness; Emotion recognition; Sex differences; Electroencephalography; Classifiers (Linguistics); Computer Sciences |
description |
Gender-specific classifiers are shown to outperform general classifiers. In calibrated experiments designed to demonstrate this, two datasets were used to build male-specific and female-specific classifiers. The first dataset is used to predict vowel phonemes from speech signals, and the second is used to predict negative emotions from brainwave (EEG) signals. A Multi-Layer Perceptron (MLP) is first trained as a general classifier on the combined data from both male and female users. This general classifier recognizes vowel phonemes with a baseline accuracy of 91.09%, while the EEG classifier has an average baseline accuracy of 58.70%. The experiments show that performance improves significantly when the classifiers are trained to be gender-specific; that is, when there is a separate classifier for male users and a separate classifier for female users. For the vowel phoneme recognition dataset, the average accuracy increases to 94.20% for male-only users and 95.60% for female-only users. For the EEG dataset, the accuracy increases to 65.33% for male-only users and 70.50% for female-only users. Performance rates using recall and precision show the same trend. A further probe using a Self-Organizing Map (SOM) visualizes the distribution of sub-clusters among male and female users. © Springer International Publishing AG 2016. |
format |
text |
author |
Azcarraga, Arnulfo P.; Talavera, Arces; Azcarraga, Judith |
author_sort |
Azcarraga, Arnulfo P. |
title |
Gender-specific classifiers in phoneme recognition and academic emotion detection |
publisher |
Animo Repository |
publishDate |
2016 |
url |
https://animorepository.dlsu.edu.ph/faculty_research/1280 |