Discovering emotions in Filipino laughter using audio features

Laughter is an important aspect of non-verbal communication. Though laughter is often associated with happiness, this is not always the case; laughter may also convey other kinds of emotions. We infer that a variety of other emotions occur during laughter and occ...

Full description

Bibliographic Details
Main Authors: Miranda, Miguel, Alonzo, Julie Ann, Campita, Janelle, Lucila, Stephanie, Suarez, Merlin Teodocia
Format: text
Published: Animo Repository 2010
Subjects:
Online Access:https://animorepository.dlsu.edu.ph/faculty_research/1451
https://animorepository.dlsu.edu.ph/context/faculty_research/article/2450/type/native/viewcontent
Institution: De La Salle University
id oai:animorepository.dlsu.edu.ph:faculty_research-2450
record_format eprints
spelling oai:animorepository.dlsu.edu.ph:faculty_research-2450 2021-06-28T08:28:02Z Discovering emotions in Filipino laughter using audio features Miranda, Miguel Alonzo, Julie Ann Campita, Janelle Lucila, Stephanie Suarez, Merlin Teodocia Laughter is an important aspect of non-verbal communication. Though laughter is often associated with happiness, this is not always the case; laughter may also convey other kinds of emotions. We infer that a variety of other emotions occur during laughter and therefore investigate this phenomenon. The objective of this research is to identify the underlying emotions in Filipino laughter. This research studies existing machine learning techniques for emotion identification from laughter audio signals in order to derive more suitable solutions. We present a comparative study of the performance of a Multilayer Perceptron (MLP) and Support Vector Machines (SVM) using our system. Recorded audio was manually segmented and pre-processed using low-pass filters. Thirteen Mel-frequency cepstral coefficients (MFCCs) and prosodic features (pitch, intensity, and formants) were extracted from the audio signals and fed separately to each classifier. Results showed that the highest rate of correctly classified instances was achieved using prosodic features only: MLP yielded a 44.4444% classification rate, while SVM achieved 18.5185%. © 2010 IEEE. 2010-10-28T07:00:00Z text text/html https://animorepository.dlsu.edu.ph/faculty_research/1451 https://animorepository.dlsu.edu.ph/context/faculty_research/article/2450/type/native/viewcontent Faculty Research Work Animo Repository Pattern recognition systems Computational auditory scene analysis Laughter Emotion recognition Computer Sciences Software Engineering
institution De La Salle University
building De La Salle University Library
continent Asia
country Philippines
content_provider De La Salle University Library
collection DLSU Institutional Repository
topic Pattern recognition systems
Computational auditory scene analysis
Laughter
Emotion recognition
Computer Sciences
Software Engineering
spellingShingle Pattern recognition systems
Computational auditory scene analysis
Laughter
Emotion recognition
Computer Sciences
Software Engineering
Miranda, Miguel
Alonzo, Julie Ann
Campita, Janelle
Lucila, Stephanie
Suarez, Merlin Teodocia
Discovering emotions in Filipino laughter using audio features
description Laughter is an important aspect of non-verbal communication. Though laughter is often associated with happiness, this is not always the case; laughter may also convey other kinds of emotions. We infer that a variety of other emotions occur during laughter and therefore investigate this phenomenon. The objective of this research is to identify the underlying emotions in Filipino laughter. This research studies existing machine learning techniques for emotion identification from laughter audio signals in order to derive more suitable solutions. We present a comparative study of the performance of a Multilayer Perceptron (MLP) and Support Vector Machines (SVM) using our system. Recorded audio was manually segmented and pre-processed using low-pass filters. Thirteen Mel-frequency cepstral coefficients (MFCCs) and prosodic features (pitch, intensity, and formants) were extracted from the audio signals and fed separately to each classifier. Results showed that the highest rate of correctly classified instances was achieved using prosodic features only: MLP yielded a 44.4444% classification rate, while SVM achieved 18.5185%. © 2010 IEEE.
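The pipeline the description sketches (low-pass pre-processing, prosodic feature extraction, and an MLP-vs-SVM comparison) can be illustrated as follows. This is a minimal sketch assuming scipy and scikit-learn, not the authors' actual system: the function names, filter cutoff, and synthetic sine-burst "clips" standing in for laughter recordings are all illustrative, and the formant and MFCC branches of the paper's feature set are omitted for brevity.

```python
# Hypothetical sketch of the described pipeline; names and parameters are
# illustrative, not taken from the paper's implementation.
import numpy as np
from scipy.signal import butter, lfilter
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def lowpass(signal, sr, cutoff=4000.0, order=4):
    """Butterworth low-pass filter, a stand-in for the paper's pre-processing."""
    b, a = butter(order, cutoff / (sr / 2), btype="low")
    return lfilter(b, a, signal)

def prosodic_features(signal, sr):
    """Toy prosodic features: pitch from the autocorrelation peak, intensity
    as RMS energy. (The paper also uses formants; omitted here.)"""
    x = lowpass(signal, sr)
    rms = np.sqrt(np.mean(x ** 2))                    # intensity
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation
    lag_min, lag_max = sr // 400, sr // 75             # search 75-400 Hz
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return np.array([sr / lag, rms])                   # [pitch_hz, intensity]

# Synthetic stand-in data: two "emotion" classes of noisy sine bursts;
# real laughter clips would replace these.
rng = np.random.default_rng(0)
sr = 16000
X, y = [], []
for label, f0 in [(0, 120.0), (1, 250.0)]:
    for _ in range(20):
        t = np.arange(sr // 8) / sr
        clip = np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(t.size)
        X.append(prosodic_features(clip, sr))
        y.append(label)
X, y = np.array(X), np.array(y)

# Compare the two classifiers the paper evaluates, via cross-validation.
for name, clf in [
    ("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
    ("SVM", SVC(kernel="rbf")),
]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%} correctly classified")
```

On real recordings the two feature sets would be fed to the classifiers separately, as in the paper, and scaling the features before the MLP would typically help.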
format text
author Miranda, Miguel
Alonzo, Julie Ann
Campita, Janelle
Lucila, Stephanie
Suarez, Merlin Teodocia
author_facet Miranda, Miguel
Alonzo, Julie Ann
Campita, Janelle
Lucila, Stephanie
Suarez, Merlin Teodocia
author_sort Miranda, Miguel
title Discovering emotions in Filipino laughter using audio features
title_short Discovering emotions in Filipino laughter using audio features
title_full Discovering emotions in Filipino laughter using audio features
title_fullStr Discovering emotions in Filipino laughter using audio features
title_full_unstemmed Discovering emotions in Filipino laughter using audio features
title_sort discovering emotions in filipino laughter using audio features
publisher Animo Repository
publishDate 2010
url https://animorepository.dlsu.edu.ph/faculty_research/1451
https://animorepository.dlsu.edu.ph/context/faculty_research/article/2450/type/native/viewcontent
_version_ 1703981057097859072