Advances in computer-human interaction for detecting facial expression using dual tree multi band wavelet transform and Gaussian mixture model

Bibliographic Details
Main Authors: Kommineni, Jenni, Mandala, Satria, Sunar, Mohd. Shahrizal, Chakravarthy, Parvathaneni Midhu
Format: Article
Published: Springer-Verlag London Ltd. 2020
Subjects:
Online Access:http://eprints.utm.my/id/eprint/93429/
http://dx.doi.org/10.1007/s00521-020-05037-9
Institution: Universiti Teknologi Malaysia
Summary: In human communication, facial expressions play an important role, as they convey rich information about human emotions. Over the last two decades, facial expression recognition has become a very active research area in pattern recognition and computer vision. A key difficulty in this type of recognition is feature extraction: the dynamic nature of facial structures makes it hard to extract reliable features from facial images and to predict how difficult a given expression will be to recognize. In this research, an efficient approach to emotion or facial expression analysis based on the dual-tree M-band wavelet transform (DTMBWT) and a Gaussian mixture model (GMM) is presented. Different facial expressions are represented by the DTMBWT at decomposition levels one to six. From these representations, DTMBWT energy and entropy features are extracted for the corresponding facial expression. These features are analyzed for recognition using a GMM classifier while varying the number of Gaussians. The Japanese Female Facial Expression (JAFFE) database, which contains seven facial expressions (happy, sad, angry, fear, neutral, surprise and disgust), is used for the evaluation. Results show that the framework achieves 98.14% accuracy using the fourth-level decomposition, which is considerably high.
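The sketch below illustrates the general pipeline the abstract describes (per-subband energy and entropy features followed by per-class GMM scoring), not the authors' exact method. Since DTMBWT is not available in common libraries, a standard 2-D discrete wavelet transform from PyWavelets stands in for it, and names such as `subband_features`, `GMMExpressionClassifier`, and `n_gaussians` are illustrative choices rather than anything specified in the paper.

```python
# Minimal sketch, assuming a standard 2-D DWT (PyWavelets) as a stand-in for DTMBWT.
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture


def subband_features(image, wavelet="db4", level=4):
    """Energy and Shannon entropy of each wavelet subband of a grayscale image."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    # Flatten [cA_n, (cH_n, cV_n, cD_n), ...] into a list of subband arrays.
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for band in bands:
        energy = np.sum(band ** 2)                       # subband energy
        p = (band ** 2).ravel() / (energy + 1e-12)       # normalized power distribution
        entropy = -np.sum(p * np.log2(p + 1e-12))        # Shannon entropy of the subband
        feats.extend([energy, entropy])
    return np.asarray(feats)


class GMMExpressionClassifier:
    """One Gaussian mixture per expression class; predict by maximum log-likelihood."""

    def __init__(self, n_gaussians=4):
        self.n_gaussians = n_gaussians
        self.models = {}

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        for label in np.unique(y):
            gmm = GaussianMixture(n_components=self.n_gaussians,
                                  covariance_type="diag", random_state=0)
            gmm.fit(X[y == label])
            self.models[label] = gmm
        return self

    def predict(self, X):
        labels = list(self.models)
        # Log-likelihood of each sample under each class model; pick the best class.
        scores = np.column_stack([self.models[l].score_samples(X) for l in labels])
        return np.asarray(labels)[np.argmax(scores, axis=1)]
```

In use, each JAFFE face image would be converted to a feature vector with `subband_features` (here at level 4, mirroring the fourth-level decomposition the abstract highlights), the vectors stacked into a matrix, and the classifier fit per expression label; varying `n_gaussians` corresponds to the abstract's analysis over the number of Gaussians.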