Accurate computing of facial expression recognition using a hybrid feature extraction technique
Main Authors:
Format: Article
Published: Springer, 2020
Subjects:
Online Access: http://eprints.utm.my/id/eprint/93497/
http://dx.doi.org/10.1007/s11227-020-03468-8
Institution: Universiti Teknologi Malaysia
Summary: Facial expression recognition (FER) serves as an essential tool for understanding human emotional behaviors. Facial expressions provide a wealth of information about intentions, emotions, and other inner states. Over the past two decades, the development of automatic FER systems has become one of the most demanding multimedia research areas in human–computer interaction. Several automatic systems have been introduced and have achieved precise identification accuracies; due to the complex nature of the human face, however, problems still exist, and researchers continue to struggle to extract effective features from images when those features are unclear. This work proposes a methodology that improves high-performance computing in terms of facial expression recognition accuracy. To achieve high accuracy, a hybrid method is proposed that uses the dual-tree m-band wavelet transform (DTMBWT) together with energy, entropy, and gray-level co-occurrence matrix (GLCM) features, and a Gaussian mixture model (GMM) as the classification scheme for identifying database images by facial expression. Using the DTMBWT, expression features can be derived from decomposition levels 1 to 6, and the GLCM contributes contrast and homogeneity features. All features are then categorized and recognized with the aid of the GMM classifier. The proposed algorithms are tested on the Japanese Female Facial Expression (JAFFE) database with seven facial expressions: happiness, sadness, anger, fear, neutral, surprise, and disgust. The experiments show that the highest precision of the proposed technique is 99.53%, observed at the 4th decomposition level of the DTMBWT.
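For illustration, the pipeline described in the summary can be sketched in Python. The dual-tree m-band wavelet transform (DTMBWT) is not available in common Python packages, so a standard 2-D discrete wavelet transform from PyWavelets stands in for it here; the GLCM contrast and homogeneity features and the per-class GMM classification follow the summary above. All function names and parameter values (wavelet choice, decomposition level, number of mixture components) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a wavelet + GLCM + GMM expression-recognition pipeline.
# Assumptions: PyWavelets' 2-D DWT stands in for the DTMBWT; one GMM is
# fitted per expression class and test images go to the most likely class.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.mixture import GaussianMixture


def subband_energy_entropy(coeffs):
    """Energy and Shannon entropy of each wavelet subband."""
    feats = []
    for band in coeffs:
        for arr in (band if isinstance(band, tuple) else (band,)):
            a = np.abs(arr).ravel()
            energy = float(np.sum(a ** 2))
            p = a / (a.sum() + 1e-12)                 # normalise to a distribution
            entropy = float(-np.sum(p * np.log2(p + 1e-12)))
            feats.extend([energy, entropy])
    return feats


def glcm_contrast_homogeneity(image_u8):
    """Contrast and homogeneity from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(image_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return [float(graycoprops(glcm, "contrast").mean()),
            float(graycoprops(glcm, "homogeneity").mean())]


def extract_features(image_u8, level=4, wavelet="db4"):
    """Hybrid feature vector: wavelet energy/entropy plus GLCM features.
    level=4 mirrors the decomposition level reported as best in the paper."""
    coeffs = pywt.wavedec2(image_u8.astype(float), wavelet, level=level)
    return np.array(subband_energy_entropy(coeffs) +
                    glcm_contrast_homogeneity(image_u8))


def train_gmms(features_by_class, n_components=3):
    """Fit one GMM per expression class (e.g. the seven JAFFE expressions)."""
    return {label: GaussianMixture(n_components=n_components,
                                   covariance_type="diag",
                                   random_state=0).fit(np.vstack(vectors))
            for label, vectors in features_by_class.items()}


def classify(gmms, feature_vector):
    """Assign the expression whose GMM gives the highest log-likelihood."""
    x = feature_vector.reshape(1, -1)
    return max(gmms, key=lambda label: gmms[label].score_samples(x)[0])
```

In use, extract_features would be applied to each JAFFE image, train_gmms fitted on the training split (a dict mapping each expression label to its feature vectors), and classify called on each test image's feature vector; this sketch only outlines the structure of the approach, not its reported 99.53% result.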