Region-based facial expression recognition in still images

Bibliographic Details
Main Authors: Nagi, Gawed M.; Rahmat, Rahmita Wirza O. K.; Khalid, Fatimah; Abdullah, Muhamad Taufik
Format: Article
Language: English
Published: Korea Information Processing Society 2013
Online Access: http://psasir.upm.edu.my/id/eprint/30644/1/Region-based%20facial%20expression%20.pdf
http://psasir.upm.edu.my/id/eprint/30644/
http://koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1JBB0_2013_v9n1_173
Institution: Universiti Putra Malaysia
Description
Summary: In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) on such areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., eyes, nose, and mouth areas) using Haar-feature based cascade classifiers, and these region-based features are stored into separate image files as a preprocessing step. Then, LBP is applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs-rest SVM, which is a popular multi-classification method, is employed with the Radial Basis Function (RBF) for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
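
The abstract outlines a complete pipeline: Haar-cascade detection of informative face regions, a uniform-LBP histogram per region, concatenation into one feature vector, and a one-vs-rest SVM with an RBF kernel. The following is a minimal Python sketch of that pipeline using OpenCV, scikit-image, and scikit-learn; it is not the authors' implementation. The cascade files, the LBP parameters (P = 8, R = 1, uniform patterns), and the SVM hyperparameters are illustrative assumptions, and OpenCV's bundled smile cascade stands in for a mouth detector (the nose region used in the paper is omitted).

# A minimal sketch of the pipeline described in the abstract, not the authors'
# implementation. Cascade choices, LBP parameters, and SVM settings are
# illustrative assumptions.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Haar-feature based cascade classifiers; OpenCV bundles face, eye, and smile
# cascades, so the smile cascade stands in for the mouth detector here.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
mouth_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")


def lbp_histogram(region, P=8, R=1):
    """Uniform-LBP histogram of one facial region (P + 2 bins for 'uniform')."""
    lbp = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist


def extract_feature_vector(gray_image, P=8, R=1):
    """Detect the face, crop informative regions, and concatenate their LBP histograms."""
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray_image[y:y + h, x:x + w]

    histograms = []
    for cascade in (eye_cascade, mouth_cascade):
        regions = cascade.detectMultiScale(face, scaleFactor=1.1, minNeighbors=5)
        if len(regions) > 0:
            rx, ry, rw, rh = regions[0]
            region = face[ry:ry + rh, rx:rx + rw]
        else:
            region = face  # fall back to the whole face if a region is not found
        histograms.append(lbp_histogram(region, P, R))
    return np.concatenate(histograms)  # fixed length: 2 regions x (P + 2) bins


# One-vs-rest SVM with an RBF kernel, as named in the abstract.
classifier = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale", C=1.0))

# Usage (assuming lists of grayscale images and expression labels):
# X = np.vstack([extract_feature_vector(img) for img in train_images])
# classifier.fit(X, train_labels)
# predictions = classifier.predict(np.vstack([extract_feature_vector(img) for img in test_images]))

Because every region contributes a fixed-length histogram, the concatenated vector has constant dimensionality, which keeps the SVM training step straightforward even when a cascade fails to fire (the sketch falls back to the whole face in that case).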