A novel phase congruency based descriptor for dynamic facial expression analysis

Bibliographic Details
Main Authors: Shojaeilangari, Seyedehsamaneh, Yau, Wei-Yun, Teoh, Eam Khwang
Other Authors: School of Electrical and Electronic Engineering
Format: Article
Language: English
Published: 2016
Subjects:
Online Access:https://hdl.handle.net/10356/81599
http://hdl.handle.net/10220/39591
Institution: Nanyang Technological University
Description
Summary: Representation and classification of dynamic visual events in videos have been an active field of research. This work proposed a novel spatio-temporal descriptor based on the phase congruency concept and applied it to recognizing facial expressions from video sequences. The proposed descriptor comprises histograms of dominant phase congruency over multiple 3D orientations to describe both the spatial and temporal information of a dynamic event. The advantages of the proposed approach are local and dynamic processing, high accuracy, and robustness to image scale variation and illumination changes. We validated the performance of the proposed approach on the Cohn-Kanade (CK+) database, where it achieved 95.44% accuracy in detecting the six basic emotions. The approach was also shown to improve classification rates over the baseline results of the AVEC 2011 video sub-challenge in detecting four emotion dimensions. We further validated its robustness to illumination and scale variation on our own collected dataset.
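
As a rough illustration of the descriptor idea, the sketch below computes per-frame phase congruency from a 2D log-Gabor filter bank and accumulates a histogram of the dominant-orientation response over a spatial grid. It is a minimal NumPy sketch under simplifying assumptions: 2D filters applied frame by frame rather than the 3D spatio-temporal orientations used in the paper, no noise compensation, and arbitrary filter parameters. The function names (log_gabor_bank, phase_congruency, dominant_pc_histogram) are illustrative and do not correspond to the authors' implementation.

import numpy as np

def log_gabor_bank(shape, n_scales=3, n_orients=4, min_wavelength=6.0,
                   mult=2.0, sigma_f=0.55, sigma_theta=0.4):
    # Build a frequency-domain bank of 2D log-Gabor filters (Kovesi-style),
    # each covering one half-plane so its spatial response is quadrature.
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC term
    theta = np.arctan2(-fy, fx)
    bank = []
    for o in range(n_orients):
        angle = o * np.pi / n_orients
        d_theta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
        spread = np.exp(-d_theta**2 / (2 * sigma_theta**2))
        for s in range(n_scales):
            f0 = 1.0 / (min_wavelength * mult**s)
            radial = np.exp(-np.log(radius / f0)**2 / (2 * np.log(sigma_f)**2))
            radial[0, 0] = 0.0              # zero out the DC component
            bank.append((o, radial * spread))
    return bank

def phase_congruency(frame, n_orients=4, eps=1e-6):
    # Per-orientation phase congruency: |sum_s A_s e^{i phi_s}| / sum_s A_s,
    # i.e. local energy normalised by the total amplitude over scales.
    F = np.fft.fft2(frame.astype(np.float64))
    energy = np.zeros((n_orients,) + frame.shape, dtype=np.complex128)
    amp_sum = np.zeros((n_orients,) + frame.shape)
    for o, filt in log_gabor_bank(frame.shape, n_orients=n_orients):
        response = np.fft.ifft2(F * filt)   # complex quadrature response
        energy[o] += response
        amp_sum[o] += np.abs(response)
    return np.abs(energy) / (amp_sum + eps) # shape: (n_orients, H, W)

def dominant_pc_histogram(frames, n_orients=4, grid=(4, 4)):
    # Histogram of the dominant phase-congruency orientation per spatial cell,
    # accumulated over all frames of the sequence.
    hist = np.zeros(grid + (n_orients,))
    for frame in frames:
        pc = phase_congruency(frame, n_orients)
        dominant = pc.argmax(axis=0)        # per-pixel dominant orientation
        h, w = frame.shape
        for gy in range(grid[0]):
            for gx in range(grid[1]):
                cell = dominant[gy*h//grid[0]:(gy+1)*h//grid[0],
                                gx*w//grid[1]:(gx+1)*w//grid[1]]
                hist[gy, gx] += np.bincount(cell.ravel(), minlength=n_orients)
    return (hist / max(hist.sum(), 1)).ravel()  # L1-normalised feature vector

# Example usage on a synthetic 10-frame sequence of 64x64 grayscale images.
frames = np.random.rand(10, 64, 64)
feature = dominant_pc_histogram(frames)
print(feature.shape)                        # (4*4*4,) = (64,)

A sequence-level feature of this kind would then be fed to a standard classifier (for example a linear SVM) to predict the expression label; the dominant-orientation histogram is what makes the representation largely insensitive to illumination and scale changes.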