Multiview face emotion recognition using geometrical and texture features

Bibliographic Details
Main Author: Goodarzi, Farhad
Format: Thesis
Language: English
Published: 2017
Online Access:http://psasir.upm.edu.my/id/eprint/68532/1/FK%202018%2021%20-%20IR.pdf
http://psasir.upm.edu.my/id/eprint/68532/
Institution: Universiti Putra Malaysia
Description
Summary: In the last decade, facial emotion recognition has attracted increasing interest from researchers in the computer vision community. Facial emotions are a form of nonverbal communication, used to exchange social and emotional information in human-human interaction. Several applications could benefit from automatically recognizing the emotion on a human face and reacting proactively; examples include human-computer interfaces, security systems, driver safety systems, and the social sciences. To use facial emotion recognition systems in real-time situations, it is essential to recognize emotions not only from frontal face images but also from images containing faces with pose variations. Furthermore, facial landmarks have to be located automatically. The intensity of human facial emotions varies from person to person: some people express the seven basic emotions more intensely than others, or express them in different ways. In this thesis, a real-time emotion recognition system is presented that works on both frontal and non-frontal faces. A 3D face pose estimation algorithm detects head rotations in yaw, roll, and pitch for emotion recognition. The UPM3D-FE and BU3D-FE databases, which include faces captured at different rotation angles, are used in this research for training. After the human face is detected, several features are extracted from it automatically, and the geometrical facial features, combined with texture features, are given to a back-propagation neural network trained on various face images. This enables the emotion to be determined in real time from a person's face. The main contributions are that the method is capable of detecting the face and facial landmarks in live video, and that landmark detection is done automatically in each frame using both the texture of facial points and the relative positions of points on the face.
The emotion is also detected from frontal and angled faces, and when half of the face is not visible (side view), the other half is reconstructed before the emotion is detected. Geometrical and texture features are used for emotion recognition, and in a novel approach the texture features are taken from specific areas of the face. The results show an improvement over existing approaches in determining emotions for various face poses. The effects of gender, ethnicity, color, mixed emotions, and emotion intensity have been analyzed as well. The resulting face emotion recognition system runs in real time at less than twenty milliseconds per frame. For UPM3DFE, in the case of seven emotions, the recognition accuracy is 63.08% for multiview and 62.19% for near-frontal faces; for BU3DFE, accuracy is 80.61% for near-frontal faces and 77.48% for multiview on the seven basic emotions. The proposed face emotion recognition method improves emotion recognition accuracy and is able to adapt to yaw and pitch rotation of the face. Both databases (UPM3D and BU3D) were tested for the role of gender, ethnicity, color, mixed emotions, and emotion intensity. After cross-validation, for the BU3DFE database, the best results were achieved for Indian and Southeast Asian subjects (56.6% and 50.2%). For UPM3DFE, the best results were achieved for Middle Eastern and Southeast Asian subjects (66.6% and 69.1%), and the lowest results in both databases were achieved for black subjects (45% and 54.54%). With regard to mixed emotions, BU3DFE is 67.72% accurate in recognizing mixed emotions and UPM3DFE is 56.09% accurate. For different emotion intensities in BU3DFE, the multiview results were 71.11% for the first emotion intensity, 73.21% for the second, 75.1% for the third, and 79.31% for the fourth.
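The pipeline summarized above (landmarks to geometric distances, texture statistics from face regions, a concatenated feature vector into a back-propagation network) can be sketched in miniature. This is a hypothetical illustration only: the landmark set, the pairwise-distance and mean/variance descriptors, and the network sizes here are placeholder assumptions, not the thesis's actual features or trained model.

```python
import numpy as np

def geometric_features(landmarks):
    """Pairwise Euclidean distances between (x, y) facial landmarks
    (a stand-in for the thesis's geometrical features)."""
    lm = np.asarray(landmarks, dtype=float)
    n = len(lm)
    return np.array([np.linalg.norm(lm[i] - lm[j])
                     for i in range(n) for j in range(i + 1, n)])

def texture_features(patch):
    """Toy texture descriptor: mean and variance of a grayscale patch
    taken from one face region (placeholder for region-specific texture)."""
    p = np.asarray(patch, dtype=float)
    return np.array([p.mean(), p.var()])

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a small feed-forward network; in the thesis such a
    network is trained with backpropagation on labeled face images."""
    h = np.tanh(x @ W1 + b1)             # hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())    # softmax over the 7 basic emotions
    return e / e.sum()

rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 100, size=(5, 2))  # 5 fake landmark points
patch = rng.uniform(0, 255, size=(8, 8))      # fake 8x8 mouth-region patch

# Concatenate geometric and texture features: 10 distances + 2 stats = 12
x = np.concatenate([geometric_features(landmarks), texture_features(patch)])

# Untrained placeholder weights: 12 -> 16 hidden units -> 7 emotion classes
W1, b1 = rng.normal(size=(x.size, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 7)), np.zeros(7)

probs = mlp_forward(x, W1, b1, W2, b2)
print(probs.shape)  # (7,) -- one probability per basic emotion
```

In a trained system, `argmax(probs)` would give the predicted emotion for the current frame; the per-frame cost of such a forward pass is tiny, consistent with the sub-20 ms real-time budget reported above.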