Deep learning based detector for real-time facial expression recognition
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2019
Subjects:
Online Access: http://hdl.handle.net/10356/77018
Institution: Nanyang Technological University
Summary: Automated Facial Expression Recognition (AFER) technology has become a hot topic in the field of pattern recognition. Accurate and fast real-time detection of facial expressions could bring significant benefits to important applications in areas such as Human-Computer Interaction and Computer Vision. Recent developments in deep learning techniques (Convolutional Neural Networks) for computer vision have enabled researchers to drastically improve the accuracy and performance of object detection and recognition systems. In this paper, we use the TensorFlow Object Detection (TFOD) API, an open-source framework, to train and test our end-to-end deep learning-based object detector on the Compound Facial Expression of Emotion Database (CFEED). The aim is to develop a robust real-time facial expression detector that detects and classifies the seven key human emotions: neutrality, happiness, sadness, fear, anger, surprise, and disgust. We employ two meta-architectures for object detection, Faster R-CNN and SSD, each combined with a deep feature extractor (InceptionNet and MobileNet, respectively) that automatically extracts high-level representations directly from raw images. Furthermore, we focus on drastically reducing the amount of training data required by exploring transfer learning and fine-tuning of the model parameters, while still maintaining high average precision. To aid generalization, data augmentation and dropout techniques are used to avoid overfitting. Our experiments show that with more fine-tuning and depth, the "SSD_MobileNet_V1_COCO" and "Faster_RCNN_InceptionNet_V2_COCO" models achieve 84.85% and 86.42% accuracy, respectively, on the CFEED testing set.
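The training recipe the summary describes (a COCO-pretrained detector fine-tuned on seven expression classes through the TFOD API, with data augmentation for generalization) is driven by a pipeline configuration in that framework. Below is a minimal sketch of how such a config might be adapted programmatically with the API's `config_util` helpers; the config, checkpoint, label-map, and TFRecord paths are illustrative assumptions, not files from this project.

```python
# Sketch: adapting a TFOD (v1-era) pipeline config to fine-tune
# SSD MobileNet on seven facial-expression classes.
# Assumes the TensorFlow Object Detection API (research/object_detection)
# is installed; every path below is a hypothetical placeholder.
from object_detection.utils import config_util

# Load the stock SSD MobileNet v1 COCO config shipped with the API.
configs = config_util.get_configs_from_pipeline_file(
    "ssd_mobilenet_v1_coco.config")

# Seven classes: neutrality, happiness, sadness, fear, anger,
# surprise, and disgust.
configs["model"].ssd.num_classes = 7

# Transfer learning: initialise from the COCO-pretrained detection
# checkpoint rather than training the feature extractor from scratch.
configs["train_config"].fine_tune_checkpoint = (
    "ssd_mobilenet_v1_coco/model.ckpt")
configs["train_config"].from_detection_checkpoint = True

# Data augmentation, as mentioned in the summary: append a random
# horizontal flip preprocessing step.
step = configs["train_config"].data_augmentation_options.add()
step.random_horizontal_flip.SetInParent()

# Point the train input reader at the (hypothetical) CFEED TFRecords.
configs["train_input_config"].label_map_path = "cfeed_label_map.pbtxt"
configs["train_input_config"].tf_record_input_reader.input_path[:] = [
    "cfeed_train.record"]

# Serialise the merged pipeline back to disk for the training binary.
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, "training/")
```

Training would then typically be launched with the API's `model_main.py` entry point, pointing `--pipeline_config_path` at the saved file and `--model_dir` at a checkpoint directory; the Faster R-CNN/InceptionNet variant follows the same pattern with its own base config (using the `faster_rcnn` branch of the model proto instead of `ssd`).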