Machine learning based face expression recognition


Bibliographic Details
Main author: Paing Thu Thu Aung
Other authors: Jiang Xudong
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online access: https://hdl.handle.net/10356/158367
Institution: Nanyang Technological University
Physical Description
Summary: Face expression recognition has been an active research area for the past two decades. Many attempts have been made to understand how human beings perceive human faces. It is widely accepted that face recognition may rely on both componential cues (such as the eyes, mouth, nose, and cheeks) and non-componential/holistic cues (considering the face as a whole rather than as separate parts). However, how these cues should be optimally integrated remains unclear. Most state-of-the-art face expression recognition technologies employ either componential cues or holistic information, so their recognition performance is limited. This project investigates ways to integrate componential and holistic cues. We deployed a pretrained facial landmark detector to locate 68 landmarks on a face and to extract 8 individual facial components. Next, we used a convolutional neural network (CNN) to extract and learn relevant features from the full-face image and the 8 componential images. In addition, we trained a CatBoost classifier on the landmark coordinates. Finally, we applied soft and hard voting to combine the predictions of all 10 trained models. The soft-voting approach achieved an accuracy of 63.87%, which is comparable to some existing methods, considering that we used less data for training. This approach may potentially lead to a face expression recognition technology that outperforms existing methods.
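The ensembling step described in the summary, combining per-model predictions by soft voting (averaging class probabilities) and hard voting (majority vote on predicted labels), can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the project's actual code; the array shapes and probability values are illustrative assumptions.

```python
import numpy as np

def soft_vote(prob_list):
    """Soft voting: average the class-probability matrices of all
    models, then pick the class with the highest mean probability.
    prob_list: list of (n_samples, n_classes) arrays, one per model."""
    return np.mean(prob_list, axis=0).argmax(axis=-1)

def hard_vote(prob_list):
    """Hard voting: each model casts one vote (its argmax class) per
    sample; the majority class wins."""
    votes = np.stack([p.argmax(axis=-1) for p in prob_list])  # (n_models, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Illustrative probabilities from 3 hypothetical models, 2 samples, 2 classes.
probs = [
    np.array([[0.6, 0.4], [0.9, 0.1]]),
    np.array([[0.2, 0.8], [0.6, 0.4]]),
    np.array([[0.3, 0.7], [0.4, 0.6]]),
]
print(soft_vote(probs))  # sample 1 -> class 1, sample 2 -> class 0
print(hard_vote(probs))
```

Soft voting tends to outperform hard voting when the models output well-calibrated probabilities, since it preserves each model's confidence rather than flattening it to a single vote, which is consistent with the soft-voting ensemble performing best here.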