Facial motion prior networks for facial expression recognition

Deep learning based facial expression recognition (FER) has received a lot of attention in the past few years. Most existing deep learning based FER methods do not make good use of domain knowledge and thereby fail to extract representative features. In this work, we propose a novel FER framework, named Facial Motion Prior Networks (FMPN). In particular, we introduce an additional branch that generates a facial mask so as to focus on facial muscle moving regions. To guide the facial mask learning, we incorporate prior domain knowledge by using the average differences between neutral faces and the corresponding expressive faces as the training guidance. Extensive experiments on three facial expression benchmark datasets demonstrate the effectiveness of the proposed method compared with state-of-the-art approaches.
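The training-guidance idea described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' released implementation; it is a minimal NumPy sketch, under assumed data shapes and with hypothetical names (build_prior_masks, neutral_faces, expressive_faces), of how a per-class guidance mask could be obtained by averaging the differences between neutral faces and their corresponding expressive faces.

    # A rough sketch, not the authors' code: build one "facial motion prior" per
    # expression class by averaging absolute differences between aligned neutral
    # and expressive face images. All names here are hypothetical.
    import numpy as np

    def build_prior_masks(neutral_faces, expressive_faces, labels, num_classes):
        """neutral_faces, expressive_faces: float arrays of shape (N, H, W) in [0, 1],
        where index i pairs a neutral face with its expressive counterpart.
        labels: int array of shape (N,), the expression class of each pair.
        Returns an array of shape (num_classes, H, W), one prior mask per class."""
        diffs = np.abs(expressive_faces - neutral_faces)      # per-pair motion maps
        masks = np.zeros((num_classes,) + neutral_faces.shape[1:])
        for c in range(num_classes):
            class_diffs = diffs[labels == c]
            if class_diffs.size == 0:                         # no samples for this class
                continue
            avg = class_diffs.mean(axis=0)                    # average moving regions
            # Normalise to [0, 1] so the map can serve as guidance for the
            # mask-generation branch described in the abstract.
            masks[c] = (avg - avg.min()) / (avg.max() - avg.min() + 1e-8)
        return masks

    if __name__ == "__main__":
        # Toy example with random arrays standing in for aligned face images.
        rng = np.random.default_rng(0)
        neutral = rng.random((12, 64, 64))
        expressive = np.clip(neutral + 0.1 * rng.random((12, 64, 64)), 0.0, 1.0)
        labels = rng.integers(0, 6, size=12)
        priors = build_prior_masks(neutral, expressive, labels, num_classes=6)
        print(priors.shape)  # (6, 64, 64)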

Bibliographic Details
Main Authors: Chen, Yuedong; Wang, Jianfeng; Chen, Shikai; Shi, Zhongchao; Cai, Jianfei
Conference: 2019 IEEE Visual Communications and Image Processing (VCIP)
Format: Conference or Workshop Item
Language: English
Published: 2020
Subjects: Engineering::Computer science and engineering; Facial Expression Recognition; Deep Learning
Online Access: https://hdl.handle.net/10356/138945
Institution: Nanyang Technological University
School: Institute for Media Innovation (IMI)
Citation: Chen, Y., Wang, J., Chen, S., Shi, Z., & Cai, J. (2019). Facial motion prior networks for facial expression recognition. Proceedings of 2019 IEEE Visual Communications and Image Processing (VCIP). doi:10.1109/VCIP47243.2019.8965826
DOI: 10.1109/VCIP47243.2019.8965826
ISBN: 9781728137230
Scopus ID: 2-s2.0-85079245655
Version: Accepted version (application/pdf)
Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/VCIP47243.2019.8965826
Collection: DR-NTU (NTU Library, Nanyang Technological University, Singapore)