Face presentation attack detection based on AI
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Subjects:
Online Access: https://hdl.handle.net/10356/149502
Institution: Nanyang Technological University
Summary: As face recognition systems become increasingly prevalent in our daily lives, security and robustness in these systems are imperative. Face presentation attack detection research aims to address this by detecting non-bona-fide inputs, ensuring that critical systems are not compromised and staying one step ahead of potential attackers.
Substantial research in this area has produced models capable of detection in known scenarios when trained on relevant datasets. However, cross-domain detection remains a prevalent problem for these models: changes in environmental conditions, such as illumination and the type of capture device, can throw a model off and degrade its results.
In this project, we explore different types of augmentation to supplement existing datasets and provide a more comprehensive set of inputs, increasing the generalization ability of different models. The models used are based on state-of-the-art methods, combined with our augmentation techniques to optimize the results.
We then propose the use of the Pattern of Local Gravitational Force image descriptor, which has not previously been applied to face presentation attack detection. The experimental settings and results are discussed and benchmarked against state-of-the-art models to explore the feasibility and benefits of using this novel image descriptor in future work.
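The record itself gives no implementation details for the descriptor. Purely as an illustration, and assuming the Pattern of Local Gravitational Force follows the usual gravitational analogy (neighboring pixel intensities act as point masses whose pull on the center pixel falls off with the squared distance), a force-magnitude map could be sketched as below; the function name, neighborhood radius, and normalization are hypothetical and are not taken from the project.

```python
import numpy as np
from scipy.ndimage import convolve

def gravitational_force_map(gray, radius=2):
    """Illustrative gravitational-force-style magnitude map for a grayscale image.

    Each neighbor pixel is treated as a point mass pulling on the center pixel;
    the pull is proportional to the neighbor's intensity and falls off with the
    squared distance. This is a sketch only, not the exact PLGF formulation.
    """
    gray = gray.astype(np.float64)
    size = 2 * radius + 1
    kx = np.zeros((size, size))
    ky = np.zeros((size, size))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue  # a pixel exerts no force on itself
            r2 = dx * dx + dy * dy
            r = np.sqrt(r2)
            # unit direction toward the neighbor, scaled by inverse-square fall-off
            kx[dy + radius, dx + radius] = dx / (r2 * r)
            ky[dy + radius, dx + radius] = dy / (r2 * r)
    fx = convolve(gray, kx, mode="reflect")
    fy = convolve(gray, ky, mode="reflect")
    magnitude = np.hypot(fx, fy)
    # normalize to [0, 1] so the map can be stacked with other input channels
    return magnitude / (magnitude.max() + 1e-8)
```

Such a map could, for example, be stacked with the RGB channels or used as a standalone input to the detection models mentioned in the summary.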