One-class knowledge distillation for face presentation attack detection

Face presentation attack detection (PAD) has been extensively studied by research communities to enhance the security of face recognition systems. Although existing methods have achieved good performance on testing data with similar distribution as the training data, their performance degrades severely in application scenarios with data of unseen distributions. In situations where the training and testing data are drawn from different domains, a typical approach is to apply domain adaptation techniques to improve face PAD performance with the help of target domain data. However, it has always been a non-trivial challenge to collect sufficient data samples in the target domain, especially for attack samples. This paper introduces a teacher-student framework to improve the cross-domain performance of face PAD with one-class domain adaptation. In addition to the source domain data, the framework utilizes only a few genuine face samples of the target domain. Under this framework, a teacher network is trained with source domain samples to provide discriminative feature representations for face PAD. Student networks are trained to mimic the teacher network and learn similar representations for genuine face samples of the target domain. In the test phase, the similarity score between the representations of the teacher and student networks is used to distinguish attacks from genuine ones. To evaluate the proposed framework under one-class domain adaptation settings, we devised two new protocols and conducted extensive experiments. The experimental results show that our method outperforms baselines under one-class domain adaptation settings and even state-of-the-art methods with unsupervised domain adaptation.
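The test-phase rule described in the abstract — scoring a face by the similarity between the teacher's and student's feature representations — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature extractors are stand-in linear maps, and the choice of cosine similarity and the decision threshold are assumptions for the sketch.

```python
import numpy as np

# Stand-ins for the two feature extractors. In the paper's framework the
# teacher is trained on labelled source-domain data, and the student is
# trained to mimic the teacher on a few genuine target-domain faces; here
# they are fixed linear maps purely to demonstrate the scoring rule.
rng = np.random.default_rng(0)
W_teacher = rng.standard_normal((128, 512))
W_student = W_teacher + 0.01 * rng.standard_normal((128, 512))  # near-copy of the teacher

def teacher_features(x: np.ndarray) -> np.ndarray:
    return W_teacher @ x

def student_features(x: np.ndarray) -> np.ndarray:
    return W_student @ x

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pad_score(x: np.ndarray) -> float:
    """Similarity between teacher and student representations of one face.

    A genuine target-domain face (close to what the student mimicked during
    training) should yield a high similarity; a presentation attack, which
    the student never learned to represent like the teacher, should score lower.
    """
    return cosine_similarity(teacher_features(x), student_features(x))

def is_genuine(x: np.ndarray, threshold: float = 0.5) -> bool:
    # threshold is an assumed hyperparameter, tuned on validation data in practice
    return pad_score(x) >= threshold
```

Because the student here is a near-copy of the teacher, any input scores close to 1; in the actual framework the gap between the two representations is what separates attacks from genuine faces.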

Bibliographic Details
Main Authors: Li, Zhi, Cai, Rizhao, Li, Haoliang, Lam, Kwok-Yan, Hu, Yongjian, Kot, Alex Chichung
Other Authors: School of Computer Science and Engineering; School of Electrical and Electronic Engineering; China-Singapore International Joint Research Institute
Format: Article
Language:English
Published: 2023
Subjects: Engineering::Computer science and engineering; Face Presentation Attack Detection; Knowledge Distillation
Online Access:https://hdl.handle.net/10356/168040
Institution: Nanyang Technological University
Record ID: sg-ntu-dr.10356-168040
Citation: Li, Z., Cai, R., Li, H., Lam, K., Hu, Y. & Kot, A. C. (2022). One-class knowledge distillation for face presentation attack detection. IEEE Transactions on Information Forensics and Security, 17, 2137-2150. https://dx.doi.org/10.1109/TIFS.2022.3178240
Journal: IEEE Transactions on Information Forensics and Security, vol. 17, pp. 2137-2150
Date of Issue: 2022
ISSN: 1556-6013
DOI: 10.1109/TIFS.2022.3178240
Version: Submitted/Accepted version
Funding Agencies: Nanyang Technological University; National Research Foundation (NRF)
Funding: This work was supported in part by the Nanyang Technological University (NTU)-Peking University (PKU) Joint Research Institute (a collaboration between Nanyang Technological University and Peking University that is sponsored by a donation from the Ng Teng Fong Charitable Foundation); in part by the Science and Technology Foundation of Guangzhou Huangpu Development District under Grant 2019GH16; in part by the China-Singapore International Joint Research Institute under Grant 206-A018001; and in part by the National Research Foundation, Prime Minister's Office, Singapore, under its Strategic Capability Research Centres Funding Initiative. The work of Haoliang Li was supported by the City University of Hong Kong (CityU) New Research Initiatives/Infrastructure Support from Central under Grant APRC 9610528.
Rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TIFS.2022.3178240.
Building: NTU Library
Country: Singapore
Content Provider: NTU Library
Collection: DR-NTU