Few-shot contrastive transfer learning with pretrained model for masked face verification
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/174467
Institution: Nanyang Technological University
Summary: Face verification has seen remarkable progress, benefiting from large-scale publicly available databases. However, it remains challenging to generalize a pretrained face verification model to a new scenario with a limited amount of data. In many real-world applications, the training database contains only a limited number of identities, with two images per identity, due to privacy concerns. In this paper, we propose to transfer knowledge from a pretrained unmasked face verification model to a new model for verification between masked and unmasked faces, to meet application requirements during the COVID-19 pandemic. To overcome the lack of intra-class diversity resulting from having only a pair of masked and unmasked faces for each identity (i.e., two shots per identity), a static prototype classification function is designed to learn features for masked faces by utilizing unmasked-face knowledge from the pretrained model. Meanwhile, a contrastive constrained embedding function is designed to preserve the unmasked-face knowledge of the pretrained model during the transfer learning process. By combining these two functions, our method uses knowledge acquired from the pretrained unmasked face verification model to perform verification between masked and unmasked faces with a limited amount of training data. Extensive experiments demonstrate that our method outperforms state-of-the-art methods for verification between masked and unmasked faces in the few-shot transfer learning setting.
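The abstract does not give the exact formulation of the two functions, but the general idea — classifying masked-face embeddings against fixed prototypes derived from the pretrained unmasked model, plus a constraint that keeps the transferred model's embeddings close to the pretrained ones — can be sketched as follows. This is a minimal NumPy illustration under our own assumptions; the function names, the cosine-softmax form of the prototype loss, and the scale value are hypothetical, not the authors' implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize rows to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def static_prototype_loss(masked_emb, prototypes, labels, scale=16.0):
    # Cosine-similarity logits against FIXED per-identity prototypes taken
    # from the pretrained unmasked-face model, then softmax cross-entropy.
    logits = scale * l2_normalize(masked_emb) @ l2_normalize(prototypes).T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def contrastive_constraint(new_emb, pretrained_emb):
    # Penalize drift of the transferred model's embeddings away from the
    # pretrained model's embeddings: mean (1 - cosine similarity).
    cos = (l2_normalize(new_emb) * l2_normalize(pretrained_emb)).sum(axis=1)
    return (1.0 - cos).mean()

# Toy example: 5 identities, 8-dim embeddings; masked embeddings are
# noisy versions of the unmasked prototypes.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 8))
masked_emb = prototypes + 0.1 * rng.normal(size=(5, 8))
labels = np.arange(5)
loss = static_prototype_loss(masked_emb, prototypes, labels) \
       + contrastive_constraint(masked_emb, prototypes)
print(float(loss))
```

In a real transfer-learning setup the two terms would be combined with a weighting hyperparameter and minimized over the new model's parameters while the prototypes stay frozen, which is what lets the pretrained unmasked-face knowledge anchor the two-shot training.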