Learning modality-invariant features for heterogeneous face recognition

Bibliographic Details
Main Authors: Huang, Likun; Lu, Jiwen; Tan, Yap Peng
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2013
Subjects:
Online Access:https://hdl.handle.net/10356/99421
http://hdl.handle.net/10220/12876
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6460472&isnumber=6460043
Institution: Nanyang Technological University
Description
Summary: This paper addresses the problem of heterogeneous face recognition, where the gallery and probe face samples are captured in two different modalities. Due to the large discrepancies and weak relationships across heterogeneous face image sets, most existing face recognition algorithms perform poorly in this scenario. To address this problem, we propose to learn modality-invariant features (MIF) for heterogeneous face recognition. In our method, a pair of heterogeneous face datasets is used as generic training sets, and the relationship between each gallery or probe sample and the generic training set of its own modality is computed as a modality-invariant feature for matching heterogeneous face images. The rationale of our method is that the local geometrical structure of a pair of heterogeneous face samples is usually similar within the corresponding generic training sets. Experimental results are presented to show the efficacy of the proposed method.
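
For illustration only, the sketch below shows one possible reading of this idea in Python: each sample is represented by its similarities to a generic training set of its own modality, and matching is then done between these relationship vectors rather than the raw images. The use of cosine similarity, nearest-neighbour matching, the feature dimensions, and all names here are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

def mif_features(samples, generic_set):
    """Represent each sample by its cosine similarities to a generic training
    set of the same modality (the similarity measure is an assumption; the
    abstract does not specify how the 'relationship' is computed)."""
    s = samples / np.linalg.norm(samples, axis=1, keepdims=True)
    g = generic_set / np.linalg.norm(generic_set, axis=1, keepdims=True)
    return s @ g.T  # shape: (n_samples, n_generic) relationship vectors

def match(probe_feats, gallery_feats):
    """Nearest-neighbour matching of probe MIF vectors against gallery MIF vectors."""
    d = np.linalg.norm(probe_feats[:, None, :] - gallery_feats[None, :, :], axis=2)
    return d.argmin(axis=1)  # index of the best gallery match for each probe

# Hypothetical example: a visible-light gallery matched against near-infrared
# probes, using a paired generic training set (random data as a placeholder).
rng = np.random.default_rng(0)
generic_vis, generic_nir = rng.normal(size=(100, 64)), rng.normal(size=(100, 64))
gallery_vis, probe_nir = rng.normal(size=(10, 64)), rng.normal(size=(10, 64))

gallery_mif = mif_features(gallery_vis, generic_vis)
probe_mif = mif_features(probe_nir, generic_nir)
print(match(probe_mif, gallery_mif))
```

The design choice reflected in the sketch is that, if the local geometrical structure of a paired generic set is similar across modalities, then two images of the same subject should relate to their respective generic sets in a similar way, making the relationship vectors comparable across modalities.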