Cross-view graph embedding

Bibliographic Details
Main Authors: HUANG, Zhiwu, SHAN, S., ZHANG, H., LAO, S., CHEN, X.
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2012
Online Access:https://ink.library.smu.edu.sg/sis_research/6389
https://ink.library.smu.edu.sg/context/sis_research/article/7392/viewcontent/Cross_view_graph_embedding.pdf
Institution: Singapore Management University
Description
Summary: Recently, a growing number of approaches have emerged to solve the cross-view matching problem, in which reference samples and query samples come from different views. In this paper, inspired by Graph Embedding, we propose a unified framework for these cross-view methods, called Cross-view Graph Embedding. The proposed framework can not only reformulate most traditional cross-view methods (e.g., CCA, PLS, and CDFE) but also extend typical single-view algorithms (e.g., PCA, LDA, and LPP) to cross-view editions. Furthermore, the general framework facilitates the development of new cross-view methods. In this paper, we present a new algorithm under the proposed framework, named Cross-view Local Discriminant Analysis (CLODA). Unlike previous cross-view methods, which preserve only inter-view discriminant information or intra-view local structure, CLODA preserves both the local structure and the discriminant information of the intra-view and inter-view relationships. Extensive experiments evaluate our algorithms on two cross-view face recognition problems: face recognition across poses and face recognition across resolutions. These real-world face recognition experiments demonstrate that our framework achieves impressive performance on cross-view problems.
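To make the framework concrete, below is a minimal sketch, written for illustration rather than taken from the paper, of the generic cross-view graph embedding recipe the abstract describes: learn view-specific projections P_x and P_y so that samples linked by a cross-view affinity graph W are mapped close together, which reduces to a generalized symmetric eigenvalue problem. The function name, the ridge regularizer reg, and the block-diagonal scale constraint are assumptions of this sketch, not details from the paper.

import numpy as np
from scipy.linalg import eigh

def cross_view_graph_embedding(X, Y, W, dim, reg=1e-6):
    """Sketch of a generic cross-view graph embedding.

    X: (d_x, n_x) view-1 samples as columns; Y: (d_y, n_y) view-2
    samples; W: (n_x, n_y) cross-view affinity graph. Minimizes
        sum_ij W_ij || P_x^T x_i - P_y^T y_j ||^2
    subject to a per-view scale constraint, solved as a generalized
    symmetric eigenproblem.
    """
    X = X - X.mean(axis=1, keepdims=True)   # center each view
    Y = Y - Y.mean(axis=1, keepdims=True)
    Dr = np.diag(W.sum(axis=1))             # row degrees of W
    Dc = np.diag(W.sum(axis=0))             # column degrees of W
    dx, dy = X.shape[0], Y.shape[0]
    # Quadratic form of the objective in the stacked projection [P_x; P_y].
    M = np.block([[X @ Dr @ X.T, -X @ W @ Y.T],
                  [-Y @ W.T @ X.T, Y @ Dc @ Y.T]])
    # Constraint matrix: degree-weighted scatter of each view.
    B = np.block([[X @ Dr @ X.T, np.zeros((dx, dy))],
                  [np.zeros((dy, dx)), Y @ Dc @ Y.T]])
    B += reg * np.eye(dx + dy)              # ridge for numerical stability
    # The smallest generalized eigenvectors best preserve the graph links.
    _, vecs = eigh(M, B, subset_by_index=[0, dim - 1])
    return vecs[:dx], vecs[dx:]             # P_x, P_y

With W set to the identity on paired same-identity samples, this criterion reduces to a CCA-like special case, illustrating how one graph-embedding objective can subsume traditional cross-view methods; discriminant or locality-preserving variants in the spirit of CLODA would instead encode label and neighborhood information in the intra-view and inter-view graphs.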