Further insights into subspace methods with applications in face recognition

Bibliographic Details
Main Author: Zhu, Yan
Other Authors: Sung, Eric
Format: Theses and Dissertations
Language:English
Published: 2009
Subjects:
Online Access:https://hdl.handle.net/10356/15161
Institution: Nanyang Technological University
Description
Summary: Subspace methods such as Linear Discriminant Analysis (LDA) are efficient for dimension reduction and statistical feature extraction. They are widely applied to multi-class pattern classification problems, such as face recognition, which often involve high-dimensional and large data sets. In this thesis, we provide further insights into subspace methods to resolve some long-standing issues. Firstly, we propose Margin-Maximization Discriminant Analysis (MMDA), based on an additive form of the discriminant function, which extracts features that approximately maximize the average projected margin between the classes. Secondly, an analytical relevance measure of subspace feature vectors is derived and used to weight the LDA features. This leads to a scheme called Relevance-Weighted Discriminant Analysis (RWDA). It completely eliminates the peaking phenomenon of LDA and also offers a new insight into the root cause of overfitting for classifiers that use distance metrics. Finally, 2D subspace methods, which represent images as 2D matrices rather than vectors, are investigated in order to tackle the computational intractability of large-scale pattern classification problems.
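To make the abstract's pipeline concrete, the sketch below shows classical LDA feature extraction followed by a simple eigenvalue-based weighting of the extracted features. This is an illustration only: the thesis's actual MMDA objective and analytical relevance measure are not reproduced here, and the normalized-eigenvalue weights are a hypothetical stand-in for RWDA's derived relevance scores.

```python
import numpy as np

def lda_features(X, y, k, reg=1e-6):
    """Return the top-k LDA projection vectors and their eigenvalues.

    X: (n_samples, n_features) data matrix; y: class labels.
    reg regularizes the within-class scatter, which is ill-conditioned
    in the high-dimensional, small-sample ("peaking") regime.
    """
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb w = lambda Sw w via Sw^{-1} Sb.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1][:k]
    return evecs.real[:, order], evals.real[order]

def relevance_weighted_projection(X, W, evals):
    """Project onto LDA features, then weight each feature.

    The weights here (eigenvalues normalized to sum to one) are a
    simple discriminability heuristic, not the thesis's RWDA measure.
    """
    weights = evals / evals.sum()  # larger eigenvalue -> more weight
    return (X @ W) * weights       # broadcasts over samples
```

For a C-class problem the between-class scatter has rank at most C-1, so at most C-1 useful LDA features exist; down-weighting the weakest of them is what mitigates the peaking effect the abstract refers to.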