Further insights into subspace methods with applications in face recognition


Full Description

Bibliographic Details
Main Author: Zhu, Yan
Other Authors: Sung, Eric
Format: Theses and Dissertations
Language: English
Published: 2009
Online Access: https://hdl.handle.net/10356/15161
Institution: Nanyang Technological University
Physical Description
Summary: Subspace methods such as Linear Discriminant Analysis (LDA) are efficient for dimension reduction and statistical feature extraction. They are widely applied to multi-class pattern classification problems, such as face recognition, which often involve high-dimensional and large data sets. In this thesis, we provide further insights into subspace methods to resolve several long-standing issues. First, we propose Margin-Maximization Discriminant Analysis (MMDA), based on an additive form of discriminant function, which extracts features that approximately maximize the average projected margin between classes. Second, an analytical relevance measure of subspace feature vectors is derived and used to weight the LDA features, leading to a scheme called Relevance-Weighted Discriminant Analysis (RWDA). RWDA completely eliminates the peaking phenomenon of LDA and offers new insight into the root cause of overfitting in classifiers that use a distance metric. Finally, 2D subspace methods, which represent images as 2D matrices, are investigated in order to tackle the computational intractability of large-scale pattern classification problems.
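To illustrate the baseline technique the summary builds on (plain LDA, not the thesis's MMDA or RWDA variants), here is a minimal sketch of LDA-based dimension reduction using scikit-learn on synthetic data; the class means, sample counts, and feature dimension are arbitrary choices for the example:

```python
# Sketch: LDA projects C-class data onto at most C-1 discriminant
# directions, reducing dimension while preserving class separability.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Three well-separated classes, 50 samples each, in a 10-D feature space
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(50, 10))
               for m in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)

# With C = 3 classes, LDA yields at most C - 1 = 2 features
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, y)
print(Z.shape)  # features reduced from 10 to 2 dimensions
```

The same fitted model can then classify in the reduced space; with many classes and very high-dimensional inputs (as in face recognition), this projection step is what makes the downstream classifier tractable.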