Further insights into subspace methods with applications in face recognition

Bibliographic Details
Main Author: Zhu, Yan
Other Authors: Sung Eric
Format: Theses and Dissertations
Language: English
Published: 2009
Subjects:
Online Access: https://hdl.handle.net/10356/15161
Description
Abstract: Subspace methods such as Linear Discriminant Analysis (LDA) are efficient for dimensionality reduction and statistical feature extraction. They are widely applied to multi-class pattern classification problems, such as face recognition, which often involve high-dimensional, large data sets. In this thesis, we provide further insights into subspace methods to resolve some long-standing issues. Firstly, we propose Margin-Maximization Discriminant Analysis (MMDA), based on an additive-form discriminant function, which can extract features that approximately maximize the average projected margin between the classes. Secondly, an analytical relevance measure of subspace feature vectors is derived and used to weight the LDA features. This leads to a scheme called Relevance-Weighted Discriminant Analysis (RWDA). It completely eliminates the peaking phenomenon of LDA and also offers a new insight into the root cause of overfitting for classifiers that use a distance metric. Finally, 2D subspace methods, which represent images as 2D matrices, are investigated in order to tackle the computational intractability of large-scale pattern classification problems.
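For readers unfamiliar with the baseline the thesis builds on, the classical Fisher LDA projection underlying these methods can be sketched in a few lines of NumPy. This is an illustrative sketch of standard LDA only, not of the thesis's MMDA or RWDA variants; the function name, the synthetic two-class data, and the small regularization term added to the within-class scatter (a common fix for the small-sample case in face recognition) are all assumptions of this example.

```python
import numpy as np

def lda_projection(X, y, n_components):
    """Return a projection matrix W whose columns maximize between-class
    scatter relative to within-class scatter (classical Fisher LDA)."""
    classes = np.unique(y)
    d = X.shape[1]
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sb w = lambda * Sw w; the small
    # ridge keeps Sw invertible when samples are scarce (assumed fix).
    eigvals, eigvecs = np.linalg.eig(
        np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]]

# Two well-separated Gaussian classes in 3-D, projected onto 1-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y, n_components=1)
Z = X @ W  # 1-D discriminant features
```

With at most C−1 nonzero generalized eigenvalues for C classes, the projected dimension is bounded by the number of classes, which is one reason face recognition with many identities and few images per identity stresses LDA and motivates the thesis's variants.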