Feature Level Fusion of Face and Signature Using a Modified Feature Selection Technique
Main Authors:
Format: Conference or Workshop Item
Language: English
Published: 2013
Subjects:
Online Access: http://umpir.ump.edu.my/id/eprint/5552/1/fskkp-2013-suryanti-feature_level.pdf
               http://umpir.ump.edu.my/id/eprint/5552/
               http://dx.doi.org/10.1109/SITIS.2013.115
Institution: Universiti Malaysia Pahang
Summary: A multimodal biometric system, which combines two or more biometric modalities, provides stronger assurance of security than a single modality. Feature level fusion has been shown to yield higher recognition accuracy and a more secure recognition system. In this paper, we propose a feature level fusion of face features, the image-based physical appearance of a person, with online handwritten signature features, the dynamic behavioral characteristics of a person. The high dimensionality of the combined features is reduced by Linear Discriminant Analysis (LDA) in the feature extraction phase. One challenge in multimodal feature level fusion is to keep the selected features balanced between the two modalities; otherwise one modality may outweigh the other. To address this issue, we propose to perform the fusion in the feature selection phase: feature selection using a genetic algorithm (GA) with a modified fitness function is applied to the concatenated features so that only significant and well-balanced features are used for classification. Comparison with other approaches shows that the proposed method achieves the highest recognition accuracy, 97.50%.
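To give a concrete picture of the pipeline outlined in the summary, the sketch below is a minimal, hypothetical illustration only, not the authors' implementation: it uses synthetic stand-ins for the face and signature data, scikit-learn's LDA and a k-NN classifier, and an assumed fitness function that mixes classification accuracy with a balance term between the two modalities. The GA settings, weights, and classifier choice are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): LDA per modality, concatenation,
# then GA-based feature selection whose fitness rewards both accuracy and a
# balanced pick of face vs. signature features. All data here is synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 10
y = rng.integers(0, n_classes, n_samples)

# Synthetic stand-ins for raw face and online-signature feature vectors (assumption).
face_raw = rng.normal(size=(n_samples, 60)) + 0.3 * y[:, None]
sig_raw = rng.normal(size=(n_samples, 40)) + 0.2 * y[:, None]

# LDA per modality to cut dimensionality, as the summary describes.
face = LinearDiscriminantAnalysis().fit_transform(face_raw, y)  # -> n_classes - 1 dims
sig = LinearDiscriminantAnalysis().fit_transform(sig_raw, y)
n_face, n_sig = face.shape[1], sig.shape[1]
X = np.hstack([face, sig])  # concatenated feature vector

def fitness(mask):
    """Cross-validated k-NN accuracy on the selected features plus a balance
    bonus, so neither modality dominates the chosen subset (weights assumed)."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(3), X[:, mask.astype(bool)], y, cv=3).mean()
    balance = 1.0 - abs(mask[:n_face].mean() - mask[n_face:].mean())
    return 0.8 * acc + 0.2 * balance

# Minimal GA: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, (30, n_face + n_sig))
for _ in range(20):
    scores = np.array([fitness(ind) for ind in pop])
    winners = [max(rng.choice(len(pop), 3), key=lambda i: scores[i]) for _ in range(len(pop))]
    parents = pop[winners]
    cross = rng.random(pop.shape) < 0.5
    children = np.where(cross, parents, np.roll(parents, 1, axis=0))
    flip = rng.random(pop.shape) < 0.02
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("face features kept:", int(best[:n_face].sum()),
      "| signature features kept:", int(best[n_face:].sum()))
```

In the paper itself the balance requirement is built into the GA's modified fitness function; the 0.8/0.2 weighting above is simply a placeholder for that idea, not the published formulation.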