FINGERPRINT INDEX SEARCHING BASED ON RIDGE ORIENTATION AND FREQUENCY ON GPU


Bibliographic Details
Main Author: Michael (NIM: 13514108)
Format: Final Project
Language: Indonesian
Online Access:https://digilib.itb.ac.id/gdl/view/28902
Institution: Institut Teknologi Bandung
Description
Summary: A fingerprint is a unique human characteristic that can be used as a form of identity. Identifying a person by their fingerprint is done through a fingerprint matching process. Direct matching, in which a fingerprint is compared against every fingerprint in the database, takes a long time. One approach to speed up fingerprint matching is fingerprint indexing, which consists of fingerprint index creation and fingerprint index searching. Based on the experiments that have been done, fingerprint index searching in a database of one million fingerprints takes around one second, and the time grows linearly with the number of fingerprints. The performance of index searching therefore needs to be improved, and one way to do so is through parallel processing.

This final project looks for suitable strategies for fingerprint index searching based on ridge orientation and frequency on a GPU. The similarity score calculation needs to be performed in parallel for every fingerprint index entry in the database, and the calculation for each individual entry is also parallelized.

Experiments on databases of different sizes show that parallel processing on the GPU improves performance for large databases, i.e. those containing 100,000 fingerprints or more. The strategy of splitting the similarity score calculation into several kernels with different block sizes yields a speed-up of 250 times for a database of 8,000,000 fingerprints.
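The abstract describes two levels of parallelism: similarity scores are computed in parallel across all fingerprint index entries, and the calculation within each entry is itself parallelized. The thesis text is not reproduced here, so the CUDA sketch below only illustrates that two-level structure under assumed details: the feature length (FEATURE_LEN), the per-element similarity function, and the block size are placeholders rather than the author's values, and the multi-kernel strategy mentioned in the abstract is not reproduced.

// sim_search.cu -- minimal sketch, not the author's exact implementation.
// Assumption (not from the source): each fingerprint index entry is a
// fixed-length feature vector of ridge orientations and frequencies, and
// "similarity" is a simple sum of per-element similarities.
#include <cstdio>
#include <cuda_runtime.h>

#define FEATURE_LEN 256   // hypothetical length of one index entry
#define BLOCK_SIZE  128   // threads cooperating on one entry

// One thread block scores one database entry against the query:
// parallelism across entries = grid, within an entry = block.
__global__ void similarityKernel(const float *query,     // [FEATURE_LEN]
                                 const float *database,  // [numEntries * FEATURE_LEN]
                                 float *scores,          // [numEntries]
                                 int numEntries)
{
    __shared__ float partial[BLOCK_SIZE];
    int entry = blockIdx.x;
    if (entry >= numEntries) return;

    const float *feat = database + (size_t)entry * FEATURE_LEN;

    // Each thread accumulates similarity over a strided slice of the features.
    float sum = 0.0f;
    for (int i = threadIdx.x; i < FEATURE_LEN; i += blockDim.x) {
        float d = query[i] - feat[i];   // orientation/frequency difference
        sum += 1.0f / (1.0f + d * d);   // placeholder per-element similarity
    }
    partial[threadIdx.x] = sum;
    __syncthreads();

    // Block-level reduction to a single score per entry.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            partial[threadIdx.x] += partial[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        scores[entry] = partial[0];
}

int main() {
    const int numEntries = 1000;                 // small demo database
    size_t featBytes = FEATURE_LEN * sizeof(float);
    float *dQuery, *dDatabase, *dScores;
    cudaMalloc(&dQuery, featBytes);
    cudaMalloc(&dDatabase, (size_t)numEntries * featBytes);
    cudaMalloc(&dScores, numEntries * sizeof(float));
    cudaMemset(dQuery, 0, featBytes);            // dummy data for the sketch
    cudaMemset(dDatabase, 0, (size_t)numEntries * featBytes);

    // One block per fingerprint index entry, BLOCK_SIZE threads per entry.
    similarityKernel<<<numEntries, BLOCK_SIZE>>>(dQuery, dDatabase, dScores, numEntries);
    cudaDeviceSynchronize();

    float first;
    cudaMemcpy(&first, dScores, sizeof(float), cudaMemcpyDeviceToHost);
    printf("score[0] = %f\n", first);

    cudaFree(dQuery); cudaFree(dDatabase); cudaFree(dScores);
    return 0;
}

Assigning one block per entry lets the grid scale with the database size, while the in-block reduction parallelizes the per-entry work, matching the two levels of parallelism described above; the 250x speed-up reported in the abstract additionally relies on splitting the calculation across several kernels with different block sizes, which this single-kernel sketch does not attempt to show.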