Feature extraction and localisation using scale-invariant feature transform on 2.5D image

Bibliographic Details
Main Authors: Suk, Ting Pui; Minoi, Jacey-Lynn; Lim, Terrin; Fradinho Oliveira, João; Gillies, Duncan Fyfe
Format: Article
Language:English
Published: Vaclav Skala - Union Agency 2015
Subjects:
Online Access:http://ir.unimas.my/id/eprint/12107/1/No%2041%20%28abstrak%29.pdf
http://ir.unimas.my/id/eprint/12107/
http://www.scopus.com/inward/record.url?eid=2-s2.0-84957922716&partnerID=40&md5=997959304b567010c3b50bb171a2f310
Institution: Universiti Malaysia Sarawak
Description
Summary: Locating anatomical landmarks is a vital initial stage for several applications, such as face recognition, facial analysis and synthesis. Locating facial landmarks in images is an important task in image processing, and detecting them automatically remains challenging. The appearance of facial landmarks may vary tremendously due to facial variations. Detecting and extracting landmarks from raw face data is usually done manually by trained and experienced scientists or clinicians, and the landmarking is a laborious process. Hence, we aim to automate the facial landmarking process as far as possible. In this paper, we present and discuss our new automatic landmarking method for face data using 2.5-dimensional (2.5D) range images. We applied the Scale-invariant Feature Transform (SIFT) method to extract feature vectors and Otsu's method to obtain a general threshold value for landmark localisation. We have also developed an interactive tool to ease the visualisation of the overall landmarking process. The tool allows users to adjust and explore threshold values for further analysis, enabling them to determine thresholds for detecting and extracting important keypoints and/or regions of facial features, which can then be applied automatically to new datasets captured under the same controlled lighting and pose restrictions. We measured the accuracy of the automatic landmarking against manual landmarking and found the differences to be marginal. This paper describes our own implementation of the SIFT and Otsu algorithms, analyses the results of the landmark detection, and highlights future work.
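
As a rough illustration of the pipeline the abstract describes (SIFT keypoint extraction on a 2.5D range image, with Otsu's method supplying a global threshold for landmark localisation), the following Python sketch uses OpenCV. It is not the authors' implementation; the file name, the depth normalisation step, and the way the Otsu mask is used to filter keypoints are illustrative assumptions.

    import cv2
    import numpy as np

    # Load a 2.5D range (depth) image; the file name is a placeholder.
    depth = cv2.imread("face_range.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

    # Normalise depth values to 8-bit so SIFT and Otsu can operate on them
    # (a preprocessing assumption, not a step taken from the paper).
    depth_8u = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Extract SIFT keypoints and their 128-dimensional feature vectors.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(depth_8u, None)

    # Otsu's method derives a single global threshold from the image histogram.
    otsu_value, mask = cv2.threshold(depth_8u, 0, 255,
                                     cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Keep keypoints that fall inside the Otsu foreground as candidate landmarks
    # (how the threshold is applied here is an assumption; the paper's interactive
    # tool lets users explore and tune such threshold values).
    candidates = [kp for kp in keypoints
                  if mask[int(kp.pt[1]), int(kp.pt[0])] > 0]
    print(f"Otsu threshold: {otsu_value:.1f}, candidate landmarks: {len(candidates)}")
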