Place recognition using semantic concepts of visual words
Main Authors: , , ,
Format: Article
Language: English
Published: Academic Journals, 2011
Online Access: http://psasir.upm.edu.my/id/eprint/22773/1/22773.pdf http://psasir.upm.edu.my/id/eprint/22773/ https://academicjournals.org/journal/SRE/article-abstract/51C36F134811
Institution: Universiti Putra Malaysia
Summary: The ‘bag-of-visual-words’ model has recently become popular for image understanding. However, a histogram of visual words breaks down when image patches with similar appearance correspond to different semantic concepts, and vice versa. Because of varying viewpoints and dynamic objects, this problem is even more pronounced in mobile robot applications such as global localization and place recognition. This paper presents a supervised learning framework for place recognition based on the semantic concepts of visual words. Specifically, the k-means algorithm is first applied to quantize low-level visual features into a bag of visual words (BOVW). Visual latent semantic analysis (VLSA) is then introduced to derive semantic concepts for these words from the correlations among image patches. Once the semantic concepts are obtained, their occurrences in a query image are assembled into a vector of similarity densities, which is then exploited for place recognition with a support vector machine (SVM) classifier. Experiments on synthetic and challenging indoor datasets show that the average recognition performance over the two datasets improves from 77.54% with the BOVW histogram to 90.92% with the proposed method.
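The pipeline described in the summary (a k-means visual vocabulary, latent semantic analysis over the word–image statistics, and an SVM over the resulting concept vectors) can be sketched roughly as follows. This is only a minimal illustration built on scikit-learn primitives, with TruncatedSVD standing in for the paper's VLSA step and random arrays standing in for real local descriptors; all names, sizes, and parameters are assumptions, not the authors' implementation.

```python
# Minimal sketch of a BOVW + latent-semantic-concept + SVM place-recognition
# pipeline, loosely following the summary above. TruncatedSVD is used here as
# a stand-in for the paper's visual latent semantic analysis (VLSA), and the
# descriptors are random placeholders for real local features (e.g. SIFT).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_IMAGES, N_WORDS, N_CONCEPTS = 40, 50, 10      # assumed sizes

# 1) Low-level descriptors per training image (placeholder: 100 random 128-D vectors each).
images = [rng.normal(size=(100, 128)) for _ in range(N_IMAGES)]
labels = rng.integers(0, 4, size=N_IMAGES)      # 4 hypothetical place labels

# 2) Quantize all descriptors into a visual vocabulary with k-means.
vocab = KMeans(n_clusters=N_WORDS, n_init=10, random_state=0)
vocab.fit(np.vstack(images))

def bovw_histogram(descriptors):
    """Normalized histogram of visual-word occurrences for one image (the BOVW vector)."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=N_WORDS).astype(float)
    return hist / hist.sum()

H = np.array([bovw_histogram(d) for d in images])   # image-by-word matrix

# 3) Latent semantic analysis over the word-image matrix to obtain concept vectors.
lsa = TruncatedSVD(n_components=N_CONCEPTS, random_state=0)
Z = lsa.fit_transform(H)                             # images in semantic-concept space

# 4) Train an SVM place classifier on the concept vectors.
clf = SVC(kernel="rbf").fit(Z, labels)

# 5) Recognize the place of a query image.
query = rng.normal(size=(100, 128))
z_query = lsa.transform(bovw_histogram(query)[None, :])
print("predicted place:", clf.predict(z_query)[0])
```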