Randomized visual phrases for object search
Accurate matching of local features plays an essential role in visual object search. Instead of matching individual features separately, using spatial context, e.g., bundling a group of co-located features into a visual phrase, has been shown to enable more discriminative matching. Despite previous work, extracting appropriate spatial context for matching remains a challenging problem. We propose a randomized approach to deriving visual phrases, in the form of spatial random partition. By averaging the matching scores over multiple randomized visual phrases, our approach offers three benefits: 1) aggregating the matching scores over a collection of visual phrases of varying sizes and shapes provides robust local matching; 2) object localization is achieved by simple thresholding on the voting map, which is more efficient than subimage search; 3) the algorithm lends itself to easy parallelization and allows a flexible trade-off between accuracy and speed by adjusting the number of partition rounds. Both theoretical studies and experimental comparisons with state-of-the-art methods validate the advantages of our approach.
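The abstract describes the method at a high level: partition the image at random many times, score each resulting cell (a randomized visual phrase) against the query, and average the scores into a voting map that is thresholded for localization. The sketch below is only an illustration of that idea under assumptions not stated in this record: it presumes features are already quantized into visual words, and the jittered-grid partition scheme, histogram-intersection scoring, and all names (`random_partition_voting`, `n_rounds`, `grid`) are hypothetical choices rather than the paper's implementation.

```python
# Illustrative sketch only: spatial random partition voting over visual words.
# Function/parameter names and the scoring rule are hypothetical.
import numpy as np

def random_partition_voting(img_shape, feat_xy, feat_words, query_words,
                            n_rounds=100, grid=(4, 4), seed=None):
    """Average per-cell matching scores over many random partitions.

    img_shape   : (height, width) of the database image
    feat_xy     : (N, 2) array of feature (x, y) positions
    feat_words  : (N,) array of quantized visual-word ids for those features
    query_words : (M,) array of visual-word ids extracted from the query
    """
    rng = np.random.default_rng(seed)
    h, w = img_shape
    vote_map = np.zeros((h, w), dtype=np.float64)
    query_hist = np.bincount(query_words)

    for _ in range(n_rounds):
        # One random partition: jittered cell boundaries along each axis.
        xs = np.r_[0, np.sort(rng.integers(1, w, grid[1] - 1)), w]
        ys = np.r_[0, np.sort(rng.integers(1, h, grid[0] - 1)), h]
        for i in range(grid[0]):
            for j in range(grid[1]):
                y0, y1, x0, x1 = ys[i], ys[i + 1], xs[j], xs[j + 1]
                in_cell = ((feat_xy[:, 0] >= x0) & (feat_xy[:, 0] < x1) &
                           (feat_xy[:, 1] >= y0) & (feat_xy[:, 1] < y1))
                words = feat_words[in_cell]
                if words.size == 0:
                    continue
                # Score this cell (one randomized visual phrase) against the
                # query, here by histogram intersection over visual words.
                cell_hist = np.bincount(words, minlength=query_hist.size)
                score = np.minimum(cell_hist[:query_hist.size],
                                   query_hist).sum() / words.size
                vote_map[y0:y1, x0:x1] += score  # every pixel in the cell votes

    return vote_map / n_rounds  # threshold the averaged map to localize
```

The structure mirrors the benefits listed in the abstract: each partition round is independent, so rounds can be processed in parallel, and increasing or decreasing `n_rounds` trades accuracy for speed.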
Main Authors: Jiang, Yuning; Meng, Jingjing; Yuan, Junsong
Other Authors: School of Electrical and Electronic Engineering
Format: Conference or Workshop Item
Language: English
Published: 2013
Subjects: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Online Access: https://hdl.handle.net/10356/100684 ; http://hdl.handle.net/10220/17893
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-100684
record_format: dspace
Conference: IEEE Conference on Computer Vision and Pattern Recognition (2012 : Providence, Rhode Island, US)
Version: Accepted version
Dates: 2013-11-29; 2019-12-06; issued 2012
Citation: Jiang, Y., Meng, J., & Yuan, J. (2012). Randomized visual phrases for object search. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3100-3107.
DOI: 10.1109/CVPR.2012.6248042
Rights: © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: http://dx.doi.org/10.1109/CVPR.2012.6248042
Extent: 8 p.
File format: application/pdf
building: NTU Library
country: Singapore
collection: DR-NTU