Fast covariant VLAD for image search
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2016
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/6310 https://ink.library.smu.edu.sg/context/sis_research/article/7313/viewcontent/07499824.pdf
Institution: Singapore Management University
Summary: Vector of locally aggregated descriptors (VLAD) is a popular image encoding approach owing to its simplicity and better scalability than the conventional bag-of-visual-words approach. To enhance its distinctiveness and geometric invariance, covariant VLAD (CVLAD) has been proposed, which pools local features according to their dominant orientations or characteristic scales and thereby yields a geometry-aware representation. This representation achieves rotation/scale invariance when combined with circular matching. However, circular matching increases the matching cost several-fold, which makes CVLAD hardly suitable for large-scale retrieval tasks. In this paper, the computation overhead is alleviated by performing the circular matching in the frequency domain of CVLAD. In addition, applying PCA to CVLAD in the frequency domain achieves much better scalability than performing it in the original feature space. Furthermore, transforming the feature into the frequency domain makes it possible to convert the high-dimensional CVLAD subvectors into dozens of very low-dimensional subvectors, so nearest neighbor search is carried out in very low-dimensional subspaces and becomes easily tractable. The effectiveness of our approach is demonstrated in the retrieval scenario on popular benchmarks comprising up to 1 million database images.
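The speed-up described in the summary rests on the classical convolution-theorem trick: the best matching score over all cyclic shifts of the orientation bins can be obtained with a single FFT along the bin axis instead of testing every shift explicitly. The sketch below is a minimal illustration of that idea only, not the paper's full pipeline; the bin count `B`, the sub-aggregation dimension `D`, and the function names are assumptions made for the example.

```python
import numpy as np

def circular_match_bruteforce(q, p):
    """Best similarity over all cyclic shifts of the B orientation bins (O(B^2 * D))."""
    B = q.shape[0]
    return max(float(np.sum(q * np.roll(p, s, axis=0))) for s in range(B))

def circular_match_fft(q, p):
    """Same best-shift score computed via the DFT along the bin axis (O(B * D * log B))."""
    Q = np.fft.fft(q, axis=0)          # (B, D) complex spectra of the query
    P = np.fft.fft(p, axis=0)          # (B, D) complex spectra of the database item
    # Per-frequency inner products give the spectrum of the circular
    # cross-correlation; one inverse FFT recovers the score at every shift at once.
    corr = np.fft.ifft((np.conj(Q) * P).sum(axis=1)).real
    return float(corr.max())

# Toy check: B orientation bins, each holding a D-dimensional sub-aggregation.
rng = np.random.default_rng(0)
B, D = 8, 128
q = rng.standard_normal((B, D))
p = rng.standard_normal((B, D))
assert np.isclose(circular_match_bruteforce(q, p), circular_match_fft(q, p))
```

In this frequency-domain view each frequency component is itself a short subvector, which is presumably what allows the dimensionality reduction and the low-dimensional nearest neighbor search mentioned in the summary, although the exact procedure is detailed only in the paper itself.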