Find your neighbors (quickly!)
Format: Final Year Project
Language: English
Published: 2012
Online Access: http://hdl.handle.net/10356/48509
Institution: Nanyang Technological University
Summary: In many computer vision problems, answering nearest neighbor queries efficiently is a difficult and highly time-consuming task, especially in higher dimensions over a large dataset. The brute-force method of finding the nearest neighbor of a query point q requires a linear scan over all objects in the dataset S; this proves too inefficient for large datasets of d-dimensional vectors with large d. In recent years, approximate nearest neighbor methods have therefore been proposed to mitigate this curse of dimensionality. These approximate algorithms are known to provide large speedups in exchange for a minor loss of accuracy.
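For reference, the following is a minimal sketch of the brute-force linear scan described above, written in Python with NumPy for illustration only; the function name, dataset size, and dimensionality are illustrative choices, not taken from the project.

```python
import numpy as np

def brute_force_nn(S, q):
    """Return the index of the point in S nearest to the query q.

    S : (n, d) array of n dataset points in d dimensions
    q : (d,) query point
    """
    # Squared Euclidean distance from q to every point in S (one linear scan).
    dists = np.sum((S - q) ** 2, axis=1)
    # The scan costs O(n * d) per query, which is what motivates the
    # approximate methods compared in this project.
    return int(np.argmin(dists))

# Illustrative data: 10,000 random 128-dimensional descriptors and one query.
rng = np.random.default_rng(0)
S = rng.random((10_000, 128))
q = rng.random(128)
print(brute_force_nn(S, q))
```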
In this project, we compare and evaluate three approximate nearest neighbor implementations against each other, as well as against the linear brute-force search. The three algorithms studied intensively throughout are the following (a brief code sketch follows the list):
• The ϵ-approximate nearest neighbor method, which implements a k-d tree with priority search.
• The randomized k-d tree algorithm.
• The hierarchical k-means tree algorithm.
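The project evaluates library implementations of these methods; the sketch below is only a minimal illustration of the ϵ-approximate guarantee behind the first method, using SciPy's k-d tree as a stand-in rather than the project's own code. The dataset size, dimensionality, and eps value are arbitrary choices for the illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
S = rng.random((10_000, 16))   # illustrative dataset: 10,000 points in 16 dimensions
q = rng.random(16)             # illustrative query point

tree = cKDTree(S)              # build a k-d tree over the dataset

# With eps > 0 the search may stop early; the returned distance is guaranteed
# to be within a factor of (1 + eps) of the true nearest-neighbor distance.
approx_dist, approx_idx = tree.query(q, k=1, eps=0.5)

# eps = 0 gives the exact nearest neighbor, for comparison.
exact_dist, exact_idx = tree.query(q, k=1, eps=0.0)

print(approx_idx, approx_dist)
print(exact_idx, exact_dist)
```

The randomized k-d tree and hierarchical k-means tree methods (available, for example, in the FLANN library) trade accuracy for speed in a similar way, but are tuned through different parameters such as the number of trees or the branching factor.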