Android smart phone based participatory sensing

Global Positioning System (GPS) units are a regular navigational aid for many modern-day drivers. However, they are accurate only to about 3 to 15 metres and are prone to misdirections. The problem is exacerbated by the fact that entry into certain road sections in Singapore is chargeable. Fortunately, smartphones are frequently used by drivers as their dedicated GPS units, and this gave rise to the idea of embedding a software engine, built atop the open-source OpenCV library, within a GPS system. The engine recognizes and matches buildings captured through the smartphone camera against a pictorial database of buildings in the vicinity to accurately identify the exact location of the vehicle (and phone), thereby enhancing the overall accuracy of the GPS system.

Various methodologies exist to implement the image matching sub-system, ranging from histogram comparison to template matching to feature analysis. Feature analysis proved to be the best technique, as it is invariant to both photometric and geometric changes. The (SURF, SURF, FLANNBASED) combination of feature detector, descriptor extractor and descriptor matcher proved highly accurate but extremely slow on a mobile device. (ORB, ORB, BRUTEFORCE_L1) was eventually chosen because it is computationally efficient, running in under 3 seconds per transaction, at the cost of only a 3% accuracy drop in daytime building recognition (and with a slight overall improvement in accuracy once junctions and night-time recognition are taken into account). The sub-system is packaged into an easy-to-use function call that takes two images as parameters: one from the smartphone's camera, the other from the pictorial database.
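As a concrete illustration, below is a minimal sketch of what such a function might look like with OpenCV's Java bindings (the 2.4-era API, whose FeatureDetector, DescriptorExtractor and DescriptorMatcher constants match the combinations named above). The class name, the empty-descriptor guard and the use of the raw match count as a similarity score are assumptions for illustration; the abstract does not describe the engine's actual decision rule.

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;

public final class ImageMatcher {

    // The (ORB, ORB, BRUTEFORCE_L1) combination the project settled on.
    private final FeatureDetector detector =
            FeatureDetector.create(FeatureDetector.ORB);
    private final DescriptorExtractor extractor =
            DescriptorExtractor.create(DescriptorExtractor.ORB);
    private final DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_L1);

    /**
     * Matches a camera frame against one database image. Returns the raw
     * number of descriptor matches as a similarity score (hypothetical;
     * the real engine's scoring rule is not given in the abstract).
     */
    public int matchImages(Mat cameraImage, Mat databaseImage) {
        // Detect keypoints in both images.
        MatOfKeyPoint kpCamera = new MatOfKeyPoint();
        MatOfKeyPoint kpDatabase = new MatOfKeyPoint();
        detector.detect(cameraImage, kpCamera);
        detector.detect(databaseImage, kpDatabase);

        // Compute ORB descriptors at the detected keypoints.
        Mat descCamera = new Mat();
        Mat descDatabase = new Mat();
        extractor.compute(cameraImage, kpCamera, descCamera);
        extractor.compute(databaseImage, kpDatabase, descDatabase);

        // No features found in one of the images: nothing to match.
        if (descCamera.empty() || descDatabase.empty()) {
            return 0;
        }

        // Brute-force matching with L1 distance over the descriptors.
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(descCamera, descDatabase, matches);
        return (int) matches.total();
    }
}
```

Swapping the three create() arguments to FeatureDetector.SURF, DescriptorExtractor.SURF and DescriptorMatcher.FLANNBASED would reproduce the slower, more accurate combination the report compares against.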

The experiment utilized an array of images covering buildings and junctions in both day and night settings, with the methods ranked on three criteria: (1) percentage of true positives; (2) percentage of true negatives; and (3) matching duration.
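Those three criteria are standard enough to sketch. The hypothetical harness below tallies them for one detector/extractor/matcher combination over a labelled test set; none of its names come from the project.

```java
/**
 * Hypothetical evaluation harness for one (detector, extractor, matcher)
 * combination, tallying the three ranking criteria from the abstract.
 */
public final class MatchEvaluation {
    private int truePositives, falseNegatives, trueNegatives, falsePositives;
    private long totalNanos;
    private int trials;

    /** Records one trial: ground truth, the engine's verdict, and its duration. */
    public void record(boolean samePlace, boolean matched, long nanos) {
        if (samePlace && matched) truePositives++;
        else if (samePlace)       falseNegatives++;
        else if (matched)         falsePositives++;
        else                      trueNegatives++;
        totalNanos += nanos;
        trials++;
    }

    /** Criterion 1: percentage of genuine matches the engine found. */
    public double truePositivePercent() {
        return 100.0 * truePositives / (truePositives + falseNegatives);
    }

    /** Criterion 2: percentage of non-matches the engine correctly rejected. */
    public double trueNegativePercent() {
        return 100.0 * trueNegatives / (trueNegatives + falsePositives);
    }

    /** Criterion 3: mean matching duration in milliseconds. */
    public double meanDurationMillis() {
        return (totalNanos / 1e6) / trials;
    }
}
```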

The entire pictorial database is stored in a list, and images are eliminated from the search space once their latitude and longitude place them beyond a certain proximity to the current location, which keeps the matching workload small and improves overall efficiency.
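The abstract does not say how proximity is computed, so the sketch below assumes a great-circle (haversine) distance and a caller-supplied radius; DatabaseImage and the other names here are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical database entry: an image path tagged with its coordinates. */
final class DatabaseImage {
    final String path;
    final double latitude;
    final double longitude;

    DatabaseImage(String path, double latitude, double longitude) {
        this.path = path;
        this.latitude = latitude;
        this.longitude = longitude;
    }
}

final class SearchSpace {
    private static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Great-circle (haversine) distance in metres between two coordinates. */
    static double distanceMetres(double lat1, double lon1,
                                 double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    /** Keeps only database images within radiusMetres of the current GPS fix. */
    static List<DatabaseImage> prune(List<DatabaseImage> all,
                                     double currentLat, double currentLon,
                                     double radiusMetres) {
        List<DatabaseImage> nearby = new ArrayList<>();
        for (DatabaseImage img : all) {
            if (distanceMetres(currentLat, currentLon,
                               img.latitude, img.longitude) <= radiusMetres) {
                nearby.add(img);
            }
        }
        return nearby;
    }
}
```
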
Bibliographic Details
Main Author: Teo, Kok Hien
Other Authors: Li Mo
Format: Final Year Project
Language: English
Published: 2015
Degree: Bachelor of Engineering (Computer Science), School of Computer Engineering
Physical Description: 66 p. (application/pdf)
Subjects: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Online Access:http://hdl.handle.net/10356/62547
Institution: Nanyang Technological University