Target detection and estimation using a stereo vision camera for autonomous navigation

Bibliographic Details
Main Author: Chow, Song Qian.
Other Authors: Wijerupage Sardha Wijesoma
Format: Final Year Project
Language: English
Published: 2009
Subjects:
Online Access:http://hdl.handle.net/10356/17910
Institution: Nanyang Technological University
Description
Summary: Stereo vision cameras are increasingly popular in commercial markets: they are relatively inexpensive, can perform an array of functions, and are available in many different models. Such cameras have been used to aid the navigation of autonomous vehicles such as cars and kayaks. The function this project focuses on is detecting possible targets or objects in a controlled environment and providing rough estimates of the distance and direction from the autonomous vehicle to the intended object, a capability that is exceptionally useful in the real-time control of autonomous vehicles. The project's experiments capture photographic images with the Point Grey Bumblebee®2 stereo vision camera. The raw images are then processed with software provided by Point Grey Research, namely the FlyCapture® SDK and Triclops® SDK. Edge detection techniques are subsequently applied to the images to detect objects, and disparity maps and point clouds are generated from the captured data. From this processed information, the depth, or distance, of each object from the camera can be estimated. In the next stage, the Algorithm Processing Unit (APU) of the autonomous vehicle works out a collision-avoidance solution so that the vehicle can navigate around these objects smoothly. These processes form only the initial stages of vehicle navigation; in later stages, other high-level functions such as target engagement will be required to fulfil the different roles. The project concludes with a summary of existing techniques for edge detection and disparity-map generation, and with recommendations for future development in this area of study.
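
The abstract's pipeline of edge detection followed by disparity-map generation can be illustrated with a minimal sketch. The thesis itself processes Bumblebee®2 images through the Triclops® SDK; since that API is not reproduced in the record, the sketch below substitutes OpenCV in Python, and the file names, thresholds, and block-matching parameters are illustrative assumptions rather than the project's actual settings.

import cv2

# Load a rectified left/right stereo pair (file names are hypothetical).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Edge detection on the reference image outlines candidate targets.
edges = cv2.Canny(left, 50, 150)

# Block-matching stereo correspondence produces a disparity map;
# StereoBM returns fixed-point values scaled by 16, hence the division.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0

cv2.imwrite("edges.png", edges)
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", disp_vis.astype("uint8"))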
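The depth estimation step the abstract describes follows from standard stereo geometry: distance Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity. The figures below are assumed nominal values (a roughly 12 cm Bumblebee®2 baseline and an 800-pixel focal length), not numbers taken from the thesis.

# Hedged worked example of depth-from-disparity; parameter values are assumptions.
focal_px = 800.0    # assumed focal length in pixels
baseline_m = 0.12   # assumed stereo baseline in metres

def depth_m(disparity_px: float) -> float:
    """Distance to a point whose measured disparity is disparity_px pixels."""
    return focal_px * baseline_m / disparity_px

print(depth_m(16.0))  # a 16-pixel disparity gives 6.0 m under these assumptions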