Vision based control for mobile robot

Bibliographic Details
Main Author: Siah, Clarence Jun Da
Other Authors: Suresh Sundaram
Format: Final Year Project
Language: English
Published: 2015
Subjects:
Online Access: http://hdl.handle.net/10356/63477
Institution: Nanyang Technological University
Summary: With the advancement of technology, many well-known companies have come up with their own autonomous vehicle prototypes, which may revolutionize transportation in the near future. Indoor autonomous vehicles have applications such as surveillance, data collection and rescue missions that consequently improve our quality of life. To realize these prototypes, there is a need to develop a mobile robot that enables safer and more accurate navigation. Previous studies have introduced different vision-based approaches for object tracking and obstacle avoidance during the navigation of mobile robots in an indoor environment. However, those approaches raise cost and complexity issues, as they require a significant amount of image-processing time on a resource-constrained robot. This study sought to integrate three capabilities, namely obstacle avoidance, object tracking and distance estimation, during robot navigation. For an object to be tracked in space, its X/Y position can be determined by color detection using a computer vision library, whereas its Z position, i.e. the distance of the object from the robot, can be calculated from the angle of depression of the camera facing the object of interest. Bumper sensors were used for the obstacle avoidance and turning decisions of the mobile robot. The error of the estimated distance from the actual distance, together with the decision making of the robot, was then recorded to evaluate the navigation performance. In the experiments, the bumper sensors performed well at evading obstacles under the suggested obstacle avoidance scheme. On the other hand, the distance estimation error grew over time, up to 13.81 cm, and caused some unpredictable errors, because distances falling between consecutive angle points cannot be captured by the mathematical model.
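The angle-of-depression distance estimate described in the summary can be sketched as below. The abstract does not give the exact model, so the flat-floor assumption, the camera height and the angle values here are illustrative:

```python
import math

def distance_from_depression(camera_height_cm, depression_deg):
    """Ground distance from the robot to the object, assuming a flat floor:
    the camera sees the object `depression_deg` below horizontal, so the
    right triangle formed by camera height h gives d = h / tan(theta)."""
    if depression_deg <= 0:
        raise ValueError("object at or above the horizon; distance undefined")
    return camera_height_cm / math.tan(math.radians(depression_deg))

# A camera mounted 30 cm above the floor that sees the ball 45 degrees
# below horizontal places the ball about 30 cm away along the ground.
print(round(distance_from_depression(30.0, 45.0), 6))
```

Because the camera angle is measured in discrete steps, a distance that falls between two consecutive angle values cannot be resolved by this model, which is consistent with the growing estimation error the abstract reports.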
In our application, a conservative way to overcome this issue was to constantly update the estimated distance as the colored ball reached points nearer to the mobile robot. The integration results based on the suggested solution were promising, and the distance error could be kept below approximately 5 cm. Finally, it is evident from the findings that a bumper sensor used together with a monocular camera performs well in obstacle avoidance and object tracking. This study can serve as a reference for computer scientists designing more efficient methods for safer and more accurate navigation.