Implementation of boundary detection for autonomous rescue robot
| Main Author: | |
| --- | --- |
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | 2010 |
| Subjects: | |
| Online Access: | http://hdl.handle.net/10356/40797 |
| Institution: | Nanyang Technological University |
Summary: During times of natural disaster, urban calamity, or explosion, the post-disaster site is likely to be unsafe, unreachable, and strewn with debris and rubble. Such an area poses a threat to all rescue personnel who enter it in search of survivors. A rescue robot dedicated to exploring such territories would reduce personnel requirements and fatigue while being able to reach otherwise inaccessible areas. It also allows rescue personnel to focus their efforts on specific areas marked by the robot, rather than spend time and energy searching the entire site.
An autonomous rescue robot operating in unfamiliar terrain needs to maintain its track on site and be aware of its proximity to the boundary edges. For these reasons, a boundary detection program is essential for an autonomous rescue robot.
This report explores and implements a boundary detection program for an autonomous robot under the assumption that a boundary is a colored line. This is done using several image processing techniques: color, edge, line, and shape detection. The report carries out a comparative study to select the most appropriate methods for color, edge, and line detection, and then briefly introduces these methods, explaining their theory and operation.
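The record does not state which software library implements these steps. As an illustrative sketch only, detecting a colored boundary line could be done with an HSV threshold in OpenCV; the color range below is a hypothetical placeholder for a red line, not a value taken from the report:

```python
import cv2
import numpy as np

def detect_boundary_color(frame_bgr, lower_hsv=(0, 120, 70), upper_hsv=(10, 255, 255)):
    """Return a binary mask of pixels inside the assumed boundary color range.

    The HSV bounds are placeholders for a red line; the report's actual color
    range is not given in this record and would need calibration on site.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Remove small specks of noise so later line and shape detection sees a clean mask.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```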
The implemented algorithm performs image analysis on video frames captured from a single Firefly camera. Frames are passed through color detection, line detection, and edge and shape detection algorithms, after which the program is able to track the boundary line. The report focuses on the results of these image analysis steps and discusses them while testing the accuracy, performance, efficiency, and robustness of the proposed algorithm.
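A minimal per-frame sketch of such a pipeline, assuming OpenCV, a Canny edge detector, and a probabilistic Hough transform for line detection, might look like the following. The video source, HSV bounds, and thresholds are all illustrative placeholders, and the actual Firefly camera may require a vendor capture SDK rather than `cv2.VideoCapture`:

```python
import cv2
import numpy as np

def track_boundary(video_source=0, lower_hsv=(0, 120, 70), upper_hsv=(10, 255, 255)):
    """Per-frame sketch of the described pipeline: color mask -> edges -> Hough lines."""
    cap = cv2.VideoCapture(video_source)  # placeholder source; Firefly cameras may need a vendor SDK
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))   # color detection
        edges = cv2.Canny(mask, 50, 150)                                    # edge detection
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                                minLineLength=40, maxLineGap=10)            # line detection
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)         # mark the tracked boundary
        cv2.imshow("boundary", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
```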