SLAM-based mapping for object recognition

Bibliographic Details
Main Author: Loh, Wan Ying
Format: Final Year Project / Dissertation / Thesis
Published: 2018
Online Access:http://eprints.utar.edu.my/2829/1/EE%2D2018%2D1301857%2D2.pdf
http://eprints.utar.edu.my/2829/
Institution: Universiti Tunku Abdul Rahman
Description
Summary: The aim of this project is to map an unknown environment, autonomously navigate to a 2D navigation goal set by the user, and recognize objects stored in an object database using a custom-made differential-drive mobile robot that runs under the Robot Operating System (ROS) framework. The concept of deploying the robot in search and rescue missions is implemented so that the efficiency of such missions can be improved at a lower cost. The custom-made robot is able to navigate in an unknown environment and feed back sensory data from a Kinect Xbox 360 sensor together with odometry data to a PC. It is therefore important for the robot to feed back reliable and accurate odometry data efficiently so that it can localize itself in the unknown environment. The project architecture consists of a personal laptop, a Kinect Xbox 360 sensor, the custom-made robot and an Arduino Mega 2560. The personal laptop acts as the command centre where the Simultaneous Localization and Mapping (SLAM) algorithm is run, receiving odometry data from the Arduino on the custom-made robot. A USB connection is established between the Arduino on the custom-made robot and the PC. After a map of the unknown environment is built, Adaptive Monte Carlo Localization (AMCL) is used to localize the robot and Dijkstra's algorithm is deployed to compute the shortest path to the destination goal. SIFT (Scale-Invariant Feature Transform) is used to extract features from the current frame and match them against the object database to identify and recognize an object whenever the robot comes across it. The location of the object with respect to the Kinect sensor can also be obtained using a 3×3 homography matrix. The project has been implemented successfully, and the custom-made robot is able to map the environment and recognize objects accurately.
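
The abstract stresses that reliable odometry must reach the PC for localization, but does not spell out how the encoder data are converted. A common pattern for a differential-drive robot under ROS is to turn the wheel-encoder counts received from the Arduino into a nav_msgs/Odometry message plus a TF transform; the sketch below is a hypothetical illustration in Python (rospy), where the topic names, frame names and encoder constants are assumptions rather than values from the thesis.

```python
#!/usr/bin/env python
# Hypothetical sketch: convert differential-drive encoder ticks (e.g. read
# from the Arduino over a serial link) into nav_msgs/Odometry + TF.
import math
import rospy
import tf
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Quaternion

TICKS_PER_METRE = 4000.0   # assumed encoder resolution
WHEEL_BASE      = 0.30     # assumed wheel separation in metres

class DiffDriveOdometry(object):
    def __init__(self):
        self.x = self.y = self.th = 0.0
        self.pub = rospy.Publisher('odom', Odometry, queue_size=10)
        self.br  = tf.TransformBroadcaster()

    def update(self, d_left_ticks, d_right_ticks, stamp):
        # Distance travelled by each wheel since the last update.
        d_l = d_left_ticks / TICKS_PER_METRE
        d_r = d_right_ticks / TICKS_PER_METRE
        d   = (d_l + d_r) / 2.0          # forward motion of the base
        dth = (d_r - d_l) / WHEEL_BASE   # change in heading

        self.x  += d * math.cos(self.th + dth / 2.0)
        self.y  += d * math.sin(self.th + dth / 2.0)
        self.th += dth

        q = tf.transformations.quaternion_from_euler(0, 0, self.th)
        # TF: odom -> base_link, which AMCL and the costmaps rely on.
        self.br.sendTransform((self.x, self.y, 0), q, stamp,
                              'base_link', 'odom')

        odom = Odometry()
        odom.header.stamp = stamp
        odom.header.frame_id = 'odom'
        odom.child_frame_id = 'base_link'
        odom.pose.pose.position.x = self.x
        odom.pose.pose.position.y = self.y
        odom.pose.pose.orientation = Quaternion(*q)
        self.pub.publish(odom)
```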
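
The thesis states that Dijkstra's algorithm computes the shortest path to the goal once the map exists; in ROS this normally happens inside the move_base global planner, so the following stand-alone sketch is only illustrative. The 4-connected grid, uniform step cost and the convention that 0 marks a free cell are assumptions.

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest 4-connected path on a 2D occupancy grid.
    grid[r][c] == 0 means free, anything else is an obstacle.
    start/goal are (row, col) tuples. Returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    parent = {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            # Reconstruct the path by walking the parent pointers back.
            path = [cell]
            while cell in parent:
                cell = parent[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float('inf')):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0  # uniform step cost between neighbouring cells
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    parent[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    return None
```

For example, dijkstra([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)) returns the seven-cell detour around the blocked middle row.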
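
For the recognition step, the abstract mentions SIFT matching against an object database and a 3×3 homography relating the object to the current Kinect frame. A minimal OpenCV sketch of that pipeline (ratio-test matching followed by RANSAC homography) might look like the following; it assumes an OpenCV build where cv2.SIFT_create is available, and the match threshold and function names are placeholders rather than values taken from the thesis.

```python
import cv2
import numpy as np

MIN_MATCHES = 10  # assumed threshold for declaring a detection

def recognize(db_image_path, frame):
    """Match a database object image against the current Kinect RGB frame.
    Returns the projected corners of the object in the frame, or None."""
    db_img = cv2.imread(db_image_path, cv2.IMREAD_GRAYSCALE)
    gray   = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(db_img, None)
    kp2, des2 = sift.detectAndCompute(gray, None)

    # k-NN matching with Lowe's ratio test to reject ambiguous matches.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.7 * n.distance]
    if len(good) < MIN_MATCHES:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # 3x3 homography mapping database-image points into the current frame.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    h, w = db_img.shape
    corners = np.float32([[0, 0], [0, h - 1],
                          [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    # Where the object's outline appears in the Kinect image.
    return cv2.perspectiveTransform(corners, H)
```

In the full pipeline described in the abstract, this projected outline is what relates the recognized object to the Kinect sensor's viewpoint.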