Enabling human consistent motion behaviors on a mobile robot platform

Bibliographic Details
Main Author: Leong, Wei Kang
Other Authors: Seet Gim Lee, Gerald
Format: Final Year Project
Language: English
Published: 2015
Online Access:http://hdl.handle.net/10356/64073
Institution: Nanyang Technological University
Description
Summary: As modern technology advances, robots with artificial intelligence have become a growing trend. Tour guide robots with various functions have been researched for interaction with humans, and indoor localization and navigation are important research areas because they are key features for artificial intelligence. In this project, MAVEN (Mobile Avatar for Virtual Engagement by NTU), a mobility platform running the Robot Operating System [1] on Linux Ubuntu, is developed into a museum tour guide robot. It provides the functions of a modern tour guide robot, including mapping, localization, navigation and docking. Unlike other robots, MAVEN interacts with people through a monitor screen that displays an avatar. The avatar and speech system runs on a separate operating system and synchronizes with the ROS navigation system. When MAVEN's battery runs low, it activates a docking system to recharge itself. This project mainly focuses on implementing the necessary ROS functions and packages on MAVEN. The objective is to achieve fully autonomous navigation while also supporting human recognition, spoken introductions and docking. The robot used in this project is MAVEN Gray; its main components consist of a CPU, a GALIL motion controller, four Mecanum wheels, two Hokuyo laser sensors, a SPATIAL IMU sensor and a Kinect sensor.
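
The summary states that autonomous navigation is assembled from standard ROS packages. As a rough illustration only, the minimal Python sketch below shows how a single tour stop could be commanded through the ROS 1 navigation stack's move_base action interface; the node name, frame and goal coordinates are assumptions made for this example and are not taken from the MAVEN project.

    #!/usr/bin/env python
    # Illustrative sketch: sending one navigation goal to the ROS 1 move_base
    # action server. Node name, frame and coordinates are example assumptions.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def go_to(x, y):
        # Connect to the move_base action server provided by the navigation stack.
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()

        # Build a goal pose in the map frame produced by mapping/localization.
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

        client.send_goal(goal)
        client.wait_for_result()
        return client.get_state()

    if __name__ == '__main__':
        rospy.init_node('tour_stop_example')  # hypothetical node name
        go_to(2.0, 1.5)                       # hypothetical exhibit coordinates

In a system like the one described, the tour logic, avatar synchronization and battery-triggered docking would sit on top of goal requests of this kind rather than replace them.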