An automated dialogue management system for a museum robotic guide

Bibliographic Details
Main Author: Yu, Eugene GuangQian
Other Authors: Seet Gim Lee, Gerald
Format: Final Year Project
Language: English
Published: 2015
Online Access:http://hdl.handle.net/10356/64920
Institution: Nanyang Technological University
Description
Summary: Tour guiding is carried out in museums and many other places of interest, but the quality of human tour guides is difficult to keep uniform, and the work itself can be repetitive and mundane. A natural next step is therefore to replace human tour guides with robots, both to provide tours of consistent quality and to remove the need for humans to work such jobs. Since 1997, when the first robotic tour guide, Rhino, was placed in a museum exhibition, research on robotic tour guides has shifted away from map planning and localization; current work focuses more on content generation, physical gestures and facial expressions. These changes give users an engaging and interactive experience with the robots, and in future, once artificial intelligence is added, seamless and life-like interaction with a robot is expected.

For this project, a museum robotic guide is built on an existing platform, MAVEN. To mimic a human tour guide, the fully autonomous robotic guide gives a basic tour by guiding users around the given floor area, educating them about the displayed items and conducting a question-and-answer (QnA) session at the end of the tour. To achieve full autonomy, the work focused on three subsystems: the Automated Docking System, the Automated Navigation System and the Automated Dialogue Management System (ADMS). The ADMS uses the VHToolKit from the University of Southern California (USC) Institute for Creative Technologies (ICT). To use the VHToolKit as the ADMS, a QnA database was created, scripts were written for the displayed items, and a program was developed that communicates over Transmission Control Protocol/Internet Protocol (TCP/IP), allowing information to be shared among the three FYP students' subsystems so that the robot functions fully autonomously and in synchronization. To establish an engaging and interactive experience, the robot generates responses matched to hand gestures, facial expressions and a lip-synced voice.

Based on a survey conducted, the project met its scope and goals: the robot operates fully autonomously, delivers scripts at precise locations, and received satisfaction from the survey participants. In future, gesture sensing, human detection, and better Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) should be implemented to make the user experience more awe-inspiring.
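
This record does not include the project's communication code. As a rough illustration of the kind of TCP/IP message passing the abstract describes, in which the ADMS receives a location update from the navigation subsystem and delivers the matching exhibit script, a minimal Python sketch follows; every name in it (ADMS_HOST, ADMS_PORT, the "ARRIVED:" message format, deliver_script) is a hypothetical assumption for illustration, not the project's actual interface.

    # Minimal sketch (not the thesis code): one way the ADMS could receive
    # location updates from the navigation subsystem over TCP/IP and trigger
    # the script for a displayed item. All names here are hypothetical.
    import socket

    ADMS_HOST = "127.0.0.1"   # assumed: subsystems run on the same onboard PC
    ADMS_PORT = 5005          # hypothetical port for the dialogue manager

    def deliver_script(exhibit_id: str) -> None:
        # Placeholder for looking up the exhibit's script in the QnA/script
        # database and handing it to the VHToolKit character for delivery.
        print(f"Delivering script for exhibit: {exhibit_id}")

    def main() -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind((ADMS_HOST, ADMS_PORT))
            server.listen(1)
            conn, _addr = server.accept()   # navigation subsystem connects
            with conn:
                buffer = b""
                while True:
                    data = conn.recv(1024)
                    if not data:
                        break
                    buffer += data
                    # Assume newline-delimited text, e.g. "ARRIVED:exhibit_3\n"
                    while b"\n" in buffer:
                        line, buffer = buffer.split(b"\n", 1)
                        msg = line.decode().strip()
                        if msg.startswith("ARRIVED:"):
                            deliver_script(msg.split(":", 1)[1])

    if __name__ == "__main__":
        main()

In this sketch the navigation subsystem would connect as an ordinary TCP client and send lines such as "ARRIVED:exhibit_3". Newline framing is used because TCP is a byte stream that preserves no message boundaries of its own; the thesis may well have used a different framing or message format.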