A new interaction framework for human and robot

Despite technological advances in robotics, designing a user-friendly robot that interacts with humans remains one of the most technically challenging problems for researchers. One possible solution is a robot that mimics human appearance and behavior. Although the idea is promising, current technology still falls short of creating such a robot because of the complexity of the human body and brain. Users typically expect humanoids to be more human-like than they really are, and they become frustrated, disappointed, and uncomfortable when a robot fails to be as intelligent as it appears. As a result, the future of humanoid robots remains uncertain.

The indirect interaction approach offers an alternative. Its framework uses a mediating object, normally a standard video or LED projector, to facilitate the interaction. However, this framework has several limitations that prevent it from being accepted as a standardized human-robot interaction framework across different contexts. First, current indirect interaction interfaces provide only one channel for communicating with the robot, the standard projector, which prevents the robot from communicating with multiple users who have different access levels. Second, the current setup cannot guarantee the security of system information, since the information is projected onto the floor and can be modified by any user. Third, the framework suits only a limited number of applications, because a normal projector cannot produce bright, high-contrast images in brightly lit or outdoor environments. Finally, the current framework offers only a single-modal input device for sending commands or interacting with the mediating system. This limits the range of applications and reduces the reliability of the interaction, because the user has no alternative input modality with which to validate a command when the robot responds inappropriately.

This proposal aims to identify an indirect interaction framework that addresses these deficiencies. The new framework must support multiple users with different levels of interaction. The mediating channels must be reconfigured to secure the exchanged information while still allowing the robot to broadcast feedback to the surrounding environment, informing people who share its workspace of its operational status. The input device must support multiple input modalities to make command transmission from human to robot more robust, which in turn allows the system to be deployed in different environments. To achieve this, the system combines two augmented reality techniques: see-through augmented reality and spatial augmented reality. In addition, a laser writer and a wearable handheld device are specially designed as part of the interface to facilitate interaction. Finally, a human-robot dialog framework is introduced that allows "the human to say less and the robot to do more" during the interaction.
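The abstract describes the proposed framework only at a high level. Purely as an illustration, the sketch below shows one way such a design could be organized: commands arrive from a multimodal input device, are checked against per-user access levels, and must be confirmed through a second, independent modality before the robot acts; private feedback goes to a per-user channel (e.g. see-through AR on the handheld device) while operational status is broadcast on a shared public channel (e.g. spatial AR or the laser writer). All class and function names here are hypothetical and are not taken from the thesis.

# Minimal illustrative sketch (assumed design, not the thesis implementation):
# routing multimodal commands to private and public mediating channels by access level.
from dataclasses import dataclass
from enum import IntEnum


class AccessLevel(IntEnum):
    OBSERVER = 0    # may only view public status shown in the workspace
    OPERATOR = 1    # may issue task commands
    SUPERVISOR = 2  # may also change system configuration


@dataclass
class Command:
    user: str
    level: AccessLevel
    modality: str   # e.g. "gesture", "speech", "handheld"
    action: str


class InteractionFramework:
    """Hypothetical mediator: per-user private channel plus a shared public channel."""

    def __init__(self):
        self.pending = {}  # commands awaiting confirmation via a second modality

    def submit(self, cmd: Command) -> str:
        if cmd.level < AccessLevel.OPERATOR:
            return f"[private->{cmd.user}] '{cmd.action}' rejected: insufficient access"
        # Hold the command until another modality confirms it, so an
        # inappropriate interpretation can be caught before execution.
        self.pending[cmd.user] = cmd
        return f"[private->{cmd.user}] '{cmd.action}' received via {cmd.modality}; awaiting confirmation"

    def confirm(self, user: str, modality: str) -> str:
        cmd = self.pending.pop(user, None)
        if cmd is None:
            return f"[private->{user}] nothing to confirm"
        if modality == cmd.modality:
            return f"[private->{user}] confirmation must use a different modality"
        # Public channel: broadcast operational status to everyone sharing the workspace.
        return f"[public] robot executing '{cmd.action}' for {user}"


if __name__ == "__main__":
    fw = InteractionFramework()
    print(fw.submit(Command("alice", AccessLevel.OPERATOR, "gesture", "fetch toolbox")))
    print(fw.confirm("alice", "speech"))
    print(fw.submit(Command("bob", AccessLevel.OBSERVER, "speech", "open door")))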

Bibliographic Details
Main Author: Dinh, Quang Huy
Other Authors: Seet Gim Lee, Gerald
Format: Theses and Dissertations
Language: English
Published: 2018
Subjects: DRNTU::Engineering::Mechanical engineering::Robots
Online Access: http://hdl.handle.net/10356/75865
DOI: 10.32657/10356/75865
Degree: Doctor of Philosophy (IGS)
School: Interdisciplinary Graduate School (IGS)
Extent: 159 p.
Citation: Dinh, Q. H. (2018). A new interaction framework for human and robot. Doctoral thesis, Nanyang Technological University, Singapore.
Institution: Nanyang Technological University