Mapping of human gestures to a robotic avatar
Telepresence is an emerging technology, and much effort has been devoted to designing mobile robot avatars that can imitate human actions. In this project, simulations were performed in SolidWorks on a robotic head module and a right arm...
Saved in:
Main Author: | Chin, Chong Kheng |
---|---|
Other Authors: | Seet Gim Lee, Gerald |
Format: | Final Year Project |
Language: | English |
Published: | 2014 |
Subjects: | DRNTU::Engineering |
Online Access: | http://hdl.handle.net/10356/60365 |
Institution: | Nanyang Technological University |
Language: | English |
id |
sg-ntu-dr.10356-60365 |
---|---|
record_format |
dspace |
spelling |
sg-ntu-dr.10356-603652023-03-04T19:17:18Z Mapping of human gestures to a robotic avatar Chin, Chong Kheng Seet Gim Lee, Gerald School of Mechanical and Aerospace Engineering Robotics Research Centre DRNTU::Engineering Telepresence is an emerging technology, and much effort has been devoted to designing mobile robot avatars that can imitate human actions. In this project, simulations were performed in SolidWorks on a robotic head module and a right arm module to collect data on the dynamic head and arm gestures they generate. The movements of each dynamic gesture were also simulated in terms of the rotation angle of each degree of freedom. For the head module, the simulated gestures are composed of three axes of rotation: pan, tilt, and yaw. For the arm module, the simulated gestures are composed of six degrees of freedom: shoulder flexion and extension, shoulder abduction and adduction, arm external and internal rotation, elbow flexion and extension, wrist pronation and supination, and wrist flexion and extension. The data obtained will later be used to program the real head and arm modules of the MAVEN robotic avatar to produce specific dynamic gestures that mimic human actions. Adding non-verbal communication (gesticulation) to the current MAVEN robotic avatar is intended to further enhance telepresence by improving observers' perception. Although the real head and arm modules have many degrees of freedom, the simulations show that six degrees of freedom in the arm mechanism and three in the head module are sufficient to generate certain human gestures. The simulated gestures are recognizable by eye and closely resemble the same gestures performed by a human.
Bachelor of Engineering (Mechanical Engineering) 2014-05-27T02:18:35Z 2014-05-27T02:18:35Z 2014 2014 Final Year Project (FYP) http://hdl.handle.net/10356/60365 en Nanyang Technological University 51 p. application/pdf |
institution |
Nanyang Technological University |
building |
NTU Library |
continent |
Asia |
country |
Singapore |
content_provider |
NTU Library |
collection |
DR-NTU |
language |
English |
topic |
DRNTU::Engineering |
spellingShingle |
DRNTU::Engineering Chin, Chong Kheng Mapping of human gestures to a robotic avatar |
description |
Telepresence is an emerging technology, and much effort has been devoted to designing mobile robot avatars that can imitate human actions. In this project, simulations were performed in SolidWorks on a robotic head module and a right arm module to collect data on the dynamic head and arm gestures they generate. The movements of each dynamic gesture were also simulated in terms of the rotation angle of each degree of freedom. For the head module, the simulated gestures are composed of three axes of rotation: pan, tilt, and yaw. For the arm module, the simulated gestures are composed of six degrees of freedom: shoulder flexion and extension, shoulder abduction and adduction, arm external and internal rotation, elbow flexion and extension, wrist pronation and supination, and wrist flexion and extension.
The data obtained will later be used to program the real head and arm modules of the MAVEN robotic avatar to produce specific dynamic gestures that mimic human actions. Adding non-verbal communication (gesticulation) to the current MAVEN robotic avatar is intended to further enhance telepresence by improving observers' perception.
Although the real head and arm modules have many degrees of freedom, the simulations show that six degrees of freedom in the arm mechanism and three in the head module are sufficient to generate certain human gestures. The simulated gestures are recognizable by eye and closely resemble the same gestures performed by a human. |
author2 |
Seet Gim Lee, Gerald |
author_facet |
Seet Gim Lee, Gerald Chin, Chong Kheng |
format |
Final Year Project |
author |
Chin, Chong Kheng |
author_sort |
Chin, Chong Kheng |
title |
Mapping of human gestures to a robotic avatar |
title_short |
Mapping of human gestures to a robotic avatar |
title_full |
Mapping of human gestures to a robotic avatar |
title_fullStr |
Mapping of human gestures to a robotic avatar |
title_full_unstemmed |
Mapping of human gestures to a robotic avatar |
title_sort |
mapping of human gestures to a robotic avatar |
publishDate |
2014 |
url |
http://hdl.handle.net/10356/60365 |
_version_ |
1759854446575091712 |