Mapping of human gestures to a robotic avatar
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2014
Subjects:
Online Access: http://hdl.handle.net/10356/60365
Institution: Nanyang Technological University
Summary: Telepresence has become one of today's emerging technologies. To further improve it, much effort has gone into the design of mobile robot avatars that can imitate human actions. In this project, a robotic head module and a right arm module were simulated in SolidWorks to collect data on the dynamic head and arm gestures they generate. The different movements of each dynamic gesture were also simulated in terms of the rotation angle of each degree of freedom. For the head module, the simulated gestures are made up of rotations about three axes: pan, tilt, and yaw. For the arm module, the simulated gestures are made up of six degrees of freedom: shoulder flexion and extension, shoulder abduction and adduction, arm external and internal rotation, elbow flexion and extension, wrist pronation and supination, and wrist flexion and extension.
The data obtained will later be used to program the real head and arm modules of the MAVEN robotic avatar so that they reproduce specific dynamic gestures that mimic human actions. Adding non-verbal communication (gesticulation) to the current MAVEN robotic avatar is meant to further enhance telepresence by enhancing the perception of observers.
Although the real head and arm modules have many degrees of freedom, the simulations show that six degrees of freedom for the arm mechanism and three for the head module are sufficient to generate certain human gestures. The simulated gestures are recognizable by eye and closely resemble the corresponding gestures performed by a human.
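As a rough illustration of the joint-space representation described in the summary, the sketch below encodes the 3-DOF head pose and 6-DOF arm pose as plain angle records and composes the three head rotations into a single rotation matrix. All names (HeadPose, ArmPose, head_rotation_matrix), the axis assignments, and the angle conventions are assumptions made for this example; they are not taken from the project.

```python
# Minimal sketch of the joint-angle representation described in the
# summary. Axis assignments and naming are illustrative assumptions,
# not the project's actual conventions.
from dataclasses import dataclass

import numpy as np


@dataclass
class HeadPose:
    """3-DOF head gesture: one angle per rotation axis (degrees)."""
    pan: float   # rotation about the vertical axis
    tilt: float  # rotation about the lateral (left-right) axis
    yaw: float   # third rotation axis named in the summary


@dataclass
class ArmPose:
    """6-DOF arm gesture; signed angles cover each paired motion
    (e.g. flexion positive, extension negative), in degrees."""
    shoulder_flexion: float    # flexion (+) / extension (-)
    shoulder_abduction: float  # abduction (+) / adduction (-)
    arm_rotation: float        # external (+) / internal (-) rotation
    elbow_flexion: float       # flexion (+) / extension (-)
    wrist_pronation: float     # pronation (+) / supination (-)
    wrist_flexion: float       # flexion (+) / extension (-)


def head_rotation_matrix(pose: HeadPose) -> np.ndarray:
    """Compose the three head rotations into one 3x3 rotation matrix.
    The Z-Y-X composition order here is an assumption for illustration."""
    a, b, c = np.radians([pose.pan, pose.tilt, pose.yaw])
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    ry = np.array([[ np.cos(b), 0.0, np.sin(b)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(b), 0.0, np.cos(b)]])
    rx = np.array([[1.0, 0.0,        0.0      ],
                   [0.0, np.cos(c), -np.sin(c)],
                   [0.0, np.sin(c),  np.cos(c)]])
    return rz @ ry @ rx


# Example: a simple "nod" gesture expressed as a tilt-only rotation.
nod = HeadPose(pan=0.0, tilt=20.0, yaw=0.0)
print(head_rotation_matrix(nod))
```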