Gesture-based interactive presentation platform
Main Authors:
Format: text
Language: English
Published: Animo Repository, 2012
Subjects:
Online Access: https://animorepository.dlsu.edu.ph/etd_bachelors/11465
Institution: De La Salle University
Summary: The presentation software available today allows users to present only in a limited manner, since presenters can use only the mouse, keyboard, touch screen, or a remote control to manipulate their presentations. As a result, the interactions a presenter can perform with the slides do not resemble a person's natural interaction with other people and with objects in the environment. To address this problem, this research implemented gesture recognition in a presentation platform using the Microsoft Kinect. The researchers gathered thousands of training samples from different people with the Kinect to build a model of the gestures integrated into the system. The manner in which people normally interact during a presentation can thus be emulated, since the system recognizes hand gestures directly, and users can interact with objects in the presentation through pre-defined hand gestures that correspond to valid commands. A gesture-based interactive presentation system is therefore not limited to actions such as "next slide" and "previous slide": object interaction can include more complex commands such as resizing and rotating the pictures and other visual aids that appear in the presentation. Two approaches were used as gesture features: the first computed the distances of the upper-body points from the head and neck, and the second computed the angles of the different upper-body parts. Tests of both approaches showed that the second yielded higher accuracy.
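The abstract names the two candidate feature sets but gives no implementation detail. The sketch below is a minimal illustration of both, assuming skeleton joints arrive as 3-D coordinates in the layout the Kinect skeleton tracker provides; the joint names, the sample coordinates, and the particular angle triples are hypothetical, not taken from the thesis.

```python
import math

# Hypothetical skeleton frame: joint name -> (x, y, z) in metres,
# roughly in the layout the Kinect SDK's skeleton tracker reports.
skeleton = {
    "head":           (0.02, 0.95, 2.10),
    "neck":           (0.02, 0.80, 2.12),
    "shoulder_left":  (-0.18, 0.75, 2.12),
    "shoulder_right": (0.22, 0.75, 2.11),
    "elbow_left":     (-0.30, 0.55, 2.05),
    "elbow_right":    (0.35, 0.55, 2.04),
    "hand_left":      (-0.32, 0.35, 1.95),
    "hand_right":     (0.40, 0.78, 1.80),
}

def distance(a, b):
    """Euclidean distance between two 3-D joint positions."""
    return math.dist(a, b)

def angle_at(joint, end1, end2):
    """Angle in degrees at `joint`, formed by the segments to end1 and end2."""
    v1 = [e - j for e, j in zip(end1, joint)]
    v2 = [e - j for e, j in zip(end2, joint)]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Approach 1: distances of the upper-body joints from the head and neck.
upper_body = ["shoulder_left", "shoulder_right", "elbow_left",
              "elbow_right", "hand_left", "hand_right"]
distance_features = [distance(skeleton[j], skeleton[ref])
                     for j in upper_body for ref in ("head", "neck")]

# Approach 2: angles of the upper-body parts, e.g. the elbow angle
# between the shoulder-elbow and elbow-hand segments, and the shoulder
# angle between the neck-shoulder and shoulder-elbow segments.
angle_features = [
    angle_at(skeleton["elbow_right"], skeleton["shoulder_right"],
             skeleton["hand_right"]),
    angle_at(skeleton["elbow_left"], skeleton["shoulder_left"],
             skeleton["hand_left"]),
    angle_at(skeleton["shoulder_right"], skeleton["neck"],
             skeleton["elbow_right"]),
    angle_at(skeleton["shoulder_left"], skeleton["neck"],
             skeleton["elbow_left"]),
]
```

Either feature vector would then be fed to the trained gesture model. One plausible reason the angle features tested better is that joint angles are invariant to the user's height and distance from the sensor, whereas raw joint distances vary with body size; this is an inference, not a finding stated in the abstract.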