Kinect-based automatic sign language recognition
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: 2018
Subjects:
Online Access: http://hdl.handle.net/10356/74051
Institution: Nanyang Technological University
Summary: Sign language recognition has been gaining popularity over the past few years; it is the natural language and communication tool of people with hearing and speech disabilities. Communicating with this community is difficult for a hearing person who has never learnt sign language, so much research has gone into building intermediary systems that improve communication between the two parties. This project implements static hand-sign recognition that operates interactively with users on depth data. A Convolutional Neural Network (CNN) is trained to classify 31 different alphabet and number hand signs. The depth sensor of the Microsoft Kinect captures the user's input hand sign, performed according to American Sign Language (ASL), and the hand region is cropped in real time from the depth stream using a fixed bounding box. Image processing then removes the background, after which the foreground (hand) image is mapped using two different formulas applied separately. Mapping method 1 uses the means and standard deviations of the user's input image and of the training images. Mapping method 2 uses the pixel values and the palm-to-fingertip distance, expressed in pixel values, for both the training and the user's input images. Finally, prediction is performed with the trained model. The focus of this paper is improving the recognition accuracy on the user's input data during the image mapping phase, and improving model performance by applying different data-separation methods.
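The cropping and background-removal step described in the summary could look roughly like the sketch below. It assumes the Kinect depth frame is already available as a NumPy array of millimetre readings; the bounding-box coordinates, crop size, and depth threshold are illustrative assumptions, not values taken from the report.

```python
import numpy as np

# Fixed bounding box in the depth image (assumed 128x128 region).
BOX = (slice(100, 228), slice(160, 288))
# Depth threshold in mm separating hand from background (assumed value).
MAX_HAND_DEPTH_MM = 900

def crop_hand(depth_frame: np.ndarray) -> np.ndarray:
    """Crop the fixed bounding box from the depth stream and remove the
    background by depth thresholding."""
    hand = depth_frame[BOX].astype(np.float32)
    # The Kinect reports 0 for invalid pixels; treat those, and anything
    # farther than the threshold, as background.
    mask = (hand > 0) & (hand <= MAX_HAND_DEPTH_MM)
    return np.where(mask, hand, 0.0)
```

In a live loop this function would run on every depth frame, and its output would be fed to the mapping stage before classification.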
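The summary states that mapping method 1 uses the means and standard deviations of the user's input image and the training images. One plausible reading is a mean/std re-mapping of the foreground pixels onto the training distribution; the sketch below follows that reading, and the function name and epsilon guard are assumptions for illustration.

```python
import numpy as np

def map_to_training_stats(hand: np.ndarray,
                          train_mean: float,
                          train_std: float) -> np.ndarray:
    """Shift and scale the foreground (non-zero) pixels so their mean and
    standard deviation match those of the training images."""
    fg = hand[hand > 0]
    if fg.size == 0:
        return hand  # nothing to map in an empty crop
    user_mean, user_std = fg.mean(), fg.std()
    # Standardise against the user's statistics, then rescale to the
    # training statistics; the small epsilon avoids division by zero.
    mapped = (hand - user_mean) / (user_std + 1e-6) * train_std + train_mean
    # Keep background pixels at zero.
    return np.where(hand > 0, mapped, 0.0)
```

Mapping method 2, which uses pixel values together with the palm-to-fingertip distance in pixel units, would follow the same pattern with a geometry-derived scale factor in place of the standard deviation.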
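For the classification stage, a minimal CNN along the lines described could be built as follows. Only the number of classes (31) comes from the summary; the layer sizes, input shape, and the choice of Keras are illustrative assumptions, not the report's actual architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 31              # alphabet and number hand signs, per the summary
INPUT_SHAPE = (128, 128, 1)   # assumed crop size, single depth channel

def build_model() -> tf.keras.Model:
    """A small CNN classifier over the mapped depth crops."""
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

The data-separation experiments mentioned in the summary would then amount to training this model under different train/validation splits and comparing the resulting accuracies.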