Kinect-based automatic sign language recognition

Sign language recognition has been gaining popularity over the past few years; sign language is the natural language and communication tool of the deaf and hard-of-hearing community. Communicating with a hearing person is difficult for members of this community if that person has never learnt sign langu...


Bibliographic Details
Main Author: Khor, Gladys Chiao Yuin
Other Authors: Cham Tat Jen
Format: Final Year Project
Language: English
Published: 2018
Subjects: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Online Access: http://hdl.handle.net/10356/74051
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-74051
record_format dspace
spelling sg-ntu-dr.10356-74051 2023-03-03T20:36:14Z Kinect-based automatic sign language recognition Khor, Gladys Chiao Yuin Cham Tat Jen School of Computer Science and Engineering DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision Sign language recognition has been gaining popularity over the past few years; sign language is the natural language and communication tool of the deaf and hard-of-hearing community. Communicating with a hearing person is difficult for members of this community if that person has never learnt sign language, so much research has gone into intermediary systems that improve communication between the two parties. This project implements static hand sign recognition that operates interactively with users on depth data. A Convolutional Neural Network (CNN) is trained to classify 31 different alphabet and number hand signs. The depth sensor of a Microsoft Kinect captures the user's hand sign, performed according to American Sign Language (ASL), and the hand region is cropped in real time from the depth stream using a fixed bounding box. Image processing then removes the background and maps the foreground (hand) image using two different mapping formulas. Mapping method 1 uses the means and standard deviations of the user's input image and of the training images. Mapping method 2 uses the pixel values and the palm-to-fingertip distance, measured in pixel values, for both the training and the user's input images. Finally, prediction is performed with the trained model. The focus of this paper is improving hand sign recognition accuracy on the user's input data during the image-mapping phase and improving model performance by applying different data-separation methods. Bachelor of Engineering (Computer Science) 2018-04-24T04:04:18Z 2018-04-24T04:04:18Z 2018 Final Year Project (FYP) http://hdl.handle.net/10356/74051 en Nanyang Technological University 30 p. application/pdf
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
content_provider NTU Library
collection DR-NTU
language English
topic DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
spellingShingle DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Khor, Gladys Chiao Yuin
Kinect-based automatic sign language recognition
description Sign language recognition has been gaining popularity over the past few years; sign language is the natural language and communication tool of the deaf and hard-of-hearing community. Communicating with a hearing person is difficult for members of this community if that person has never learnt sign language, so much research has gone into intermediary systems that improve communication between the two parties. This project implements static hand sign recognition that operates interactively with users on depth data. A Convolutional Neural Network (CNN) is trained to classify 31 different alphabet and number hand signs. The depth sensor of a Microsoft Kinect captures the user's hand sign, performed according to American Sign Language (ASL), and the hand region is cropped in real time from the depth stream using a fixed bounding box. Image processing then removes the background and maps the foreground (hand) image using two different mapping formulas. Mapping method 1 uses the means and standard deviations of the user's input image and of the training images. Mapping method 2 uses the pixel values and the palm-to-fingertip distance, measured in pixel values, for both the training and the user's input images. Finally, prediction is performed with the trained model. The focus of this paper is improving hand sign recognition accuracy on the user's input data during the image-mapping phase and improving model performance by applying different data-separation methods.
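
The description above outlines a concrete preprocessing pipeline: fixed-box cropping from the depth stream, background removal, then one of two mapping formulas. As a rough illustration of what those steps could look like, here is a minimal NumPy sketch; the crop size, depth threshold, function names, and the exact form of both mapping formulas are assumptions for illustration only, since the report's actual formulas are not reproduced in this record.

    import numpy as np

    CROP = 96  # assumed fixed bounding-box size; not stated in this record

    def crop_hand(depth_frame, cx, cy, size=CROP):
        """Crop a fixed bounding box around the tracked hand centre (cx, cy)."""
        half = size // 2
        return depth_frame[cy - half:cy + half, cx - half:cx + half].astype(np.float32)

    def remove_background(hand, max_hand_depth_mm=150.0):
        """Keep only pixels within an assumed depth span behind the nearest point."""
        if not np.any(hand > 0):
            return hand
        nearest = hand[hand > 0].min()
        return np.where((hand > 0) & (hand < nearest + max_hand_depth_mm), hand, 0.0)

    def map_method_1(fg, train_mean, train_std):
        """Method 1 (per the description): align the input's mean and standard
        deviation with statistics taken from the training images."""
        mask = fg > 0
        mu, sigma = fg[mask].mean(), fg[mask].std()
        out = fg.copy()
        out[mask] = (fg[mask] - mu) / (sigma + 1e-6) * train_std + train_mean
        return out

    def map_method_2(fg, palm_px, fingertip_px):
        """Method 2 (per the description): rescale pixel values by the
        palm-to-fingertip distance, expressed in pixel values."""
        span = abs(float(fingertip_px) - float(palm_px)) + 1e-6
        mask = fg > 0
        out = fg.copy()
        out[mask] = (fg[mask] - palm_px) / span
        return out

Either mapping would be applied identically to training images and live Kinect input, so that the CNN sees depth crops on a comparable scale regardless of how far the signer stands from the sensor.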
author2 Cham Tat Jen
author_facet Cham Tat Jen
Khor, Gladys Chiao Yuin
format Final Year Project
author Khor, Gladys Chiao Yuin
author_sort Khor, Gladys Chiao Yuin
title Kinect-based automatic sign language recognition
title_short Kinect-based automatic sign language recognition
title_full Kinect-based automatic sign language recognition
title_fullStr Kinect-based automatic sign language recognition
title_full_unstemmed Kinect-based automatic sign language recognition
title_sort kinect-based automatic sign language recognition
publishDate 2018
url http://hdl.handle.net/10356/74051
_version_ 1759857955832856576
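
For the classification step, the record states only that a CNN classifies 31 alphabet and number hand signs from the mapped depth crops. A small Keras sketch of such a classifier follows; the layer configuration, input resolution, and training settings are illustrative assumptions, not the architecture used in the report.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 31           # alphabet + number signs, per the abstract
    INPUT_SHAPE = (96, 96, 1)  # assumed resolution of the mapped depth crop

    def build_model():
        """Build a small CNN over single-channel depth crops."""
        model = models.Sequential([
            layers.Input(shape=INPUT_SHAPE),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_model()
    model.summary()

The "different data separation methods" the abstract mentions would then correspond to how the labelled depth crops are split into training and validation sets before calling model.fit.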