Multi-task learning for sign language recognition using IR-UWB radar

Bibliographic Details
Main Author: Peh, Denzyl David
Other Authors: Luo Jun
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Online Access:https://hdl.handle.net/10356/181234
Institution: Nanyang Technological University
Description
Summary: In this study, we investigate the use of impulse radio ultra-wideband (IR-UWB) radar technology combined with multi-task learning for Sign Language Recognition (SLR). Traditional computer vision-based approaches to SLR face limitations in certain environments, motivating the exploration of radar-based alternatives. To assess the viability of this approach, we constructed a dataset of 2808 samples, each annotated with four distinct label categories: Word, Base Handsign, Position, and Movement. With data augmentation, feature engineering, and hyperparameter tuning, our model achieved accuracy scores of 94.66%, 95.02%, 99.29%, and 98.93% on these respective tasks. An ablation study revealed that while multi-task learning improved model performance and prediction confidence, it also led to longer convergence times. These results demonstrate the potential of radar-based SLR systems and highlight the benefits of integrating multi-task learning into the training process. This approach offers a promising alternative to vision-based methods, paving the way for more robust, versatile, and accessible sign language recognition technologies.
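
The record does not include the thesis code, so the sketch below is only a generic illustration of the multi-task setup the abstract describes: a shared encoder over IR-UWB radar frames feeding four classification heads, one per label category (Word, Base Handsign, Position, Movement), trained by summing the per-task cross-entropy losses. The layer sizes, input shape, and class counts are illustrative assumptions, not details taken from the thesis.

```python
# Minimal multi-task learning sketch (PyTorch); architecture details are assumed.
import torch
import torch.nn as nn

class MultiTaskSLRNet(nn.Module):
    def __init__(self, n_words=26, n_handsigns=10, n_positions=5, n_movements=5):
        super().__init__()
        # Shared encoder over single-channel radar range-time frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
        )
        # One classification head per task.
        self.word_head = nn.Linear(128, n_words)
        self.handsign_head = nn.Linear(128, n_handsigns)
        self.position_head = nn.Linear(128, n_positions)
        self.movement_head = nn.Linear(128, n_movements)

    def forward(self, x):
        z = self.encoder(x)  # shared features used by all four heads
        return (self.word_head(z), self.handsign_head(z),
                self.position_head(z), self.movement_head(z))

# Joint training step: sum the cross-entropy losses of the four tasks.
model = MultiTaskSLRNet()
x = torch.randn(8, 1, 64, 64)  # batch of 8 radar frames (assumed shape)
logits = model(x)
heads = (model.word_head, model.handsign_head,
         model.position_head, model.movement_head)
targets = [torch.randint(0, h.out_features, (8,)) for h in heads]  # dummy labels
loss = sum(nn.functional.cross_entropy(l, t) for l, t in zip(logits, targets))
loss.backward()
```

Sharing the encoder is what couples the tasks: gradients from all four heads update the same feature extractor, which is the usual mechanism behind the accuracy and confidence gains the ablation study attributes to multi-task learning.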