Multi-task learning for sign language recognition using IR-UWB radar

In this study, we investigate the use of impulse radio ultra-wideband (IR-UWB) radar technology combined with multi-task learning for Sign Language Recognition (SLR). Traditional computer-vision-based approaches to SLR face limitations in certain environments, motivating the exploration of radar-based alternatives. To assess the viability of this approach, we constructed a dataset of 2808 samples, each annotated with four distinct label categories: Word, Base Handsign, Position, and Movement. With data augmentation, feature engineering, and hyperparameter tuning, our model achieved accuracy scores of 94.66%, 95.02%, 99.29%, and 98.93% on these respective tasks. An ablation study revealed that while multi-task learning increased model performance and prediction confidence, it also led to longer convergence times. These results demonstrate the potential of radar-based SLR systems and highlight the benefits of integrating multi-task learning into the training process. This approach offers a promising alternative to vision-based methods, paving the way for more robust, versatile, and accessible sign language recognition technologies.
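As a rough illustration of the multi-task setup the abstract describes — one shared feature extractor feeding four classification heads (Word, Base Handsign, Position, Movement) — the forward pass can be sketched in NumPy. This is not the authors' actual architecture; all layer sizes and class counts below are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration only; the record does not specify them.
FEATURE_DIM = 128            # shared radar-feature embedding size
NUM_CLASSES = {              # one classification head per label category
    "word": 20,
    "base_handsign": 15,
    "position": 5,
    "movement": 6,
}

# Shared trunk: a single linear projection standing in for the real extractor.
W_shared = rng.normal(size=(256, FEATURE_DIM)) * 0.01

# One linear head per task, all reading the same shared representation.
heads = {task: rng.normal(size=(FEATURE_DIM, n)) * 0.01
         for task, n in NUM_CLASSES.items()}

def forward(x):
    """Return per-task class probabilities for a batch of radar features."""
    h = np.maximum(x @ W_shared, 0.0)                 # shared ReLU features
    out = {}
    for task, W in heads.items():
        logits = h @ W
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        out[task] = e / e.sum(axis=1, keepdims=True)  # softmax per task
    return out

batch = rng.normal(size=(4, 256))                     # 4 samples of radar features
probs = forward(batch)
for task, p in probs.items():
    print(task, p.shape)
```

In training, each head would contribute its own cross-entropy loss, and the shared trunk would receive gradients from all four tasks — the mechanism by which multi-task learning can improve performance on the individual tasks.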

Bibliographic Details
Main Author: Peh, Denzyl David
Other Authors: Luo Jun
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access:https://hdl.handle.net/10356/181234
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-181234
record_format dspace
spelling sg-ntu-dr.10356-1812342024-11-19T01:32:21Z Multi-task learning for sign language recognition using IR-UWB radar Peh, Denzyl David Luo Jun College of Computing and Data Science junluo@ntu.edu.sg Computer and Information Science Machine learning Multi-task learning IR-UWB radar X4M05 In this study, we investigate the use of impulse radio ultra-wideband (IR-UWB) radar technology combined with multi-task learning for Sign Language Recognition (SLR). Traditional computer-vision-based approaches to SLR face limitations in certain environments, motivating the exploration of radar-based alternatives. To assess the viability of this approach, we constructed a dataset of 2808 samples, each annotated with four distinct label categories: Word, Base Handsign, Position, and Movement. With data augmentation, feature engineering, and hyperparameter tuning, our model achieved accuracy scores of 94.66%, 95.02%, 99.29%, and 98.93% on these respective tasks. An ablation study revealed that while multi-task learning increased model performance and prediction confidence, it also led to longer convergence times. These results demonstrate the potential of radar-based SLR systems and highlight the benefits of integrating multi-task learning into the training process. This approach offers a promising alternative to vision-based methods, paving the way for more robust, versatile, and accessible sign language recognition technologies. Bachelor's degree 2024-11-19T01:32:21Z 2024-11-19T01:32:21Z 2024 Final Year Project (FYP) Peh, D. D. (2024). Multi-task learning for sign language recognition using IR-UWB radar. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/181234 https://hdl.handle.net/10356/181234 en SCSE23-0837 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Computer and Information Science
Machine learning
Multi-task learning
IR-UWB radar
X4M05
spellingShingle Computer and Information Science
Machine learning
Multi-task learning
IR-UWB radar
X4M05
Peh, Denzyl David
Multi-task learning for sign language recognition using IR-UWB radar
description In this study, we investigate the use of impulse radio ultra-wideband (IR-UWB) radar technology combined with multi-task learning for Sign Language Recognition (SLR). Traditional computer-vision-based approaches to SLR face limitations in certain environments, motivating the exploration of radar-based alternatives. To assess the viability of this approach, we constructed a dataset of 2808 samples, each annotated with four distinct label categories: Word, Base Handsign, Position, and Movement. With data augmentation, feature engineering, and hyperparameter tuning, our model achieved accuracy scores of 94.66%, 95.02%, 99.29%, and 98.93% on these respective tasks. An ablation study revealed that while multi-task learning increased model performance and prediction confidence, it also led to longer convergence times. These results demonstrate the potential of radar-based SLR systems and highlight the benefits of integrating multi-task learning into the training process. This approach offers a promising alternative to vision-based methods, paving the way for more robust, versatile, and accessible sign language recognition technologies.
author2 Luo Jun
author_facet Luo Jun
Peh, Denzyl David
format Final Year Project
author Peh, Denzyl David
author_sort Peh, Denzyl David
title Multi-task learning for sign language recognition using IR-UWB radar
title_short Multi-task learning for sign language recognition using IR-UWB radar
title_full Multi-task learning for sign language recognition using IR-UWB radar
title_fullStr Multi-task learning for sign language recognition using IR-UWB radar
title_full_unstemmed Multi-task learning for sign language recognition using IR-UWB radar
title_sort multi-task learning for sign language recognition using ir-uwb radar
publisher Nanyang Technological University
publishDate 2024
url https://hdl.handle.net/10356/181234
_version_ 1816858997039300608