Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification

The evolution of Intelligent Vehicles (IV) has enabled various degrees of autonomous driving, aiming to enhance road safety through Advanced Driver Assistance Systems (ADAS). Apart from road obstacle detection, research on IV extends to driver-state monitoring, specifically driver distraction, to promote safe driving and minimise the likelihood of road accidents due to human error. Past studies focused on attaining high accuracy in driver activity recognition through deeper convolutional neural networks (CNN) with more parameters, which require more computational power, making them less viable for real-time classification. This report presents efficient CNN model architectures, MobileNetV3 and MobileVGG, designed for edge and mobile-like systems, predominantly for driver activity recognition. Employing a transfer learning approach, the models utilised parameters pretrained on a large dataset, enhancing generalisation and model performance. The findings indicate that MobileNetV3 Large is the most effective for driver activity recognition. A dual-stream model, using MobileNetV3 Large as its backbone, has been developed to address occlusion and variations in camera angles by processing images from the driver’s front and side views. This model achieved 81% classification accuracy on real-world data with 10.9M parameters, about 50% fewer than state-of-the-art models, and delivered 27 FPS in real time.
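The abstract describes a dual-stream model that fuses features from the driver's front and side camera views. As a minimal sketch of that idea, the toy example below uses random linear projections in place of the two MobileNetV3 Large backbones and fuses the two feature vectors by simple concatenation before a classifier head; the feature width (960), the 10 activity classes, and the concatenation fusion are illustrative assumptions, not details taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 960      # assumed: MobileNetV3 Large's final feature width
NUM_CLASSES = 10    # assumed: 10 driver-activity classes
D = 64 * 64         # flattened toy image size

def backbone(image_flat, weights):
    """Toy feature extractor standing in for a pretrained CNN backbone."""
    return np.maximum(weights @ image_flat, 0.0)  # ReLU-like nonlinearity

# Independent weights per stream (front camera, side camera).
w_front = rng.standard_normal((FEAT_DIM, D)) * 0.01
w_side = rng.standard_normal((FEAT_DIM, D)) * 0.01

# Classifier head over the concatenated (fused) features.
w_head = rng.standard_normal((NUM_CLASSES, 2 * FEAT_DIM)) * 0.01

def dual_stream_predict(front_img, side_img):
    f_front = backbone(front_img.ravel(), w_front)
    f_side = backbone(side_img.ravel(), w_side)
    fused = np.concatenate([f_front, f_side])  # late fusion by concatenation
    logits = w_head @ fused
    return int(np.argmax(logits))

front = rng.standard_normal((64, 64))
side = rng.standard_normal((64, 64))
pred = dual_stream_predict(front, side)  # class index in [0, NUM_CLASSES)
```

Processing both views means an occluded gesture in one camera (e.g. a hand hidden from the side view) can still contribute features from the other, which is the motivation the abstract gives for the dual-stream design.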


Bibliographic Details
Main Author: Low, Daniel Teck Fatt
Other Authors: Lyu Chen
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/177419
Institution: Nanyang Technological University
Language: English
id sg-ntu-dr.10356-177419
record_format dspace
spelling sg-ntu-dr.10356-1774192024-06-01T16:52:11Z Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification Low, Daniel Teck Fatt Lyu Chen School of Mechanical and Aerospace Engineering lyuchen@ntu.edu.sg Engineering Other Data science The evolution of Intelligent Vehicles (IV) has enabled various degrees of autonomous driving, aiming to enhance road safety through Advanced Driver Assistance Systems (ADAS). Apart from road obstacle detection, research on IV extends to driver-state monitoring, specifically driver distraction, to promote safe driving and minimise the likelihood of road accidents due to human error. Past studies focused on attaining high accuracy in driver activity recognition through deeper convolutional neural networks (CNN) with more parameters, which require more computational power, making them less viable for real-time classification. This report presents efficient CNN model architectures, MobileNetV3 and MobileVGG, designed for edge and mobile-like systems, predominantly for driver activity recognition. Employing a transfer learning approach, the models utilised parameters pretrained on a large dataset, enhancing generalisation and model performance. The findings indicate that MobileNetV3 Large is the most effective for driver activity recognition. A dual-stream model, using MobileNetV3 Large as its backbone, has been developed to address occlusion and variations in camera angles by processing images from the driver’s front and side views. This model achieved 81% classification accuracy on real-world data with 10.9M parameters, about 50% fewer than state-of-the-art models, and delivered 27 FPS in real time. Bachelor's degree 2024-05-28T08:19:18Z 2024-05-28T08:19:18Z 2024 Final Year Project (FYP) Low, D. T. F. (2024). Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification. Final Year Project (FYP), Nanyang Technological University, Singapore.
https://hdl.handle.net/10356/177419 en C044 application/pdf Nanyang Technological University
institution Nanyang Technological University
building NTU Library
continent Asia
country Singapore
Singapore
content_provider NTU Library
collection DR-NTU
language English
topic Engineering
Other
Data science
spellingShingle Engineering
Other
Data science
Low, Daniel Teck Fatt
Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification
description The evolution of Intelligent Vehicles (IV) has enabled various degrees of autonomous driving, aiming to enhance road safety through Advanced Driver Assistance Systems (ADAS). Apart from road obstacle detection, research on IV extends to driver-state monitoring, specifically driver distraction, to promote safe driving and minimise the likelihood of road accidents due to human error. Past studies focused on attaining high accuracy in driver activity recognition through deeper convolutional neural networks (CNN) with more parameters, which require more computational power, making them less viable for real-time classification. This report presents efficient CNN model architectures, MobileNetV3 and MobileVGG, designed for edge and mobile-like systems, predominantly for driver activity recognition. Employing a transfer learning approach, the models utilised parameters pretrained on a large dataset, enhancing generalisation and model performance. The findings indicate that MobileNetV3 Large is the most effective for driver activity recognition. A dual-stream model, using MobileNetV3 Large as its backbone, has been developed to address occlusion and variations in camera angles by processing images from the driver’s front and side views. This model achieved 81% classification accuracy on real-world data with 10.9M parameters, about 50% fewer than state-of-the-art models, and delivered 27 FPS in real time.
author2 Lyu Chen
author_facet Lyu Chen
Low, Daniel Teck Fatt
format Final Year Project
author Low, Daniel Teck Fatt
author_sort Low, Daniel Teck Fatt
title Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification
title_short Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification
title_full Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification
title_fullStr Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification
title_full_unstemmed Driver state monitoring for intelligent vehicles - part I: in-cabin activity identification
title_sort driver state monitoring for intelligent vehicles - part i: in-cabin activity identification
publisher Nanyang Technological University
publishDate 2024
url https://hdl.handle.net/10356/177419
_version_ 1806059903606849536