Driver state monitoring of intelligent vehicles part I: in-cabin activity identification
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/157519
Institution: Nanyang Technological University
Summary: With growing interest in intelligent vehicles (IVs) worldwide, IVs are set to replace conventional vehicles soon. Although IVs will bring convenience to drivers, they may also bring about the problem of distracted driving. To combat distracted driving, driver state monitoring has been extensively researched. Past research has focused narrowly on model accuracy, using sensors to capture features such as brain waves and heart signals, among others. However, the proposed systems typically disregard computational and equipment costs, which hinders adoption.
This project therefore aims to propose a system that balances computational cost and accuracy so that it is commercially viable and can be easily adopted to reduce cases of distracted driving. The project experiments with different types of neural networks, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), on two datasets: an image dataset and a video dataset. Four overarching techniques were used: a 2D-CNN end-to-end model and a 2D-CNN with transfer learning were applied to the image dataset, while a naïve 2D-CNN model and an RNN model were applied to the video dataset.
The 2D-CNN end-to-end model performed best on the image classification task with an accuracy of 0.9946, while the 3Bi-LSTM-BN-DP-H model performed best on the video dataset with an accuracy of 0.6595.
Real-time data from 10 subjects were collected from two different types of vehicles. These data were used to verify only the video classification models, such as the 3Bi-LSTM-BN-DP-H and 1BiGRU-BN-DP-H models, as the 2D-CNN end-to-end models produce flickering results.