Driver state monitoring for intelligent vehicles - attention localization
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/159157
Institution: Nanyang Technological University
Summary: In recent years, there have been substantial advances in the implementation of autonomous driving. Many driving assistance applications gradually reduce the driver's required driving tasks by enhancing driver-vehicle interaction, achieving a shared control scheme while improving driving comfort and safety. To enable the vehicle to better understand the driver's behaviour and state, the use of computer vision techniques to detect the driver's visual attention has become an active research topic.
Driver visual attention detection, an essential tool in assisted driving technology, often requires dedicated and expensive equipment. With the increasing accuracy of feature learning and classification using deep learning techniques, it has become possible to estimate driver visual attention from a camera alone. This final year project addresses these issues by using a low-cost RGB camera for driver visual attention estimation; the main work is as follows:
In addition to open-source road driving datasets, a large amount of driver facial data is collected on a laboratory driving simulator for analysis. Building on face detection and pupil localization algorithms, the project improves on previous machine learning techniques to develop a data-driven CNN architecture that predicts the driver's attention coordinates on the road scene view. Results show that the model locates the driver's attention focus point well, with an error within 2.0 to 3.0 cm.
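The record itself gives no implementation details, so the following is only an illustrative sketch of the kind of pipeline the summary describes: detect the driver's face and eyes in an RGB frame, then regress a 2D attention coordinate from an eye-region crop with a small CNN. The use of OpenCV Haar cascades, the PyTorch layer sizes, and the file name `driver_frame.jpg` are assumptions for illustration, not the project's actual method.

```python
# Hypothetical sketch: face/eye detection followed by CNN regression of a
# 2D attention coordinate on the road scene view. All architecture choices,
# input sizes, and file names are illustrative assumptions.
import cv2
import torch
import torch.nn as nn

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def crop_eye_region(frame_bgr):
    """Return a 64x64 grayscale eye crop from the largest detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face_roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face_roi)
    if len(eyes) == 0:
        return None
    ex, ey, ew, eh = eyes[0]
    return cv2.resize(face_roi[ey:ey + eh, ex:ex + ew], (64, 64))

class GazeRegressor(nn.Module):
    """Small CNN mapping a 1x64x64 eye crop to an (x, y) point on the scene view."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, x):
        return self.head(self.features(x))

# Usage on one frame (weights would come from training on the simulator data):
model = GazeRegressor().eval()
frame = cv2.imread("driver_frame.jpg")  # placeholder input image
eye = crop_eye_region(frame) if frame is not None else None
if eye is not None:
    inp = torch.from_numpy(eye).float().div(255.0).unsqueeze(0).unsqueeze(0)
    with torch.no_grad():
        x_gaze, y_gaze = model(inp)[0].tolist()  # predicted attention coordinates
```

In such a setup the CNN would be trained with a regression loss (e.g. mean squared error) against ground-truth attention points recorded on the simulator screen, which is consistent with, but not confirmed by, the centimetre-level error reported in the summary.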