Driver state monitoring for intelligent vehicles - attention localization

Bibliographic Details
Main Author: Cai, Yuxin
Other Authors: Lyu Chen
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/159157
Institution: Nanyang Technological University
Description
Abstract: In recent years, there have been substantial advances in the implementation of autonomous driving. Many driving assistance applications gradually reduce the driver's required driving tasks by enhancing driver-vehicle interaction, achieving a shared control scheme while improving driving comfort and safety. To enable the vehicle to better understand the driver's behaviour and state, the use of computer vision techniques to detect the driver's visual attention has become an active research topic. Driver visual attention detection, an essential tool in assisted driving technology, has traditionally required dedicated and expensive equipment. With the increasing accuracy of feature learning and classification achieved by deep learning, it has become feasible to estimate driver visual attention from an ordinary camera. This final year project uses a low-cost RGB camera for driver visual attention estimation, and the main work is as follows. Building on open-source road driving datasets, a large amount of driver facial data is collected on a laboratory driving simulator for analysis. Building on face detection and pupil localization algorithms, the project improves on previous machine learning techniques to develop a data-driven CNN architecture that predicts the driver's attention coordinates on the road scene view. Results show that the model locates the driver's attention focus point well, with an error within 2.0-3.0 cm.
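To illustrate the kind of model the abstract describes, the sketch below shows a small CNN that regresses the driver's attention point (x, y) on the road scene view from a cropped face/eye image. The layer sizes, input resolution, loss, and class name `AttentionPointCNN` are illustrative assumptions for a minimal PyTorch example; they are not taken from the project itself.

```python
# Minimal sketch (assumed architecture, not the project's actual model):
# a CNN that maps a 64x64 RGB face/eye crop to a 2-D attention coordinate.
import torch
import torch.nn as nn

class AttentionPointCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor for the face/eye crop.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16 -> 8
        )
        # Regression head producing the (x, y) attention point.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# One training step with an L2 loss between predicted and ground-truth
# attention coordinates (e.g. in cm on the scene view), using dummy data.
model = AttentionPointCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

crops = torch.randn(8, 3, 64, 64)    # batch of face/eye crops (placeholder)
targets = torch.randn(8, 2)          # ground-truth (x, y) attention points

optimizer.zero_grad()
loss = criterion(model(crops), targets)
loss.backward()
optimizer.step()
```

In a pipeline like the one the abstract outlines, the crops would come from a face detector and pupil localizer applied to the RGB camera stream, and the targets from the simulator's recorded gaze or attention annotations; those components are outside the scope of this sketch.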