Modelling temporal contextual information in eye movement data with application to gaze gesture recognition

Bibliographic Details
Main Author: Du, Weiwei
Other Authors: Lin, Zhiping
Format: Final Year Project
Language: English
Published: 2015
Subjects:
Online Access:http://hdl.handle.net/10356/64335
Institution: Nanyang Technological University
Description
Summary: In the past 20 years, technology for in-car human-machine interaction (HMI) has expanded considerably. However, driver distraction has become a growing safety concern. Researchers have built systems that detect the driver's state by tracking eye movements, with the aim of preventing distraction. Traditionally, eye data such as gaze position, fixations, or saccades are used as features for monitoring the driver's state. A more robust approach is to use temporal contextual information, which is extracted from the scan path and preserves more of the eye movement information. However, there is a lack of systematic research into different ways of modelling temporal contextual information in eye movement data. The author therefore investigates three methods of modelling temporal contextual information, and uses the application of eye gaze gesture recognition to compare the methods and algorithms. Furthermore, the author implemented gaze gesture recognition as a pilot study to examine whether it can be applied to driving. As a result, this work provides insights into the different modelling methods, and the application itself can serve as a prototype for further driving-related applications.