Modelling temporal contextual information in eye movement data with application to gaze gesture recognition
Date of Issue: 2015
School of Electrical and Electronic Engineering
Over the past 20 years, technology for in-car human-machine interaction (HMI) has expanded considerably; at the same time, driver distraction has become a growing safety concern. Researchers have therefore built systems that track a driver's eye movements to detect the driver's state and prevent distraction. Traditionally, eye data such as gaze position, fixations, or saccades are used as features for monitoring the driver's state. A more robust approach is to use temporal contextual information, which is extracted from the scan path and preserves more of the eye movement information. However, there has been little systematic research into the different ways of modelling temporal contextual information in eye movement data. The author therefore investigates three methods of modelling temporal contextual information and, to gain a better understanding, uses the application of eye gaze gesture recognition to compare the methods and algorithms. Furthermore, the author implemented a gaze gesture recognition application as a pilot study to examine whether it could be applied while driving. As a result, this work provides insights into the different methods, and the application itself can serve as a prototype for further driving-related applications.
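To make the idea of temporal contextual information concrete, the sketch below shows one common way a gaze scan path can be turned into a temporal sequence usable for gesture recognition: quantizing successive gaze displacements into directional strokes. This is a minimal illustration, not the method studied in the thesis; the function name, the four-direction alphabet, and the `min_dist` jitter threshold are all assumptions chosen for the example.

```python
import math

def scanpath_to_strokes(points, min_dist=30.0):
    """Quantize a gaze scan path into a sequence of 4-direction strokes.

    points: list of (x, y) gaze samples in screen pixels.
    min_dist: illustrative displacement threshold (pixels) below which
    movement is treated as fixation jitter and ignored.
    """
    strokes = []
    if not points:
        return strokes
    ax, ay = points[0]  # anchor point of the current stroke
    for x, y in points[1:]:
        dx, dy = x - ax, y - ay
        if math.hypot(dx, dy) < min_dist:
            continue  # small jitter: keep the anchor where it is
        # pick the dominant axis of movement as the stroke direction
        if abs(dx) >= abs(dy):
            d = 'R' if dx > 0 else 'L'
        else:
            d = 'D' if dy > 0 else 'U'  # screen y grows downward
        if not strokes or strokes[-1] != d:
            strokes.append(d)  # merge repeated directions into one stroke
        ax, ay = x, y
    return strokes

# Example: a gaze path that sweeps right, then down
path = [(0, 0), (40, 2), (80, 1), (82, 40), (81, 85)]
print(scanpath_to_strokes(path))  # ['R', 'D']
```

A stroke sequence like `['R', 'D']` could then be matched against gesture templates (e.g. by string comparison or a sequence model), which is what makes the ordering, and hence the temporal context, of the scan path available to the recognizer.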
DRNTU::Engineering::Electrical and electronic engineering
Final Year Project (FYP)
Nanyang Technological University