Robust Eye-Based Dwell-Free Typing

Bibliographic Details
Main Authors: Liu, Yi, Lee, Bu-Sung, McKeown, Martin J.
Other Authors: School of Computer Science and Engineering
Format: Article
Language:English
Published: 2016
Subjects:
Online Access:https://hdl.handle.net/10356/83395
http://hdl.handle.net/10220/41434
Institution: Nanyang Technological University
Description
Summary:For a subset of physically challenged people, assistive technologies, such as alternative forms of text entry, can be of tremendous benefit. However, the speed of text entry with current methods limits their wider adoption. Eye-gaze technologies have potential for text entry, but still tend to be relatively slow. Recently, dwell-free eye-typing systems have been proposed, but they can be vulnerable to common text-entry problems, such as selection of the wrong letters. In this article, a recognition approach is proposed for inferring the words the user intends to type. The method is robust to missing letters, and even to the incorrect selection of a neighboring letter on the keyboard. Simulation and experimental results suggest that the proposed approach has better accuracy and greater resilience to common text-entry errors than other currently proposed dwell-free systems.
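The kind of robustness the abstract describes can be illustrated with a small sketch. The following is not the authors' algorithm, but a hypothetical example of one common approach: score each dictionary word against the noisy gaze-selected letter sequence with a weighted edit distance in which dropping a letter and substituting a QWERTY neighbour are cheap, then pick the lowest-cost word. The cost weights (0.3, 0.5, 1.0) are illustrative assumptions.

```python
# Hypothetical sketch (not the article's method): infer the intended word
# from a noisy dwell-free letter sequence using a weighted edit distance
# that tolerates missing letters and neighbouring-key selections.

QWERTY_NEIGHBOURS = {
    'q': 'wa', 'w': 'qeas', 'e': 'wrsd', 'r': 'etdf', 't': 'ryfg',
    'y': 'tugh', 'u': 'yihj', 'i': 'uojk', 'o': 'ipkl', 'p': 'ol',
    'a': 'qwsz', 's': 'awedzx', 'd': 'serfxc', 'f': 'drtgcv',
    'g': 'ftyhvb', 'h': 'gyujbn', 'j': 'huiknm', 'k': 'jiolm',
    'l': 'kop', 'z': 'asx', 'x': 'zsdc', 'c': 'xdfv', 'v': 'cfgb',
    'b': 'vghn', 'n': 'bhjm', 'm': 'njk',
}

def edit_cost(typed: str, word: str) -> float:
    """Weighted Levenshtein distance: substituting a QWERTY neighbour
    costs 0.3, a letter of the intended word never being selected
    costs 0.5, and everything else costs 1.0 (assumed weights)."""
    m, n = len(typed), len(word)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * 1.0               # spurious selections
    for j in range(1, n + 1):
        dp[0][j] = j * 0.5               # letters of the word never selected
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if typed[i - 1] == word[j - 1]:
                sub = 0.0
            elif typed[i - 1] in QWERTY_NEIGHBOURS.get(word[j - 1], ''):
                sub = 0.3                # neighbouring key selected by mistake
            else:
                sub = 1.0
            dp[i][j] = min(dp[i - 1][j - 1] + sub,  # match / substitute
                           dp[i][j - 1] + 0.5,      # letter missed
                           dp[i - 1][j] + 1.0)      # spurious selection
    return dp[m][n]

def infer_word(typed: str, lexicon: list) -> str:
    """Return the lexicon word with the lowest weighted edit cost."""
    return min(lexicon, key=lambda w: edit_cost(typed, w))
```

For example, `infer_word("helo", ["hello", "world"])` recovers `"hello"` despite the missing letter, and `infer_word("cst", ["cat", "cut"])` recovers `"cat"` because `s` neighbours `a` on the keyboard. A real system would also weight candidates by a language model, which is a natural extension of this scoring.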