Enhancing multimodal interactions with eye-tracking for virtual reality applications
| Main Author: | |
|---|---|
| Other Authors: | |
| Format: | Final Year Project |
| Language: | English |
| Published: | Nanyang Technological University, 2021 |
| Subjects: | |
| Online Access: | https://hdl.handle.net/10356/154171 |
| Institution: | Nanyang Technological University |
Summary: The motion of dragging is a common yet imperative action in many forms of human-computer interaction, including Virtual Reality. With the growing availability of commercial eye-tracking devices, researchers have begun to investigate the performance of eye-based multimodal interactions in dragging tasks in desktop settings. However, little is known about the performance of eye-based multimodal interactions in 3D dragging tasks with Virtual Reality head-mounted displays. Thirty-one participants volunteered for the study, which compared the usability of eye-gaze with button click, eye-gaze with dwell time, and the default Vive controller for 3D dragging tasks in Virtual Reality head-mounted displays. Based on the ISO 9241-9 standard, a novel immersive 3D dragging task was designed and implemented to facilitate the experiment. Task difficulty was varied by adjusting the following variables: target width, target-destination angular distance, and direction of path curvature. An additional selection task was implemented alongside the dragging task to investigate multitasking performance. Contrary to our hypothesis, the controller was the fastest, achieved the highest throughput, and was the most preferred of the three modalities. It also offered the highest precision and accuracy in the dragging task. Notably, gaze with click achieved speed and accuracy comparable to the controller. Even though both gaze with click and gaze with dwell were highly imprecise in the dragging task, they were still well received by participants. Furthermore, design guidelines were recommended for the position of visual targets in the horizontal field of view and for visual target size in the immersive 3D dragging task. In conclusion, the controller is the most usable modality for an immersive 3D dragging task. Gaze with click could still suffice as a usable modality when only low precision is required in the dragging task.
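For context, the gaze-with-dwell modality compared in the summary selects a target once the gaze has rested on it for a fixed time, with no button press. The thesis implementation is not reproduced in this record, so the following is only a minimal sketch of typical dwell logic; the class name, the 0.8 s threshold, and the per-frame update interface are all assumptions for illustration.

```python
# Hypothetical dwell-time selection: a target counts as "clicked" once the
# gaze has stayed on it continuously for dwell_time seconds.
class DwellSelector:
    def __init__(self, dwell_time=0.8):  # 0.5-1.0 s is a common range
        self.dwell_time = dwell_time
        self.current_target = None
        self.elapsed = 0.0

    def update(self, gazed_target, dt):
        """Call once per frame with the object under the gaze ray (or None)
        and the frame time dt; returns a target when the dwell completes."""
        if gazed_target is not self.current_target:
            # Gaze moved to a new target (or off all targets): restart timer.
            self.current_target = gazed_target
            self.elapsed = 0.0
            return None
        if gazed_target is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell_time:
            # Require another full dwell before the same target fires again.
            self.elapsed = 0.0
            return gazed_target
        return None
```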
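Similarly, the throughput figure cited for each modality conventionally follows the ISO 9241-9 / Fitts' law formulation: effective index of difficulty divided by mean movement time, with effective width derived from the spread of endpoint errors (We = 4.133 × SDx). The sketch below illustrates that standard computation only; the function name, units, and sample data are hypothetical, not taken from the thesis.

```python
import math
import statistics

def effective_throughput(distances, movement_times, endpoint_errors):
    """Generic ISO 9241-9 style throughput (bits/s) for one condition.

    distances       -- nominal target-destination distance per trial
    movement_times  -- completion time per trial, in seconds
    endpoint_errors -- signed deviation of each drop point from the
                       destination centre, along the task axis
    """
    # Effective width: 4.133 * standard deviation of endpoint errors,
    # modelling the target size the user effectively used.
    w_e = 4.133 * statistics.stdev(endpoint_errors)
    d_mean = statistics.mean(distances)
    mt_mean = statistics.mean(movement_times)
    # Effective index of difficulty (Shannon formulation), in bits.
    id_e = math.log2(d_mean / w_e + 1)
    return id_e / mt_mean

# Example: higher throughput means more information transferred per second.
tp = effective_throughput(
    distances=[0.40, 0.40, 0.40, 0.40],          # metres
    movement_times=[1.2, 1.1, 1.4, 1.3],         # seconds
    endpoint_errors=[0.01, -0.02, 0.015, -0.005],
)
print(f"throughput: {tp:.2f} bits/s")
```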