Multimodal user interaction methods for virtual reality headsets

Bibliographic Details
Main Author: Pallavi, Mohan
Other Authors: Goh Wooi Boon
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2020
Online Access: https://hdl.handle.net/10356/137005
Institution: Nanyang Technological University
Description
Summary: Despite the popularity of VR, there is a lack of research into interaction methods for VR HMDs. In their confined setting, conventional VR headsets provide only limited input modalities. Commonly supported interaction methods include tracking the user's head orientation, external controllers and buttons, and, more recently, gaze interaction through eye tracking. This research aims to widen the range of input modalities currently available in virtual reality by developing novel interaction designs and exploring multimodal interaction in mobile VR.

The first work in this research adds a novel interaction method for a DIY cardboard VR headset that encloses a commodity smartphone. A set of novel tap-gesture-based inputs has been developed that takes advantage of the motion-sensing capabilities available in smartphones. Experimental results show promising potential for this system to extend the interactivity of future cardboard-based VR headset applications.

The second work explores eye-gaze tracking as a feasible interaction modality in VR and examines some of its challenges. It presents DualGaze, a novel gaze-based interaction method developed to address the Midas Touch problem in gaze-mediated VR interaction. With DualGaze, users perform a distinctive two-step gaze gesture for object selection. A user study compared the accuracy and selection speed of DualGaze and the popular gaze-fixation method on a simple gaze-typing task. The results show that DualGaze is significantly more accurate while maintaining a comparable selection speed, which was observed to improve with familiarity of use.

The next part of the research explores hand- and finger-gesture-based interaction in VR using the smartphone camera inside a VR HMD and vision-based tracking methods. It also explores the effects of combining these modalities with head-gaze-based user input in VR and proposes several novel interaction designs. The first design in this modality is Twin-Fingers, a novel two-handed gesture interaction that is invariant to head motion. The technique provides interactive 2D cursor control in VR, in which the user slides two overlapping splayed fingers over two splayed fingers of the other hand to perform the gesture. A conventional single-hand gesture design is also considered, and its performance is compared with Twin-Fingers on an on-screen cursor-based pointing task that involves both head movement and finger-gesture interaction. The user study shows that most users prefer the simplicity and convenience of a single-hand gesture for tasks that do not require head movement. However, with limited practice, users performed better with Twin-Fingers on tasks that required both head movement and finger gestures. Because Twin-Fingers was found to involve a considerable learning curve, another interaction design, X-Fingers, was developed in a similar vein; it involves only the index finger of the dominant hand sliding over the index finger of the non-dominant hand. X-Fingers requires less effort from the user while retaining Twin-Fingers' invariance to hand and head motion, which led to the idea that this interaction design could be coordinated with the movement of the user's arms or head to provide an additional input modality. Incorporating the arms or the head yields physically-coupled and physically-decoupled multimodal interactions, respectively.

Given these two design options, user studies were conducted to understand how the nature of the physical coupling influences the user's performance on tasks with varying degrees of coordination between the modalities. The results show that physically-decoupled interaction designs are preferred when the degree of coordination within the multimodal interaction is high. Experiments were also conducted to compare user performance on object-positioning tasks between the coupled arms-together modality and the decoupled arms-apart modality. Results show that user performance and preference depend on the target size and task type for each interaction modality. Combining the findings from these user evaluations, the research presents guidelines for VR interaction design across task types and coordination levels, based on users' performance and preferences. By presenting designs for hand and head interactions alongside the earlier work on tap- and eye-based interactions, this research broadens the input modalities available on VR HMDs, allowing for richer user experiences in mobile VR.
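
To make the tap-gesture idea from the summary more concrete, the following is a minimal Python sketch of how taps on a cardboard headset might be detected from the enclosed phone's accelerometer. The abstract only states that the gestures exploit the phone's motion-sensing capabilities; the jerk-thresholding scheme, the parameter values, and the TapDetector class below are illustrative assumptions, not the thesis implementation.

    from math import sqrt

    class TapDetector:
        """Detects tap-like transients in accelerometer data (illustrative only)."""

        def __init__(self, jerk_threshold=40.0, refractory=0.15):
            self.jerk_threshold = jerk_threshold  # assumed value; tune per device
            self.refractory = refractory          # seconds to ignore after a tap
            self.prev_accel = None
            self.cooldown = 0.0

        def update(self, accel, dt):
            """accel: (ax, ay, az) in m/s^2 from the phone; dt: seconds since
            the previous sample. Returns True when a tap is detected."""
            self.cooldown = max(0.0, self.cooldown - dt)
            tapped = False
            if self.prev_accel is not None and self.cooldown == 0.0:
                # A tap on the headset body shows up as a sharp change in acceleration.
                jerk = sqrt(sum((a - p) ** 2
                                for a, p in zip(accel, self.prev_accel))) / dt
                if jerk > self.jerk_threshold:
                    tapped = True
                    self.cooldown = self.refractory
            self.prev_accel = accel
            return tapped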
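
The two-step nature of DualGaze can likewise be illustrated with a small state machine. The abstract describes only "a distinctive two-step gaze gesture for object selection"; the mechanics assumed below (a brief fixation on the target followed by a gaze shift to a confirmation point) are one plausible reading chosen for illustration and may differ from the actual DualGaze design.

    from dataclasses import dataclass

    @dataclass
    class Target:
        name: str
        x: float          # normalized gaze/screen coordinates
        y: float
        radius: float = 0.05

    class TwoStepGazeSelector:
        """Selects a target only after two deliberate gaze steps, avoiding
        accidental 'Midas Touch' selections caused by dwell alone."""

        def __init__(self, step_time=0.3):
            self.step_time = step_time   # seconds of stable gaze per step (assumed)
            self.candidate = None        # target hit by the first gaze step
            self.confirm_target = None   # hypothetical confirmation point
            self.timer = 0.0

        def _hit(self, target, gx, gy):
            return (gx - target.x) ** 2 + (gy - target.y) ** 2 <= target.radius ** 2

        def update(self, targets, gx, gy, dt):
            """Feed one gaze sample (gx, gy) per frame; returns a Target when
            the two-step gesture completes, otherwise None."""
            if self.candidate is None:
                # Step 1: look for a target under the gaze point.
                for t in targets:
                    if self._hit(t, gx, gy):
                        self.timer += dt
                        if self.timer >= self.step_time:
                            self.candidate = t
                            # Spawn a confirmation point offset from the target
                            # (assumed design; the offset direction is arbitrary here).
                            self.confirm_target = Target("confirm", t.x + 0.1, t.y, t.radius)
                            self.timer = 0.0
                        return None
                self.timer = 0.0
            else:
                # Step 2: gaze must move onto the confirmation point to commit.
                if self._hit(self.confirm_target, gx, gy):
                    self.timer += dt
                    if self.timer >= self.step_time:
                        selected = self.candidate
                        self.candidate, self.confirm_target, self.timer = None, None, 0.0
                        return selected
                elif not self._hit(self.candidate, gx, gy):
                    # Gaze left both the target and the confirmation point: cancel.
                    self.candidate, self.confirm_target, self.timer = None, None, 0.0
            return None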
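
Finally, the head- and hand-motion invariance of Twin-Fingers and X-Fingers comes from driving the cursor with the relative displacement between fingertips rather than their absolute positions in the camera image. The sketch below captures that idea for an X-Fingers-style gesture; the coordinate convention, the gain, and the relative_cursor function are assumptions made for illustration, and the fingertip positions would come from the vision-based hand tracker mentioned in the summary.

    def relative_cursor(dominant_tip, nondominant_tip, prev_offset, cursor, gain=2.0):
        """dominant_tip / nondominant_tip: (x, y) fingertip positions in the
        camera image. Returns the updated (cursor, offset) pair."""
        # Offset of the sliding fingertip relative to the supporting finger.
        offset = (dominant_tip[0] - nondominant_tip[0],
                  dominant_tip[1] - nondominant_tip[1])
        if prev_offset is not None:
            # Only the *change* in relative offset moves the cursor, so common
            # motion of both hands, or of the head-mounted camera, cancels out.
            dx = (offset[0] - prev_offset[0]) * gain
            dy = (offset[1] - prev_offset[1]) * gain
            cursor = (min(max(cursor[0] + dx, 0.0), 1.0),
                      min(max(cursor[1] + dy, 0.0), 1.0))
        return cursor, offset

    # Example per-frame usage with placeholder fingertip positions:
    cursor, offset = (0.5, 0.5), None
    for dom, nondom in [((0.40, 0.55), (0.38, 0.60)),
                        ((0.42, 0.55), (0.38, 0.60))]:
        cursor, offset = relative_cursor(dom, nondom, offset, cursor)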