MAGI: Enabling multi-device gestural applications


Full description

Bibliographic Details
Main Authors: Tran, Huy Vu; Choo, Tsu Wei Kenny; Lee, Youngki; Davis, Richard Christopher; Misra, Archan
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2016
Online Access: https://ink.library.smu.edu.sg/sis_research/3287
https://ink.library.smu.edu.sg/context/sis_research/article/4289/viewcontent/MAGI.pdf
Institution: Singapore Management University
Description
Summary: We describe our vision of a multiple mobile or wearable device environment and share our initial exploration of our vision in multi-wrist gesture recognition. We explore how multi-device input and output might look, giving four scenarios of everyday multi-device use that show the technical challenges that need to be addressed. We describe our system, which allows recognition to be distributed between multiple devices, fusing recognition streams on a resource-rich device (e.g., a mobile phone). An Interactor layer recognizes common gestures from the fusion engine and provides abstract input streams (e.g., scrolling and zooming) to user interface components called Midgets. These take advantage of multi-device input and output, and are designed to simplify the process of implementing multi-device gestural applications. Our initial exploration of multi-device gestures led us to design a modified pipelined HMM with early elimination of candidate gestures that can recognize gestures in roughly 0.2 milliseconds and scales well to large numbers of gestures. Finally, we discuss the open problems in multi-device interaction and our research directions.
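The "early elimination of candidate gestures" the abstract mentions can be illustrated with a minimal sketch: each candidate gesture keeps a running log-likelihood as sensor frames arrive, and candidates that fall too far behind the current best are pruned so later frames are scored against fewer models. The function names, the beam threshold, and the single-state Gaussian stand-in for a full HMM are all illustrative assumptions, not the paper's actual pipelined recognizer.

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log-density of a 1-D Gaussian; stands in for an HMM emission score."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def recognize_with_pruning(frames, models, beam=10.0):
    """Score frames against all surviving candidate gesture models,
    eliminating candidates whose running score drops outside the beam.

    frames: sequence of scalar sensor readings (toy stand-in for IMU data)
    models: dict mapping gesture name -> (mean, var) emission parameters
    """
    scores = {name: 0.0 for name in models}
    for x in frames:
        for name in list(scores):
            mean, var = models[name]
            scores[name] += gaussian_logpdf(x, mean, var)
        best = max(scores.values())
        # Early elimination: drop candidates far behind the leader,
        # so the cost per frame shrinks as recognition proceeds.
        scores = {n: s for n, s in scores.items() if s >= best - beam}
        if len(scores) == 1:
            break  # one candidate left; recognition can stop early
    return max(scores, key=scores.get)

# Usage: two toy gesture models; a stream of readings near 0 picks "swipe".
models = {"swipe": (0.0, 1.0), "circle": (5.0, 1.0)}
print(recognize_with_pruning([0.1, -0.2, 0.3, 0.0], models))  # swipe
```

Because pruning is per-frame, adding more gesture models mainly increases the cost of the first few frames, which is one plausible reason such a scheme scales well to large gesture vocabularies.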