Using infrastructure-provided context filters for efficient fine-grained activity sensing


Bibliographic Details
Main Authors: SUBBARAJU, Vigneshwaran, SEN, Sougata, MISRA, Archan, CHAKRABORTY, Satyadip, BALAN, Rajesh Krishna
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2015
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/2678
https://ink.library.smu.edu.sg/context/sis_research/article/3678/viewcontent/Infrastructure_ProvidedContextFilters_2015.pdf
Institution: Singapore Management University
Description
Summary: While mobile and wearable sensing can capture unique insights into fine-grained activities (such as gestures and limb-based actions) at an individual level, their energy overheads are still prohibitive enough to prevent them from being executed continuously. In this paper, we explore a practical alternative for addressing this challenge: harnessing cheap infrastructure sensors or information sources (e.g., BLE beacons) together with such mobile/wearable sensors to provide an effective solution that reduces energy consumption without sacrificing accuracy. The key idea is that many of the fine-grained activities we wish to capture are specific to certain location, movement, or background contexts, and infrastructure sensors and information sources (e.g., BLE beacons) offer practical and cheap ways to identify such contexts. We first explore how various infrastructure, mobile, and wearable sensors can be used to identify fine-grained location/movement context (e.g., transiting through a door). We then use two illustrative examples (the detection of `switch pressing' before exiting a room and the identification of `water drinking' after approaching a water cooler) to show that such background context can be predicted, with sufficient accuracy and sufficient lead time, to enable a `triggered' model for mobile/wearable sensing of such microscopic, transient gestures and activities. Moreover, such triggered sensing also improves the accuracy of microscopic gesture recognition by reducing the set of candidate activity labels. Empirical experiments show that we are able to identify 82.2% of switch-pressing and 91.73% of water-drinking activities in a campus lab setting, with a significant reduction in active sensing time (up to 92.9% compared to continuous sensing).
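The `triggered' sensing model described in the abstract can be illustrated with a minimal sketch: a cheap infrastructure context event (e.g., a BLE beacon sighting near a door or water cooler) opens a short window during which the expensive wearable sensor is sampled, and also narrows the candidate gesture labels to those plausible in that context. All names here (`TRIGGER_LABELS`, `triggered_sensing`, the beacon identifiers, the window length) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of infrastructure-triggered wearable sensing.
# A context event gates the wearable IMU (duty cycling) and restricts
# the label set the gesture classifier must consider.

# Candidate gesture labels per infrastructure context (illustrative).
TRIGGER_LABELS = {
    "door_beacon": ["switch_press", "door_open"],    # plausible near a door
    "cooler_beacon": ["water_drink", "cup_refill"],  # plausible near a cooler
}

def triggered_sensing(events, window=5):
    """Simulate duty-cycled sensing over a timeline of (tick, beacon) events.

    The wearable sensor is sampled only for `window` ticks after an
    infrastructure context event; otherwise it stays idle. Returns the
    number of active ticks and the reduced label sets per trigger.
    """
    active_ticks = 0
    candidate_sets = []
    remaining = 0
    for tick, beacon in events:           # beacon is a beacon id or None
        if beacon in TRIGGER_LABELS:
            remaining = window            # context event: open a sensing window
            candidate_sets.append(TRIGGER_LABELS[beacon])
        if remaining > 0:
            active_ticks += 1             # wearable actively sampling this tick
            remaining -= 1
    return active_ticks, candidate_sets

# Two context events over a 100-tick timeline: continuous sensing would be
# active for all 100 ticks, triggered sensing only during the two windows.
timeline = [(t, None) for t in range(100)]
timeline[10] = (10, "door_beacon")
timeline[60] = (60, "cooler_beacon")
active, candidates = triggered_sensing(timeline)
print(active, len(timeline))  # 10 active ticks out of 100 (90% reduction)
```

The simulation mirrors the paper's reported trade-off: sensing time drops sharply (here 90% in a toy timeline, versus the up to 92.9% reported), while each trigger also hands the classifier a smaller candidate label set.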