Fast adaptation of activity sensing policies in mobile devices

Bibliographic Details
Main Authors: ALSHEIKH, Mohammad Abu, NIYATO, Dusit, LIN, Shaowei, TAN, Hwee-Pink, KIM, Dong In
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2017
Online Access: https://ink.library.smu.edu.sg/sis_research/3887
https://ink.library.smu.edu.sg/context/sis_research/article/4889/viewcontent/161103202v1.pdf
Institution: Singapore Management University
Description
Summary: With the proliferation of sensors, such as accelerometers, in mobile devices, activity and motion tracking has become a viable technology for understanding and creating an engaging user experience. This paper proposes a fast adaptation and learning scheme for activity tracking policies when user statistics are unknown a priori, vary with time, and are inconsistent across users. In our stochastic optimization, user activities must be synchronized with a backend under a cellular data limit to avoid overcharges from cellular operators. The mobile device is charged intermittently, using wireless or wired charging, to receive the energy required for transmission and sensing operations. Firstly, we propose an activity tracking policy by formulating the stochastic optimization as a constrained Markov decision process (CMDP). Secondly, we prove that the optimal policy of the CMDP has a threshold structure, using a Lagrangian relaxation approach and the concept of submodularity. We accordingly present a fast Q-learning algorithm that exploits this policy structure to improve convergence speed over conventional Q-learning. Finally, simulation examples are presented to support the theoretical findings of this paper.
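
The summary describes the method only at a high level, so the following is a minimal, hypothetical sketch of structure-aware Q-learning for a constrained MDP, in the spirit of the approach described above. The toy buffer-state model, the dynamics in step(), the constants (S_MAX, LAM, ALPHA, EPS), and the projection in enforce_threshold() are illustrative assumptions, not the paper's actual formulation. The cellular-data constraint is folded into the reward through a fixed Lagrange multiplier, echoing the Lagrangian relaxation mentioned in the summary, and each Q-update is followed by a projection that keeps the greedy policy threshold-structured.

import numpy as np

rng = np.random.default_rng(0)

# Toy CMDP: state = number of unsynchronized activity samples (0..S_MAX).
# Action 0 defers transmission; action 1 transmits and empties the buffer.
# The data-limit constraint is penalized via a fixed Lagrange multiplier LAM
# (the paper tunes the multiplier; here it is simply assumed).
S_MAX, GAMMA, LAM, ALPHA, EPS = 10, 0.95, 0.6, 0.1, 0.1
Q = np.zeros((S_MAX + 1, 2))

def step(s, a):
    """Hypothetical dynamics: 0-2 samples arrive; transmitting resets the buffer."""
    arrivals = int(rng.integers(0, 3))
    if a == 1:
        return min(arrivals, S_MAX), s - LAM      # value of synced data minus data cost
    return min(s + arrivals, S_MAX), -0.1 * s     # staleness penalty for deferring

def enforce_threshold(Q):
    """Project the greedy policy onto a threshold structure: once transmitting
    is preferred at some buffer level, it stays preferred at all higher levels."""
    adv = Q[:, 1] - Q[:, 0]
    np.maximum.accumulate(adv, out=adv)           # make the advantage monotone in the state
    Q[:, 1] = Q[:, 0] + adv

s = 0
for _ in range(20000):
    a = int(rng.integers(0, 2)) if rng.random() < EPS else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
    enforce_threshold(Q)                          # exploit the proven policy structure
    s = s_next

print("Greedy action per buffer level:", np.argmax(Q, axis=1))

Under these toy dynamics, the learned greedy policy comes out as defer below some buffer level and transmit above it, which is the threshold shape the abstract says is proved optimal for the CMDP; restricting the search to such policies is what speeds up convergence relative to unstructured Q-learning.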