Fast adaptation of activity sensing policies in mobile devices

With the proliferation of sensors, such as accelerometers, in mobile devices, activity and motion tracking has become a viable technology to understand and create an engaging user experience. This paper proposes a fast adaptation and learning scheme of activity tracking policies when user statistics...

Bibliographic Details
Main Authors: ALSHEIKH, Mohammad Abu, NIYATO, Dusit, LIN, Shaowei, TAN, Hwee-Pink, KIM, Dong In
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2017
Subjects: Activity tracking; fast adaptation; Internet of Things; Markov decision processes; wireless charging; Computer Sciences; Software Engineering
Online Access:https://ink.library.smu.edu.sg/sis_research/3887
https://ink.library.smu.edu.sg/context/sis_research/article/4889/viewcontent/161103202v1.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-4889
record_format dspace
spelling sg-smu-ink.sis_research-4889 2020-04-08T08:21:16Z
publishDate 2017-07-01T07:00:00Z
format text application/pdf
doi info:doi/10.1109/TVT.2016.2628966
license http://creativecommons.org/licenses/by-nc-nd/4.0/
collection Research Collection School Of Computing and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Activity tracking
fast adaptation
Internet of Things
Markov decision processes
wireless charging
Computer Sciences
Software Engineering
description With the proliferation of sensors, such as accelerometers, in mobile devices, activity and motion tracking has become a viable technology to understand and create an engaging user experience. This paper proposes a fast adaptation and learning scheme of activity tracking policies when user statistics are unknown a priori, varying with time, and inconsistent for different users. In our stochastic optimization, user activities are required to be synchronized with a backend under a cellular data limit to avoid overcharges from cellular operators. The mobile device is charged intermittently using wireless or wired charging for receiving the required energy for transmission and sensing operations. Firstly, we propose an activity tracking policy by formulating a stochastic optimization as a constrained Markov decision process (CMDP). Secondly, we prove that the optimal policy of the CMDP has a threshold structure using a Lagrangian relaxation approach and the submodularity concept. We accordingly present a fast Q-learning algorithm by considering the policy structure to improve the convergence speed over that of conventional Q-learning. Finally, simulation examples are presented to support the theoretical findings of this paper.
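As an illustrative aside, the approach named in the abstract (Lagrangian relaxation of the data/energy cost constraint combined with Q-learning that exploits the threshold structure of the optimal policy) can be pictured in a few lines. The Python below is a minimal, hypothetical sketch: the two-action model (idle vs. sense-and-transmit), the state and action sizes, the fixed multiplier LAMBDA, and the helper names lagrangian_reward, q_update, and threshold_policy are placeholders assumed for illustration, not the paper's actual formulation.

    import numpy as np

    N_STATES, N_ACTIONS = 10, 2            # assumed: action 0 = stay idle, action 1 = sense and transmit
    ALPHA, GAMMA, LAMBDA = 0.1, 0.95, 0.5  # learning rate, discount factor, Lagrange multiplier (placeholders)

    Q = np.zeros((N_STATES, N_ACTIONS))

    def lagrangian_reward(reward, cost):
        # Lagrangian relaxation: fold the constrained cost (cellular data / energy)
        # into the reward through a fixed multiplier.
        return reward - LAMBDA * cost

    def q_update(s, a, reward, cost, s_next):
        # Standard one-step tabular Q-learning update on the penalized reward.
        target = lagrangian_reward(reward, cost) + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (target - Q[s, a])

    def threshold_policy():
        # Exploit the threshold structure: rather than a pointwise argmax per state,
        # search for the single switching point t ("idle" below t, "transmit" at or
        # above t) that maximizes the total Q-value, reducing the policy to one parameter.
        best_t, best_val = 0, -np.inf
        for t in range(N_STATES + 1):
            val = Q[:t, 0].sum() + Q[t:, 1].sum()
            if val > best_val:
                best_t, best_val = t, val
        return best_t

Restricting the learned policy to a one-parameter threshold family is what gives the faster convergence over conventional Q-learning that the abstract claims; the exact update and projection used by the authors should be taken from the full text linked in this record.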
format text
author ALSHEIKH, Mohammad Abu
NIYATO, Dusit
LIN, Shaowei
TAN, Hwee-Pink
KIM, Dong In
title Fast adaptation of activity sensing policies in mobile devices
publisher Institutional Knowledge at Singapore Management University
publishDate 2017
url https://ink.library.smu.edu.sg/sis_research/3887
https://ink.library.smu.edu.sg/context/sis_research/article/4889/viewcontent/161103202v1.pdf