Fast adaptation of activity sensing policies in mobile devices
With the proliferation of sensors, such as accelerometers, in mobile devices, activity and motion tracking has become a viable technology to understand and create an engaging user experience. This paper proposes a fast adaptation and learning scheme of activity tracking policies when user statistics are...
Main Authors: ALSHEIKH, Mohammad Abu; NIYATO, Dusit; LIN, Shaowei; TAN, Hwee-Pink; KIM, Dong In
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2017
Subjects: Activity tracking; fast adaptation; Internet of Things; Markov decision processes; wireless charging; Computer Sciences; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/3858 https://ink.library.smu.edu.sg/context/sis_research/article/4860/viewcontent/161103202v1.pdf
Institution: Singapore Management University
id |
sg-smu-ink.sis_research-4860 |
record_format |
dspace |
spelling |
sg-smu-ink.sis_research-48602017-11-30T06:55:13Z Fast adaptation of activity sensing policies in mobile devices ALSHEIKH, Mohammad Abu NIYATO, Dusit LIN, Shaowei TAN, Hwee-Pink KIM, Dong In With the proliferation of sensors, such as accelerometers, in mobile devices, activity and motion tracking has become a viable technology to understand and create an engaging user experience. This paper proposes a fast adaptation and learning scheme of activity tracking policies when user statistics are unknown a priori, varying with time, and inconsistent for different users. In our stochastic optimization, user activities are required to be synchronized with a backend under a cellular data limit to avoid overcharges from cellular operators. The mobile device is charged intermittently using wireless or wired charging for receiving the required energy for transmission and sensing operations. Firstly, we propose an activity tracking policy by formulating a stochastic optimization as a constrained Markov decision process (CMDP). Secondly, we prove that the optimal policy of the CMDP has a threshold structure using a Lagrangian relaxation approach and the submodularity concept. We accordingly present a fast Q-learning algorithm by considering the policy structure to improve the convergence speed over that of conventional Q-learning. Finally, simulation examples are presented to support the theoretical findings of this paper. 2017-07-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/3858 info:doi/10.1109/TVT.2016.2628966 https://ink.library.smu.edu.sg/context/sis_research/article/4860/viewcontent/161103202v1.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Activity tracking fast adaptation Internet of Things Markov decision processes wireless charging Computer Sciences Software Engineering |
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore |
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
Activity tracking; fast adaptation; Internet of Things; Markov decision processes; wireless charging; Computer Sciences; Software Engineering |
spellingShingle |
Activity tracking fast adaptation Internet of Things Markov decision processes wireless charging Computer Sciences Software Engineering ALSHEIKH, Mohammad Abu NIYATO, Dusit LIN, Shaowei TAN, Hwee-Pink KIM, Dong In Fast adaptation of activity sensing policies in mobile devices |
description |
With the proliferation of sensors, such as accelerometers, in mobile devices, activity and motion tracking has become a viable technology to understand and create an engaging user experience. This paper proposes a fast adaptation and learning scheme of activity tracking policies when user statistics are unknown a priori, varying with time, and inconsistent for different users. In our stochastic optimization, user activities are required to be synchronized with a backend under a cellular data limit to avoid overcharges from cellular operators. The mobile device is charged intermittently using wireless or wired charging for receiving the required energy for transmission and sensing operations. Firstly, we propose an activity tracking policy by formulating a stochastic optimization as a constrained Markov decision process (CMDP). Secondly, we prove that the optimal policy of the CMDP has a threshold structure using a Lagrangian relaxation approach and the submodularity concept. We accordingly present a fast Q-learning algorithm by considering the policy structure to improve the convergence speed over that of conventional Q-learning. Finally, simulation examples are presented to support the theoretical findings of this paper. |
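The abstract's core idea — proving the optimal CMDP policy has a threshold structure and then restricting Q-learning's policy search to threshold policies to speed up convergence — can be illustrated with a minimal sketch. Everything in this toy (the battery-level state space, charging probability, transmission cost, and rewards) is an illustrative assumption, not taken from the paper:

```python
import numpy as np

# Toy structure-aware Q-learning on a battery-state chain (illustrative only).
# State: battery level 0..N. Action 0: stay idle. Action 1: sense-and-transmit
# (costs 2 energy units, earns a reward for a synchronized update).
rng = np.random.default_rng(0)
N = 10
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = np.zeros((N + 1, 2))

def step(s, a):
    """Toy environment: a charge unit arrives with prob. 0.5 each slot."""
    charge = rng.random() < 0.5
    if a == 1 and s >= 2:
        return min(N, s - 2 + charge), 1.0
    return min(N, s + charge), 0.0

def threshold_projection(Q):
    """Restrict the greedy policy to threshold form: transmit iff s >= t.
    Picking the threshold with the largest total Q-value exploits the
    monotone structure, shrinking the search from 2^(N+1) policies to N+2."""
    best_t, best_val = 0, -np.inf
    for t in range(N + 2):
        pol = (np.arange(N + 1) >= t).astype(int)
        val = Q[np.arange(N + 1), pol].sum()
        if val > best_val:
            best_t, best_val = t, val
    return best_t

s = N
for _ in range(10000):
    t = threshold_projection(Q)
    a = int(s >= t) if rng.random() > eps else int(rng.integers(2))
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

print("learned threshold:", threshold_projection(Q))
```

Because only N + 2 threshold policies exist, the projection step keeps exploration focused on policies of the proven optimal form, which is the mechanism behind the convergence speedup the abstract claims over conventional Q-learning.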
format |
text |
author |
ALSHEIKH, Mohammad Abu NIYATO, Dusit LIN, Shaowei TAN, Hwee-Pink KIM, Dong In |
author_facet |
ALSHEIKH, Mohammad Abu NIYATO, Dusit LIN, Shaowei TAN, Hwee-Pink KIM, Dong In |
author_sort |
ALSHEIKH, Mohammad Abu |
title |
Fast adaptation of activity sensing policies in mobile devices |
title_short |
Fast adaptation of activity sensing policies in mobile devices |
title_full |
Fast adaptation of activity sensing policies in mobile devices |
title_fullStr |
Fast adaptation of activity sensing policies in mobile devices |
title_full_unstemmed |
Fast adaptation of activity sensing policies in mobile devices |
title_sort |
fast adaptation of activity sensing policies in mobile devices |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2017 |
url |
https://ink.library.smu.edu.sg/sis_research/3858 https://ink.library.smu.edu.sg/context/sis_research/article/4860/viewcontent/161103202v1.pdf |
_version_ |
1770573827705667584 |