Resource constrained deep reinforcement learning
In urban environments, resources have to be constantly matched to the “right” locations where customer demand is present. For instance, ambulances have to be matched to base stations regularly to reduce response times for emergency incidents in ERS (Emergency Response Systems), and vehicles (cars, bikes, among others) have to be matched to docking stations to reduce lost demand in shared mobility systems. Such problems are challenging owing to demand uncertainty, combinatorial action spaces, and constraints on the allocation of resources (e.g., total resources, and minimum and maximum numbers of resources at locations and regions). Existing systems typically employ myopic, greedy optimization approaches, which are unable to handle surges or variance in demand patterns well. Recent work has demonstrated the ability of Deep RL methods to adapt well to highly uncertain environments; however, existing Deep RL methods are unable to handle combinatorial action spaces and constraints on the allocation of resources. To that end, we have developed three approaches on top of the well-known actor-critic method DDPG (Deep Deterministic Policy Gradient) that are able to handle constraints on resource allocation. We also demonstrate that they outperform leading approaches on simulators validated on semi-real and real data sets.
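The record does not reproduce the paper's three mechanisms, so as a rough illustration of what "constraints on resource allocation" means here, the sketch below projects a raw DDPG-style actor output onto a feasible allocation (fixed total, per-location minimum and maximum). The function name, the softmax-plus-redistribution heuristic, and all numbers are assumptions for illustration, not the authors' method.

```python
import numpy as np

def project_allocation(logits, total, lo, hi):
    """Map raw actor outputs to a feasible allocation: one value per
    location, each within [lo_i, hi_i], summing to `total`.

    Hypothetical helper for illustration only; the record does not
    describe the paper's actual three mechanisms.
    """
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    assert lo.sum() <= total <= hi.sum(), "bounds must admit a feasible allocation"
    # Softmax turns unbounded actor outputs into allocation proportions.
    p = np.exp(logits - np.max(logits))
    p /= p.sum()
    # Start from the lower bounds and spread the remaining budget.
    alloc = lo + p * (total - lo.sum())
    # Clip any overshoot of the upper bounds and redistribute the excess
    # to locations that still have slack; repeat until feasible.
    for _ in range(len(alloc)):
        excess = np.maximum(alloc - hi, 0.0).sum()
        alloc = np.minimum(alloc, hi)
        slack = hi - alloc
        if excess < 1e-9 or slack.sum() < 1e-9:
            break
        alloc += excess * slack / slack.sum()
    return alloc

# Example: 20 ambulances over 4 base stations, at least 2 and at most
# 10/6/8/8 per station (all numbers made up for illustration).
logits = np.array([2.0, -1.0, 0.5, 0.0])  # raw DDPG-style actor head output
print(project_allocation(logits, total=20, lo=[2, 2, 2, 2], hi=[10, 6, 8, 8]))
```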
Main Authors: | BHATIA, Abhinav; VARAKANTHAM, Pradeep; KUMAR, Akshat |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2019 |
Subjects: | Programming Languages and Compilers; Software Engineering |
Online Access: | https://ink.library.smu.edu.sg/sis_research/5059 https://ink.library.smu.edu.sg/context/sis_research/article/6062/viewcontent/3528_Article_Text_6577_1_10_20190619.pdf |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-6062 |
---|---|
record_format | dspace |
spelling | sg-smu-ink.sis_research-6062 2020-03-12T07:56:43Z Resource constrained deep reinforcement learning BHATIA, Abhinav; VARAKANTHAM, Pradeep; KUMAR, Akshat [abstract as in the description field below] 2019-07-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/5059 https://ink.library.smu.edu.sg/context/sis_research/article/6062/viewcontent/3528_Article_Text_6577_1_10_20190619.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Programming Languages and Compilers; Software Engineering |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU |
language | English |
topic | Programming Languages and Compilers; Software Engineering |
description | In urban environments, resources have to be constantly matched to the “right” locations where customer demand is present. For instance, ambulances have to be matched to base stations regularly to reduce response times for emergency incidents in ERS (Emergency Response Systems), and vehicles (cars, bikes, among others) have to be matched to docking stations to reduce lost demand in shared mobility systems. Such problems are challenging owing to demand uncertainty, combinatorial action spaces, and constraints on the allocation of resources (e.g., total resources, and minimum and maximum numbers of resources at locations and regions). Existing systems typically employ myopic, greedy optimization approaches, which are unable to handle surges or variance in demand patterns well. Recent work has demonstrated the ability of Deep RL methods to adapt well to highly uncertain environments; however, existing Deep RL methods are unable to handle combinatorial action spaces and constraints on the allocation of resources. To that end, we have developed three approaches on top of the well-known actor-critic method DDPG (Deep Deterministic Policy Gradient) that are able to handle constraints on resource allocation. We also demonstrate that they outperform leading approaches on simulators validated on semi-real and real data sets. |
format | text |
author | BHATIA, Abhinav; VARAKANTHAM, Pradeep; KUMAR, Akshat |
author_sort | BHATIA, Abhinav |
title | Resource constrained deep reinforcement learning |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2019 |
url | https://ink.library.smu.edu.sg/sis_research/5059 https://ink.library.smu.edu.sg/context/sis_research/article/6062/viewcontent/3528_Article_Text_6577_1_10_20190619.pdf |
_version_ | 1770575202357346304 |