IRL for restless multi-armed bandits with applications in maternal and child health

Public health practitioners often aim to monitor patients and maximize the time patients spend in “favorable” or healthy states while being constrained to limited resources. Restless multi-armed bandits (RMABs) are an effective model for this problem: they allocate limited resources among many agents, where patients behave differently depending on whether or not they are intervened on. However, RMABs assume the reward function is known. This is unrealistic in many public health settings because patients face unique challenges, and it is impossible for a human to know who most deserves an intervention at such a large scale. To address this shortcoming, this paper is the first to use inverse reinforcement learning (IRL) to learn desired rewards for RMABs, and we demonstrate improved outcomes in a maternal and child health telehealth program. First, we allow public health experts to specify their goals at an aggregate or population level and propose an algorithm to design expert trajectories at scale based on those goals. Second, our algorithm, WHIRL, uses gradient updates to optimize the objective, allowing for efficient and accurate learning of RMAB rewards. Third, we compare with existing baselines and outperform them in both run-time and accuracy. Finally, we evaluate WHIRL on thousands of beneficiaries from a real-world maternal and child health setting in India and show its usefulness.
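The resource-allocation step the abstract describes — acting on a limited number of "arms" (patients) per round under a budget constraint — is commonly implemented as a top-k selection over per-arm priority indices (e.g., Whittle indices). The sketch below is an illustrative toy of that budgeted selection only, not the paper's WHIRL algorithm; the index values and function name are assumptions for the example.

```python
import numpy as np

def top_k_allocation(indexes, budget):
    """Act on the `budget` arms with the largest priority indices.

    Returns a 0/1 action vector over arms: 1 = intervene this round.
    Illustrative helper only; real RMAB planners recompute indices
    from each arm's state and transition model every round.
    """
    actions = np.zeros(len(indexes), dtype=int)
    chosen = np.argsort(indexes)[-budget:]  # indices of the top-`budget` arms
    actions[chosen] = 1
    return actions

# Toy example: 5 patients ("arms"), budget of 2 interventions per round.
indexes = np.array([0.1, 0.7, 0.3, 0.9, 0.2])
print(top_k_allocation(indexes, budget=2))  # → [0 1 0 1 0]
```

Because only the action vector changes each round while the budget stays fixed, the same selection rule scales to the thousands of beneficiaries mentioned in the abstract.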


Bibliographic Details
Main Authors: JAIN, Gauri, VARAKANTHAM, Pradeep, XU, Haifeng, TANEJA, Aparna, DOSHI, Prashant, TAMBE, Milind
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9778
https://ink.library.smu.edu.sg/context/sis_research/article/10778/viewcontent/pricai_irl_paper_av.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10778
record_format dspace
spelling sg-smu-ink.sis_research-107782024-12-16T02:07:59Z IRL for restless multi-armed bandits with applications in maternal and child health JAIN, Gauri VARAKANTHAM, Pradeep XU, Haifeng TANEJA, Aparna DOSHI, Prashant TAMBE, Milind Public health practitioners often aim to monitor patients and maximize the time patients spend in “favorable” or healthy states while being constrained to limited resources. Restless multi-armed bandits (RMABs) are an effective model for this problem: they allocate limited resources among many agents, where patients behave differently depending on whether or not they are intervened on. However, RMABs assume the reward function is known. This is unrealistic in many public health settings because patients face unique challenges, and it is impossible for a human to know who most deserves an intervention at such a large scale. To address this shortcoming, this paper is the first to use inverse reinforcement learning (IRL) to learn desired rewards for RMABs, and we demonstrate improved outcomes in a maternal and child health telehealth program. First, we allow public health experts to specify their goals at an aggregate or population level and propose an algorithm to design expert trajectories at scale based on those goals. Second, our algorithm, WHIRL, uses gradient updates to optimize the objective, allowing for efficient and accurate learning of RMAB rewards. Third, we compare with existing baselines and outperform them in both run-time and accuracy. Finally, we evaluate WHIRL on thousands of beneficiaries from a real-world maternal and child health setting in India and show its usefulness.
2024-11-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9778 info:doi/10.1007/978-981-96-0128-8_15 https://ink.library.smu.edu.sg/context/sis_research/article/10778/viewcontent/pricai_irl_paper_av.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Health Information Technology Operations Research, Systems Engineering and Industrial Engineering Theory and Algorithms
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Health Information Technology
Operations Research, Systems Engineering and Industrial Engineering
Theory and Algorithms
spellingShingle Health Information Technology
Operations Research, Systems Engineering and Industrial Engineering
Theory and Algorithms
JAIN, Gauri
VARAKANTHAM, Pradeep
XU, Haifeng
TANEJA, Aparna
DOSHI, Prashant
TAMBE, Milind
IRL for restless multi-armed bandits with applications in maternal and child health
description Public health practitioners often aim to monitor patients and maximize the time patients spend in “favorable” or healthy states while being constrained to limited resources. Restless multi-armed bandits (RMABs) are an effective model for this problem: they allocate limited resources among many agents, where patients behave differently depending on whether or not they are intervened on. However, RMABs assume the reward function is known. This is unrealistic in many public health settings because patients face unique challenges, and it is impossible for a human to know who most deserves an intervention at such a large scale. To address this shortcoming, this paper is the first to use inverse reinforcement learning (IRL) to learn desired rewards for RMABs, and we demonstrate improved outcomes in a maternal and child health telehealth program. First, we allow public health experts to specify their goals at an aggregate or population level and propose an algorithm to design expert trajectories at scale based on those goals. Second, our algorithm, WHIRL, uses gradient updates to optimize the objective, allowing for efficient and accurate learning of RMAB rewards. Third, we compare with existing baselines and outperform them in both run-time and accuracy. Finally, we evaluate WHIRL on thousands of beneficiaries from a real-world maternal and child health setting in India and show its usefulness.
format text
author JAIN, Gauri
VARAKANTHAM, Pradeep
XU, Haifeng
TANEJA, Aparna
DOSHI, Prashant
TAMBE, Milind
author_facet JAIN, Gauri
VARAKANTHAM, Pradeep
XU, Haifeng
TANEJA, Aparna
DOSHI, Prashant
TAMBE, Milind
author_sort JAIN, Gauri
title IRL for restless multi-armed bandits with applications in maternal and child health
title_short IRL for restless multi-armed bandits with applications in maternal and child health
title_full IRL for restless multi-armed bandits with applications in maternal and child health
title_fullStr IRL for restless multi-armed bandits with applications in maternal and child health
title_full_unstemmed IRL for restless multi-armed bandits with applications in maternal and child health
title_sort irl for restless multi-armed bandits with applications in maternal and child health
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9778
https://ink.library.smu.edu.sg/context/sis_research/article/10778/viewcontent/pricai_irl_paper_av.pdf
_version_ 1819113136028909568