Transition-informed reinforcement learning for large-scale Stackelberg mean-field games.

Many real-world scenarios, including fleet management and ad auctions, can be modeled as Stackelberg mean-field games (SMFGs), where a leader aims to incentivize a large number of homogeneous, self-interested followers to maximize her utility. Existing works focus on cases with a small number of heterogeneous followers, e.g., 5-10, and suffer from scalability issues as the number of followers increases. There are three major challenges in solving large-scale SMFGs: i) classical methods based on solving differential equations fail as they require exact dynamics parameters, ii) learning by interacting with the environment is data-inefficient, and iii) the complex interaction between the leader and the followers makes learning performance unstable. We address these challenges through transition-informed reinforcement learning. Our main contributions are threefold: i) we first propose an RL framework, the Stackelberg mean-field update, to learn the leader's policy without priors of the environment; ii) to improve data efficiency and accelerate learning, we then propose Transition-Informed Reinforcement Learning (TIRL), which leverages the instantiated empirical Fokker-Planck equation; and iii) we develop a regularized TIRL that employs various regularizers to reduce the sensitivity of learning performance to the initialization of the leader's policy. Extensive experiments on fleet management and food gathering demonstrate that our approach scales up to 100,000 followers and significantly outperforms existing baselines.
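The "empirical Fokker-Planck equation" named in the abstract governs how the followers' population distribution evolves over time. As an illustrative sketch only (not the paper's exact instantiation; all names below are hypothetical), a discrete-state analogue estimates a transition kernel from observed follower transitions and pushes the empirical mean-field distribution forward through it:

```python
import numpy as np

def empirical_transition(states, next_states, n_states):
    """Estimate a row-stochastic transition matrix P[s, s'] from observed
    (state, next_state) pairs -- an empirical stand-in for the exact
    dynamics parameters that classical PDE-based methods would require."""
    counts = np.zeros((n_states, n_states))
    for s, s2 in zip(states, next_states):
        counts[s, s2] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0  # avoid division by zero for unvisited states
    return counts / rows

def evolve_mean_field(mu, P, horizon):
    """Propagate the population distribution: mu_{t+1}(s') = sum_s mu_t(s) P(s, s'),
    a discrete-time counterpart of the mean-field Fokker-Planck evolution."""
    for _ in range(horizon):
        mu = mu @ P
    return mu

# Toy usage: 3 states, transitions observed from simulated followers.
obs_s = [0, 0, 1, 1, 2, 2, 0, 1]
obs_s2 = [1, 1, 2, 0, 2, 2, 1, 2]
P = empirical_transition(obs_s, obs_s2, n_states=3)
mu0 = np.array([1.0, 0.0, 0.0])  # all followers start in state 0
mu5 = evolve_mean_field(mu0, P, horizon=5)
```

Because the population is propagated through the estimated kernel rather than by re-simulating every follower, such a model-informed step can reuse transition data across updates, which is the kind of data-efficiency gain the abstract attributes to TIRL.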


Bibliographic Details
Main Authors: LI, Pengdeng, YU, Runsheng, WANG, Xinrun, AN, Bo
Format: text
Language:English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Multiagent Learning; Reinforcement Learning; Artificial Intelligence and Robotics; Numerical Analysis and Scientific Computing
Online Access:https://ink.library.smu.edu.sg/sis_research/9127
https://ink.library.smu.edu.sg/context/sis_research/article/10130/viewcontent/29696_Transition_InformedRL_pvoa.pdf
Institution: Singapore Management University
DOI: 10.1609/aaai.v38i16.29696
Publication Date: 2024-02-01
Collection: Research Collection School Of Computing and Information Systems, InK@SMU
License: http://creativecommons.org/licenses/by-nc-nd/4.0/