Reinforcement Nash Equilibrium Solver

Nash Equilibrium (NE) is the canonical solution concept of game theory, providing an elegant tool for understanding rationality. Although a mixed-strategy NE exists in any game with finitely many players and actions, computing an NE in two-player or multi-player general-sum games is PPAD-complete. Various alternative solution concepts, e.g., Correlated Equilibrium (CE), and learning methods, e.g., fictitious play (FP), have been proposed to approximate NE. For convenience, we call these methods "inexact solvers", or "solvers" for short. However, the alternative solution concepts differ from NE, and the learning methods generally fail to converge to NE. Therefore, in this work, we propose the REinforcement Nash Equilibrium Solver (RENES), which trains a single policy to modify games of different sizes and applies the solvers to the modified games, where the obtained solution is evaluated on the original games. Specifically, our contributions are threefold. i) We represent games as α-rank response graphs and leverage graph neural networks (GNNs) to handle games of different sizes as inputs; ii) we use tensor decomposition, e.g., canonical polyadic (CP) decomposition, to fix the dimension of the modifying actions across games of different sizes; iii) we train the modifying strategy with the widely used proximal policy optimization (PPO) and apply the solvers to the modified games, where the obtained solutions are evaluated on the original games. Extensive experiments on large-scale normal-form games show that our method can further improve the approximation of NE for different solvers, i.e., α-rank, CE, FP, and projected replicator dynamics (PRD), and can generalize to unseen games.
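As an illustration of the pipeline the abstract describes, here is a minimal numpy sketch (not the authors' code): fictitious play stands in as the inexact solver, and a random low-rank additive payoff modification stands in for the CP-parameterised modifying action that RENES would select with a trained PPO policy. The solution of the modified game is then scored on the original game with a standard NE-gap measure (NashConv/exploitability); the paper's exact metric and games may differ.

```python
import numpy as np

def fictitious_play(A, B, iters=2000):
    """Fictitious play on a two-player game (row payoffs A, column payoffs B).
    Returns the empirical mixed strategies -- the 'inexact solver' output."""
    m, n = A.shape
    x_counts, y_counts = np.ones(m), np.ones(n)
    for _ in range(iters):
        x, y = x_counts / x_counts.sum(), y_counts / y_counts.sum()
        x_counts[np.argmax(A @ y)] += 1   # row best-responds to empirical y
        y_counts[np.argmax(x @ B)] += 1   # column best-responds to empirical x
    return x_counts / x_counts.sum(), y_counts / y_counts.sum()

def nash_conv(A, B, x, y):
    """Sum of the players' unilateral improvement gains; 0 iff (x, y) is an NE."""
    return (np.max(A @ y) - x @ A @ y) + (np.max(x @ B) - x @ B @ y)

rng = np.random.default_rng(0)
A, B = rng.uniform(size=(5, 5)), rng.uniform(size=(5, 5))

# Baseline: run the solver directly on the original game.
x0, y0 = fictitious_play(A, B)
base = nash_conv(A, B, x0, y0)

# RENES-style step (illustrative): a rank-1 additive payoff modification,
# analogous to one CP component; here it is random, whereas RENES would
# choose it with a PPO-trained policy conditioned on a GNN game embedding.
u, v = rng.normal(size=5), rng.normal(size=5)
dA, dB = 0.1 * np.outer(u, v), 0.1 * np.outer(v, u)
x1, y1 = fictitious_play(A + dA, B + dB)

# Crucially, the modified game's solution is evaluated on the ORIGINAL game.
modified = nash_conv(A, B, x1, y1)
```

With a random modification the gap may move in either direction; RENES's point is to learn modifications that systematically shrink it across games of different sizes.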

Bibliographic Details
Main Authors: WANG, Xinrun, YANG, Chang, LI, Shuxin, LI, Pengdeng, HUANG, Xiao, CHAN, Hau, AN, Bo
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects:
Online Access:https://ink.library.smu.edu.sg/sis_research/9878
https://ink.library.smu.edu.sg/context/sis_research/article/10878/viewcontent/0030.pdf
Institution: Singapore Management University
Language: English
id sg-smu-ink.sis_research-10878
record_format dspace
spelling sg-smu-ink.sis_research-108782025-01-02T09:14:16Z Reinforcement Nash Equilibrium Solver WANG, Xinrun YANG, Chang LI, Shuxin LI, Pengdeng HUANG, Xiao CHAN, Hau AN, Bo Nash Equilibrium (NE) is the canonical solution concept of game theory, providing an elegant tool for understanding rationality. Although a mixed-strategy NE exists in any game with finitely many players and actions, computing an NE in two-player or multi-player general-sum games is PPAD-complete. Various alternative solution concepts, e.g., Correlated Equilibrium (CE), and learning methods, e.g., fictitious play (FP), have been proposed to approximate NE. For convenience, we call these methods "inexact solvers", or "solvers" for short. However, the alternative solution concepts differ from NE, and the learning methods generally fail to converge to NE. Therefore, in this work, we propose the REinforcement Nash Equilibrium Solver (RENES), which trains a single policy to modify games of different sizes and applies the solvers to the modified games, where the obtained solution is evaluated on the original games. Specifically, our contributions are threefold. i) We represent games as α-rank response graphs and leverage graph neural networks (GNNs) to handle games of different sizes as inputs; ii) we use tensor decomposition, e.g., canonical polyadic (CP) decomposition, to fix the dimension of the modifying actions across games of different sizes; iii) we train the modifying strategy with the widely used proximal policy optimization (PPO) and apply the solvers to the modified games, where the obtained solutions are evaluated on the original games. Extensive experiments on large-scale normal-form games show that our method can further improve the approximation of NE for different solvers, i.e., α-rank, CE, FP, and projected replicator dynamics (PRD), and can generalize to unseen games.
2024-08-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9878 info:doi/10.24963/ijcai.2024/30 https://ink.library.smu.edu.sg/context/sis_research/article/10878/viewcontent/0030.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Nash equilibrium game theory reinforcement learning REinforcement Nash Equilibrium Solver (RENES) graph neural networks tensor decomposition proximal policy optimization α-rank correlated equilibrium fictitious play Artificial Intelligence and Robotics
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Nash equilibrium
game theory
reinforcement learning
REinforcement Nash Equilibrium Solver (RENES)
graph neural networks
tensor decomposition
proximal policy optimization
α-rank
correlated equilibrium
fictitious play
Artificial Intelligence and Robotics
spellingShingle Nash equilibrium
game theory
reinforcement learning
REinforcement Nash Equilibrium Solver (RENES)
graph neural networks
tensor decomposition
proximal policy optimization
α-rank
correlated equilibrium
fictitious play
Artificial Intelligence and Robotics
WANG, Xinrun
YANG, Chang
LI, Shuxin
LI, Pengdeng
HUANG, Xiao
CHAN, Hau
AN, Bo
Reinforcement Nash Equilibrium Solver
description Nash Equilibrium (NE) is the canonical solution concept of game theory, providing an elegant tool for understanding rationality. Although a mixed-strategy NE exists in any game with finitely many players and actions, computing an NE in two-player or multi-player general-sum games is PPAD-complete. Various alternative solution concepts, e.g., Correlated Equilibrium (CE), and learning methods, e.g., fictitious play (FP), have been proposed to approximate NE. For convenience, we call these methods "inexact solvers", or "solvers" for short. However, the alternative solution concepts differ from NE, and the learning methods generally fail to converge to NE. Therefore, in this work, we propose the REinforcement Nash Equilibrium Solver (RENES), which trains a single policy to modify games of different sizes and applies the solvers to the modified games, where the obtained solution is evaluated on the original games. Specifically, our contributions are threefold. i) We represent games as α-rank response graphs and leverage graph neural networks (GNNs) to handle games of different sizes as inputs; ii) we use tensor decomposition, e.g., canonical polyadic (CP) decomposition, to fix the dimension of the modifying actions across games of different sizes; iii) we train the modifying strategy with the widely used proximal policy optimization (PPO) and apply the solvers to the modified games, where the obtained solutions are evaluated on the original games. Extensive experiments on large-scale normal-form games show that our method can further improve the approximation of NE for different solvers, i.e., α-rank, CE, FP, and projected replicator dynamics (PRD), and can generalize to unseen games.
format text
author WANG, Xinrun
YANG, Chang
LI, Shuxin
LI, Pengdeng
HUANG, Xiao
CHAN, Hau
AN, Bo
author_facet WANG, Xinrun
YANG, Chang
LI, Shuxin
LI, Pengdeng
HUANG, Xiao
CHAN, Hau
AN, Bo
author_sort WANG, Xinrun
title Reinforcement Nash Equilibrium Solver
title_short Reinforcement Nash Equilibrium Solver
title_full Reinforcement Nash Equilibrium Solver
title_fullStr Reinforcement Nash Equilibrium Solver
title_full_unstemmed Reinforcement Nash Equilibrium Solver
title_sort reinforcement nash equilibrium solver
publisher Institutional Knowledge at Singapore Management University
publishDate 2024
url https://ink.library.smu.edu.sg/sis_research/9878
https://ink.library.smu.edu.sg/context/sis_research/article/10878/viewcontent/0030.pdf
_version_ 1821237271542628352