Learning to collaborate in multi-module recommendation via multi-agent reinforcement learning without communication
With the rise of online e-commerce platforms, more and more customers prefer to shop online. To sell more products, online platforms introduce various modules to recommend items with different properties such as huge discounts. A web page often consists of different independent modules. The ranking policies of these modules are decided by different teams and optimized individually without cooperation, which might result in competition between modules. Thus, the global policy of the whole page could be sub-optimal. In this paper, we propose a novel multi-agent cooperative reinforcement learning approach with the restriction that different modules cannot communicate. Our contributions are three-fold. Firstly, inspired by a solution concept in game theory named correlated equilibrium, we design a signal network to promote cooperation of all modules by generating signals (vectors) for different modules. Secondly, an entropy-regularized version of the signal network is proposed to coordinate agents’ exploration of the optimal global policy. Furthermore, experiments based on real-world e-commerce data demonstrate that our algorithm obtains superior performance over baselines.
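The signal-network idea in the abstract lends itself to a compact illustration. Below is a minimal, hypothetical PyTorch sketch: a central signal network maps the shared page state to one signal vector per module, and each module's policy conditions only on its own observation plus its signal, so modules never exchange messages. All class names, dimensions, and the entropy coefficient are illustrative assumptions, not the authors' implementation; in particular, the paper regularizes the signal network itself, while this sketch, as a simplification, applies the entropy bonus to the action distributions the signals induce.

```python
# Hypothetical sketch of a centralized signal network coordinating
# module agents that cannot communicate with each other.
import torch
import torch.nn as nn

# Illustrative sizes, not taken from the paper.
N_MODULES, STATE_DIM, OBS_DIM, SIGNAL_DIM, N_ACTIONS = 3, 16, 8, 4, 5

class SignalNetwork(nn.Module):
    """Maps the shared page state to one signal vector per module."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_MODULES * SIGNAL_DIM),
        )

    def forward(self, state):
        return self.net(state).view(-1, N_MODULES, SIGNAL_DIM)

class ModulePolicy(nn.Module):
    """One module's ranking policy; sees only its own observation
    and its signal -- no inter-module communication."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + SIGNAL_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, obs, signal):
        logits = self.net(torch.cat([obs, signal], dim=-1))
        return torch.distributions.Categorical(logits=logits)

signal_net = SignalNetwork()
policies = [ModulePolicy() for _ in range(N_MODULES)]

state = torch.randn(1, STATE_DIM)                 # shared page context
obs = [torch.randn(1, OBS_DIM) for _ in range(N_MODULES)]
signals = signal_net(state)                       # (1, N_MODULES, SIGNAL_DIM)

# Each module acts independently given its signal.
dists = [pi(o, signals[:, i]) for i, (pi, o) in enumerate(zip(policies, obs))]
actions = [d.sample() for d in dists]

# REINFORCE-style update with an entropy bonus that encourages the
# modules' induced action distributions to keep exploring jointly.
reward = torch.tensor(1.0)                        # placeholder global reward
log_probs = torch.stack([d.log_prob(a) for d, a in zip(dists, actions)]).sum()
entropy = torch.stack([d.entropy() for d in dists]).sum()
loss = -(reward * log_probs + 0.01 * entropy)
loss.backward()                                   # gradients also reach signal_net
```

Because every module's logits depend on its signal, the gradient of the shared reward flows back into the signal network, which is how a correlated-equilibrium-style coordinator can steer otherwise independent policies without any module-to-module channel.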
Saved in:
Main Authors: HE, Xu; AN, Bo; LI, Yanghua; CHEN, Haikai; WANG, Rundong; WANG, Xinrun; YU, Runsheng; LI, Xin; WANG, Zhirong
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2020
Subjects: Reinforcement learning; Artificial Intelligence and Robotics; E-Commerce; Numerical Analysis and Scientific Computing
Online Access: https://ink.library.smu.edu.sg/sis_research/9143
https://ink.library.smu.edu.sg/context/sis_research/article/10146/viewcontent/3383313.3412233_pv.pdf
Institution: Singapore Management University
id | sg-smu-ink.sis_research-10146
record_format | dspace
spelling | sg-smu-ink.sis_research-10146; 2024-08-01T09:22:46Z; Learning to collaborate in multi-module recommendation via multi-agent reinforcement learning without communication; HE, Xu; AN, Bo; LI, Yanghua; CHEN, Haikai; WANG, Rundong; WANG, Xinrun; YU, Runsheng; LI, Xin; WANG, Zhirong; With the rise of online e-commerce platforms, more and more customers prefer to shop online. To sell more products, online platforms introduce various modules to recommend items with different properties such as huge discounts. A web page often consists of different independent modules. The ranking policies of these modules are decided by different teams and optimized individually without cooperation, which might result in competition between modules. Thus, the global policy of the whole page could be sub-optimal. In this paper, we propose a novel multi-agent cooperative reinforcement learning approach with the restriction that different modules cannot communicate. Our contributions are three-fold. Firstly, inspired by a solution concept in game theory named correlated equilibrium, we design a signal network to promote cooperation of all modules by generating signals (vectors) for different modules. Secondly, an entropy-regularized version of the signal network is proposed to coordinate agents’ exploration of the optimal global policy. Furthermore, experiments based on real-world e-commerce data demonstrate that our algorithm obtains superior performance over baselines. 2020-09-01T07:00:00Z; text; application/pdf; https://ink.library.smu.edu.sg/sis_research/9143; info:doi/10.1145/3383313.3412233; https://ink.library.smu.edu.sg/context/sis_research/article/10146/viewcontent/3383313.3412233_pv.pdf; http://creativecommons.org/licenses/by-nc-nd/4.0/; Research Collection School Of Computing and Information Systems; eng; Institutional Knowledge at Singapore Management University; Reinforcement learning; Artificial Intelligence and Robotics; E-Commerce; Numerical Analysis and Scientific Computing
institution | Singapore Management University
building | SMU Libraries
continent | Asia
country | Singapore
content_provider | SMU Libraries
collection | InK@SMU
language | English
topic | Reinforcement learning; Artificial Intelligence and Robotics; E-Commerce; Numerical Analysis and Scientific Computing
spellingShingle | Reinforcement learning; Artificial Intelligence and Robotics; E-Commerce; Numerical Analysis and Scientific Computing; HE, Xu; AN, Bo; LI, Yanghua; CHEN, Haikai; WANG, Rundong; WANG, Xinrun; YU, Runsheng; LI, Xin; WANG, Zhirong; Learning to collaborate in multi-module recommendation via multi-agent reinforcement learning without communication
description | With the rise of online e-commerce platforms, more and more customers prefer to shop online. To sell more products, online platforms introduce various modules to recommend items with different properties such as huge discounts. A web page often consists of different independent modules. The ranking policies of these modules are decided by different teams and optimized individually without cooperation, which might result in competition between modules. Thus, the global policy of the whole page could be sub-optimal. In this paper, we propose a novel multi-agent cooperative reinforcement learning approach with the restriction that different modules cannot communicate. Our contributions are three-fold. Firstly, inspired by a solution concept in game theory named correlated equilibrium, we design a signal network to promote cooperation of all modules by generating signals (vectors) for different modules. Secondly, an entropy-regularized version of the signal network is proposed to coordinate agents’ exploration of the optimal global policy. Furthermore, experiments based on real-world e-commerce data demonstrate that our algorithm obtains superior performance over baselines.
format | text
author | HE, Xu; AN, Bo; LI, Yanghua; CHEN, Haikai; WANG, Rundong; WANG, Xinrun; YU, Runsheng; LI, Xin; WANG, Zhirong
author_facet | HE, Xu; AN, Bo; LI, Yanghua; CHEN, Haikai; WANG, Rundong; WANG, Xinrun; YU, Runsheng; LI, Xin; WANG, Zhirong
author_sort | HE, Xu
title | Learning to collaborate in multi-module recommendation via multi-agent reinforcement learning without communication
title_short | Learning to collaborate in multi-module recommendation via multi-agent reinforcement learning without communication
title_full | Learning to collaborate in multi-module recommendation via multi-agent reinforcement learning without communication
title_fullStr | Learning to collaborate in multi-module recommendation via multi-agent reinforcement learning without communication
title_full_unstemmed | Learning to collaborate in multi-module recommendation via multi-agent reinforcement learning without communication
title_sort | learning to collaborate in multi-module recommendation via multi-agent reinforcement learning without communication
publisher | Institutional Knowledge at Singapore Management University
publishDate | 2020
url | https://ink.library.smu.edu.sg/sis_research/9143 https://ink.library.smu.edu.sg/context/sis_research/article/10146/viewcontent/3383313.3412233_pv.pdf
_version_ | 1814047754492575744