Data-efficient multi-agent reinforcement learning

Given reinforcement learning's great success across a suite of single-agent environments, it is natural to consider its application to environments that more closely mimic the real world. One such class is decentralised multi-agent environments, which capture the many independent agents in the real world, each with its own goals. The decentralisation of state information, together with the constraints that local observability imposes on agent behaviour, makes this a challenging problem domain. Fortunately, a handful of powerful algorithms already operate in the cooperative multi-agent setting, such as QMIX, which enforces that the joint action-value is monotonic in the per-agent values, allowing the joint action-value to be maximised in linear time during off-policy learning. This work, however, explores a different direction in multi-agent reinforcement learning: learning from the environment using fewer samples. We examine multiple approaches in this space, ranging from injecting new learning signals to learning better representations of the state space, and then take a deeper dive into algorithms based on representation learning, which have the greater potential to apply to more learning algorithms.
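The QMIX claim in the description rests on one structural constraint: if every weight that mixes the per-agent values into the joint value is non-negative, the joint action-value is monotonic in each per-agent value, so each agent can maximise its own value greedily and the joint argmax decomposes per agent. The sketch below is an illustrative, hypothetical implementation of that idea, not code from the thesis; it assumes PyTorch, and names such as MonotonicMixer are ours.

# Minimal sketch of a QMIX-style monotonic mixing network (assumed PyTorch).
# Hypernetworks map the global state to mixing weights; abs() keeps those
# weights non-negative, which is what makes Q_tot monotonic in each Q_i.
import torch
import torch.nn as nn


class MonotonicMixer(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks conditioned on the global state.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                      nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) per-agent values; state: (batch, state_dim).
        batch = agent_qs.size(0)
        # Non-negative first-layer weights enforce monotonicity.
        w1 = torch.abs(self.hyper_w1(state)).view(batch, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(batch, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.view(batch, 1, self.n_agents), w1) + b1)
        # Non-negative second-layer weights; the state-dependent bias is unconstrained.
        w2 = torch.abs(self.hyper_w2(state)).view(batch, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(batch, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2  # (batch, 1, 1)
        return q_tot.view(batch, 1)


# Toy usage: 3 agents, a 10-dimensional global state.
mixer = MonotonicMixer(n_agents=3, state_dim=10)
q_tot = mixer(torch.rand(4, 3), torch.rand(4, 10))
print(q_tot.shape)  # torch.Size([4, 1])

The abs() on the hypernetwork outputs is the essential choice: any non-negativity transform would serve, and with it each agent's greedy action with respect to its own value also maximises Q_tot, which is why the joint maximisation is linear in the number of agents.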


Bibliographic Details
Main Author: Wong, Reuben Yuh Sheng
Other Authors: Bo An
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: https://hdl.handle.net/10356/163136
Institution: Nanyang Technological University
Collection: DR-NTU
School: School of Computer Science and Engineering
Supervisor: Bo An (boan@ntu.edu.sg)
Degree: Bachelor of Engineering (Computer Science)
Citation: Wong, R. Y. S. (2022). Data-efficient multi-agent reinforcement learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/163136