Self-organizing neural architectures and multi-agent cooperative reinforcement learning


Bibliographic Details
Main Author: Xiao, Dan
Other Authors: Tan Ah Hwee
Format: Theses and Dissertations
Language: English
Published: 2010
Subjects:
Online Access:https://hdl.handle.net/10356/42406
Institution: Nanyang Technological University
Description
Summary: Multi-agent systems, wherein multiple agents perform tasks jointly through their interaction, are a fairly well-studied problem. Many approaches to multi-agent learning exist; among them, reinforcement learning is widely used because it does not require an explicit model of the environment. However, current multi-agent reinforcement learning approaches remain limited in adaptability and scalability in complex and specialized multi-agent domains. In any multi-agent reinforcement learning system, the two major considerations are the reinforcement learning method used and the cooperative strategy among agents. In this research work, we propose to adopt a self-organizing neural network model, named Temporal Difference - Fusion Architecture for Learning, COgnition, and Navigation (TD-FALCON), for multi-agent reinforcement learning. TD-FALCON performs online and incremental learning in real time, with or without immediate reward signals, thus enabling an agent to learn effectively in a dynamic environment.
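As a rough illustration of the temporal-difference learning that TD-FALCON builds on (a sketch of standard tabular TD Q-learning, not the thesis's actual self-organizing neural implementation; the state/action names and learning parameters below are illustrative assumptions):

```python
# Hedged sketch: one tabular TD(0) Q-learning step,
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
# This is a generic TD update, not TD-FALCON itself; alpha, gamma,
# and the toy state/action space are illustrative choices.
from collections import defaultdict

def td_q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """Apply one temporal-difference Q-learning update in place."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q

Q = defaultdict(float)           # Q-values default to 0.0
actions = ["left", "right"]
# Agent in state 0 takes "right", receives reward 1.0, lands in state 1.
td_q_update(Q, 0, "right", 1.0, 1, actions)
```

With all values initialized to zero, this first update moves Q(0, "right") toward the immediate reward by the learning rate, i.e. to 0.5 here.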