Integrating knowledge compilation with reinforcement learning for routes

Sequential multiagent decision-making under partial observability and uncertainty poses several challenges. Although multiagent reinforcement learning (MARL) approaches have improved scalability, combinatorial domains remain difficult because random exploration by agents is unlikely to generate useful reward signals. We address cooperative multiagent pathfinding under uncertainty and partial observability, where agents move from their respective sources to destinations while also satisfying constraints (e.g., visiting landmarks). Our main contributions are: (1) compiling domain knowledge, such as the underlying graph connectivity and domain constraints, into propositional-logic-based decision diagrams; (2) developing modular techniques to integrate such knowledge with deep MARL algorithms; and (3) developing fast algorithms to query the compiled knowledge for accelerated episode simulation in RL. Empirically, our approach tractably represents various types of domain constraints and significantly outperforms previous MARL approaches in both sample complexity and solution quality on a number of instances.
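
A minimal sketch of the core idea, for orientation only: precompile which moves are feasible on the underlying road graph, then let the simulator draw exploratory actions only from that compiled set, so random exploration never leaves the network and a landmark-visit constraint can be checked cheaply during an episode. The toy graph, function names, and reward scheme below are illustrative assumptions, not the paper's implementation or API.

# Illustrative sketch (assumed names and toy data, not the authors' code):
# "compile" graph connectivity into a feasibility lookup, then restrict an
# agent's random exploration to compiled-valid moves while tracking a
# landmark-visit constraint.

import random

# Toy road graph: node -> set of directly reachable neighbour nodes.
GRAPH = {
    0: {1, 2},
    1: {0, 3},
    2: {0, 3},
    3: {1, 2, 4},
    4: {3},
}

def build_valid_moves(graph):
    """Stand-in for a compiled decision diagram: a cheap query that returns
    the successor nodes consistent with the road network."""
    return {node: sorted(neighbours) for node, neighbours in graph.items()}

def sample_episode(valid_moves, source, destination, landmark, max_steps=50, seed=0):
    """Random exploration restricted to compiled-valid moves; reward 1.0 only
    if the destination is reached after the landmark has been visited."""
    rng = random.Random(seed)
    node, visited_landmark, trajectory = source, False, [source]
    for _ in range(max_steps):
        node = rng.choice(valid_moves[node])   # only feasible successors
        trajectory.append(node)
        visited_landmark = visited_landmark or node == landmark
        if node == destination and visited_landmark:
            return trajectory, 1.0             # constraint-satisfying route
    return trajectory, 0.0                     # episode ended without success

if __name__ == "__main__":
    moves = build_valid_moves(GRAPH)
    path, reward = sample_episode(moves, source=0, destination=4, landmark=1)
    print(path, reward)

In the paper itself the compiled structure is a propositional-logic decision diagram rather than a plain lookup table, which is what makes richer route constraints tractable to represent and query at simulation time.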

Bibliographic Details
Main Authors: LING, Jiajing, CHANDAK, Kushagra, KUMAR, Akshat
Format: text (application/pdf)
Language: English
Published: Institutional Knowledge at Singapore Management University, 2021
Collection: Research Collection School Of Computing and Information Systems
Subjects: Databases and Information Systems
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Online Access: https://ink.library.smu.edu.sg/sis_research/6898
https://ink.library.smu.edu.sg/context/sis_research/article/7901/viewcontent/Integrating_Knowledge_Compilation_with_Reinforcement_Learning_for_Routes.pdf
Institution: Singapore Management University