Towards Explaining Sequences of Actions in Multi-Agent Deep Reinforcement Learning Models

Although Multi-agent Deep Reinforcement Learning (MADRL) has shown promising results in solving complex real-world problems, the applicability and reliability of MADRL models are often limited by a lack of understanding of their inner workings for explaining the decisions made. To address this issue, this paper proposes a novel method for explaining MADRL by generalizing the sequences of action events performed by agents into high-level abstract strategies using a spatio-temporal neural network model. Specifically, an interval-based memory retrieval procedure is developed to generalize the encoded sequences of action events over time into short sequential patterns. In addition, two abstraction algorithms are introduced, one for abstracting action events across multiple agents and the other for further abstracting the episodes over time into short sequential patterns, which can then be translated into symbolic form for interpretation. We evaluate the proposed method using the StarCraft Multi Agent Challenge (SMAC) benchmark task, which shows that the method is able to derive high-level explanations of MADRL models at various levels of granularity.


Bibliographic Details
Main Authors: KHAING, Phyo Wai; GENG, Minghong; SUBAGDJA, Budhitama; PATERIA, Shubham; TAN, Ah-hwee
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects: Multi Agent Deep Reinforcement Learning; Explainable Artificial Intelligence; Explainable Deep Reinforcement Learning; Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/8076
https://ink.library.smu.edu.sg/context/sis_research/article/9079/viewcontent/p2325.pdf
id sg-smu-ink.sis_research-9079
record_format dspace
spelling sg-smu-ink.sis_research-9079 2025-03-10T04:04:11Z
Towards Explaining Sequences of Actions in Multi-Agent Deep Reinforcement Learning Models
KHAING, Phyo Wai; GENG, Minghong; SUBAGDJA, Budhitama; PATERIA, Shubham; TAN, Ah-hwee
2023-06-01T07:00:00Z text application/pdf
https://ink.library.smu.edu.sg/sis_research/8076
info:doi/10.5555/3545946.3598922
https://ink.library.smu.edu.sg/context/sis_research/article/9079/viewcontent/p2325.pdf
http://creativecommons.org/licenses/by-nc-nd/4.0/
Research Collection School Of Computing and Information Systems
eng
Institutional Knowledge at Singapore Management University
Multi Agent Deep Reinforcement Learning; Explainable Artificial Intelligence; Explainable Deep Reinforcement Learning; Databases and Information Systems
institution Singapore Management University
building SMU Libraries
continent Asia
country Singapore
content_provider SMU Libraries
collection InK@SMU
language English
topic Multi Agent Deep Reinforcement Learning
Explainable Artificial Intelligence
Explainable Deep Reinforcement Learning
Databases and Information Systems
description Although Multi-agent Deep Reinforcement Learning (MADRL) has shown promising results in solving complex real-world problems, the applicability and reliability of MADRL models are often limited by a lack of understanding of their inner workings for explaining the decisions made. To address this issue, this paper proposes a novel method for explaining MADRL by generalizing the sequences of action events performed by agents into high-level abstract strategies using a spatio-temporal neural network model. Specifically, an interval-based memory retrieval procedure is developed to generalize the encoded sequences of action events over time into short sequential patterns. In addition, two abstraction algorithms are introduced, one for abstracting action events across multiple agents and the other for further abstracting the episodes over time into short sequential patterns, which can then be translated into symbolic form for interpretation. We evaluate the proposed method using the StarCraft Multi Agent Challenge (SMAC) benchmark task, which shows that the method is able to derive high-level explanations of MADRL models at various levels of granularity.
format text
author KHAING, Phyo Wai
GENG, Minghong
SUBAGDJA, Budhitama
PATERIA, Shubham
TAN, Ah-hwee
title Towards Explaining Sequences of Actions in Multi-Agent Deep Reinforcement Learning Models
publisher Institutional Knowledge at Singapore Management University
publishDate 2023
url https://ink.library.smu.edu.sg/sis_research/8076
https://ink.library.smu.edu.sg/context/sis_research/article/9079/viewcontent/p2325.pdf
_version_ 1827070828046450688