Towards Explaining Sequences of Actions in Multi-Agent Deep Reinforcement Learning Models


Full Description

Bibliographic Details
Main Authors: KHAING, Phyo Wai, GENG, Minghong, SUBAGDJA, Budhitama, PATERIA, Shubham, TAN, Ah-hwee
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2023
Subjects:
Online Access: https://ink.library.smu.edu.sg/sis_research/8076
https://ink.library.smu.edu.sg/context/sis_research/article/9079/viewcontent/p2325.pdf
Physical Description
Summary: Although Multi-agent Deep Reinforcement Learning (MADRL) has shown promising results in solving complex real-world problems, the applicability and reliability of MADRL models are often limited by a poor understanding of their inner workings, which makes it hard to explain the decisions made. To address this issue, this paper proposes a novel method for explaining MADRL by generalizing the sequences of action events performed by agents into high-level abstract strategies using a spatio-temporal neural network model. Specifically, an interval-based memory retrieval procedure is developed to generalize the encoded sequences of action events over time into short sequential patterns. In addition, two abstraction algorithms are introduced, one for abstracting action events across multiple agents and the other for further abstracting the episodes over time into short sequential patterns, which can then be translated into symbolic form for interpretation. We evaluate the proposed method using the StarCraft Multi-Agent Challenge (SMAC) benchmark task, which shows that the method is able to derive high-level explanations of MADRL models at various levels of granularity.
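The two abstraction steps described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the function names, the choice of abstracting a timestep into agent-anonymous action counts, and the run-length compression over time are all illustrative assumptions standing in for the interval-based memory retrieval and abstraction procedures the abstract describes.

```python
# Illustrative sketch only: abstracts per-agent action events into
# high-level events, then compresses an episode into a short
# sequential pattern. Names and scheme are hypothetical.
from itertools import groupby

def abstract_across_agents(joint_actions):
    """Abstract one timestep's per-agent actions into a high-level
    event: a sorted tuple of (action, count) pairs, dropping agent
    identity (abstraction across multiple agents)."""
    counts = {}
    for action in joint_actions:
        counts[action] = counts.get(action, 0) + 1
    return tuple(sorted(counts.items()))

def abstract_over_time(episode):
    """Collapse consecutive repeats of the same abstract event into a
    short sequential pattern of (event, duration) pairs, which could
    then be rendered in symbolic form (abstraction over time)."""
    events = [abstract_across_agents(step) for step in episode]
    return [(event, sum(1 for _ in group))
            for event, group in groupby(events)]

# Toy episode: 3 agents acting over 4 timesteps.
episode = [
    ["move", "move", "attack"],
    ["move", "move", "attack"],
    ["attack", "attack", "attack"],
    ["retreat", "retreat", "retreat"],
]
pattern = abstract_over_time(episode)
# Four timesteps compress to three high-level events: two steps of
# "2 move + 1 attack", one step of "3 attack", one step of "3 retreat".
```

Such a pattern can be read as a symbolic strategy summary ("advance while one agent covers, then focus fire, then withdraw"), at a coarser granularity than raw per-agent action logs.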