Optimizing evasive strategies for an evader with imperfect vision capacity

Bibliographic Details
Main Authors: Di, Kai, Yang, Shaofu, Wang, Wanyuan, Yan, Fuhan, Xing, Haokun, Jiang, Jiuchuan, Jiang, Yichuan
Other Authors: School of Computer Science and Engineering
Format: Article
Language:English
Published: 2021
Online Access: https://hdl.handle.net/10356/151334
Institution: Nanyang Technological University
Description
Summary: The multiagent pursuit-evasion problem has attracted considerable interest in recent years, and a common assumption is that the evader has perfect vision capacity. In the real world, however, the evader's vision capacity is always imperfect: it may have noisy observations within a limited field of view. Such an imperfect vision capacity makes the evader sense incomplete and inaccurate information from the environment, and thus it will make suboptimal decisions. To address this challenge, we decompose the problem into two subproblems: 1) optimizing evasive strategies with a limited field of view, and 2) optimizing evasive strategies with noisy observation. For the evader with a limited field of view, we propose a memory-based ‘worst case’ algorithm, the idea of which is to store the locations of pursuers seen before and to estimate the possible region of the pursuers outside the evader's sight. For the evader with noisy observation, we propose a value-based reinforcement learning algorithm that trains the evader offline and applies the learned strategy in the actual environment, aiming to reduce the impact of the uncertainty created by inaccurate information. Furthermore, we combine the two algorithms, trading off between them, in a memory-based reinforcement learning algorithm that uses the estimated pursuer locations to modify the input state set of the reinforcement learning algorithm. Finally, we evaluate our algorithms extensively in simulation and conclude that, in this imperfect vision capacity setting, they significantly improve the escape success rate of the evader.
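
As a rough illustration of the memory-based ‘worst case’ idea, the sketch below grows a reachable disk around each pursuer's last observed location at an assumed maximum pursuer speed, so the evader can reason about where unseen pursuers could be. This is not the authors' implementation; the class, its interface, and the known-maximum-speed assumption are invented for illustration.

```python
import math

class PursuerMemory:
    """Stores the last seen location of each pursuer and bounds where it
    could be now. Sketch only: assumes the pursuers' maximum speed is
    known to the evader."""

    def __init__(self, max_pursuer_speed):
        self.max_speed = max_pursuer_speed
        self.last_seen = {}  # pursuer_id -> (x, y, time_seen)

    def observe(self, pursuer_id, x, y, t):
        # Record a pursuer whenever it is inside the evader's field of view.
        self.last_seen[pursuer_id] = (x, y, t)

    def possible_region(self, pursuer_id, t):
        # Worst case: since it was last seen, the pursuer may have moved
        # at max speed in any direction, so its possible region is a disk
        # centred on the last seen location.
        x, y, t_seen = self.last_seen[pursuer_id]
        return (x, y), self.max_speed * (t - t_seen)

    def worst_case_distance(self, pursuer_id, ex, ey, t):
        # Lower bound on the evader-pursuer distance: distance from the
        # evader at (ex, ey) to the nearest point of the disk.
        (cx, cy), radius = self.possible_region(pursuer_id, t)
        return max(0.0, math.hypot(ex - cx, ey - cy) - radius)
```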
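
For the value-based reinforcement learning component, the abstract only states that the evader is trained offline and the learned strategy is then applied in the actual environment. A minimal tabular Q-learning loop, assuming a hypothetical simulator `env` with `reset()`, `step(action)`, and `num_actions`, could look like this (the paper's actual state encoding, rewards, and algorithm details may differ):

```python
import random
from collections import defaultdict

def train_evader(env, episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Offline training loop: learns a Q-table in a simulator so the
    learned evasive strategy can later be applied under noisy
    observations."""
    q = defaultdict(lambda: [0.0] * env.num_actions)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy choice over the evader's candidate moves.
            if random.random() < epsilon:
                action = random.randrange(env.num_actions)
            else:
                action = max(range(env.num_actions), key=lambda a: q[state][a])
            next_state, reward, done = env.step(action)
            # Standard Q-learning backup toward the one-step target.
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q
```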
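
Finally, the combined memory-based reinforcement learning algorithm uses the estimated locations to modify the input state set. One plausible reading, sketched below with hypothetical helpers (`grid.discretize` and the `PursuerMemory` above), is to fill in each out-of-view pursuer's state entry with the centre of its worst-case region instead of leaving it missing:

```python
def build_state(visible, memory, t, grid):
    """Assemble the evader's RL state: use the observed location when a
    pursuer is in view, otherwise fall back to the memory-based estimate
    (here, the centre of its worst-case disk)."""
    cells = []
    for pursuer_id in sorted(memory.last_seen):
        if pursuer_id in visible:
            x, y = visible[pursuer_id]
        else:
            (x, y), _radius = memory.possible_region(pursuer_id, t)
        cells.append(grid.discretize(x, y))
    return tuple(cells)
```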