Optimal persistent monitoring using reinforcement learning
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/149369
Institution: Nanyang Technological University
Summary: A persistent monitoring problem (PMP) arises when a dynamically changing environment cannot be fully covered by a stationary group of agents. In contrast to constant monitoring, where every target is monitored simultaneously, persistent monitoring requires fewer agents while still providing effective and reliable prediction with a minimized uncertainty metric. This project implements Reinforcement Learning (RL) in a multi-target monitoring simulation with a single agent. The paper presents a comparative analysis of five RL implementations: Deep Q Network (DQN), Double Deep Q Network (DDQN), Dueling Deep Q Network (Dueling DQN), Multi-Objective Deep Reinforcement Learning (MODRL) and Hierarchical Deep Q Network (HDQN). Different designs of the reward function and stopping condition are tested and evaluated to improve the models' decision-making capability. The paper also reports experience with goal decomposition, a new feature-extension approach that solves the persistent monitoring problem without modifying images, and an improved method for highly dynamic environments. These proposed approaches significantly enhance the model's performance and stability.
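The summary compares DQN against Double DQN (DDQN), whose core difference lies in how the bootstrap target is computed. The sketch below is purely illustrative of that general distinction and is not taken from the project's code: DQN lets the target network both select and evaluate the next action, while DDQN selects with the online network and evaluates with the target network, which is known to reduce overestimation of Q-values.

```python
def dqn_target(q_online_next, q_target_next, reward, gamma):
    # Vanilla DQN target: the target network both selects and
    # evaluates the next action (max over its own estimates).
    return reward + gamma * max(q_target_next)

def ddqn_target(q_online_next, q_target_next, reward, gamma):
    # Double DQN target: the online network selects the action,
    # the target network evaluates it.
    a = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return reward + gamma * q_target_next[a]

# Hypothetical Q-values for three actions at the next state.
q_online_next = [1.0, 2.5, 2.0]   # online network estimates
q_target_next = [1.2, 1.8, 2.2]   # target network estimates
print(dqn_target(q_online_next, q_target_next, 0.5, 0.99))   # 0.5 + 0.99 * 2.2
print(ddqn_target(q_online_next, q_target_next, 0.5, 0.99))  # 0.5 + 0.99 * 1.8
```

With these example values the two targets differ because the online network's greedy action (index 1) is not the target network's argmax (index 2), which is exactly the disagreement DDQN exploits to curb overestimation.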