Autonomous agents in snake game via deep reinforcement learning
Since DeepMind pioneered a deep reinforcement learning (DRL) model to play Atari games, DRL has become a commonly adopted method to enable agents to learn complex control policies in various video games. However, similar approaches may still fall short when applied to more challenging scenarios where reward signals are sparse and delayed. In this paper, we develop a refined DRL model that enables our autonomous agent to play the classical Snake Game, whose constraints become stricter as the game progresses. Specifically, we employ a convolutional neural network (CNN) trained with a variant of Q-learning. Moreover, we propose a carefully designed reward mechanism to properly train the network, adopt a training gap strategy to temporarily bypass training after the location of the target changes, and introduce a dual experience replay method that categorizes different experiences for better training efficacy. The experimental results show that our agent outperforms the baseline model and surpasses human-level performance in playing the Snake Game.
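The abstract names three training refinements (a shaped reward, a training gap after the food relocates, and dual experience replay) on top of a CNN trained with a variant of Q-learning. The paper's code is not reproduced in this record, so the Python sketch below is only an illustration under our own assumptions: the class name `DualReplayBuffer`, the buffer capacities, the reward-based split, the 50/50 sampling ratio, and the `q_learning_targets` helper are hypothetical, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' code. Buffer names, capacities,
# the sampling split, and the discount factor are assumptions.
import random
from collections import deque

import numpy as np


class DualReplayBuffer:
    """Two experience pools, so that the rare transitions carrying reward signal
    (eating food, dying) are not drowned out by the many ordinary moves."""

    def __init__(self, capacity=50_000):
        self.rewarding = deque(maxlen=capacity)  # transitions with non-zero reward
        self.ordinary = deque(maxlen=capacity)   # all remaining transitions

    def add(self, state, action, reward, next_state, done):
        pool = self.rewarding if reward != 0 else self.ordinary
        pool.append((state, action, reward, next_state, done))

    def sample(self, batch_size, rewarding_fraction=0.5):
        """Draw a mixed minibatch from both pools (it may be smaller early in
        training, when a pool does not yet hold enough transitions)."""
        n_rew = min(int(batch_size * rewarding_fraction), len(self.rewarding))
        n_ord = min(batch_size - n_rew, len(self.ordinary))
        batch = random.sample(list(self.rewarding), n_rew) + \
                random.sample(list(self.ordinary), n_ord)
        random.shuffle(batch)
        return batch


def q_learning_targets(batch, next_q_values, gamma=0.99):
    """One-step Q-learning targets r + gamma * max_a' Q(s', a'), with the
    bootstrap term dropped on terminal transitions. `next_q_values` is the
    (batch_size, num_actions) output of the target CNN on the next states."""
    rewards = np.array([t[2] for t in batch], dtype=np.float32)
    dones = np.array([float(t[4]) for t in batch], dtype=np.float32)
    return rewards + gamma * (1.0 - dones) * next_q_values.max(axis=1)
```

In a training loop of this shape, every transition would be pushed through `add()`, network updates would be skipped for a few steps after the food respawns (the "training gap"), and the CNN's Q-value for the taken action would be regressed toward `q_learning_targets()` on each sampled minibatch.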
Main Authors: WEI, Zhepei; WANG, Di; ZHANG, Ming; TAN, Ah-hwee; MIAO, Chunyan; ZHOU, You
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2018
Subjects: Deep reinforcement learning; Snake Game; autonomous agent; experience replay; Databases and Information Systems; Software Engineering
Online Access: https://ink.library.smu.edu.sg/sis_research/6073
https://ink.library.smu.edu.sg/context/sis_research/article/7076/viewcontent/ICA2018SnakeGame.pdf
Institution: Singapore Management University
id
sg-smu-ink.sis_research-7076
record_format
dspace
spelling
sg-smu-ink.sis_research-7076 2021-09-29T13:06:21Z; 2018-07-01T07:00:00Z; text; application/pdf; info:doi/10.1109/AGENTS.2018.8460004; http://creativecommons.org/licenses/by-nc-nd/4.0/; Research Collection School Of Computing and Information Systems
institution
Singapore Management University
building
SMU Libraries
continent
Asia
country
Singapore
content_provider
SMU Libraries
collection
InK@SMU
language
English
topic
Deep reinforcement learning; Snake Game; autonomous agent; experience replay; Databases and Information Systems; Software Engineering
description
Since DeepMind pioneered a deep reinforcement learning (DRL) model to play Atari games, DRL has become a commonly adopted method to enable agents to learn complex control policies in various video games. However, similar approaches may still fall short when applied to more challenging scenarios where reward signals are sparse and delayed. In this paper, we develop a refined DRL model that enables our autonomous agent to play the classical Snake Game, whose constraints become stricter as the game progresses. Specifically, we employ a convolutional neural network (CNN) trained with a variant of Q-learning. Moreover, we propose a carefully designed reward mechanism to properly train the network, adopt a training gap strategy to temporarily bypass training after the location of the target changes, and introduce a dual experience replay method that categorizes different experiences for better training efficacy. The experimental results show that our agent outperforms the baseline model and surpasses human-level performance in playing the Snake Game.
format
text
author
WEI, Zhepei; WANG, Di; ZHANG, Ming; TAN, Ah-hwee; MIAO, Chunyan; ZHOU, You
author_sort
WEI, Zhepei
title
Autonomous agents in snake game via deep reinforcement learning
publisher
Institutional Knowledge at Singapore Management University
publishDate
2018
url
https://ink.library.smu.edu.sg/sis_research/6073
https://ink.library.smu.edu.sg/context/sis_research/article/7076/viewcontent/ICA2018SnakeGame.pdf