PARALLEL MONTE CARLO METHOD IN GRID WORLD (REINFORCEMENT LEARNING) USING CUDA DYNAMIC PARALLELISM
Main Author:
Format: Theses
Language: Indonesian
Online Access: https://digilib.itb.ac.id/gdl/view/39712
Institution: Institut Teknologi Bandung
Summary: The parallel Monte Carlo method for reinforcement learning has been shown to accelerate agents' experience-quality gain per episode as the number of agents increases. Previous studies have experimented with up to 16 parallel agents. The rapid development of GPGPU, especially NVIDIA CUDA, has opened new possibilities for using a much larger number of parallel agents, but it also exposes a new problem: as the number of agents grows, so does the experience-sharing load each agent must handle. In this research, we propose two implementations using CUDA Dynamic Parallelism (CDP) to address this problem in a grid world: asynchronous parallel Monte Carlo and nested-asynchronous parallel Monte Carlo. The experiments showed that the implemented solutions gave up to a 22% performance gain, but as the number of agents and episodes increased, the overhead of CDP kernel calls overshadowed the gain.
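The record contains only the abstract, not the thesis code. As a rough, hypothetical sketch of the CDP pattern the abstract describes, the snippet below shows a parent kernel that launches one child kernel per agent, with each child rolling out a random-walk episode in a small grid world and updating a shared experience table. Every identifier here (parentController, runEpisode, GRID_W, and so on), the grid dimensions, and the random-walk policy are illustrative assumptions, not the thesis's actual implementation.

```cuda
// Minimal sketch of CUDA Dynamic Parallelism (CDP) for parallel
// Monte Carlo agents in a grid world. All names and parameters are
// illustrative assumptions, not the thesis's code.
// CDP requires relocatable device code; compile with, e.g.:
//   nvcc -arch=sm_70 -rdc=true cdp_mc.cu -lcudadevrt
#include <cstdio>
#include <curand_kernel.h>

#define GRID_W      8
#define GRID_H      8
#define EPISODE_LEN 64
#define N_AGENTS    32

// Child kernel: one agent rolls out a random-walk episode and
// accumulates visit counts as a stand-in for Monte Carlo returns.
__global__ void runEpisode(int agent, unsigned long long seed,
                           float *visitCounts)
{
    curandState rng;
    curand_init(seed, agent, 0, &rng);
    int x = 0, y = 0;                          // start state
    for (int t = 0; t < EPISODE_LEN; ++t) {
        int a = curand(&rng) & 3;              // 4 actions: N/S/E/W
        if      (a == 0 && y > 0)          --y;
        else if (a == 1 && y < GRID_H - 1) ++y;
        else if (a == 2 && x > 0)          --x;
        else if (a == 3 && x < GRID_W - 1) ++x;
        // Shared experience table: all agents update the same counts.
        atomicAdd(&visitCounts[y * GRID_W + x], 1.0f);
    }
}

// Parent kernel: one thread per agent, each issuing its own
// device-side child launch (the CDP calls whose per-launch
// overhead the abstract measures).
__global__ void parentController(unsigned long long seed,
                                 float *visitCounts)
{
    int agent = blockIdx.x * blockDim.x + threadIdx.x;
    if (agent < N_AGENTS) {
        runEpisode<<<1, 1>>>(agent, seed, visitCounts);
    }
}

int main()
{
    float *visitCounts;
    cudaMalloc(&visitCounts, GRID_W * GRID_H * sizeof(float));
    cudaMemset(visitCounts, 0, GRID_W * GRID_H * sizeof(float));

    parentController<<<1, N_AGENTS>>>(1234ULL, visitCounts);
    cudaDeviceSynchronize();   // waits for parent and all child grids

    float h[GRID_W * GRID_H];
    cudaMemcpy(h, visitCounts, sizeof(h), cudaMemcpyDeviceToHost);
    printf("visits to start state: %.0f\n", h[0]);
    cudaFree(visitCounts);
    return 0;
}
```

This pattern also makes the abstract's trade-off visible: each device-side launch carries a fixed scheduling cost, so as the number of agents and episodes grows, the accumulated CDP launch overhead can outweigh the gain from running episodes in parallel.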