Baconian: a unified model-based reinforcement learning library

Bibliographic Details
Main Author: Dong, Linsen
Other Authors: Wen, Yonggang
Format: Thesis-Master by Research
Language: English
Published: Nanyang Technological University, 2021
Online Access:https://hdl.handle.net/10356/146557
Institution: Nanyang Technological University
Description
Summary: Reinforcement Learning (RL) has become a trending research topic, with great success in outperforming humans on many tasks including video games, board games, and robotics control. By leveraging Deep Learning (DL), RL algorithms can consume a large volume of data without any prior knowledge of the system dynamics. However, this data requirement also limits applicability in many fields where data is costly to obtain. Model-based Reinforcement Learning (MBRL) is regarded as a promising way to achieve high data efficiency while maintaining comparable performance. MBRL learns a dynamics transition model of the system and uses it to facilitate and speed up policy search. However, the RL community lacks a satisfactory open-source library for conducting MBRL research. To fill this gap, we propose Baconian, an open-source, flexible, and user-friendly MBRL library. In this thesis, we describe the library in terms of its design principles, implementation, and programming guide, and we report various benchmark results. To achieve high flexibility, we adopt a modular design that separates the library into three components: the Experiment Manager, the Training Engine, and the Monitor. The implementation provides commonly used functionality, including parameter management and TensorFlow integration. Moreover, in the case study section we use Baconian to conduct RL experiments on real research problems. First, we use Baconian as the framework for tuning Dyna-style MBRL hyper-parameters in an online fashion; our proposed method matches or outperforms three baseline methods on all five tasks. Second, we apply RL algorithms with Baconian to online video bitrate selection, where our method outperforms the best baseline by 7.8% on the average bitrate metric.
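
For context on the Dyna-style MBRL setting the abstract refers to, the sketch below shows a minimal, generic Dyna-Q loop on a toy chain environment: real transitions update both the value function and a learned transition model, and the model then generates simulated transitions for additional planning updates. This is only an illustration of the general technique, not Baconian's actual API; all names in it (ChainEnv, dyna_q, n_planning_steps, and so on) are hypothetical.

# Minimal, self-contained Dyna-style (Dyna-Q) sketch on a toy deterministic
# chain MDP. Generic illustration only; NOT Baconian's API.
import random
from collections import defaultdict

class ChainEnv:
    """5-state chain: moving right reaches the goal (reward 1), otherwise reward 0."""
    n_states, n_actions = 5, 2  # actions: 0 = left, 1 = right

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = min(self.s + 1, self.n_states - 1) if a == 1 else max(self.s - 1, 0)
        done = self.s == self.n_states - 1
        return self.s, (1.0 if done else 0.0), done

def dyna_q(env, episodes=50, n_planning_steps=10, alpha=0.1, gamma=0.95, eps=0.1):
    q = defaultdict(float)   # Q-values: (state, action) -> value
    model = {}               # learned model: (state, action) -> (reward, next_state)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action in the real environment
            a = random.randrange(env.n_actions) if random.random() < eps \
                else max(range(env.n_actions), key=lambda a_: q[(s, a_)])
            s2, r, done = env.step(a)
            # direct RL update from the real transition
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in range(env.n_actions)) - q[(s, a)])
            # update the (deterministic) model, then plan with simulated transitions
            model[(s, a)] = (r, s2)
            for _ in range(n_planning_steps):
                (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                q[(ps, pa)] += alpha * (pr + gamma * max(q[(ps2, b)] for b in range(env.n_actions)) - q[(ps, pa)])
            s = s2
    return q

if __name__ == "__main__":
    q = dyna_q(ChainEnv())
    print({s: max(q[(s, a)] for a in range(2)) for s in range(5)})

The planning loop replays transitions from the learned model, which is what gives Dyna-style MBRL its data efficiency: each real environment step is amortized over several simulated updates. The online hyper-parameter tuning case study in the thesis concerns quantities such as the number of planning steps per real step, set here by the hypothetical n_planning_steps parameter.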