Financial portfolio optimization: an autoregressive deep reinforcement learning algorithm with learned intrinsic rewards


Full description

Bibliographic Details
Main Author: Lim, Magdalene Hui Qi
Other Authors: Patrick Pun Chi Seng
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175650
Institution: Nanyang Technological University
Description
Summary: Deep Reinforcement Learning (DRL) has had notable success in sequential learning tasks in applied settings involving high-dimensional state-action spaces, sparking the interest of the finance research community. DRL strategies have been applied to the classical portfolio optimization problem: a dynamic, inter-temporal process of determining optimal portfolio allocations to maximize long-run returns. However, all existing DRL portfolio management strategies overlook the interdependencies between subactions that are inherent to this specific task. We propose a unified framework of two existing concepts, autoregressive DRL architectures and learned intrinsic rewards, to integrate the benefits of modelling subaction dependencies and of modifying the reward function to guide learning. We backtest our proposed strategy against seven other benchmark strategies and empirically demonstrate that it achieves the best risk-adjusted returns. Most remarkably, on median testing results, our proposed strategy is one of only two approaches that beat market returns, while being exposed to less than a third of market risk. Moreover, we provide insights on the effects of learned intrinsic rewards against the backdrop of the autoregressive DRL architecture, which enables individual intrinsic rewards to be learned at the level of subactions, potentially addressing the credit assignment problem in RL.
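
This record does not reproduce the thesis itself, so the following PyTorch sketch is only a rough illustration of how the two concepts named in the abstract can fit together: an autoregressive policy that emits each asset's portfolio weight conditioned on the subactions already chosen, paired with a module that outputs one learned intrinsic reward per subaction. The class names, the GRU-based conditioning, and the coefficient beta are illustrative assumptions, not details taken from the thesis.

import torch
import torch.nn as nn

class AutoregressivePolicy(nn.Module):
    """Emits portfolio weights one asset at a time, each subaction
    conditioned on the market state and the subactions chosen so far."""

    def __init__(self, state_dim, n_assets, hidden_dim=64):
        super().__init__()
        self.n_assets = n_assets
        self.state_encoder = nn.Linear(state_dim, hidden_dim)
        # The recurrent cell carries the dependency on previously emitted subactions.
        self.cell = nn.GRUCell(1, hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, state):
        h = torch.tanh(self.state_encoder(state))   # (batch, hidden)
        prev = torch.zeros(state.size(0), 1)        # dummy input for the first subaction
        scores = []
        for _ in range(self.n_assets):
            h = self.cell(prev, h)
            score = self.head(h)                    # unnormalized weight for this asset
            scores.append(score)
            prev = score.detach()                   # feed the subaction back in
        scores = torch.cat(scores, dim=1)           # (batch, n_assets)
        return torch.softmax(scores, dim=1)         # long-only weights summing to 1

class SubactionIntrinsicReward(nn.Module):
    """Learned intrinsic reward assigned per subaction (per asset),
    added to the extrinsic return signal during training."""

    def __init__(self, state_dim, n_assets, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_assets, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_assets),        # one reward per subaction
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=1))

# Usage sketch: per-subaction intrinsic rewards would be scaled by a
# hypothetical coefficient beta and added to the extrinsic portfolio return.
if __name__ == "__main__":
    state_dim, n_assets, beta = 16, 5, 0.1
    policy = AutoregressivePolicy(state_dim, n_assets)
    intrinsic = SubactionIntrinsicReward(state_dim, n_assets)
    state = torch.randn(4, state_dim)               # batch of 4 market states
    weights = policy(state)                         # portfolio allocations
    r_in = beta * intrinsic(state, weights)         # (4, n_assets) intrinsic rewards
    print(weights.sum(dim=1))                       # each row sums to 1.0

Because the intrinsic module outputs a separate reward for each asset's subaction, credit for good or bad allocations can in principle be assigned at the subaction level rather than only to the whole portfolio action, which is the credit-assignment angle the abstract highlights.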