Financial portfolio optimization: an autoregressive deep reinforcement learning algorithm with learned intrinsic rewards
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175650
Institution: Nanyang Technological University
Summary: Deep Reinforcement Learning (DRL) has had notable success in sequential learning tasks in applied settings involving high-dimensional state-action spaces, sparking the interest of the finance research community. DRL strategies have been applied to the classical portfolio optimization problem: a dynamic, inter-temporal process of determining optimal portfolio allocations to maximize long-run returns. However, all existing DRL portfolio management strategies overlook the interdependencies between subactions that are inherent to this specific task. We propose a unified framework combining two existing concepts, autoregressive DRL architectures and learned intrinsic rewards, in order to integrate the benefits of modelling subaction dependencies and of modifying the reward function to guide learning. We backtest our proposed strategy against seven benchmark strategies and empirically demonstrate that ours achieves the best risk-adjusted returns. Most remarkably, in median testing results, our proposed strategy is one of only two approaches that beat market returns, while being exposed to less than a third of market risk. Moreover, we provide insights into the effects of learned intrinsic rewards against the backdrop of the autoregressive DRL architecture, which enables individual intrinsic rewards to be learned at the level of subactions, potentially addressing the credit assignment problem in RL.
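To make the two ideas the abstract combines concrete, the sketch below shows one way an autoregressive policy can emit portfolio weights one asset at a time, conditioning each subaction on the market state and the subactions chosen so far, alongside a learned intrinsic-reward module that scores each subaction individually. This is a minimal illustration assuming PyTorch, not the thesis's implementation: the class names, network sizes, and the Gaussian parameterization of subactions are all illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch. All names and design choices here
# (AutoregressivePolicy, IntrinsicReward, GRU conditioning, Gaussian
# subactions) are illustrative, not taken from the thesis.
import torch
import torch.nn as nn


class AutoregressivePolicy(nn.Module):
    """Emit portfolio weights one asset at a time, each subaction
    conditioned on the market state and the subactions chosen so far."""

    def __init__(self, state_dim: int, n_assets: int, hidden: int = 64):
        super().__init__()
        self.n_assets = n_assets
        self.encoder = nn.Linear(state_dim, hidden)
        # The recurrent cell carries the dependency on earlier subactions.
        self.rnn = nn.GRUCell(1, hidden)
        self.head = nn.Linear(hidden, 2)  # mean and log-std of the next weight logit

    def forward(self, state):
        h = torch.tanh(self.encoder(state))        # hidden state from market features
        prev = state.new_zeros(state.size(0), 1)   # placeholder "previous subaction"
        logits, log_probs = [], []
        for _ in range(self.n_assets):
            h = self.rnn(prev, h)
            mean, log_std = self.head(h).chunk(2, dim=-1)
            dist = torch.distributions.Normal(mean, log_std.exp())
            sub = dist.rsample()                   # subaction for the current asset
            logits.append(sub)
            log_probs.append(dist.log_prob(sub))
            prev = sub                             # next subaction conditions on this one
        weights = torch.softmax(torch.cat(logits, dim=-1), dim=-1)  # long-only allocation
        return weights, torch.cat(log_probs, dim=-1)


class IntrinsicReward(nn.Module):
    """Learned bonus assigned per subaction, added to the extrinsic portfolio
    return during training; giving each subaction its own bonus is one way to
    provide the finer-grained credit assignment the abstract alludes to."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, state, subaction):
        return self.net(torch.cat([state, subaction], dim=-1))


# Smoke test on random data: a batch of 4 states of dimension 10, 5 assets.
state = torch.randn(4, 10)
policy = AutoregressivePolicy(state_dim=10, n_assets=5)
weights, log_probs = policy(state)
bonus = IntrinsicReward(state_dim=10)(state, weights[:, :1])  # bonus for asset 0's subaction
print(weights.sum(dim=-1))  # each row of weights sums to 1
```

The abstract does not specify how the intrinsic-reward module is trained, so the sketch fixes only the interface: each subaction receives its own learned bonus, which is what lets credit flow to individual allocation decisions rather than to the joint action as a whole.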