Reinforcement trading for multi-market portfolio with crisis avoidance

The global financial market entered a new crisis in 2020, triggered by the COVID-19 pandemic. During such a period, it is crucial for a portfolio manager to adopt policies that preserve the value of the portfolio. Although innovations in computational finance using Machine Learning are emerging rapidly, many existing works rely on off-line supervised learning that depends on training data from a specific period, and the resulting models are not capable of direct trading. Additionally, many other works using Reinforcement Learning are built with in-house tools and lack extensibility. As such, these models are neither transferable to broader markets over longer time ranges, nor able to handle the black swan or grey rhino events that reappear almost every decade. In this paper, we propose a Reinforcement Learning trading framework with a crisis avoidance algorithm. The framework adopts the open-source OpenAI Gym standard and Stable Baselines models, which remain open to third-party tools and future extension. We design a Reinforcement Learning environment that describes market behavior with technical analysis and a finite, rule-based action set. The framework further implements a crisis detection and avoidance algorithm. The experimental results show that the models trained by the framework performed as well as the buy-and-hold benchmark in the bullish period of 2015-2019. Furthermore, largely thanks to the crisis avoidance algorithm, the models performed 17% better than buy-and-hold across all testing windows of no less than 5 years within 2000-2019.

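The record does not include implementation details, but as a rough sketch of the kind of setup the abstract describes (a custom OpenAI Gym environment trained with Stable Baselines), a minimal hypothetical example might look as follows. The environment, its price series, window size, and two-action rule set are illustrative assumptions, not the project's actual code or crisis-avoidance logic.

    # Minimal sketch, assuming a hypothetical single-asset price series `prices`
    # and a two-action (cash / hold) rule set; illustrative only, not the
    # project's actual environment or crisis-avoidance algorithm.
    import gym
    import numpy as np
    from gym import spaces

    class TradingEnv(gym.Env):
        """Observation: trailing window of log returns. Action 0 = cash, 1 = hold asset."""

        def __init__(self, prices, window=10):
            super().__init__()
            self.prices = np.asarray(prices, dtype=np.float64)
            self.window = window
            self.action_space = spaces.Discrete(2)
            self.observation_space = spaces.Box(
                low=-np.inf, high=np.inf, shape=(window,), dtype=np.float32)
            self.t = window

        def _obs(self):
            # Log returns over the trailing window act as a crude technical feature.
            segment = self.prices[self.t - self.window:self.t + 1]
            return np.diff(np.log(segment)).astype(np.float32)

        def reset(self):
            self.t = self.window
            return self._obs()

        def step(self, action):
            # Reward is the next-step log return, earned only while invested.
            ret = float(np.log(self.prices[self.t + 1] / self.prices[self.t]))
            reward = ret if action == 1 else 0.0
            self.t += 1
            done = self.t >= len(self.prices) - 1
            return self._obs(), reward, done, {}

    # Hypothetical usage with the TF1-era Stable Baselines library:
    #   from stable_baselines import PPO2
    #   from stable_baselines.common.vec_env import DummyVecEnv
    #   env = DummyVecEnv([lambda: TradingEnv(prices)])
    #   model = PPO2("MlpPolicy", env)
    #   model.learn(total_timesteps=100_000)
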
Bibliographic Details
Main Author: Cai, Lingzhi
Other Authors: Quek Hiok Chai
Format: Final Year Project
Language:English
Published: Nanyang Technological University 2020
Subjects: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
School: School of Computer Science and Engineering
Degree: Bachelor of Engineering (Computer Science)
Project Code: SCSE19-0525
Online Access:https://hdl.handle.net/10356/139001
Institution: Nanyang Technological University