MacroHFT: Memory augmented context-aware reinforcement learning on high frequency trading
High-frequency trading (HFT), which executes algorithmic trades on short time scales, has recently come to dominate the cryptocurrency market. Besides traditional quantitative trading methods, reinforcement learning (RL) has become another appealing approach for HFT because of its strong ability to handle high-dimensional financial data and to solve sophisticated sequential decision-making problems; for example, hierarchical reinforcement learning (HRL) has shown promising performance on second-level HFT by training a router to select a single sub-agent from an agent pool to execute each transaction. However, existing RL methods for HFT still have defects: 1) standard RL-based trading agents suffer from overfitting, which prevents them from making effective policy adjustments based on the financial context; 2) because market conditions change rapidly, investment decisions made by an individual agent are usually one-sided and highly biased, which can lead to significant losses in extreme markets. To tackle these problems, we propose a novel Memory Augmented Context-aware Reinforcement learning method On HFT, a.k.a. MacroHFT, which consists of two training phases: 1) we first train multiple types of sub-agents on market data decomposed according to various financial indicators, specifically market trend and volatility, where each agent is equipped with a conditional adapter that adjusts its trading policy to market conditions; 2) we then train a hyper-agent to mix the decisions of these sub-agents and output a consistently profitable meta-policy that handles rapid market fluctuations, equipped with a memory mechanism to enhance its decision-making. Extensive experiments on various cryptocurrency markets demonstrate that MacroHFT achieves state-of-the-art performance on minute-level trading tasks. Code has been released at https://github.com/ZONG0004/MacroHFT.
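The two-phase design described in the abstract (condition-adapted sub-agents plus a hyper-agent that mixes their decisions and keeps a memory of recent states) can be sketched minimally as follows. This is an illustrative sketch only, not the authors' implementation (the released code is at https://github.com/ZONG0004/MacroHFT); all class names, layer sizes, the FiLM-style adapter, the softmax mixing, and the FIFO memory are assumptions made for illustration.

```python
# Hypothetical sketch of the structure described in the abstract; not the MacroHFT code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionalSubAgent(nn.Module):
    """Q-network whose hidden features are modulated by a market-condition
    vector (e.g., trend / volatility indicators), FiLM-style (assumed)."""

    def __init__(self, state_dim, cond_dim, n_actions, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # Conditional adapter: produces a scale and a shift from the condition vector.
        self.adapter = nn.Linear(cond_dim, 2 * hidden)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, state, cond):
        h = self.encoder(state)
        scale, shift = self.adapter(cond).chunk(2, dim=-1)
        h = h * (1.0 + scale) + shift          # adapt features to the market context
        return self.q_head(h)                  # per-action Q-values


class HyperAgent(nn.Module):
    """Mixes sub-agent Q-values with learned weights and stores recent
    (state, mixed-Q) pairs in a small FIFO memory."""

    def __init__(self, state_dim, n_sub_agents, memory_size=256):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_sub_agents)
        )
        self.memory = []            # naive FIFO memory of (state, mixed_q)
        self.memory_size = memory_size

    def forward(self, state, sub_q_values):
        # sub_q_values: (batch, n_sub_agents, n_actions)
        w = F.softmax(self.weight_net(state), dim=-1)           # (batch, n_sub_agents)
        mixed_q = (w.unsqueeze(-1) * sub_q_values).sum(dim=1)   # (batch, n_actions)
        return mixed_q

    def remember(self, state, mixed_q):
        if len(self.memory) >= self.memory_size:
            self.memory.pop(0)
        self.memory.append((state.detach(), mixed_q.detach()))


# Tiny usage example with random data; all dimensions are placeholders.
state_dim, cond_dim, n_actions, n_sub = 16, 4, 3, 4
sub_agents = [ConditionalSubAgent(state_dim, cond_dim, n_actions) for _ in range(n_sub)]
hyper = HyperAgent(state_dim, n_sub)

state = torch.randn(1, state_dim)
cond = torch.randn(1, cond_dim)
sub_q = torch.stack([agent(state, cond) for agent in sub_agents], dim=1)
mixed_q = hyper(state, sub_q)
hyper.remember(state, mixed_q)
print("chosen action:", mixed_q.argmax(dim=-1).item())
```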
Main Authors: | ZONG, Chuqiao; WANG, Chaojie; QIN, Molei; FENG, Lei; WANG, Xinrun |
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2024 |
Subjects: | Reinforcement learning; High-frequency trading; Dynamic programming; Markov decision processes; Electronic commerce; Artificial Intelligence and Robotics; Management Information Systems |
Online Access: | https://ink.library.smu.edu.sg/sis_research/9831 https://ink.library.smu.edu.sg/context/sis_research/article/10831/viewcontent/3637528.3672064.pdf |
Institution: | Singapore Management University |
id | sg-smu-ink.sis_research-10831 |
record_format | dspace |
last_modified | 2024-12-24T03:34:58Z |
date_issued | 2024-08-01 |
file_format | application/pdf |
doi | 10.1145/3637528.3672064 |
rights | http://creativecommons.org/licenses/by-nc-nd/4.0/ |
series | Research Collection School Of Computing and Information Systems |
institution | Singapore Management University |
building | SMU Libraries |
continent | Asia |
country | Singapore |
content_provider | SMU Libraries |
collection | InK@SMU |
language | English |
topic | Reinforcement learning; High-frequency trading; Dynamic programming; Markov decision processes; Electronic commerce; Artificial Intelligence and Robotics; Management Information Systems |
format | text |
author | ZONG, Chuqiao; WANG, Chaojie; QIN, Molei; FENG, Lei; WANG, Xinrun |
author_sort | ZONG, Chuqiao |
title | MacroHFT: Memory augmented context-aware reinforcement learning on high frequency trading |
publisher | Institutional Knowledge at Singapore Management University |
publishDate | 2024 |
url | https://ink.library.smu.edu.sg/sis_research/9831 https://ink.library.smu.edu.sg/context/sis_research/article/10831/viewcontent/3637528.3672064.pdf |
_version_ | 1820027793964007424 |