CoST: contrastive learning of disentangled seasonal-trend representations for time series forecasting

Deep learning has been actively studied for time series forecasting, and the mainstream paradigm is based on the end-to-end training of neural network architectures, ranging from classical LSTM/RNNs to more recent TCNs and Transformers. Motivated by the recent success of representation learning in computer vision and natural language processing, we argue that a more promising paradigm for time series forecasting is to first learn disentangled feature representations, followed by a simple regression fine-tuning step; we justify such a paradigm from a causal perspective. Following this principle, we propose CoST, a new time series representation learning framework for long-sequence time series forecasting, which applies contrastive learning methods to learn disentangled seasonal-trend representations. CoST comprises both time-domain and frequency-domain contrastive losses to learn discriminative trend and seasonal representations, respectively. Extensive experiments on real-world datasets show that CoST consistently outperforms state-of-the-art methods by a considerable margin, achieving a 21.3% improvement in MSE on multivariate benchmarks. It is also robust to various choices of backbone encoders and downstream regressors.
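The abstract combines two ingredients: a seasonal-trend decomposition of the input series and contrastive (InfoNCE-style) losses on the resulting representations. The sketch below illustrates both ideas in plain NumPy; it is a minimal illustration under our own assumptions (a moving-average trend and a generic InfoNCE loss), not CoST's learned encoders or its actual frequency-domain loss.

```python
import numpy as np

def seasonal_trend_split(x, kernel=25):
    """Split a 1-D series into a trend (moving average) and a seasonal
    (residual) component. Classical decomposition, used here only to
    illustrate the disentanglement the abstract describes."""
    pad = kernel // 2
    padded = np.pad(x, pad, mode="edge")
    trend = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    seasonal = x - trend          # residual after removing the trend
    return trend, seasonal

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss on L2-normalised embedding vectors:
    pull the positive pair together, push negatives apart."""
    def norm(v):
        return v / (np.linalg.norm(v) + 1e-8)
    a, p = norm(anchor), norm(positive)
    negs = np.stack([norm(n) for n in negatives])
    # Cosine similarities scaled by temperature; positive is index 0.
    logits = np.concatenate([[a @ p], negs @ a]) / temperature
    logits -= logits.max()        # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

In a contrastive setup, `anchor` and `positive` would be embeddings of two augmented views of the same time window, and `negatives` embeddings of other windows; the trend and seasonal components would each get their own loss of this form.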

Bibliographic Details
Main Authors: WOO, Gerald, LIU, Chenghao, SAHOO, Doyen, KUMAR, Akshat, HOI, Steven
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2022
Collection: Research Collection School Of Computing and Information Systems
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Subjects: Self-supervised learning; Forecasting; Representation learning; Time series; Databases and Information Systems
Online Access: https://ink.library.smu.edu.sg/sis_research/7702
https://ink.library.smu.edu.sg/context/sis_research/article/8705/viewcontent/cost.pdf
Institution: Singapore Management University