Unified training of universal time series forecasting transformers
Deep learning for time series forecasting has traditionally operated within a one-model-per-dataset framework, limiting its potential to leverage the game-changing impact of large pre-trained models. The concept of universal forecasting, emerging from pre-training on a vast collection of time series datasets, envisions a single Large Time Series Model capable of addressing diverse downstream forecasting tasks. However, constructing such a model poses unique challenges specific to time series data: i) cross-frequency learning, ii) accommodating an arbitrary number of variates for multivariate time series, and iii) addressing the varying distributional properties inherent in large-scale data. To address these challenges, we present novel enhancements to the conventional time series Transformer architecture, resulting in our proposed Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai). Trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains, Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
Main Authors: WOO, Gerald; LIU, Chenghao; KUMAR, Akshat; XIONG, Caiming; SAVARESE, Silvio; SAHOO, Doyen
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024-07-01
Collection: Research Collection School Of Computing and Information Systems
Subjects: Time series forecast; Deep learning; Time series transformer; Artificial Intelligence and Robotics
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Online Access: https://ink.library.smu.edu.sg/sis_research/9906
https://ink.library.smu.edu.sg/context/sis_research/article/10906/viewcontent/2402.02592v2.pdf
Institution: Singapore Management University
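The record contains no code, but the masked-encoder idea in the abstract can be made concrete with a small sketch. Below is a minimal, illustrative PyTorch implementation of the general pattern (patchify the context, append learned mask tokens for the forecast horizon, encode with a Transformer, decode the masked positions); it is not the authors' Moirai model. All class names, layer sizes, and the point-forecast head are simplifying assumptions of ours: the paper's model additionally handles multiple variates and frequencies and predicts mixture-distribution parameters rather than point values, and positional encodings are omitted here for brevity.

```python
# Illustrative sketch of masked-encoder forecasting (NOT the authors' Moirai code).
import torch
import torch.nn as nn

class MaskedEncoderForecaster(nn.Module):
    def __init__(self, patch_size=16, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.patch_size = patch_size
        self.embed = nn.Linear(patch_size, d_model)            # patch -> token
        self.mask_token = nn.Parameter(torch.zeros(d_model))   # stands in for unseen future patches
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_size)             # token -> predicted patch values

    def forward(self, context, horizon_patches):
        # context: (batch, context_len), context_len divisible by patch_size
        b = context.shape[0]
        patches = context.view(b, -1, self.patch_size)         # (b, n_ctx_patches, patch_size)
        tokens = self.embed(patches)
        masks = self.mask_token.expand(b, horizon_patches, -1) # (b, horizon_patches, d_model)
        encoded = self.encoder(torch.cat([tokens, masks], dim=1))
        future = encoded[:, -horizon_patches:]                 # representations of masked positions
        return self.head(future).reshape(b, -1)                # (b, horizon_patches * patch_size)

model = MaskedEncoderForecaster()
context = torch.randn(2, 128)                  # two toy series, 128 past steps each
forecast = model(context, horizon_patches=2)   # predict 32 future steps
print(forecast.shape)                          # torch.Size([2, 32])
```

In this simplified form, "zero-shot forecasting" corresponds to running the forward pass of a pre-trained instance of such a model on a new series without any gradient updates; the paper's contribution lies in the architectural changes and the LOTSA pre-training corpus that make this work across frequencies, variate counts, and domains.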