Towards a General Time Series Forecasting Model with Unified Representation and Adaptive Transfer

Yihang Wang, Yuying Qiu, Peng Chen, Kai Zhao, Yang Shu, Zhongwen Rao*, Lujia Pan, Bin Yang, Chenjuan Guo

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Peer-review

Abstract

With the growing availability of multi-domain time series data, there is an increasing demand for general forecasting models pre-trained on multi-source datasets to support diverse downstream prediction scenarios. Existing time series foundation models primarily focus on scaling up pre-training datasets and model sizes to enhance generalization performance. In this paper, we take a different approach by addressing two critical aspects of general forecasting models: (1) how to derive unified representations from heterogeneous multi-domain time series data, and (2) how to effectively capture domain-specific features to enable adaptive transfer across various downstream scenarios. To address the first aspect, we propose Decomposed Frequency Learning as the pre-training task, which leverages frequency-based masking and reconstruction to decompose coupled semantic information in time series, resulting in unified representations across domains. For the second aspect, we introduce the Time Series Register, which captures domain-specific representations during pre-training and enhances adaptive transferability to downstream tasks. Our model achieves state-of-the-art forecasting performance on seven real-world benchmarks, demonstrating remarkable few-shot and zero-shot capabilities.
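The frequency-based masking and reconstruction idea described above can be illustrated with a minimal sketch. The function below (a hypothetical illustration, not the paper's actual pre-training code) zeroes out a random subset of frequency bins and reconstructs the time-domain signal from the rest; during pre-training, a model would be asked to recover the original series from such a masked input.

```python
import numpy as np

def frequency_mask(series, mask_ratio=0.3, rng=None):
    """Zero a random subset of frequency bins and return the resulting
    time-domain signal together with the boolean mask of dropped bins.
    The original `series` would serve as the reconstruction target.
    """
    rng = rng or np.random.default_rng(0)
    spectrum = np.fft.rfft(series)               # move to the frequency domain
    n_bins = spectrum.shape[-1]
    mask = rng.random(n_bins) < mask_ratio       # True = masked-out frequency bin
    masked_spectrum = np.where(mask, 0.0, spectrum)
    # Back to the time domain; n= keeps the original length for odd/even inputs.
    masked_series = np.fft.irfft(masked_spectrum, n=series.shape[-1])
    return masked_series, mask

# Toy example: a signal mixing a slow and a fast sinusoid.
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
x_masked, mask = frequency_mask(x, mask_ratio=0.3)
```

With `mask_ratio=0.0` the round trip through `rfft`/`irfft` returns the original signal, which makes the masking step easy to sanity-check in isolation.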

Original language: English
Pages (from-to): 64127-64151
Number of pages: 25
Journal: Proceedings of Machine Learning Research
Volume: 267
State: Published - 2025
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver, Canada
Duration: 13 Jul 2025 - 19 Jul 2025
