Paeformer: Patch-Wise Representation Learning with Autoencoder for Multivariate Time Series Forecasting

  • Kun Liu
  • Zhongjie Duan
  • Cen Chen*
  *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Time series forecasting plays a critical role in various real-world applications, such as finance, climate science, and transportation. However, most existing studies adopt a channel-independent strategy, which, while avoiding the ambiguity of projecting multiple variates into indistinguishable channels, often neglects the cross-variate dependencies inherent in multivariate time series. This oversight limits the upper bound of forecasting accuracy. Therefore, effectively leveraging cross-variate relationships to obtain more expressive representations is a crucial yet underexplored challenge in time series forecasting. In this paper, we propose Paeformer, a novel model that captures generalized representations of time series patches by exploiting local cross-variate dependencies and applying implicit regularization via an overcomplete autoencoder framework. Specifically, we introduce a patch-based autoencoder composed of a Transformer-based encoder and an MLP-based decoder. The encoder captures local dependencies across variates, while the reconstruction loss computed on each patch is integrated into the overall loss function. This promotes consistent training between the encoder and decoder, and serves as an implicit regularization to constrain the high-dimensional representations of patches. Moreover, we replace the traditional feedforward decoding process with a novel patch-wise decoding mechanism, establishing a new paradigm of recurrent encoding and decoding based on patch-wise sequences. Experimental results on eight benchmark multivariate time series datasets demonstrate that Paeformer consistently outperforms all baseline methods, achieving state-of-the-art performance. Our code is publicly available at: https://github.com/iuaku/Paeformer.
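The abstract describes a patch-based autoencoder whose per-patch reconstruction loss is added to the forecasting loss as implicit regularization. The following is a minimal NumPy sketch of that loss structure only, not the authors' implementation: the Transformer encoder and MLP decoder are replaced by random linear maps, and the names `patch_len`, `d_latent`, and `lambda_rec` are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the Paeformer code): a multivariate series is split
# into non-overlapping patches, each patch is mapped to an overcomplete latent,
# decoded back, and the per-patch reconstruction error joins the forecast loss.
rng = np.random.default_rng(0)
n_vars, seq_len, patch_len = 7, 96, 16     # variates, lookback, patch size
n_patches = seq_len // patch_len           # 6 patches per variate
d_latent = 64                              # overcomplete: d_latent > patch_len

x = rng.standard_normal((n_vars, seq_len))
patches = x.reshape(n_vars, n_patches, patch_len)

# Linear stand-ins for the Transformer-based encoder / MLP-based decoder.
W_enc = rng.standard_normal((patch_len, d_latent)) * 0.1
W_dec = rng.standard_normal((d_latent, patch_len)) * 0.1

z = patches @ W_enc                        # (n_vars, n_patches, d_latent)
recon = z @ W_dec                          # reconstructed patches

# Per-patch reconstruction loss, acting as implicit regularization
# on the high-dimensional patch representations.
rec_loss = np.mean((recon - patches) ** 2)

# Placeholder forecasting head: predict the horizon from all patch latents.
horizon = 24
W_head = rng.standard_normal((n_patches * d_latent, horizon)) * 0.01
y_hat = z.reshape(n_vars, -1) @ W_head
y_true = rng.standard_normal((n_vars, horizon))
fc_loss = np.mean((y_hat - y_true) ** 2)

lambda_rec = 0.5                           # assumed weighting coefficient
total_loss = fc_loss + lambda_rec * rec_loss
```

In the paper's framing, training both terms jointly keeps the encoder and decoder consistent; here the weighting `lambda_rec` is a hypothetical hyperparameter standing in for whatever balance the authors use.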

Original language: English
Title of host publication: ECAI 2025 - 28th European Conference on Artificial Intelligence, including 14th Conference on Prestigious Applications of Intelligent Systems, PAIS 2025 - Proceedings
Editors: Ines Lynce, Nello Murano, Mauro Vallati, Serena Villata, Federico Chesani, Michela Milano, Andrea Omicini, Mehdi Dastani
Publisher: IOS Press BV
Pages: 2794-2801
Number of pages: 8
ISBN (Electronic): 9781643686318
DOIs
State: Published - 21 Oct 2025
Event: 28th European Conference on Artificial Intelligence, ECAI 2025, including 14th Conference on Prestigious Applications of Intelligent Systems, PAIS 2025 - Bologna, Italy
Duration: 25 Oct 2025 - 30 Oct 2025

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Volume: 413
ISSN (Print): 0922-6389
ISSN (Electronic): 1879-8314

Conference

Conference: 28th European Conference on Artificial Intelligence, ECAI 2025, including 14th Conference on Prestigious Applications of Intelligent Systems, PAIS 2025
Country/Territory: Italy
City: Bologna
Period: 25/10/25 - 30/10/25
