TY - GEN
T1 - RecLGB
T2 - 2025 International Joint Conference on Neural Networks, IJCNN 2025
AU - Mei, Yuxin
AU - Han, Xu
AU - Han, Zhongming
AU - Han, Li
AU - Liu, Jing
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Time-series forecasting demands efficient modeling of long-range dependencies while maintaining computational practicality. Current methods, particularly deep learning approaches, often struggle with complexity and scalability when handling extensive historical sequences. We propose RecLGB, a hybrid framework that synergizes LightGBM's efficiency with a memory-augmented deep architecture. At its core, RecLGB integrates a recursive Variational Autoencoder (VAE) enhanced by a mixed attention mechanism, which preserves temporal order through linear inductive biases while dynamically capturing dependencies via self-attention. The recursive VAE compresses lengthy historical sequences into compact hierarchical representations, serving as an external memory for LightGBM to leverage without computational overload. Experiments on five real-world datasets demonstrate RecLGB's superiority, achieving higher accuracy and faster inference than Transformer-based baselines. This work bridges deep sequential modeling with gradient-boosted trees, offering a scalable, interpretable solution for resource-constrained forecasting. Code is available at: https://github.com/Mayer-myx/RecLGB.
AB - Time-series forecasting demands efficient modeling of long-range dependencies while maintaining computational practicality. Current methods, particularly deep learning approaches, often struggle with complexity and scalability when handling extensive historical sequences. We propose RecLGB, a hybrid framework that synergizes LightGBM's efficiency with a memory-augmented deep architecture. At its core, RecLGB integrates a recursive Variational Autoencoder (VAE) enhanced by a mixed attention mechanism, which preserves temporal order through linear inductive biases while dynamically capturing dependencies via self-attention. The recursive VAE compresses lengthy historical sequences into compact hierarchical representations, serving as an external memory for LightGBM to leverage without computational overload. Experiments on five real-world datasets demonstrate RecLGB's superiority, achieving higher accuracy and faster inference than Transformer-based baselines. This work bridges deep sequential modeling with gradient-boosted trees, offering a scalable, interpretable solution for resource-constrained forecasting. Code is available at: https://github.com/Mayer-myx/RecLGB.
KW - attention mechanism
KW - LightGBM
KW - time series forecasting
KW - Transformer
KW - VAE
UR - https://www.scopus.com/pages/publications/105023986682
U2 - 10.1109/IJCNN64981.2025.11228414
DO - 10.1109/IJCNN64981.2025.11228414
M3 - Conference contribution
AN - SCOPUS:105023986682
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - International Joint Conference on Neural Networks, IJCNN 2025 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 30 June 2025 through 5 July 2025
ER -