Learning from the Past: Adaptive Parallelism Tuning for Stream Processing Systems

  • Yuxing Han
  • Lixiang Chen
  • Haoyu Wang
  • Zhanghao Chen
  • Yifan Zhang
  • Chengcheng Yang
  • Kongzhang Hao
  • Zhengyi Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Distributed stream processing systems rely on the dataflow model to define and execute streaming jobs, organizing computations as Directed Acyclic Graphs (DAGs) of operators. Adjusting the parallelism of these operators is crucial to handling fluctuating workloads efficiently while balancing resource usage and processing performance. However, existing methods often fail to effectively utilize execution histories or fully exploit DAG structures, limiting their ability to identify bottlenecks and determine the optimal parallelism. In this paper, we propose StreamTune, a novel approach for adaptive parallelism tuning in stream processing systems. StreamTune incorporates a pre-training and fine-tuning framework that leverages global knowledge from historical execution data for job-specific parallelism tuning. In the pre-training phase, StreamTune clusters the historical data with Graph Edit Distance and pre-trains a Graph Neural Network-based encoder per cluster to capture the correlation between the operator parallelism, DAG structures, and the identified operator-level bottlenecks. In the online tuning phase, StreamTune iteratively refines operator parallelism recommendations using an operator-level bottleneck prediction model enforced with a monotonic constraint, which aligns with the observed system performance behavior. Evaluation results demonstrate that StreamTune reduces reconfigurations by up to 29.6% and parallelism degrees by up to 30.8% in Apache Flink under a synthetic workload. In Timely Dataflow, StreamTune achieves up to an 83.3% reduction in parallelism degrees while maintaining comparable processing performance under the Nexmark benchmark, compared to state-of-the-art methods.
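The monotonic constraint described in the abstract reflects a common observation: an operator's likelihood of being a bottleneck should not increase as its parallelism grows. A standard way to enforce such a shape constraint on a model's raw predictions is isotonic regression via the pool-adjacent-violators (PAV) algorithm. The sketch below is illustrative only, not the paper's implementation; the function names and the per-operator score inputs are assumptions.

```python
def isotonic_increasing(ys):
    """Least-squares non-decreasing fit via pool-adjacent-violators.

    Each block holds a (mean, weight) pair; adjacent blocks that
    violate the ordering are merged into their weighted average.
    """
    merged = []
    for y in ys:
        merged.append([y, 1.0])
        # Merge backwards while the sequence of block means decreases.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, w2 = merged.pop()
            m1, w1 = merged.pop()
            merged.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for mean, w in merged:
        out.extend([mean] * int(w))
    return out


def monotone_bottleneck_scores(raw_scores):
    """Project raw bottleneck scores (indexed by increasing parallelism)
    onto the nearest non-increasing sequence, matching the intuition
    that more parallelism should not make a bottleneck more likely."""
    return [-y for y in isotonic_increasing([-y for y in raw_scores])]
```

For example, raw scores `[0.9, 0.7, 0.8, 0.3]` over increasing parallelism degrees violate monotonicity at the third entry; the projection pools the violating neighbors into their average, yielding a non-increasing curve.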

Original language: English
Title of host publication: Proceedings - 2025 IEEE 41st International Conference on Data Engineering, ICDE 2025
Publisher: IEEE Computer Society
Pages: 3535-3548
Number of pages: 14
ISBN (Electronic): 9798331536039
DOIs
State: Published - 2025
Event: 41st IEEE International Conference on Data Engineering, ICDE 2025 - Hong Kong, China
Duration: 19 May 2025 - 23 May 2025

Publication series

Name: Proceedings - International Conference on Data Engineering
ISSN (Print): 1084-4627
ISSN (Electronic): 2375-0286

Conference

Conference: 41st IEEE International Conference on Data Engineering, ICDE 2025
Country/Territory: China
City: Hong Kong
Period: 19/05/25 - 23/05/25
