Handling Data Skew for Aggregation in Spark SQL Using Task Stealing

Zeyu He*, Qiuli Huang, Zhifang Li, Chuliang Weng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

In distributed in-memory computing systems, data distribution has a large impact on performance. Designing a good partitioning algorithm is difficult and requires users to have adequate prior knowledge of the data, which makes data skew common in practice. Traditional approaches handle data skew by sampling and repartitioning, which often incurs additional overhead. In this paper, we propose a dynamic execution optimization for the aggregation operator, one of the most general and expensive operators in Spark SQL. Our optimization avoids this additional overhead and improves performance when data skew occurs. The core idea is task stealing. Based on the relative sizes of data partitions, we add two types of tasks: segment tasks for larger partitions and stealing tasks for smaller partitions. Within a stage, stealing tasks can actively steal and process data from segment tasks after finishing their own partitions. The optimization achieves significant performance improvements, from 16% up to 67%, across different data sizes and distributions. Experiments show that the overhead introduced is minimal and can be considered negligible.
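The core idea described in the abstract can be sketched as follows. This is a hypothetical, simplified illustration of the task-stealing scheme (not the authors' Spark SQL implementation): large skewed partitions are split into segments behind a shared cursor, segment tasks drain those segments, and stealing tasks first aggregate their own small partition and then claim remaining segments once idle. All names (`aggregate_with_stealing`, `seg_size`, etc.) are assumptions for this sketch.

```python
import threading
from collections import defaultdict

def aggregate_with_stealing(small_parts, large_parts, seg_size=2):
    # Split the larger (skewed) partitions into fixed-size segments.
    # A shared cursor lets any idle worker claim the next unprocessed
    # segment. Hypothetical sketch only, not the paper's implementation.
    segments = []
    for part in large_parts:
        for i in range(0, len(part), seg_size):
            segments.append(part[i:i + seg_size])

    cursor = {"next": 0}
    lock = threading.Lock()
    partials = []  # per-task partial aggregates

    def claim_segment():
        # Atomically claim the next segment, or None if all are taken.
        with lock:
            i = cursor["next"]
            if i >= len(segments):
                return None
            cursor["next"] = i + 1
            return segments[i]

    def drain_segments(acc):
        while True:
            seg = claim_segment()
            if seg is None:
                break
            for k, v in seg:
                acc[k] += v

    def segment_task():
        # Works only on segments of the large partitions.
        acc = defaultdict(int)
        drain_segments(acc)
        with lock:
            partials.append(acc)

    def stealing_task(own_part):
        # Aggregates its own small partition first, then steals
        # remaining segments from the skewed partitions.
        acc = defaultdict(int)
        for k, v in own_part:
            acc[k] += v
        drain_segments(acc)
        with lock:
            partials.append(acc)

    threads = [threading.Thread(target=segment_task) for _ in large_parts]
    threads += [threading.Thread(target=stealing_task, args=(p,))
                for p in small_parts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Final merge of the partial aggregates (commutative, so the
    # result is independent of which worker processed which segment).
    merged = defaultdict(int)
    for acc in partials:
        for k, v in acc.items():
            merged[k] += v
    return dict(merged)
```

Because per-key summation is commutative and associative, the merged result is the same regardless of which worker ends up processing each segment; only the load balance changes.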

Original language: English
Pages (from-to): 941-956
Number of pages: 16
Journal: International Journal of Parallel Programming
Volume: 48
Issue number: 6
DOIs
State: Published - 1 Dec 2020

Keywords

  • Aggregation
  • Data skew
  • In-memory computing
  • Spark SQL

