Opara: Exploiting Operator Parallelism for Expediting DNN Inference on GPUs

Aodong Chen, Fei Xu, Li Han, Yuan Dong, Li Chen, Zhi Zhou, Fangming Liu

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

GPUs have become the de facto hardware devices for accelerating Deep Neural Network (DNN) inference workloads. However, the conventional sequential execution mode of DNN operators in mainstream deep learning frameworks cannot fully utilize GPU resources, even with operator fusion enabled, due to the increasing complexity of model structures and the greater diversity of operators. Moreover, an inadequate operator launch order in parallelized execution scenarios can lead to GPU resource wastage and unexpected performance interference among operators. In this paper, we propose Opara, a resource- and interference-aware DNN Operator parallel scheduling framework to accelerate DNN inference on GPUs. Specifically, Opara first employs CUDA Streams and CUDA Graph to parallelize the execution of multiple operators automatically. To further expedite DNN inference, Opara leverages the resource demands of operators to judiciously adjust the operator launch order on GPUs, overlapping the execution of compute-intensive and memory-intensive operators. We implement and open-source a prototype of Opara based on PyTorch in a non-intrusive manner. Extensive prototype experiments with representative DNN and Transformer-based models demonstrate that Opara outperforms the default sequential CUDA Graph in PyTorch and state-of-the-art operator parallelism systems by up to 1.68× and 1.29×, respectively, with acceptable runtime overhead.
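The abstract names the two mechanisms Opara builds on: CUDA Streams for inter-operator parallelism and CUDA Graph for low-overhead launching. The PyTorch sketch below is a minimal, hypothetical illustration of that combination only; the operators, tensor shapes, and plain fork-join schedule are assumptions chosen for demonstration, and it does not reproduce Opara's resource- and interference-aware launch ordering.

    import torch
    import torch.nn as nn

    # Illustrative sketch, not Opara's scheduler: two independent operators
    # run on separate CUDA Streams, and the multi-stream schedule is
    # captured into a CUDA Graph for low-overhead replay.
    assert torch.cuda.is_available()

    conv3x3 = nn.Conv2d(64, 64, kernel_size=3, padding=1).cuda().eval()
    conv1x1 = nn.Conv2d(64, 64, kernel_size=1).cuda().eval()
    x = torch.randn(8, 64, 56, 56, device="cuda")  # static input for capture

    s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()

    # Warm-up on a side stream, as PyTorch's CUDA Graph docs recommend.
    warmup = torch.cuda.Stream()
    warmup.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(warmup), torch.no_grad():
        for _ in range(3):
            conv3x3(x); conv1x1(x)
    torch.cuda.current_stream().wait_stream(warmup)
    torch.cuda.synchronize()

    g = torch.cuda.CUDAGraph()
    with torch.no_grad(), torch.cuda.graph(g):
        # Fork: both side streams wait on the capture stream, then each
        # branch is recorded on its own stream so the kernels may overlap.
        s1.wait_stream(torch.cuda.current_stream())
        s2.wait_stream(torch.cuda.current_stream())
        with torch.cuda.stream(s1):
            y1 = conv3x3(x)
        with torch.cuda.stream(s2):
            y2 = conv1x1(x)
        # Join: the capture stream waits for both branches before combining.
        torch.cuda.current_stream().wait_stream(s1)
        torch.cuda.current_stream().wait_stream(s2)
        out = y1 + y2

    g.replay()  # relaunches the whole parallel schedule with a single call

After capture, each g.replay() reissues both branches with one CPU-side call, which is the launch-overhead saving CUDA Graph provides; Opara's contribution beyond this baseline, per the abstract, is deciding automatically which operators to place on which streams and in what order.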

Original language: English
Pages (from-to): 325-333
Number of pages: 9
Journal: IEEE Transactions on Computers
Volume: 74
Issue number: 1
DOIs
State: Published - 2025

Keywords

  • DNN inference
  • DNN operator parallelism
  • GPU resource utilization
  • scheduling
