TY - JOUR
T1 - Opara: Exploiting Operator Parallelism for Expediting DNN Inference on GPUs
AU - Chen, Aodong
AU - Xu, Fei
AU - Han, Li
AU - Dong, Yuan
AU - Chen, Li
AU - Zhou, Zhi
AU - Liu, Fangming
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2025
Y1 - 2025
N2 - GPUs have become the de facto hardware devices for accelerating Deep Neural Network (DNN) inference workloads. However, the conventional sequential execution mode of DNN operators in mainstream deep learning frameworks cannot fully utilize GPU resources, even with operator fusion enabled, due to the increasing complexity of model structures and the growing diversity of operators. Moreover, an inadequate operator launch order in parallelized execution scenarios can lead to GPU resource wastage and unexpected performance interference among operators. In this paper, we propose Opara, a resource- and interference-aware DNN Operator parallel scheduling framework to accelerate DNN inference on GPUs. Specifically, Opara first employs CUDA Streams and CUDA Graph to parallelize the execution of multiple operators automatically. To further expedite DNN inference, Opara leverages the resource demands of operators to judiciously adjust the operator launch order on GPUs, overlapping the execution of compute-intensive and memory-intensive operators. We implement and open source a prototype of Opara based on PyTorch in a non-intrusive manner. Extensive prototype experiments with representative DNN and Transformer-based models demonstrate that Opara outperforms the default sequential CUDA Graph in PyTorch and the state-of-the-art operator parallelism systems by up to 1.68× and 1.29×, respectively, yet with acceptable runtime overhead.
AB - GPUs have become the de facto hardware devices for accelerating Deep Neural Network (DNN) inference workloads. However, the conventional sequential execution mode of DNN operators in mainstream deep learning frameworks cannot fully utilize GPU resources, even with operator fusion enabled, due to the increasing complexity of model structures and the growing diversity of operators. Moreover, an inadequate operator launch order in parallelized execution scenarios can lead to GPU resource wastage and unexpected performance interference among operators. In this paper, we propose Opara, a resource- and interference-aware DNN Operator parallel scheduling framework to accelerate DNN inference on GPUs. Specifically, Opara first employs CUDA Streams and CUDA Graph to parallelize the execution of multiple operators automatically. To further expedite DNN inference, Opara leverages the resource demands of operators to judiciously adjust the operator launch order on GPUs, overlapping the execution of compute-intensive and memory-intensive operators. We implement and open source a prototype of Opara based on PyTorch in a non-intrusive manner. Extensive prototype experiments with representative DNN and Transformer-based models demonstrate that Opara outperforms the default sequential CUDA Graph in PyTorch and the state-of-the-art operator parallelism systems by up to 1.68× and 1.29×, respectively, yet with acceptable runtime overhead.
KW - DNN inference
KW - DNN operator parallelism
KW - GPU resource utilization
KW - scheduling
UR - https://www.scopus.com/pages/publications/86000380089
U2 - 10.1109/TC.2024.3475589
DO - 10.1109/TC.2024.3475589
M3 - Article
AN - SCOPUS:86000380089
SN - 0018-9340
VL - 74
SP - 325
EP - 333
JO - IEEE Transactions on Computers
JF - IEEE Transactions on Computers
IS - 1
ER -