TY - JOUR
T1 - Orchestrating optimization passes of machine learning compiler for reducing memory footprints of computation graphs
AU - Yu, Qianwei
AU - Nie, Pengbo
AU - Wang, Zihan
AU - Wan, Chengcheng
AU - Lin, Ziyi
AU - Jiang, He
AU - Zhao, Jianjun
AU - Qiao, Lei
AU - Chen, Le
AU - Chen, Yuting
N1 - Publisher Copyright:
© 2026 Elsevier B.V.
PY - 2026/4
Y1 - 2026/4
N2 - With the emergence of edge computing, there is a growing demand for training and running inference with deep learning (DL) models on memory-constrained devices. However, many DL models, represented as computation graphs, have complex structures and large numbers of parameters, incurring heavy memory consumption at runtime. It is therefore challenging but necessary to reduce their runtime memory footprints. This paper proposes OPASS, a novel approach that performs hierarchical memory-constrained operator scheduling of machine learning models and orchestrates the optimization passes of Apache TVM (a machine learning compilation framework) to lower the memory footprints of computation graphs, ultimately allowing the graphs to run on memory-constrained devices. First, given a computation graph G, OPASS optimizes the graph heuristically and iteratively: OPASS learns the effects of passes on the graph and then optimizes G iteratively; each iteration selects a pass according to both the reduction it achieves in G's memory footprint and its implicit effects on further optimizations, and applies the selected pass. The second core component of OPASS is its memory computation technique, OPASSMem, which hierarchically schedules G's operators: it constructs a hierarchical computation graph and employs an iterative scheduling algorithm to progressively reduce memory footprints. We evaluate OPASS on REBENCH (a suite of computation graphs) and two real-world models (Transformer and ResNet). The results show the strength of OPASS: it reduces a graph's memory footprint by up to 90.83%, outperforming TVM's default by 2.34×. Specifically, pass orchestration and graph scheduling reduce memory footprints by up to 54.34% and 81%, respectively.
AB - With the emergence of edge computing, there is a growing demand for training and running inference with deep learning (DL) models on memory-constrained devices. However, many DL models, represented as computation graphs, have complex structures and large numbers of parameters, incurring heavy memory consumption at runtime. It is therefore challenging but necessary to reduce their runtime memory footprints. This paper proposes OPASS, a novel approach that performs hierarchical memory-constrained operator scheduling of machine learning models and orchestrates the optimization passes of Apache TVM (a machine learning compilation framework) to lower the memory footprints of computation graphs, ultimately allowing the graphs to run on memory-constrained devices. First, given a computation graph G, OPASS optimizes the graph heuristically and iteratively: OPASS learns the effects of passes on the graph and then optimizes G iteratively; each iteration selects a pass according to both the reduction it achieves in G's memory footprint and its implicit effects on further optimizations, and applies the selected pass. The second core component of OPASS is its memory computation technique, OPASSMem, which hierarchically schedules G's operators: it constructs a hierarchical computation graph and employs an iterative scheduling algorithm to progressively reduce memory footprints. We evaluate OPASS on REBENCH (a suite of computation graphs) and two real-world models (Transformer and ResNet). The results show the strength of OPASS: it reduces a graph's memory footprint by up to 90.83%, outperforming TVM's default by 2.34×. Specifically, pass orchestration and graph scheduling reduce memory footprints by up to 54.34% and 81%, respectively.
KW - Hierarchical graph
KW - Memory footprint
KW - Memory-constrained devices
KW - Optimization passes
KW - Orchestration
UR - https://www.scopus.com/pages/publications/105027136617
U2 - 10.1016/j.sysarc.2026.103694
DO - 10.1016/j.sysarc.2026.103694
M3 - Article
AN - SCOPUS:105027136617
SN - 1383-7621
VL - 173
JO - Journal of Systems Architecture
JF - Journal of Systems Architecture
M1 - 103694
ER -