TY - JOUR
T1 - Multi-Objective Deep Reinforcement Learning for Function Offloading in Serverless Edge Computing
AU - Yang, Yaning
AU - Du, Xiao
AU - Ye, Yutong
AU - Ding, Jiepin
AU - Wang, Ting
AU - Chen, Mingsong
AU - Li, Keqin
N1 - Publisher Copyright:
© 2008-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Function offloading problems play a crucial role in optimizing the performance of applications in serverless edge computing (SEC). Existing research has extensively explored function offloading strategies based on optimizing a single objective. However, a significant challenge arises when users expect to optimize multiple objectives according to the relative importance of these objectives. This challenge becomes particularly pronounced when the relative importance of the objectives dynamically shifts. Consequently, there is an urgent need for research into multi-objective function offloading methods. In this paper, we redefine the SEC function offloading problem as a dynamic multi-objective optimization issue and propose a novel approach based on Multi-objective Reinforcement Learning (MORL) called MOSEC. MOSEC can coordinately optimize three objectives, i.e., application completion time, User Device (UD) energy consumption, and user cost. To reduce the impact of extrapolation errors, MOSEC integrates a Near-on Experience Replay (NER) strategy during the model training. Furthermore, MOSEC adopts our proposed Earliest First (EF) scheme to maintain the policies learned previously, which can efficiently mitigate the catastrophic policy forgetting problem. Extensive experiments conducted on various generated applications demonstrate the superiority of MOSEC over state-of-the-art multi-objective optimization algorithms.
KW - Serverless edge computing
KW - deep reinforcement learning
KW - function offloading
KW - multi-objective optimization
UR - https://www.scopus.com/pages/publications/85208364722
U2 - 10.1109/TSC.2024.3489443
DO - 10.1109/TSC.2024.3489443
M3 - Article
AN - SCOPUS:85208364722
SN - 1939-1374
VL - 18
SP - 288
EP - 301
JO - IEEE Transactions on Services Computing
JF - IEEE Transactions on Services Computing
IS - 1
ER -