TY - JOUR
T1 - Cost-Efficient Continuous Edge Learning for Artificial Intelligence of Things
AU - Jia, Lin
AU - Zhou, Zhi
AU - Xu, Fei
AU - Jin, Hai
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2022/5/15
Y1 - 2022/5/15
N2 - The accelerating convergence of artificial intelligence (AI) and the Internet of Things (IoT) has sparked a recent wave of interest in the Artificial Intelligence of Things (AIoT). By exploiting the novel paradigm of edge intelligence, emerging computationally intensive and resource-demanding AIoT applications can be efficiently supported at the network edge. However, due to the limited resource capacity and/or power budget of edge nodes, AIoT applications typically deploy compressed AI models to achieve low-latency and energy-efficient model inference. Compressed models, however, inherently suffer from the curse of data drift, i.e., the inference data at the deployment stage diverge from the training data at the training stage, leading to reduced model inference accuracy. To handle this issue, continuous learning has been proposed to periodically retrain the AI models on new data in an incremental manner. In this article, we investigate how to coordinate edge and cloud resources to perform cost-efficient continuous learning, with the goal of simultaneously optimizing model performance (in terms of accuracy and robustness) and resource cost. Leveraging Lyapunov optimization theory, we design and analyze a cost-efficient optimization framework that makes online decisions on admission control, transmission scheduling, and resource provisioning for the dynamically arriving new data samples of various AIoT applications. We examine the effectiveness of the proposed framework in navigating the performance-cost tradeoff both theoretically and empirically through trace-driven simulations.
KW - Artificial Intelligence of Things (AIoT)
KW - Cloud-edge coordination
KW - Continuous learning
KW - Cost efficiency
KW - Edge intelligence
UR - https://www.scopus.com/pages/publications/85130637832
U2 - 10.1109/JIOT.2021.3104089
DO - 10.1109/JIOT.2021.3104089
M3 - Article
AN - SCOPUS:85130637832
SN - 2327-4662
VL - 9
SP - 7325
EP - 7337
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 10
ER -