FIRE: A Failure-Adaptive RL Framework for Edge Computing Migrations

Marie Siew*, Shikhar Sharma, Zekai Li, Kun Guo, Chao Xu, Tania Lorido-Botran, Tony Q.S. Quek, Carlee Joe-Wong

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In edge computing, users’ service profiles are migrated between edge servers as users move. Reinforcement learning (RL) frameworks have been proposed to manage these migrations, often trained on simulated data. However, existing RL frameworks overlook occasional server failures, which, although rare, impact latency-sensitive applications such as AR/VR and real-time obstacle detection. Because these rare failures are not adequately represented in historical training data, they pose a challenge for data-driven RL algorithms. We introduce FIRE, a framework that adapts to rare events by training an RL policy in an edge computing digital twin environment. We propose FIRE-ImRE, an importance-sampling-based Q-learning algorithm that samples rare events proportionally to their impact on the value function. FIRE accounts for delay, migration, failure, and backup placement costs across individual and shared service profiles. We prove FIRE-ImRE’s boundedness and convergence to optimality. We then introduce novel deep Q-learning (FIRE-ImDQL) and actor-critic (FIRE-ImACRE) versions of our algorithm to enhance scalability, and extend the framework to accommodate users with varying risk tolerances for rare failure events. Through trace-driven experiments, we show that FIRE reduces edge computing costs compared to vanilla RL and a greedy baseline in the event of failures.
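The core idea of importance-sampling-based Q-learning can be illustrated with a minimal sketch: the simulator (standing in for the digital twin) draws server failures at an inflated rate, and each update is reweighted by the ratio of the true failure probability to the sampling probability, so the learned values reflect the true environment. This is an illustrative toy, not the paper's FIRE-ImRE algorithm; all names (`p_fail`, `q_fail`, the two-state/two-action setup, and the cost values) are assumptions made for the example.

```python
import random

def train_is_q(num_steps=5000, p_fail=0.01, q_fail=0.3,
               alpha=0.1, gamma=0.9, seed=0):
    """Toy tabular Q-learning with importance-sampled rare failures.

    Failures are drawn at an inflated rate q_fail, and each update is
    corrected by the importance weight (true prob / sampling prob), so
    Q-values reflect the true failure probability p_fail.
    """
    rng = random.Random(seed)
    # States: 0 = server healthy, 1 = server failed.
    # Actions: 0 = keep the service profile in place, 1 = migrate to a backup.
    Q = [[0.0, 0.0], [0.0, 0.0]]

    def cost(state, action):
        if state == 1 and action == 0:
            return -10.0   # large penalty: profile stranded on a failed server
        if action == 1:
            return -1.0    # migration / backup placement cost
        return -0.1        # routine serving-delay cost

    for _ in range(num_steps):
        failed = rng.random() < q_fail   # sample failures at the boosted rate
        state = 1 if failed else 0
        # Importance weight: true state probability / sampling probability.
        w = (p_fail / q_fail) if failed else ((1.0 - p_fail) / (1.0 - q_fail))
        action = rng.randrange(2)        # uniform exploratory behavior policy
        # Simplifying assumption: the server is repaired by the next step,
        # so the successor state is always "healthy".
        target = cost(state, action) + gamma * max(Q[0])
        Q[state][action] += alpha * w * (target - Q[state][action])
    return Q
```

The greedy policy read off the learned table migrates when the server has failed and stays put otherwise, which is the qualitative behavior one would expect from a failure-adaptive migration policy.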

Original language: English
Journal: IEEE Transactions on Services Computing
State: Accepted/In press - 2025

Keywords

  • Edge computing
  • reinforcement learning
  • resilient resource allocation
  • service migration

