Abstract
Efficient load balancing (LB) in cloud data centers is crucial for optimizing resource allocation and enhancing service delivery. However, LB for the diverse tasks of different users is typically multi-objective, and balancing multiple objectives is challenging because user preferences for these objectives can change dynamically with varying operational demands. Traditional multi-objective LB solutions often fall short in such dynamic environments due to their inability to adapt to shifting priorities among objectives. To address these limitations, this paper introduces the Multi-Objective Distributed Load Balancing (MODLB) framework, which incorporates a customized multi-objective version of the Twin Delayed Deep Deterministic policy gradient (TD3) algorithm, MOTD3, together with a tailored Preference Alignment (PA) mechanism. This approach allows MODLB to adjust dynamically to changing user preferences, enabling optimal decision-making in real time. Comprehensive experimental results demonstrate that MODLB significantly outperforms state-of-the-art multi-objective reinforcement learning algorithms and traditional LB solutions across various simulated environments. Moreover, ablation studies confirm the crucial roles of the MOTD3 algorithm and the PA mechanism in enhancing MODLB's ability to navigate the Pareto frontier with higher precision, thereby effectively balancing the trade-offs between global response times and load-distribution fairness.
| Original language | English |
|---|---|
| Article number | 111903 |
| Journal | Computer Networks |
| Volume | 275 |
| DOIs | |
| State | Published - Feb 2026 |
Keywords
- Cloud computing
- Deep reinforcement learning
- Load balancing
- Multi-objective reinforcement learning