TY - JOUR
T1 - Federated Reinforcement Learning for Electric Vehicles Charging Control on Distribution Networks
AU - Qian, Junkai
AU - Jiang, Yuning
AU - Liu, Xin
AU - Wang, Qiong
AU - Wang, Ting
AU - Shi, Yuanming
AU - Chen, Wei
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2024/2/1
Y1 - 2024/2/1
N2 - With the growing popularity of electric vehicles (EVs), maintaining power grid stability has become a significant challenge. To address this issue, EV charging control strategies have been developed to manage the switch between vehicle-to-grid (V2G) and grid-to-vehicle (G2V) modes for EVs. In this context, multiagent deep reinforcement learning (MADRL) has proven its effectiveness in EV charging control. However, existing MADRL-based approaches fail to consider the natural power flow of EV charging/discharging in the distribution network and ignore driver privacy. To deal with these problems, this article proposes a novel approach that combines multi-EV charging/discharging with a radial distribution network (RDN) operating under optimal power flow (OPF) to distribute power flow in real time. A mathematical model is developed to describe the RDN load. The EV charging control problem is formulated as a Markov decision process (MDP) to find an optimal charging control strategy that balances V2G profits, RDN load, and driver anxiety. To effectively learn the optimal EV charging control strategy, a federated deep reinforcement learning algorithm named FedSAC is further proposed. Comprehensive simulation results demonstrate the effectiveness and superiority of our proposed algorithm in terms of the diversity of the charging control strategy, the power fluctuations on RDN, the convergence efficiency, and the generalization ability.
KW - Electric vehicle (EV)
KW - federated learning (FL)
KW - optimal power flow (OPF)
KW - reinforcement learning
KW - vehicle-to-grid (V2G)
UR - https://www.scopus.com/pages/publications/85168723650
U2 - 10.1109/JIOT.2023.3306826
DO - 10.1109/JIOT.2023.3306826
M3 - Article
AN - SCOPUS:85168723650
SN - 2327-4662
VL - 11
SP - 5511
EP - 5525
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 3
ER -