TY - JOUR
T1 - Multi-Agent Reinforcement Learning for Dynamic Resource Management in 6G in-X Subnetworks
AU - Du, Xiao
AU - Wang, Ting
AU - Feng, Qiang
AU - Ye, Chenhui
AU - Tao, Tao
AU - Wang, Lu
AU - Shi, Yuanming
AU - Chen, Mingsong
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2023/3/1
Y1 - 2023/3/1
N2 - The 6G network enables a subnetwork-wide evolution, resulting in a 'network of subnetworks'. However, due to the dynamic mobility of wireless subnetworks, intra-subnetwork and inter-subnetwork data transmissions will inevitably interfere with each other, which poses a great challenge to radio resource management. Moreover, most existing approaches require the instantaneous channel gains between subnetworks, which are usually difficult to collect. To tackle these issues, in this paper we propose a novel and effective intelligent radio resource management method using multi-agent deep reinforcement learning (MARL), which only needs the sum of received power on each channel, known as the received signal strength indicator (RSSI), instead of channel gains. However, directly separating individual interference components from the RSSI is nearly impossible. To this end, we further propose a novel MARL architecture, named GA-Net, which integrates a hard attention layer to model the importance distribution of inter-subnetwork relationships based on RSSI and exclude the impact of unrelated subnetworks, and employs a graph attention network with a multi-head attention layer to extract the features that affect individual throughput and calculate their weights. Experimental results demonstrate that our proposed framework significantly outperforms both traditional and MARL-based methods in various aspects.
AB - The 6G network enables a subnetwork-wide evolution, resulting in a 'network of subnetworks'. However, due to the dynamic mobility of wireless subnetworks, intra-subnetwork and inter-subnetwork data transmissions will inevitably interfere with each other, which poses a great challenge to radio resource management. Moreover, most existing approaches require the instantaneous channel gains between subnetworks, which are usually difficult to collect. To tackle these issues, in this paper we propose a novel and effective intelligent radio resource management method using multi-agent deep reinforcement learning (MARL), which only needs the sum of received power on each channel, known as the received signal strength indicator (RSSI), instead of channel gains. However, directly separating individual interference components from the RSSI is nearly impossible. To this end, we further propose a novel MARL architecture, named GA-Net, which integrates a hard attention layer to model the importance distribution of inter-subnetwork relationships based on RSSI and exclude the impact of unrelated subnetworks, and employs a graph attention network with a multi-head attention layer to extract the features that affect individual throughput and calculate their weights. Experimental results demonstrate that our proposed framework significantly outperforms both traditional and MARL-based methods in various aspects.
KW - Graph neural network
KW - interference mitigation
KW - multi-agent DRL
KW - resource management
KW - subnetwork
UR - https://www.scopus.com/pages/publications/85139502837
U2 - 10.1109/TWC.2022.3207918
DO - 10.1109/TWC.2022.3207918
M3 - Article
AN - SCOPUS:85139502837
SN - 1536-1276
VL - 22
SP - 1900
EP - 1914
JO - IEEE Transactions on Wireless Communications
JF - IEEE Transactions on Wireless Communications
IS - 3
ER -