TY - JOUR
T1 - Topic-aware influence maximization based on deep reinforcement learning and group relative optimization
AU - Zou, Yingqi
AU - Li, Guanyu
AU - Wang, Yanhao
AU - Ning, Bo
AU - Wang, Shaohan
N1 - Publisher Copyright:
© 2026
PY - 2026/4
Y1 - 2026/4
N2 - Information diffusion on social networks is increasingly complex and diverse. Identifying suitable users to recommend and spread specific information is critical for social network analysis. The Topic-aware Influence Maximization (TIM) problem aims to identify a seed set that maximizes the influence spread under a given topic distribution. However, existing TIM methods suffer from severe computational inefficiencies. Meanwhile, current Deep Reinforcement Learning (DRL)-based methods mostly ignore the interplay between network structure and topic heterogeneity. To address these challenges, this paper proposes GR-TIM, an end-to-end DRL-based framework for TIM. GR-TIM first estimates the pre-global influence using Graph Neural Networks (GNNs). Then, a group relative optimization strategy partitions users based on topic and community structures. We further leverage intra-group collaboration to apply the global-local optimization paradigm to agent training and inter-group competition to achieve adaptive seed selection for the target topic. Experiments on six real-world datasets demonstrate that GR-TIM outperforms state-of-the-art DRL-based methods in terms of multi-topic influence spread and reduces runtime by two to three orders of magnitude compared to existing simulation-based methods.
AB - Information diffusion on social networks is increasingly complex and diverse. Identifying suitable users to recommend and spread specific information is critical for social network analysis. The Topic-aware Influence Maximization (TIM) problem aims to identify a seed set that maximizes the influence spread under a given topic distribution. However, existing TIM methods suffer from severe computational inefficiencies. Meanwhile, current Deep Reinforcement Learning (DRL)-based methods mostly ignore the interplay between network structure and topic heterogeneity. To address these challenges, this paper proposes GR-TIM, an end-to-end DRL-based framework for TIM. GR-TIM first estimates the pre-global influence using Graph Neural Networks (GNNs). Then, a group relative optimization strategy partitions users based on topic and community structures. We further leverage intra-group collaboration to apply the global-local optimization paradigm to agent training and inter-group competition to achieve adaptive seed selection for the target topic. Experiments on six real-world datasets demonstrate that GR-TIM outperforms state-of-the-art DRL-based methods in terms of multi-topic influence spread and reduces runtime by two to three orders of magnitude compared to existing simulation-based methods.
KW - Deep reinforcement learning
KW - Graph neural networks
KW - Group relative optimization
KW - Social networks
KW - Topic-aware influence maximization
UR - https://www.scopus.com/pages/publications/105029227530
U2 - 10.1016/j.asoc.2026.114786
DO - 10.1016/j.asoc.2026.114786
M3 - Article
AN - SCOPUS:105029227530
SN - 1568-4946
VL - 192
JO - Applied Soft Computing
JF - Applied Soft Computing
M1 - 114786
ER -