Learning structured communication for multi-agent reinforcement learning

Junjie Sheng, Xiangfeng Wang, Bo Jin, Junchi Yan, Wenhao Li, Tsung Hui Chang, Jun Wang, Hongyuan Zha

Research output: Contribution to journal › Article › peer-review

36 Scopus citations

Abstract

This work explores large-scale multi-agent communication mechanisms for multi-agent reinforcement learning (MARL). We summarize the general topology categories of communication structures, which are often manually specified in the MARL literature. We propose a novel framework, termed Learning Structured Communication (LSC), that learns a flexible and efficient hierarchical communication topology. It contains two modules: a structured communication module and a communication-based policy module. The structured communication module learns to form a hierarchical structure by maximizing the cumulative reward of the agents under the current communication-based policy. The communication-based policy module adopts hierarchical graph neural networks to generate messages, propagate information over the learned communication structure, and select actions. In contrast to existing communication mechanisms, our method has a learnable, hierarchical communication structure. Experiments on large-scale battle scenarios show that the proposed LSC achieves high communication efficiency and global cooperation capability.
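To make the hierarchical communication idea concrete, below is a minimal sketch of two-level message passing in the spirit described by the abstract: agents are partitioned into groups, intra-group messages are pooled at a group centre, centres exchange information globally, and the result is broadcast back to each agent. All names here (`hierarchical_message_passing`, `group_ids`, the weight matrices `W_up`, `W_gg`, `W_down`) are illustrative assumptions, not the paper's actual implementation or learned structure.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hierarchical_message_passing(h, group_ids, rng=None):
    """Two-level message passing sketch.

    h: (n_agents, d) agent embeddings.
    group_ids: (n_agents,) group index per agent (a fixed, hand-made
    partition here; in LSC the structure itself would be learned).
    """
    n, d = h.shape
    rng = rng or np.random.default_rng(0)
    W_up = rng.standard_normal((d, d)) * 0.1    # agent -> group-centre message
    W_gg = rng.standard_normal((d, d)) * 0.1    # centre <-> centre exchange
    W_down = rng.standard_normal((d, d)) * 0.1  # centre -> agent broadcast

    groups = np.unique(group_ids)
    # 1) Intra-group aggregation: each centre pools its members' messages.
    centres = np.stack([relu(h[group_ids == g] @ W_up).mean(axis=0)
                        for g in groups])
    # 2) Inter-group exchange: every centre receives the mean centre message.
    centres = centres + relu(centres @ W_gg).mean(axis=0)
    # 3) Broadcast: each agent fuses its own state with its centre's message.
    centre_of = {g: i for i, g in enumerate(groups)}
    down = np.stack([centres[centre_of[g]] for g in group_ids])
    return relu(h + down @ W_down)

# Usage: 6 agents in 2 groups with 4-dimensional embeddings.
h = np.random.default_rng(1).standard_normal((6, 4))
out = hierarchical_message_passing(h, np.array([0, 0, 0, 1, 1, 1]))
print(out.shape)  # (6, 4)
```

The two-level routing is what buys communication efficiency at scale: each agent exchanges messages only within its group and via its centre, rather than with all other agents.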

Original language: English
Article number: 50
Journal: Autonomous Agents and Multi-Agent Systems
Volume: 36
Issue number: 2
DOIs
State: Published - Oct 2022

Keywords

  • Graph Neural Networks
  • Hierarchical Structure
  • Learning Communication Structures
  • Multi-agent Reinforcement Learning
