Learning Structured Communication for Multi-Agent Reinforcement Learning (JAAMAS Track)

Junjie Sheng, Xiangfeng Wang*, Bo Jin, Wenhao Li, Jun Wang, Junchi Yan, Tsung Hui Chang, Hongyuan Zha

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

4 Scopus citations

Abstract

This paper investigates communication mechanisms for multi-agent reinforcement learning (MARL) in large-scale scenarios. We propose a novel framework, Learning Structured Communication (LSC), that leverages a flexible and efficient communication topology. LSC adaptively groups agents, forming diverse hierarchical topologies over the course of episodes via an auxiliary task and a hierarchical routing protocol. A hierarchical graph neural network is then learned over the formed topology to generate and propagate messages for both inter- and intra-group communication. Unlike state-of-the-art communication mechanisms, LSC offers a detailed and learnable design for hierarchical communication. Numerical experiments on challenging tasks demonstrate that LSC achieves high communication efficiency and strong global cooperation capability.
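To make the two-level communication pattern concrete, here is a minimal sketch of intra-group aggregation followed by inter-group exchange and broadcast. This is an illustrative assumption, not the paper's actual architecture: the grouping, the mean-pooling aggregation, and the concatenation rule are all placeholders for the learned GNN components described in the abstract.

```python
def mean(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def hierarchical_message_pass(features, groups):
    """Toy two-level message passing (placeholder for a learned hierarchical GNN).

    features: list of per-agent feature vectors.
    groups:   list of lists of agent indices (a partition of the agents).
    """
    # Intra-group: each group pools its members' features into one message.
    group_msgs = [mean([features[a] for a in g]) for g in groups]
    # Inter-group: group-level messages are pooled into a global summary.
    global_msg = mean(group_msgs)
    # Broadcast: each agent receives [own | group | global] concatenated.
    out = [None] * len(features)
    for gi, g in enumerate(groups):
        for a in g:
            out[a] = features[a] + group_msgs[gi] + global_msg
    return out
```

In the actual framework these fixed pooling steps would be replaced by learned message and update functions, and the groups themselves would be formed adaptively per episode rather than given.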

Original language: English
Pages (from-to): 436-438
Number of pages: 3
Journal: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 2023-May
State: Published - 2023
Event: 22nd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023 - London, United Kingdom
Duration: 29 May 2023 – 2 Jun 2023

Keywords

  • Graph Neural Networks
  • Hierarchical Structure
  • Learning to Communicate
  • Multi-Agent Reinforcement Learning
