Communication Learning for True Cooperation in Multi-Agent Systems

Multi-Agent Reinforcement Learning (MARL) poses distinct challenges, including credit assignment and non-stationarity, which make learning harder than in Single-Agent Reinforcement Learning. Learning effective cooperative policies through decentralized learning, relying solely on each agent's own knowledge and memory, can be challenging or outright impossible. Introducing global/local communication among agents/agent teams can address this problem by enabling agents to share information and/or intentions while maintaining scalability, thus effectively mitigating partial observability and non-stationarity while facilitating true joint collaboration at the team level. In other words, communication allows dispersed agents to truly act as a group, rather than as a collection of individuals.

This project aims to investigate and contribute to the state of the art in communication learning, where multiple agents learn both a policy that dictates their actions and a language with which they can communicate and influence each other's actions to achieve true cooperation. The project will demonstrate the power of communication in both simulated and physical environments for a variety of multi-agent cooperative tasks, particularly complex tasks such as the StarCraft Multi-Agent Challenge (SMAC) and Multi-Agent Path Finding (MAPF), which require agents to develop joint maneuvering skills.
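To make the idea concrete, the sketch below shows one common pattern from the communication-learning literature: each agent encodes its local observation into a message, broadcasts it, and conditions its policy on its own observation plus the pooled messages of its teammates. This is a minimal illustrative example, not the project's actual architecture; all parameters (random linear encoders, greedy action selection) and dimensions are hypothetical, and in practice both the message encoder and the policy would be trained end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, OBS_DIM, MSG_DIM, N_ACTIONS = 3, 4, 2, 5

# Hypothetical per-agent parameters (random here; learned in practice).
msg_W = [rng.standard_normal((MSG_DIM, OBS_DIM)) for _ in range(N_AGENTS)]
pol_W = [rng.standard_normal((N_ACTIONS, OBS_DIM + MSG_DIM))
         for _ in range(N_AGENTS)]

def step(observations):
    """One decision step with a single broadcast communication round."""
    # 1. Each agent encodes its local observation into a message vector.
    messages = [W @ o for W, o in zip(msg_W, observations)]
    actions = []
    for i, o in enumerate(observations):
        # 2. Aggregate the other agents' messages (mean pooling keeps the
        #    input size fixed, so the scheme scales with team size).
        incoming = np.mean([m for j, m in enumerate(messages) if j != i],
                           axis=0)
        # 3. The policy conditions on local observation + incoming messages,
        #    partially compensating for each agent's limited field of view.
        logits = pol_W[i] @ np.concatenate([o, incoming])
        actions.append(int(np.argmax(logits)))
    return actions

obs = [rng.standard_normal(OBS_DIM) for _ in range(N_AGENTS)]
print(step(obs))  # one joint action, one entry per agent
```

Because the messages are simple differentiable functions of the observations, gradients from each agent's policy loss can flow back through its teammates' encoders during training, which is how many end-to-end communication-learning methods let agents shape a shared "language".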

Related recent publications:

People

Guillaume SARTORETTI
Assistant Professor
Yutong WANG
Pamela WANG
Swasti KHURANA
Mingliang ZHANG
Sichang SU
Shuheng WANG
Mingyu LU
Hanqi ZHAO