Decentralized Policy Coordination in Mobile Sensing with Consensual Communication

Sensors (Basel). 2022 Dec 7;22(24):9584. doi: 10.3390/s22249584.

Abstract

In a typical mobile-sensing scenario, multiple autonomous vehicles cooperatively navigate to maximize the spatial-temporal coverage of the environment. However, because each vehicle can only make decentralized navigation decisions based on limited local observations, coordinating the vehicles for cooperation in an open, dynamic environment remains a critical challenge. In this paper, we propose a novel framework that incorporates consensual communication into multi-agent reinforcement learning for cooperative mobile sensing. At each step, the vehicles first learn to communicate with each other and then navigate based on the messages received from the others. Through communication, the decentralized vehicles can share information to overcome the limitations of local observation. Moreover, we use mutual information as a regularizer to promote consensus among the vehicles. The mutual-information term enforces a positive correlation between the navigation policy and the communication message, thereby implicitly coordinating the decentralized policies. The convergence of this regularized algorithm can be proven theoretically under mild assumptions. In our experiments, we show that the algorithm is scalable and converges quickly during the training phase; it also significantly outperforms baseline methods in the execution phase. The results validate that consensual communication plays a key role in coordinating the behaviors of decentralized vehicles.
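The abstract describes a two-step structure, communicate first and then navigate on the received messages, with a mutual-information regularizer coupling the navigation policy to the message. The sketch below is an illustrative reconstruction of that idea, not the authors' implementation: the module names (MessageEncoder, NavigationPolicy), the variational surrogate used in place of the exact mutual-information bound, and the coefficient lambda_mi are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): policy-gradient loss with a
# mutual-information-style regularizer tying the navigation policy's output
# to the communication message received from other agents.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MessageEncoder(nn.Module):
    """Encodes a peer's local observation into a broadcast message (assumed architecture)."""
    def __init__(self, obs_dim, msg_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, msg_dim))

    def forward(self, obs):
        return self.net(obs)


class NavigationPolicy(nn.Module):
    """Maps (local observation, received message) to action logits (assumed architecture)."""
    def __init__(self, obs_dim, msg_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + msg_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs, msg):
        return self.net(torch.cat([obs, msg], dim=-1))


def mi_surrogate(action_logits, msg, posterior):
    """Differentiable surrogate for I(action; message): predict the message
    from the action distribution. A well-predicted message means the policy
    output carries information about (is positively correlated with) the
    message it received."""
    action_probs = F.softmax(action_logits, dim=-1)
    pred_msg = posterior(action_probs)
    return -F.mse_loss(pred_msg, msg)  # higher = message more recoverable


# --- toy usage -------------------------------------------------------------
obs_dim, msg_dim, n_actions, lambda_mi = 8, 4, 5, 0.1  # assumed sizes/weight
encoder = MessageEncoder(obs_dim, msg_dim)
policy = NavigationPolicy(obs_dim, msg_dim, n_actions)
posterior = nn.Sequential(nn.Linear(n_actions, 32), nn.ReLU(),
                          nn.Linear(32, msg_dim))

obs_self, obs_peer = torch.randn(16, obs_dim), torch.randn(16, obs_dim)
msg = encoder(obs_peer)                       # step 1: peers communicate
logits = policy(obs_self, msg)                # step 2: navigate on messages
advantage = torch.randn(16)                   # placeholder advantage estimate

dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()
pg_loss = -(dist.log_prob(action) * advantage).mean()   # policy-gradient term
loss = pg_loss - lambda_mi * mi_surrogate(logits, msg, posterior)
loss.backward()
```

In this toy setup the regularizer simply rewards action distributions from which the received message can be reconstructed; the paper's exact mutual-information estimator and convergence conditions are given in the full text.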

Keywords: communication; decentralized coordination; mobile sensing; reinforcement learning.

MeSH terms

  • Algorithms*
  • Communication
  • Learning*