Robust Mean-Field Actor-Critic Reinforcement Learning Against Adversarial Perturbations on Agent States

IEEE Trans Neural Netw Learn Syst. 2023 Jun 5:PP. doi: 10.1109/TNNLS.2023.3278715. Online ahead of print.

Abstract

Multiagent deep reinforcement learning (DRL) makes optimal decisions based on the system states observed by agents, but any uncertainty in the observations may mislead agents into taking wrong actions. Mean-field actor-critic (MFAC) reinforcement learning is well known in the multiagent field because it effectively addresses the scalability problem. However, it is sensitive to state perturbations, which can significantly degrade team rewards. This work proposes Robust MFAC (RoMFAC) reinforcement learning, which has two innovations: 1) a new objective function for training actors, composed of a policy gradient function related to the expected cumulative discounted reward on sampled clean states and an action loss function that represents the difference between the actions taken on clean and adversarial states, and 2) a repetitive regularization of the action loss, which enables the trained actors to achieve excellent performance. Furthermore, this work proposes a game model named the state-adversarial stochastic game (SASG). Although the Nash equilibrium of the SASG may not exist, adversarial perturbations to states in RoMFAC are proven to be defensible based on the SASG. Experimental results show that RoMFAC is robust against adversarial perturbations while maintaining competitive performance in environments without perturbations.
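
As a rough illustration of the composite objective described above, the following PyTorch sketch combines a policy-gradient term on clean states with an action-loss term on adversarially perturbed states. This is a minimal sketch under stated assumptions, not the authors' implementation: the names (romfac_actor_loss, fgsm_perturb, lam), the FGSM-style perturbation, and the KL divergence used as the action-difference measure are all illustrative choices, since the abstract specifies only the two loss components, not their exact forms.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Actor(nn.Module):
        # Minimal categorical policy network (illustrative stand-in).
        def __init__(self, obs_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions)
            )

        def forward(self, obs):
            return self.net(obs)  # action logits

    def fgsm_perturb(actor, states, actions, eps=0.05):
        # Assumed FGSM-style state perturbation, standing in for the
        # paper's adversarial attack on observations.
        states = states.detach().clone().requires_grad_(True)
        loss = F.cross_entropy(actor(states), actions)
        grad = torch.autograd.grad(loss, states)[0]
        return (states + eps * grad.sign()).detach()

    def romfac_actor_loss(actor, states, actions, advantages, lam=0.5):
        # Composite objective: policy-gradient term on sampled clean
        # states plus an action-loss term on clean vs. adversarial states.
        log_probs = F.log_softmax(actor(states), dim=-1)
        chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
        pg_loss = -(chosen * advantages).mean()

        adv_states = fgsm_perturb(actor, states, actions)
        adv_log_probs = F.log_softmax(actor(adv_states), dim=-1)
        # Action loss: KL divergence between the policies on clean and
        # perturbed states (an assumed instantiation of the "difference
        # between actions" in the abstract).
        action_loss = F.kl_div(adv_log_probs, log_probs.detach().exp(),
                               reduction="batchmean")
        return pg_loss + lam * action_loss

    # Usage with random data (shapes only; no environment involved):
    actor = Actor(obs_dim=8, n_actions=4)
    states = torch.randn(32, 8)
    actions = torch.randint(0, 4, (32,))
    advantages = torch.randn(32)
    loss = romfac_actor_loss(actor, states, actions, advantages)
    loss.backward()

The paper's repetitive regularization would correspond to re-applying the action-loss term across training iterations; only a single weighted application with the assumed coefficient lam is shown here.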