QC_SANE: Robust Control in DRL Using Quantile Critic With Spiking Actor and Normalized Ensemble

IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):6656-6662. doi: 10.1109/TNNLS.2021.3129525. Epub 2023 Sep 1.

Abstract

Recently introduced discrete-time deep reinforcement learning (DRL) techniques have led to significant advances in online games, robotics, and related domains. Inspired by these developments, we propose an approach referred to as Quantile Critic with Spiking Actor and Normalized Ensemble (QC_SANE) for continuous control problems, which uses a quantile loss to train the critic and a spiking neural network (NN) to train an ensemble of actors. The NN performs internal normalization using the scaled exponential linear unit (SELU) activation function, which ensures robustness. An empirical study on multijoint dynamics with contact (MuJoCo)-based environments shows improved training and test results over the state-of-the-art approach, population coded spiking actor network (PopSAN).
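The abstract does not reproduce the training objective, but the two ingredients it names have standard forms: the quantile (pinball) loss used in distributional critics, and the SELU activation whose fixed constants give self-normalizing behavior. A minimal sketch of both, with hypothetical function names and illustrative values (not the paper's actual implementation), follows:

```python
import numpy as np

def quantile_loss(predicted, target, tau):
    """Pinball loss for quantile level tau in (0, 1).

    Penalizes under-predictions with weight tau and over-predictions
    with weight (1 - tau); tau = 0.5 recovers half the mean absolute error.
    """
    diff = target - predicted
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def selu(x):
    """Scaled exponential linear unit with the standard fixed constants.

    These values of alpha and lam make activations converge toward
    zero mean and unit variance (self-normalization).
    """
    alpha = 1.6732632423543772
    lam = 1.0507009873554805
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

# Illustrative use: asymmetric penalty at tau = 0.9 punishes
# under-estimating the target more than over-estimating it.
target = np.array([2.0])
print(quantile_loss(np.array([0.0]), target, tau=0.9))  # under-prediction
print(quantile_loss(np.array([4.0]), target, tau=0.9))  # over-prediction
print(selu(np.array([0.0, 1.0])))
```

Training the critic against several quantile levels at once yields a distributional value estimate, which is the usual motivation for a quantile loss in this setting.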