Entropy-Aware Model Initialization for Effective Exploration in Deep Reinforcement Learning

Sensors (Basel). 2022 Aug 4;22(15):5845. doi: 10.3390/s22155845.

Abstract

Effective exploration is one of the critical factors affecting performance in deep reinforcement learning. Agents acquire the data needed to learn the optimal policy through exploration; when exploration is not guaranteed, data quality deteriorates and performance degrades. This study investigates the effect of initial entropy, which strongly influences exploration, especially in the early stage of learning. Our results on tasks with discrete action spaces show that (1) low initial entropy increases the probability of learning failure, (2) the distributions of initial entropy across various tasks are biased towards low values that inhibit exploration, and (3) the initial entropy varies with both the initial weights and the task, making it hard to control. We then devise a simple yet powerful learning strategy to address these limitations: entropy-aware model initialization. The proposed algorithm provides a model with high initial entropy to a deep reinforcement learning algorithm, enabling effective exploration. Our experiments show that this strategy significantly reduces learning failures and improves performance, stability, and learning speed.
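
The paper's released implementation is not reproduced here; the following is a minimal sketch of the idea for discrete action spaces: re-sample the policy network's initial weights until the mean entropy of its action distribution over a batch of sampled states exceeds a threshold, then hand that model to the reinforcement learning algorithm. The network architecture, the entropy threshold, the state batch, and the `max_tries` cap are illustrative assumptions, not values from the paper.

```python
# Sketch of entropy-aware model initialization for a discrete action space.
# Re-initialize the policy until its mean initial entropy is high enough.
import torch
import torch.nn as nn
from torch.distributions import Categorical


def make_policy(obs_dim: int, n_actions: int) -> nn.Module:
    """Build a small policy network with freshly sampled initial weights
    (architecture is an assumption for illustration)."""
    return nn.Sequential(
        nn.Linear(obs_dim, 64), nn.Tanh(),
        nn.Linear(64, 64), nn.Tanh(),
        nn.Linear(64, n_actions),
    )


@torch.no_grad()
def mean_initial_entropy(policy: nn.Module, states: torch.Tensor) -> float:
    """Average entropy of the categorical action distribution over states."""
    dist = Categorical(logits=policy(states))
    return dist.entropy().mean().item()


def entropy_aware_init(obs_dim: int, n_actions: int, states: torch.Tensor,
                       threshold: float, max_tries: int = 100) -> nn.Module:
    """Re-sample initial weights until the policy's initial entropy exceeds
    the threshold; the selected model is then trained as usual."""
    best_policy, best_h = None, float("-inf")
    for _ in range(max_tries):
        policy = make_policy(obs_dim, n_actions)
        h = mean_initial_entropy(policy, states)
        if h > best_h:
            best_policy, best_h = policy, h
        if h >= threshold:
            return policy
    # If no candidate reaches the threshold, fall back to the
    # highest-entropy candidate found.
    return best_policy
```

Since the maximum entropy of a distribution over n discrete actions is ln n, one natural (assumed) choice of threshold is a fixed fraction of that bound, e.g. `threshold = 0.9 * math.log(n_actions)`.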

Keywords: deep reinforcement learning; entropy; exploration; model initialization.
