Adaptive Discount Factor for Deep Reinforcement Learning in Continuing Tasks with Uncertainty

Sensors (Basel). 2022 Sep 25;22(19):7266. doi: 10.3390/s22197266.

Abstract

Reinforcement learning (RL) trains an agent by maximizing the discounted sum of rewards. Since the discount factor has a critical effect on the learning performance of the RL agent, it is important to choose it properly. When uncertainties are involved in training, learning performance with a constant discount factor can be limited. To obtain acceptable learning performance consistently, this paper proposes an adaptive rule for the discount factor based on the advantage function, and shows how the advantage function can be used in both on-policy and off-policy algorithms. To demonstrate the performance of the proposed adaptive rule, it is applied to PPO (Proximal Policy Optimization) for Tetris to validate the on-policy case, and to SAC (Soft Actor-Critic) for the motion planning of a robot manipulator to validate the off-policy case. In both cases, the proposed method achieves performance better than or comparable to that obtained with the best constant discount factors found by exhaustive search. Hence, the proposed adaptive discount factor automatically finds a discount factor that yields comparable training performance, and it can be applied to representative deep reinforcement learning problems.
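The abstract does not give the exact update rule, so the following is only a minimal illustrative sketch of the general idea: adjusting the discount factor online from advantage estimates. The function name, step size, and bounds are assumptions for illustration, not the paper's method.

```python
def adapt_discount(gamma, advantages, gamma_min=0.90, gamma_max=0.999, step=0.001):
    """Hypothetical adaptive rule (not the paper's exact update): nudge the
    discount factor up when the mean advantage over a batch is positive
    (the policy is improving, so weight far-future reward more), and down
    when it is negative. gamma is kept inside [gamma_min, gamma_max]."""
    mean_adv = sum(advantages) / len(advantages)
    if mean_adv > 0:
        return min(gamma + step, gamma_max)
    return max(gamma - step, gamma_min)
```

In an actual training loop, such a rule would be called once per batch, with the updated gamma fed back into the return (or TD-target) computation for the next batch.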

Keywords: Tetris; discount factor; path planning; reinforcement learning; uncertainty.

MeSH terms

  • Algorithms*
  • Learning
  • Reinforcement, Psychology*
  • Reward
  • Uncertainty