End-to-End Autonomous Navigation Based on Deep Reinforcement Learning with a Survival Penalty Function

Sensors (Basel). 2023 Oct 23;23(20):8651. doi: 10.3390/s23208651.

Abstract

An end-to-end approach to autonomous navigation based on deep reinforcement learning (DRL) with a survival penalty function is proposed in this paper. Two actor-critic (AC) frameworks, namely, deep deterministic policy gradient (DDPG) and twin-delayed DDPG (TD3), are employed to enable a nonholonomic wheeled mobile robot (WMR) to navigate dynamic environments that contain obstacles and for which no maps are available. A comprehensive reward based on the survival penalty function is introduced; this approach effectively solves the sparse reward problem and enables the WMR to move toward its target. Consecutive episodes are connected to increase the cumulative penalty in scenarios involving obstacles; this method prevents training failure and enables the WMR to plan a collision-free path. Simulations are conducted for four scenarios: movement in an obstacle-free space, in a parking lot, at an intersection without and with a central obstacle, and in a space containing multiple obstacles. The results demonstrate the efficiency and operational safety of the proposed method. For the same navigation environment, the TD3 algorithm exhibits faster numerical convergence and higher stability than the DDPG algorithm during the training phase, as well as a higher task execution success rate during the evaluation phase.
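The abstract does not state the exact form of the survival penalty function, but the idea it describes can be sketched as a dense per-step reward that combines progress toward the target, terminal bonuses and penalties, and a small constant penalty for every step the robot survives without finishing. The Python sketch below is illustrative only, under those assumptions; all names and coefficients (GOAL_REWARD, COLLISION_PENALTY, SURVIVAL_PENALTY, PROGRESS_GAIN) are hypothetical and are not values taken from the paper.

```python
# Illustrative reward shaping for goal-directed WMR navigation.
# All constants below are assumptions for this sketch, not the paper's values.
GOAL_REWARD = 100.0        # terminal bonus for reaching the target
COLLISION_PENALTY = -100.0 # terminal penalty for hitting an obstacle
SURVIVAL_PENALTY = -0.1    # small per-step penalty for every surviving step
PROGRESS_GAIN = 1.0        # weight on per-step progress toward the target


def step_reward(prev_dist: float, curr_dist: float,
                reached_goal: bool, collided: bool) -> float:
    """Dense reward for one control step of the robot.

    prev_dist / curr_dist: distance to the target before and after the step.
    The progress term densifies the otherwise sparse goal reward, while the
    constant survival penalty accumulates over the episode and discourages
    the agent from idling safely in place instead of reaching the target.
    """
    if reached_goal:
        return GOAL_REWARD
    if collided:
        return COLLISION_PENALTY
    progress = PROGRESS_GAIN * (prev_dist - curr_dist)
    return progress + SURVIVAL_PENALTY
```

Because the survival penalty accumulates with episode length, an agent that merely avoids obstacles without progressing receives a steadily more negative return; this accumulation, extended here across connected consecutive episodes as the abstract describes, is the mechanism credited with overcoming the sparse reward problem.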

Keywords: actor–critic (AC) method; autonomous; reinforcement learning (RL); wheeled mobile robots (WMRs).

Grants and funding

This research was funded by the National Science and Technology Council (NSTC) of Taiwan, R.O.C., under Contract NSTC 111-2622-E-262-003.