An Information-Assisted Deep Reinforcement Learning Path Planning Scheme for Dynamic and Unknown Underwater Environment

IEEE Trans Neural Netw Learn Syst. 2023 Nov 21:PP. doi: 10.1109/TNNLS.2023.3332172. Online ahead of print.

Abstract

An autonomous underwater vehicle (AUV) has shown impressive potential and promising exploitation prospects in numerous marine missions. Across these applications, path planning is the most essential prerequisite. Although considerable efforts have been made, existing approaches suffer from several limitations. A complete and realistic ocean simulation environment is critically needed: most existing methods are based on mathematical models and therefore exhibit a large gap from reality. At the same time, the dynamic and unknown environment places high demands on robustness and generalization. To overcome these limitations, we propose an information-assisted reinforcement learning path planning scheme. First, it performs numerical modeling based on real ocean current observations to establish a complete simulation environment with the grid method, including 3-D terrain, dynamic currents, local information, and so on. Next, we propose an information compression (IC) scheme that trims the mutual information (MI) between reinforcement learning neural network layers to improve generalization; a proof based on information theory provides solid support for this scheme. Moreover, to address the dynamic characteristics of the marine environment, we carefully design a confidence evaluator (CE), which evaluates the correlation between two adjacent frames of ocean currents to assign a confidence to each action. The performance of our method is validated by numerical results, which demonstrate good sensitivity to ocean currents and high robustness and generalization in coping with the dynamic and unknown underwater environment.
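The IC scheme described above trims the mutual information between network layers. A common way to bound such MI is a variational information bottleneck: treat a hidden layer as a Gaussian code and penalize its KL divergence from a standard normal prior, which upper-bounds the MI between input and code. The sketch below illustrates only that generic penalty, not the paper's exact formulation; the function name and interface are hypothetical.

```python
import numpy as np

def mi_upper_bound(mu, log_var):
    """Variational upper bound on I(X; Z) for a Gaussian encoder layer.

    Computes KL(N(mu, sigma^2) || N(0, I)) per sample and averages over
    the batch. mu and log_var are arrays of shape (batch, dim).
    NOTE: a generic information-bottleneck penalty, hypothetical here --
    the paper's actual IC scheme may differ.
    """
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    # Closed-form KL for diagonal Gaussians against N(0, I).
    kl = 0.5 * (mu**2 + np.exp(log_var) - log_var - 1.0)
    return kl.sum(axis=-1).mean()
```

Minimizing this term alongside the reinforcement learning loss compresses the layer's representation, which is the mechanism the IC scheme exploits to improve generalization.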
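The confidence evaluator (CE) rates how trustworthy an action is by the correlation between two adjacent frames of the ocean current field. A minimal sketch of that idea, assuming a Pearson correlation over the flattened current fields mapped to a [0, 1] confidence (the function name and mapping are assumptions, not the paper's exact design):

```python
import numpy as np

def frame_confidence(frame_prev, frame_curr):
    """Confidence from the correlation of two adjacent current frames.

    Flattens both fields, computes their Pearson correlation r in [-1, 1],
    and maps it linearly to a confidence in [0, 1]. Hypothetical sketch of
    the CE idea; the paper's evaluator may use a different mapping.
    """
    a = np.asarray(frame_prev, dtype=float).ravel()
    b = np.asarray(frame_curr, dtype=float).ravel()
    r = np.corrcoef(a, b)[0, 1]
    return 0.5 * (r + 1.0)  # r = 1 (identical flow) -> 1.0; r = -1 -> 0.0
```

Under this reading, a slowly varying current field yields high confidence (the planner can trust actions based on the last observation), while an abrupt change between frames lowers the confidence assigned to the action.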