Reinforcement Learning for Mobile Robotics Exploration: A Survey

IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):3796-3810. doi: 10.1109/TNNLS.2021.3124466. Epub 2023 Aug 4.

Abstract

Efficient exploration of unknown environments is a fundamental precondition for modern autonomous mobile robot applications. Aiming to design robust and effective robotic exploration strategies suitable for complex real-world scenarios, the academic community has increasingly investigated the integration of robotics with reinforcement learning (RL) techniques. This survey provides a comprehensive review of recent research that uses RL to design unknown-environment exploration strategies for single-robot and multi-robot systems. The primary purpose of this study is to facilitate future research by compiling and analyzing the current state of works that link these two knowledge domains. The survey summarizes which RL algorithms are employed and how they compose the mobile robot exploration strategies proposed so far; how robotic exploration solutions address typical RL problems such as the exploration-exploitation dilemma, the curse of dimensionality, reward shaping, and slow learning convergence; and which experiments are performed and which software tools are used for learning and testing. Achieved progress is described, and remaining limitations and future perspectives are discussed.