Learning Mobile Manipulation through Deep Reinforcement Learning

Sensors (Basel). 2020 Feb 10;20(3):939. doi: 10.3390/s20030939.

Abstract

Mobile manipulation has a broad range of applications in robotics. However, it is usually more challenging than fixed-base manipulation because the mobile base and the manipulator must be coordinated as a whole. Although recent works have demonstrated that deep reinforcement learning is a powerful technique for fixed-base manipulation tasks, most of them are not applicable to mobile manipulation. This paper investigates how to leverage deep reinforcement learning to tackle whole-body mobile manipulation tasks in unstructured environments using only on-board sensors. A novel mobile manipulation system that integrates state-of-the-art deep reinforcement learning algorithms with visual perception is proposed. Its framework decouples visual perception from deep reinforcement learning control, which enables the system to generalize from simulation training to real-world testing. Extensive simulation and real-world experiments show that the proposed system can autonomously grasp different types of objects in various scenarios, verifying its effectiveness.
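The abstract's central architectural idea is to decouple visual perception from the learned controller: perception reduces raw camera input to a compact state, and the deep reinforcement learning policy acts only on that state, which eases sim-to-real transfer. The sketch below illustrates that decoupling in a minimal, hypothetical form; all class names, the 6-joint arm, the 2-DOF base, and the placeholder pose estimator are assumptions for illustration and are not taken from the paper.

```python
import numpy as np


class PerceptionModule:
    """Maps raw camera observations to a compact object-pose estimate.
    Kept separate from the controller so the policy never sees raw pixels
    (the decoupling described in the abstract)."""

    def estimate_object_pose(self, rgb_image: np.ndarray) -> np.ndarray:
        # Placeholder: a real system would run a detector / pose estimator here.
        return np.zeros(3)  # assumed (x, y, z) of the target in the robot frame


class WholeBodyPolicy:
    """Toy linear policy over the low-dimensional state; a real system would
    use a network trained with a deep reinforcement learning algorithm."""

    def __init__(self, state_dim: int, action_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(action_dim, state_dim))

    def act(self, state: np.ndarray) -> np.ndarray:
        # Whole-body action: base velocity commands plus arm joint commands.
        return np.tanh(self.W @ state)


def control_step(rgb_image, joint_positions, base_pose,
                 perception: PerceptionModule, policy: WholeBodyPolicy):
    """One control cycle: image -> compact state -> whole-body action."""
    object_pose = perception.estimate_object_pose(rgb_image)
    state = np.concatenate([object_pose, joint_positions, base_pose])
    return policy.act(state)


if __name__ == "__main__":
    perception = PerceptionModule()
    # Assumed dimensions: 3 object pose + 6 arm joints + 3 base pose -> 2 base + 6 arm commands.
    policy = WholeBodyPolicy(state_dim=3 + 6 + 3, action_dim=2 + 6)
    action = control_step(
        rgb_image=np.zeros((480, 640, 3), dtype=np.uint8),
        joint_positions=np.zeros(6),
        base_pose=np.zeros(3),
        perception=perception,
        policy=policy,
    )
    print(action.shape)  # (8,)
```

Because the policy only consumes the compact state, the perception module can be retrained or swapped (e.g., for a different camera or object set) without retraining the controller, which is one common motivation for this kind of decoupled design.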

Keywords: deep learning; deep reinforcement learning; mobile manipulation.

MeSH terms

  • Algorithms
  • Calibration
  • Computer Simulation
  • Deep Learning*
  • Robotics*
  • Visual Perception