Learning adaptive reaching and pushing skills using contact information

Front Neurorobot. 2023 Sep 14;17:1271607. doi: 10.3389/fnbot.2023.1271607. eCollection 2023.

Abstract

In this paper, we propose a deep reinforcement learning-based framework that enables adaptive and continuous control of a robot to push unseen objects from random positions to a target position. Our approach incorporates contact information into the design of the reward function, yielding higher success rates, better generalization to unseen objects, and greater task efficiency than policies that ignore contact information. By training in simulation with only a single object, we obtain a policy that generalizes well to pushing unseen objects. Finally, we validate the effectiveness of our approach in real-world scenarios.
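The abstract does not reproduce the paper's reward function; as a rough illustrative sketch only, the Python snippet below shows one common way contact information can enter a shaped reward for a reach-and-push task. All names, weights, and thresholds (pushing_reward, w_reach, w_push, contact_bonus, success_thresh) are hypothetical assumptions and are not taken from the article.

```python
import numpy as np

def pushing_reward(ee_pos, obj_pos, goal_pos, in_contact,
                   w_reach=1.0, w_push=2.0,
                   contact_bonus=0.1, success_thresh=0.02):
    """Contact-aware shaped reward for reach-and-push (illustrative sketch only).

    ee_pos     : (3,) end-effector position
    obj_pos    : (3,) object position
    goal_pos   : (3,) target position
    in_contact : bool, True if the end effector is touching the object
    """
    reach_dist = np.linalg.norm(ee_pos - obj_pos)   # encourage approaching the object
    push_dist = np.linalg.norm(obj_pos - goal_pos)  # encourage moving the object to the goal

    reward = -w_reach * reach_dist - w_push * push_dist
    if in_contact:
        reward += contact_bonus                     # reward maintaining contact while pushing
    if push_dist < success_thresh:
        reward += 10.0                              # terminal success bonus
    return reward
```

In a sketch like this, the contact term rewards the policy for keeping the end effector in touch with the object while it is being pushed, which is one plausible way a contact signal could shape the behavior the abstract describes.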

Keywords: adaptivity; contact information; pushing; reinforcement learning; task efficiency.

Grants and funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by the National Natural Science Foundation of China (U2013602 and 52075115), the National Key R&D Program of China (2020YFB13134 and 2022YFB46018), the Self-Planned Task (SKLRS202001B, SKLRS202110B, and SKLRS202301A12) of the State Key Laboratory of Robotics and System (HIT), the Shenzhen Science and Technology Research and Development Foundation (JCYJ20190813171009236), Basic Research on Free Exploration of Shenzhen Virtual University Park (2021Szvup085), and Basic Scientific Research of Technology (JCKY2020603C009).