Learning 3D Bipedal Walking with Planned Footsteps and Fourier Series Periodic Gait Planning

Sensors (Basel). 2023 Feb 7;23(4):1873. doi: 10.3390/s23041873.

Abstract

Reinforcement learning provides a general framework for achieving autonomy and diversity in traditional robot motion control. Robots must walk dynamically to adapt to varied ground conditions in complex environments. To walk like humans, robots must be able to perceive, understand, and interact with their surroundings. In 3D environments, human-like walking on rugged terrain is challenging because it requires world model generation, motion planning, and control algorithms, together with their integration, so learning high-dimensional complex motions remains an active research topic. This paper proposes a deep reinforcement learning-based footstep tracking method that tracks planned footstep positions by adding periodic and symmetry information about bipedal walking to the reward function. The robot can thereby achieve obstacle avoidance, omnidirectional walking, turning, standing, and stair climbing in complex environments. Experimental results show that reinforcement learning can be combined with real-time footstep planning, removing path-planning information from the training process; this keeps the model from learning unnecessary knowledge and thereby accelerates training.
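
For illustration only, the sketch below shows one way the ideas in the abstract could be rendered as reward shaping: a truncated Fourier series acts as a periodic gait-phase clock, and a footstep-tracking term pulls the swing foot toward a planner-supplied target. All function names, coefficients, and weights here (gait_phase_clock, step_reward, w_track, w_period) are assumptions made for this sketch, not the authors' published formulation.

    import numpy as np

    def gait_phase_clock(t, period=1.0, n_harmonics=3):
        """Smooth periodic stance indicator from a truncated Fourier series
        (illustrative coefficients); ~1 = left-leg stance, ~0 = right-leg stance."""
        phase = 2.0 * np.pi * (t % period) / period
        # Odd-harmonic sum approximating a square wave that alternates legs.
        s = sum(np.sin((2 * k - 1) * phase) / (2 * k - 1)
                for k in range(1, n_harmonics + 1))
        return 0.5 + 0.5 * np.tanh(4.0 * s)

    def step_reward(swing_foot_pos, target_footstep,
                    left_force, right_force, t,
                    w_track=1.0, w_period=0.5):
        """Hypothetical per-step reward: footstep tracking plus gait periodicity."""
        # Footstep tracking: reward decays with distance between the swing foot
        # and the footstep position supplied by the external planner.
        track = np.exp(-np.linalg.norm(np.asarray(swing_foot_pos)
                                       - np.asarray(target_footstep)))
        # Periodicity/symmetry: reward loading the leg that the Fourier clock
        # designates as the stance leg at time t.
        total = left_force + right_force + 1e-6
        c = gait_phase_clock(t)
        period_term = c * (left_force / total) + (1.0 - c) * (right_force / total)
        return w_track * track + w_period * period_term

    # Example call: early in the cycle the clock favors loading the left leg.
    r = step_reward(swing_foot_pos=[0.32, 0.10, 0.02],
                    target_footstep=[0.30, 0.10, 0.00],
                    left_force=350.0, right_force=20.0, t=0.15)

In such a setup the footstep targets would come from an external real-time planner at every control step, which matches the abstract's point that path-planning knowledge need not be learned during policy training.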

Keywords: footstep planning; gait phase; humanoid; reinforcement learning.

MeSH terms

  • Algorithms
  • Fourier Analysis
  • Gait*
  • Humans
  • Learning
  • Walking*

Grants and funding

This work has been partially funded by Leju Robotics.