A multi-camera and multimodal dataset for posture and gait analysis

Sci Data. 2022 Oct 6;9(1):603. doi: 10.1038/s41597-022-01722-7.

Abstract

Monitoring gait and posture while using assistive robotic devices is relevant to attain effective assistance and to assess the user's progression over time. This work presents a multi-camera, multimodal, and detailed dataset involving 14 healthy participants walking with a wheeled robotic walker equipped with a pair of affordable cameras. Depth data were acquired at 30 fps and synchronized with inertial data from Xsens MTw Awinda sensors and with kinematic data from the segments of the Xsens biomechanical model, both acquired at 60 Hz. Participants walked with the robotic walker at 3 different gait speeds, across 3 different walking scenarios/paths, at 3 different locations. In total, this dataset provides approximately 92 minutes of recording time, corresponding to nearly 166,000 samples of synchronized data. This dataset may contribute to scientific research by enabling the development and evaluation of: (i) vision-based pose estimation algorithms, exploring classic or deep learning approaches; (ii) human detection and tracking algorithms; (iii) movement forecasting; and (iv) biomechanical analysis of gait/posture when using a rehabilitation device.
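To illustrate how the two sampling rates described above can be aligned, the following is a minimal sketch in Python/NumPy, assuming hypothetical timestamp and sensor arrays (the abstract does not describe the dataset's actual file format or field names). It pairs each 30 fps depth frame with the nearest 60 Hz inertial sample by timestamp:

```python
import numpy as np

# Hypothetical timestamps in seconds: depth frames at 30 fps,
# inertial samples at 60 Hz (placeholders, not the dataset's real files).
depth_t = np.arange(0, 10, 1 / 30)          # 300 depth frames over 10 s
imu_t = np.arange(0, 10, 1 / 60)            # 600 inertial samples over 10 s
imu_data = np.random.randn(imu_t.size, 3)   # placeholder 3-axis inertial values

# For each depth frame, find the inertial sample with the closest timestamp.
idx = np.searchsorted(imu_t, depth_t)
idx = np.clip(idx, 1, imu_t.size - 1)
prev_closer = (depth_t - imu_t[idx - 1]) < (imu_t[idx] - depth_t)
idx[prev_closer] -= 1

synced = imu_data[idx]   # one inertial sample per depth frame, i.e. at 30 Hz
print(synced.shape)      # (300, 3)
```

Nearest-timestamp matching of this kind keeps the synchronized stream at the depth frame rate, which is consistent with the reported sample count: 92 min × 60 s × 30 Hz ≈ 165,600, i.e. nearly 166,000 synchronized samples.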

Publication types

  • Dataset

MeSH terms

  • Gait
  • Gait Analysis*
  • Humans
  • Posture*
  • Walkers*
  • Walking
