Navigating Virtual Environments Using Leg Poses and Smartphone Sensors

Sensors (Basel). 2019 Jan 13;19(2):299. doi: 10.3390/s19020299.

Abstract

Realization of navigation in virtual environments remains a challenge as it involves complex operating conditions. Decomposing this complexity is attainable through the fusion of sensors and machine learning techniques. Identifying the right combination of sensory information and the appropriate machine learning technique is a vital ingredient for translating physical actions into virtual movements. The contributions of our work include: (i) synchronization of actions and movements using suitable multiple sensor units, and (ii) selection of the significant features and an appropriate algorithm to process them. This work proposes an innovative approach that allows users to move in virtual environments simply by moving their legs towards the desired direction. The necessary hardware includes only a smartphone strapped to the subject's lower leg. Data from the gyroscope, accelerometer, and compass sensors of the mobile device are transmitted to a PC, where the movement is accurately identified using a combination of machine learning techniques. Once the desired movement is identified, the corresponding movement of the avatar in the virtual environment is realized. After pre-processing the sensor data using the box-plot outliers approach, Artificial Neural Networks provided the highest movement identification accuracy: 84.2% on the training dataset and 84.1% on the testing dataset.
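The pipeline described above, box-plot outlier removal followed by a neural-network classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic 9-dimensional feature vectors (standing in for 3 axes x 3 sensors), the four movement classes, the 1.5 x IQR whisker factor, and the scikit-learn `MLPClassifier` are all assumptions made for the example.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def remove_boxplot_outliers(X, y, k=1.5):
    """Drop samples where any feature falls outside [Q1 - k*IQR, Q3 + k*IQR].

    This mirrors the box-plot whisker rule; k=1.5 is the conventional
    factor and is an assumption here, not taken from the paper.
    """
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    mask = np.all((X >= q1 - k * iqr) & (X <= q3 + k * iqr), axis=1)
    return X[mask], y[mask]

# Synthetic stand-in for gyroscope/accelerometer/compass feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))        # e.g. 3 axes x 3 sensors per sample
y = rng.integers(0, 4, size=200)     # e.g. forward / back / left / right

X_clean, y_clean = remove_boxplot_outliers(X, y)

# A small feed-forward ANN; hidden-layer size is illustrative.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_clean, y_clean)
print(clf.score(X_clean, y_clean))
```

On real data, the training and testing accuracies reported in the abstract (84.2% and 84.1%) would come from evaluating `clf.score` on the respective splits.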

Keywords: feature selection; machine learning; mobile sensors; movement identification; virtual reality.

MeSH terms

  • Algorithms
  • Humans
  • Leg / physiology*
  • Machine Learning*
  • Movement
  • Neural Networks, Computer
  • Smartphone*
  • User-Computer Interface*