Multimodal decoding of error processing in a virtual reality flight simulation

Sci Rep. 2024 Apr 22;14(1):9221. doi: 10.1038/s41598-024-59278-y.

Abstract

Technological advances in head-mounted displays (HMDs) facilitate the acquisition of the user's physiological data, such as gaze, pupil size, or heart rate. Still, interactions with such systems can be prone to errors, including unintended behavior or unexpected changes in the presented virtual environments. In this study, we investigated whether multimodal physiological data can be used to decode error processing, which to date has been studied with brain signals only. We examined the feasibility of decoding errors solely from pupil size data and proposed a hybrid decoding approach combining electroencephalographic (EEG) and pupillometric signals. Moreover, we analyzed whether hybrid approaches can improve on existing EEG-based classification methods and focused on setups that offer increased usability for practical applications, such as the presented game-like virtual reality flight simulation. Our results indicate that classifiers trained with pupil size data can decode errors above chance level. Moreover, hybrid approaches yielded improved performance compared to EEG-based decoders in setups with a reduced number of channels, which is crucial for many out-of-the-lab scenarios. These findings contribute to the development of hybrid brain-computer interfaces, particularly in combination with wearable devices, which allow for easy acquisition of additional physiological data.
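To illustrate the kind of hybrid decoding the abstract describes, the sketch below shows one plausible way to fuse EEG and pupil-size features and compare unimodal against hybrid classifiers. This is not the authors' pipeline: the feature definitions, the shrinkage-regularized LDA classifier, the scikit-learn tooling, and the synthetic data are all assumptions chosen because they are common in error-related potential decoding; with random data the reported accuracies sit at chance and only the fusion pattern is demonstrated.

```python
# Illustrative sketch only; the paper's actual feature extraction and classifier
# are not specified in the abstract. Assumed here: per-trial EEG features
# (e.g., windowed mean amplitudes per channel), per-trial pupil-size features,
# fusion by simple concatenation, and a shrinkage-regularized LDA decoder.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 200                                  # hypothetical trial count
eeg_feats = rng.normal(size=(n_trials, 64))     # hypothetical EEG features per trial
pupil_feats = rng.normal(size=(n_trials, 8))    # hypothetical pupil-size features per trial
labels = rng.integers(0, 2, size=n_trials)      # 1 = error trial, 0 = correct trial

# Hybrid decoder: concatenate both modalities into one feature vector per trial.
X_hybrid = np.hstack([eeg_feats, pupil_feats])

clf = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
)

# Compare unimodal and hybrid decoders with cross-validation.
for name, X in [("EEG only", eeg_feats), ("pupil only", pupil_feats), ("hybrid", X_hybrid)]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```

In a real analysis the synthetic arrays would be replaced by features extracted from the recorded EEG epochs and eye-tracking stream, and the same cross-validated comparison would quantify whether the hybrid decoder outperforms either single modality.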

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Brain / physiology
  • Brain-Computer Interfaces*
  • Computer Simulation
  • Electroencephalography* / methods
  • Female
  • Heart Rate / physiology
  • Humans
  • Male
  • Pupil* / physiology
  • Virtual Reality*
  • Young Adult