Learning from Deep Stereoscopic Attention for Simulator Sickness Prediction

IEEE Trans Vis Comput Graph. 2023 Feb;29(2):1415-1423. doi: 10.1109/TVCG.2021.3115901. Epub 2022 Dec 29.

Abstract

Simulator sickness induced by 360° stereoscopic video content remains a long-standing challenge in Virtual Reality (VR) systems. Current machine learning models for simulator sickness prediction ignore the underlying interdependencies and correlations across the multiple visual features that may induce simulator sickness. We propose a model for sickness prediction that automatically learns and adaptively integrates multi-level mappings from stereoscopic video features to simulator sickness scores. First, saliency, optical flow, and disparity features are extracted from the videos to reflect factors causing simulator sickness: human attention area, motion velocity, and depth information. These features are then embedded and fed into a 3-dimensional convolutional neural network (3D CNN) to extract the underlying multi-level knowledge, which includes low-level and higher-order visual concepts as well as a global image descriptor. Finally, an attention mechanism adaptively fuses the multi-level information with attentional weights to estimate the sickness score. The proposed model is trained end-to-end and validated on a public dataset. Comparisons with state-of-the-art models and ablation studies demonstrate improved performance in terms of Root Mean Square Error (RMSE) and Pearson Linear Correlation Coefficient (PLCC).
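The final fusion step described above can be sketched in a few lines: attention logits are normalized with a softmax to produce per-level weights, the multi-level descriptors are combined as a weighted sum, and a regression head maps the fused descriptor to a scalar score. This is a minimal NumPy illustration of the general technique, not the paper's implementation; all names, dimensions, and the linear regression head are assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(level_features, att_logits):
    """Fuse multi-level feature vectors with attention weights.

    level_features: list of L feature vectors, each of shape (d,)
    att_logits: raw (pre-softmax) attention logits, shape (L,)
    Returns the attention-weighted fused descriptor, shape (d,).
    """
    F = np.stack(level_features)   # (L, d) stack of per-level descriptors
    alpha = softmax(att_logits)    # attentional weights, sum to 1
    return alpha @ F               # weighted sum over the L levels

def predict_score(fused, w_reg, b_reg):
    """Hypothetical linear head mapping the fused descriptor to a score."""
    return float(fused @ w_reg + b_reg)

# Toy illustration: three "levels" standing in for low-level features,
# higher-order concepts, and the global image descriptor.
rng = np.random.default_rng(0)
levels = [rng.standard_normal(8) for _ in range(3)]
logits = np.array([0.2, 1.5, -0.3])  # would be learned end-to-end in practice
fused = attention_fuse(levels, logits)
score = predict_score(fused, rng.standard_normal(8), 0.5)
```

In a trained model the logits and regression weights would be learned jointly with the 3D CNN; the sketch only shows how attentional weights let the model emphasize whichever feature level is most predictive for a given clip.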

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Attention
  • Computer Graphics
  • Humans
  • Motion Sickness*
  • Neural Networks, Computer
  • Virtual Reality*