Cognitive Load Prediction from Multimodal Physiological Signals using Multiview Learning

IEEE J Biomed Health Inform. 2023 Dec 22:PP. doi: 10.1109/JBHI.2023.3346205. Online ahead of print.

Abstract

Predicting cognitive load is a crucial issue in the emerging field of human-computer interaction and holds significant practical value, particularly in flight scenarios. Although previous studies have achieved efficient cognitive load classification, new research is still needed to adapt current state-of-the-art multimodal fusion methods. Here, we propose a feature selection framework based on multiview learning to address the challenge of information redundancy and reveal the common physiological mechanisms underlying cognitive load. Specifically, multimodal signal features (EEG, EDA, ECG, EOG, and eye movements) at three cognitive load levels were estimated during multiattribute task battery (MATB) tasks performed by 22 healthy participants and fed into a feature selection-multiview classification with cohesion and diversity (FS-MCCD) framework. The optimized feature set was extracted from the original feature set by integrating the weight of each view with the feature weights to formulate the ranking criterion. The cognitive load prediction model, evaluated using real-time classification results, achieved an average accuracy of 81.08% and an average F1-score of 80.94% for three-class classification across the 22 participants. Furthermore, the weights of the physiological signal features revealed the physiological mechanisms related to cognitive load. Specifically, heightened cognitive load was linked to amplified δ and θ power in the frontal lobe, reduced α power in the parietal lobe, and increased pupil diameter. Thus, the proposed multimodal feature fusion framework demonstrates the effectiveness and efficiency of using these features to predict cognitive load.
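The abstract does not give the exact FS-MCCD formulation, so the following Python sketch only illustrates the general idea it describes: ranking features across views by combining a per-view weight with per-feature weights and keeping the top-ranked subset. The weight definitions used here (single-view cross-validated accuracy as the view weight, absolute logistic-regression coefficients as the feature weights) and the synthetic data are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the multimodal data: one feature matrix per view
# (EEG, EDA, ECG, EOG, eye movements) and three cognitive-load labels.
n_samples = 120
views = {
    "EEG": rng.normal(size=(n_samples, 20)),
    "EDA": rng.normal(size=(n_samples, 4)),
    "ECG": rng.normal(size=(n_samples, 6)),
    "EOG": rng.normal(size=(n_samples, 5)),
    "EYE": rng.normal(size=(n_samples, 8)),
}
y = rng.integers(0, 3, size=n_samples)  # three cognitive load levels


def view_weight(X, y):
    """Hypothetical view weight: cross-validated accuracy of a simple
    classifier trained on this view alone (a proxy for view relevance)."""
    X = StandardScaler().fit_transform(X)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()


def feature_weights(X, y):
    """Hypothetical per-feature weights: mean absolute coefficients of a
    multinomial logistic regression fit on this view."""
    X = StandardScaler().fit_transform(X)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return np.abs(clf.coef_).mean(axis=0)


# Combine view-level and feature-level weights into one ranking criterion.
scores = {}
for name, X in views.items():
    w_view = view_weight(X, y)
    w_feat = feature_weights(X, y)
    for j, w in enumerate(w_feat):
        scores[(name, j)] = w_view * w  # product of the two weights

# Keep the top-k features across all views as the optimized feature set.
k = 15
selected = sorted(scores, key=scores.get, reverse=True)[:k]
print("Selected (view, feature index) pairs:", selected)
```

In this sketch, the per-view weight rescales every feature weight within that view, so a feature from a weakly informative modality must have a comparatively large within-view weight to survive the global ranking; whether FS-MCCD combines the two weight levels multiplicatively or otherwise is not specified in the abstract.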