Assessing robustness to adversarial attacks in attention-based networks: Case of EEG-based motor imagery classification

SLAS Technol. 2024 May 7:100142. doi: 10.1016/j.slast.2024.100142. Online ahead of print.

Abstract

The classification of motor imagery (MI) from Electroencephalography (EEG) signals plays a pivotal role in enabling communication for individuals with physical limitations through Brain-Computer Interface (BCI) systems. Attention-Based Networks (ATNs) have recently achieved strong performance in EEG signal classification, offering a promising alternative to conventional Convolutional Neural Networks (CNNs). However, while the resilience of CNNs to adversarial attacks has been analyzed extensively, the susceptibility of ATNs under comparable conditions remains largely unexplored. This paper addresses that gap by investigating the robustness of ATNs in adversarial settings. We propose a high-performing attention-based deep learning model for classifying MI brain signals from EEG data, and we conduct a systematic series of experiments assessing various attack strategies against ATNs in EEG-based BCI tasks. Our analysis uses the widely adopted BCI Competition IV 2a dataset to demonstrate the effectiveness of attention mechanisms in BCI applications. Although the model achieves strong classification results in terms of accuracy (87.15%) and kappa score (0.8287), our findings reveal that attention-based models are vulnerable to adversarial manipulation (accuracy: 9.07%, kappa score: -0.21), underscoring the need to strengthen the robustness of attention architectures for EEG classification.
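
The abstract does not specify which attack strategies were evaluated, so the sketch below is an illustration only: it applies a standard one-step FGSM perturbation to a toy attention-based EEG classifier in PyTorch. The architecture (ToyAttentionEEGNet), the perturbation budget eps, and the trial length are assumptions introduced here; only the input dimensionality (22 channels) and the four MI classes follow the BCI Competition IV 2a dataset. This is not the authors' model or attack suite.

```python
# Hypothetical illustration: an FGSM-style adversarial attack on a small
# attention-based EEG classifier. Architecture, eps, and trial length are
# assumptions; 22 channels / 4 classes follow BCI Competition IV 2a.
import torch
import torch.nn as nn

class ToyAttentionEEGNet(nn.Module):
    """Minimal attention-based classifier: embed timesteps, self-attend, pool."""
    def __init__(self, n_channels=22, n_classes=4, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)  # per-timestep embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):  # x: (batch, time, channels)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))  # mean-pool over time

def fgsm_attack(model, x, y, eps=0.05):
    """One-step FGSM: move x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = ToyAttentionEEGNet().eval()
x = torch.randn(8, 1000, 22)           # stand-in batch of EEG trials
y = torch.randint(0, 4, (8,))          # stand-in MI class labels
x_adv = fgsm_attack(model, x, y)
clean_acc = (model(x).argmax(1) == y).float().mean()
adv_acc = (model(x_adv).argmax(1) == y).float().mean()
print(f"clean acc: {clean_acc:.2f}, adversarial acc: {adv_acc:.2f}")
```

For reference on the reported metrics, Cohen's kappa is k = (p_o - p_e) / (1 - p_e), where p_o is the observed accuracy and p_e is the agreement expected by chance; the post-attack score of -0.21 therefore indicates below-chance performance.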

Keywords: Adversarial attacks; Attention based networks; Brain–computer interfaces (BCI); Classification; Electroencephalography (EEG).