Decoding Multiple Sound-Categories in the Auditory Cortex by Neural Networks: An fNIRS Study

Front Hum Neurosci. 2021 Apr 28:15:636191. doi: 10.3389/fnhum.2021.636191. eCollection 2021.

Abstract

This study aims to decode the hemodynamic responses (HRs) evoked by multiple sound-categories using functional near-infrared spectroscopy (fNIRS). Six different sound-categories were given as stimuli (English, non-English, annoying, nature, music, and gunshot). Oxy-hemoglobin (HbO) concentration changes were measured in both hemispheres of the auditory cortex while 18 healthy subjects listened to 10-s blocks of the six sound-categories. Long short-term memory (LSTM) networks were used as the classifier. The six-class classification accuracy was 20.38 ± 4.63%. Although the LSTM networks' performance was only slightly above the chance level (16.67%), it is noteworthy that the data could be classified subject-wise without feature selection.
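The decoding pipeline described above, an LSTM fed with multichannel HbO time series and ending in a six-way classifier, could be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the framework (PyTorch), the channel count, the sampling rate, and the hidden size are all assumptions, since the abstract does not specify them.

```python
# Minimal sketch of an LSTM classifier for fNIRS HbO time series.
# All sizes below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

N_CHANNELS = 16    # assumed number of fNIRS HbO channels
N_TIMESTEPS = 100  # e.g., a 10-s block sampled at 10 Hz (assumption)
N_CLASSES = 6      # English, non-English, annoying, nature, music, gunshot

class SoundCategoryLSTM(nn.Module):
    def __init__(self, n_channels=N_CHANNELS, hidden=64, n_classes=N_CLASSES):
        super().__init__()
        # batch_first=True -> input shape (batch, time, channels)
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, channels) raw HbO concentration changes;
        # no hand-crafted feature selection, matching the abstract's claim.
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1])  # logits over the six sound-categories

model = SoundCategoryLSTM()
x = torch.randn(4, N_TIMESTEPS, N_CHANNELS)  # 4 synthetic trials
logits = model(x)
print(logits.shape)  # torch.Size([4, 6])
```

Feeding the raw time series directly to the LSTM is what allows subject-wise classification without a separate feature-selection stage; the recurrent layer learns its own temporal features from the 10-s blocks.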

Keywords: auditory cortex; decoding; deep learning; functional near-infrared spectroscopy (fNIRS); long short-term memories (LSTMs).