Sonification of scalp-recorded frequency-following responses (FFRs) offers improved response detection over conventional statistical metrics

J Neurosci Methods. 2018 Jan 1;293:59-66. doi: 10.1016/j.jneumeth.2017.09.005. Epub 2017 Sep 14.

Abstract

Background: The human frequency-following response (FFR) is a neurophonic potential used to examine the brain's encoding of complex sounds (e.g., speech) and to monitor neuroplastic changes in auditory processing. Given the FFR's low amplitude (on the order of nanovolts), current conventions in the literature recommend collecting several thousand trials to obtain a robust evoked response with adequate signal-to-noise ratio.

New method: By exploiting the spectrotemporal fidelity of the response, we examined whether auditory playbacks (i.e., "sonifications") of the neural FFR could be used to assess the quality of running recordings and provide a stopping rule for signal averaging.
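The paper does not include code, but the core idea of sonification, rendering the running-average FFR waveform as audio so a listener can judge whether a tonal response has emerged from the noise, can be sketched. The function below is a hedged illustration only: the EEG sampling rate, audio rate, and file name are hypothetical placeholders, not values from the study.

```python
import wave

import numpy as np


def sonify_ffr(ffr, fs_eeg=10000, fs_audio=44100, path="ffr.wav"):
    """Render a running-average FFR waveform as a 16-bit mono WAV file.

    fs_eeg and fs_audio are hypothetical sampling rates chosen for
    illustration; the resampling here is simple linear interpolation.
    Returns the PCM samples that were written.
    """
    # Time axes for the EEG waveform and the target audio stream.
    t_eeg = np.arange(len(ffr)) / fs_eeg
    t_audio = np.arange(0.0, t_eeg[-1], 1.0 / fs_audio)

    # Upsample to the audio rate so the playback is audible at true pitch.
    audio = np.interp(t_audio, t_eeg, ffr)

    # Normalize to full scale (nanovolt-level signals are otherwise silent).
    peak = np.max(np.abs(audio))
    if peak > 0:
        audio = audio / peak
    pcm = (audio * 32767).astype(np.int16)

    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(fs_audio)
        w.writeframes(pcm.tobytes())
    return pcm
```

In use, one would periodically sonify the current sweep average during acquisition; a speech-evoked FFR should become audibly tonal (tracking the stimulus F0) once enough sweeps have accumulated, which is the perceptual cue the listeners in the study relied on.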

Results and comparison with existing method: In a listening task over headphones, naïve listeners detected speech-evoked FFRs within ∼500 sweeps based solely on their perception of the presence/absence of a tonal quality in the response. Moreover, response detection based on aural sonifications offered comparable performance and, in some cases, a 2-3× improvement over objective statistical techniques proposed in the literature (i.e., MI, SNR, MSC, F-test, Corr).
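Among the objective baselines mentioned, magnitude-squared coherence (MSC) is a standard evoked-response detection statistic: it measures phase consistency of a single frequency component across sweeps, approaching 1 for a phase-locked response and roughly 1/M (for M sweeps) for pure noise. The sketch below is a generic textbook formulation for illustration, not the exact implementation used in the study.

```python
import numpy as np


def msc(sweeps, fs, freq):
    """Magnitude-squared coherence across sweeps at one frequency.

    MSC = |mean_k X_k(f)|^2 / mean_k |X_k(f)|^2, where X_k(f) is the
    Fourier component of sweep k at frequency f. Values near 1 indicate
    a phase-locked response; for noise alone the expected value is
    about 1/M for M sweeps. Arguments: sweeps is an (M, N) array of
    single-trial epochs, fs the sampling rate in Hz, freq the analysis
    frequency in Hz (e.g., the stimulus F0).
    """
    m, n = sweeps.shape
    k = int(round(freq * n / fs))          # nearest FFT bin
    X = np.fft.rfft(sweeps, axis=1)[:, k]  # complex component per sweep
    return float(np.abs(X.mean()) ** 2 / np.mean(np.abs(X) ** 2))
```

As a stopping rule, one would recompute MSC as sweeps accumulate and stop averaging once it exceeds a criterion threshold (e.g., a critical value derived from the F-distribution); the study's comparison suggests listeners' aural judgments can reach a detection decision in fewer sweeps than such thresholds in some conditions.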

Conclusions: Our findings suggest that simply listening to FFRs (sonifications) might offer a rapid technique to monitor real-time EEG recordings and provide a stopping rule for terminating signal averaging that performs comparably to, or better than, current approaches.

Keywords: Data auralization; EEG detection algorithms; F-test; Mean-squared coherence (MSC); Objective audiometry.

MeSH terms

  • Acoustic Stimulation / methods*
  • Adult
  • Auditory Perception / physiology*
  • Brain / physiology*
  • Electroencephalography* / methods
  • Female
  • Humans
  • Male
  • Models, Statistical
  • Noise
  • Signal Detection, Psychological / physiology
  • Signal Processing, Computer-Assisted
  • Speech
  • Young Adult