A Realtime, Open-Source Speech-Processing Platform for Research in Hearing Loss Compensation

Conf Rec Asilomar Conf Signals Syst Comput. 2017 Oct-Nov:2017:1900-1904. doi: 10.1109/acssc.2017.8335694. Epub 2018 Apr 16.

Abstract

We are developing a realtime, wearable, open-source speech-processing platform (OSP) that can be configured at compile time and run time by audiologists and hearing aid (HA) researchers to investigate advanced HA algorithms in lab and field studies. The goals of this contribution are to present the current system and propose areas for enhancements and extensions. We identify (i) basic and (ii) advanced features in commercial HAs and describe current signal processing libraries and reference designs for building a functional HA. We present the performance of this system and compare it with commercial HAs using the ANSI S3.22 standard, "Specification of Hearing Aid Characteristics." We then describe a wireless protocol stack for remote control of HA parameters and for uploading media and HA status for offline research. The proposed architecture enables advanced research into hearing loss compensation by offloading processing from ear-level assemblies, thereby eliminating the bottlenecks of CPU capacity and of communication between the left and right HAs.

Keywords: Hearing aids; Open Speech Platform (OSP); speech and audio processing.