Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks

PLoS One. 2016 Jan 29;11(1):e0146917. doi: 10.1371/journal.pone.0146917. eCollection 2016.

Abstract

Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made those earlier results hard to reproduce. Further, we extend those previous experiments by modeling unseen languages (out-of-set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s down to 0.1s), showing that with as little as 0.5s of speech an accuracy of over 50% can be achieved.
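To make the described architecture concrete, the following is a minimal sketch (not the authors' code) of an end-to-end LSTM language classifier over frame-level acoustic features, with one extra output unit reserved for the out-of-set (OOS) class. The feature dimensionality, layer size, and optimizer are illustrative assumptions, not values taken from the paper.

    # Hypothetical sketch of an end-to-end LSTM LID classifier (TensorFlow/Keras).
    # Layer sizes and features are assumptions for illustration only.
    import tensorflow as tf

    NUM_FEATURES = 39                       # assumed: e.g. MFCCs + deltas per frame
    NUM_TARGET_LANGS = 8                    # 8 target languages, as in the LRE subset
    NUM_CLASSES = NUM_TARGET_LANGS + 1      # +1 output unit modeling OOS languages

    # Variable-length sequences of acoustic frames -> single language posterior.
    inputs = tf.keras.Input(shape=(None, NUM_FEATURES))
    hidden = tf.keras.layers.LSTM(512)(inputs)                     # recurrent encoder
    outputs = tf.keras.layers.Dense(NUM_CLASSES,
                                    activation="softmax")(hidden)  # language posteriors
    model = tf.keras.Model(inputs, outputs)

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(frame_sequences, language_labels, ...)  # trained on ~3 s utterances

At test time, shorter utterances (e.g. 0.5s) would simply yield shorter input sequences; the softmax output over 8 target languages plus the OOS unit is read off the final frame.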

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Language*
  • Memory, Long-Term*
  • Memory, Short-Term*
  • Neural Networks, Computer*

Grants and funding

This work has been supported by project CMC-V2: Caracterización, Modelado y Compensación de Variabilidad en la Señal de Voz (TEC2012-37585-C02-01), funded by Ministerio de Economía y Competitividad, Spain.