Biologically Plausible Sparse Temporal Word Representations

IEEE Trans Neural Netw Learn Syst. 2023 Jul 6:PP. doi: 10.1109/TNNLS.2023.3290004. Online ahead of print.

Abstract

Word representations, usually derived from large corpora and endowed with rich semantic information, have been widely applied to natural language tasks. Traditional deep language models, built on dense word representations, require large memory space and computing resources. Brain-inspired neuromorphic computing systems, which offer better biological interpretability and lower energy consumption, still face major difficulties in representing words in terms of neuronal activities, and this has restricted their application to more complicated downstream language tasks. Comprehensively exploring the diverse neuronal dynamics of both integration and resonance, we investigate three spiking neuron models for post-processing the original dense word embeddings, and test the generated sparse temporal codes on several tasks concerning both word-level and sentence-level semantics. The experimental results show that our sparse binary word representations perform on par with, or even better than, the original word embeddings in capturing semantic information, while requiring less storage. Our methods provide a robust foundation for representing language in terms of neuronal activities, which could potentially be applied to future downstream natural language tasks on neuromorphic computing systems.
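To illustrate the general idea of post-processing dense embeddings into sparse temporal codes (this is only a minimal sketch, not the authors' actual models or parameters), one plausible scheme drives a leaky integrate-and-fire (LIF) neuron with each embedding dimension as a constant input current and uses the resulting spike raster over a short simulation window as a binary code. The `lif_encode` helper and all parameter values below are hypothetical.

```python
import numpy as np

def lif_encode(embedding, n_steps=20, tau=10.0, threshold=1.0, gain=2.0):
    """Hypothetical sketch: encode a dense embedding as a sparse binary
    spike raster using leaky integrate-and-fire (LIF) dynamics.

    Each embedding dimension is treated as a constant input current to
    one LIF neuron; the returned array has shape (dim, n_steps) with 1s
    at the time steps where that neuron fired.
    """
    dim = embedding.shape[0]
    v = np.zeros(dim)                              # membrane potentials
    spikes = np.zeros((dim, n_steps), dtype=np.uint8)
    # Rectify: only positive components drive spiking in this toy scheme.
    current = gain * np.clip(embedding, 0.0, None)
    for t in range(n_steps):
        v += (-v / tau) + current                  # leaky integration
        fired = v >= threshold
        spikes[fired, t] = 1
        v[fired] = 0.0                             # reset after a spike
    return spikes

# Toy usage: nearby embeddings should yield overlapping spike rasters.
rng = np.random.default_rng(0)
w1 = rng.normal(size=50)
w2 = w1 + 0.1 * rng.normal(size=50)                # a near-neighbour embedding
c1, c2 = lif_encode(w1), lif_encode(w2)
overlap = (c1 & c2).sum() / max((c1 | c2).sum(), 1)
print(f"sparsity: {c1.mean():.2f}, code overlap (Jaccard): {overlap:.2f}")
```

In such a scheme, semantic similarity between words would be reflected in the overlap of their binary spike codes rather than in cosine similarity between dense vectors; resonate-and-fire style neurons would instead encode information in spike timing relative to their intrinsic oscillation.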