Design Space Exploration of Hardware Spiking Neurons for Embedded Artificial Intelligence

Neural Netw. 2020 Jan;121:366-386. doi: 10.1016/j.neunet.2019.09.024. Epub 2019 Sep 26.

Abstract

Machine learning is attracting unprecedented interest in research and industry, owing to recent successes in applied contexts such as image classification and object recognition. However, deploying these systems requires substantial computing capabilities, making them unsuitable for embedded systems. To address this limitation, many researchers are investigating brain-inspired computing as an alternative to conventional von Neumann architectures (CPUs/GPUs), which meet computing-performance requirements but fall short on energy efficiency. This calls for neuromorphic hardware circuits suited to both parallel and distributed computation. In this paper, we focus on Spiking Neural Networks (SNNs), with a comprehensive study of neural coding methods and hardware exploration. In this context, we propose a framework for neuromorphic hardware design space exploration, which makes it possible to define a suitable architecture based on application-specific constraints, starting from a wide variety of possible architectural choices. For this framework, we have developed NAXT, a behavioral-level simulator for neuromorphic hardware architectural exploration. Moreover, we propose modified versions of the standard Rate Coding technique that trade off against the Time Coding paradigm, which is characterized by the small number of spikes propagating through the network. We thus reduce the number of spikes while keeping the same neuron model, yielding an SNN with fewer events to process and, consequently, lower hardware power consumption. Furthermore, we present three neuromorphic hardware architectures in order to quantitatively study the implementation of SNNs. One of these architectures integrates a novel hybrid structure: a highly parallel computation core for the most heavily solicited layers, and time-multiplexed computation units for deeper layers. These architectures are derived from a novel funnel-like Design Space Exploration framework for neuromorphic hardware.
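To make the rate-versus-time coding trade-off concrete, the following is a minimal illustrative sketch (not taken from the paper; function names, the Poisson-style rate generator, and the time-to-first-spike scheme are assumptions chosen as common textbook instances of each paradigm). It shows why time coding produces far fewer events for the same input value, which is the spike-count reduction the authors target for power savings.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_code(intensity, n_steps=100):
    """Standard rate coding (illustrative): at each timestep, emit a spike
    with probability equal to the normalized input intensity in [0, 1].
    Higher intensity -> more spikes over the coding window."""
    return (rng.random(n_steps) < intensity).astype(np.uint8)

def time_code(intensity, n_steps=100):
    """Time-to-first-spike coding (illustrative): a single spike whose
    latency decreases as intensity increases. One spike per input,
    regardless of intensity -> far fewer events to process."""
    train = np.zeros(n_steps, dtype=np.uint8)
    if intensity > 0:
        latency = int(round((1.0 - intensity) * (n_steps - 1)))
        train[latency] = 1
    return train

pixel = 0.8  # hypothetical normalized input intensity
print("rate coding spike count:", rate_code(pixel).sum())  # ~80 spikes
print("time coding spike count:", time_code(pixel).sum())  # exactly 1 spike
```

In an event-driven neuromorphic circuit, each spike triggers synaptic and membrane updates, so a coding scheme (or a modified rate code, as the authors propose) that emits fewer spikes directly reduces switching activity and hence dynamic power.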

Keywords: Artificial intelligence; Embedded systems; Neural coding; Neuromorphic computing; Power consumption; Spiking neural networks.

MeSH terms

  • Action Potentials / physiology*
  • Artificial Intelligence*
  • Brain / physiology*
  • Computer Systems
  • Computers*
  • Humans
  • Machine Learning
  • Neural Networks, Computer*
  • Neurons / physiology*