Neuromorphic LIF Row-by-Row Multiconvolution Processor for FPGA

IEEE Trans Biomed Circuits Syst. 2019 Feb;13(1):159-169. doi: 10.1109/TBCAS.2018.2880012. Epub 2018 Nov 7.

Abstract

Deep learning algorithms have become state-of-the-art methods in multiple fields, including computer vision, speech recognition, natural language processing, and audio recognition, among others. In computer vision, convolutional neural networks (CNNs) stand out. This kind of network is expensive in terms of computational resources because of the large number of operations required to process a frame. In recent years, several frame-based chip solutions have been developed to deploy CNNs in real time. Despite the good power and accuracy results achieved by these solutions, the number of operations remains high due to the complexity of current network models. However, the number of operations can be reduced by using computer vision techniques other than frame-based ones, e.g., neuromorphic event-based techniques. Several neuromorphic vision sensors exist whose pixels detect changes in luminosity. Inspired by the leaky integrate-and-fire (LIF) neuron, we propose in this manuscript an event-based field-programmable gate array (FPGA) multiconvolution system. Its main novelty is a memory arbiter that provides efficient memory access and enables row-by-row kernel processing. The system can convolve 64 filters with kernel sizes from 1 × 1 to 7 × 7, with latencies of 1.3 μs and 9.01 μs, respectively, generating a continuous flow of output events. The proposed architecture can easily be integrated into spike-based CNNs.
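To illustrate the general idea of event-driven LIF convolution described in the abstract, the following is a minimal Python sketch, not the authors' FPGA/HDL implementation. All names and parameters (lif_event_convolution, threshold, tau, the exponential-leak model, the event tuple format) are assumptions introduced for illustration; the loop over kernel rows merely mirrors, in software, the row-by-row processing the paper performs in hardware.

```python
import numpy as np

def lif_event_convolution(events, kernel, height, width,
                          threshold=1.0, tau=1e-3):
    """Hypothetical sketch: event-driven convolution with LIF neurons.

    events : list of (timestamp_s, x, y, polarity) input spikes.
    kernel : 2-D weight array (e.g., 7 x 7).
    Returns a list of (timestamp_s, x, y) output spikes.
    """
    kh, kw = kernel.shape
    v = np.zeros((height, width))       # membrane potential per output neuron
    last_t = np.zeros((height, width))  # last update time per output neuron
    out_events = []

    for t, x, y, pol in events:
        # Neighbourhood of output neurons affected by this input event,
        # clipped at the image borders.
        y0, y1 = max(0, y - kh // 2), min(height, y + kh // 2 + 1)
        x0, x1 = max(0, x - kw // 2), min(width, x + kw // 2 + 1)

        # Process the affected kernel window one row at a time.
        for row in range(y0, y1):
            cols = slice(x0, x1)
            kr = kernel[row - y + kh // 2,
                        x0 - x + kw // 2:x1 - x + kw // 2]
            # Exponential leak since each neuron's last update (assumed model).
            v[row, cols] *= np.exp(-(t - last_t[row, cols]) / tau)
            last_t[row, cols] = t
            # Integrate the weighted contribution of the input spike.
            v[row, cols] += pol * kr
            # Fire and reset neurons that crossed the threshold.
            for c in np.nonzero(v[row, cols] >= threshold)[0]:
                out_events.append((t, x0 + int(c), row))
                v[row, x0 + int(c)] = 0.0
    return out_events
```

In the hardware described by the abstract, the analogous per-row state updates are served through a memory arbiter so that membrane potentials can be read and written row by row; the software loop above only conveys the data flow, not the timing or parallelism of the FPGA design.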

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Membrane Potentials / physiology
  • Neural Networks, Computer
  • Neurons / physiology*