Low-Complexity Approximate Convolutional Neural Networks

IEEE Trans Neural Netw Learn Syst. 2018 Dec;29(12):5981-5992. doi: 10.1109/TNNLS.2018.2815435. Epub 2018 Apr 10.

Abstract

In this paper, we present an approach for minimizing the computational complexity of trained convolutional neural networks (ConvNets). The idea is to approximate all elements of a given ConvNet, replacing the original convolutional filters and parameters (pooling and bias coefficients, and the activation function) with efficient approximations capable of extreme reductions in computational complexity. Low-complexity convolution filters are obtained through a binary (zero and one) linear programming scheme based on the Frobenius norm over sets of dyadic rationals. The resulting matrices allow for multiplication-free computations requiring only addition and bit-shifting operations. Such low-complexity structures pave the way for low-power, efficient hardware designs. We applied our approach to three use cases of different complexities: 1) a "light" but efficient ConvNet for face detection (with around 1000 parameters); 2) another one for hand-written digit classification (with more than 180 000 parameters); and 3) a significantly larger ConvNet, AlexNet, with millions of matrices. We evaluated the overall performance on the respective tasks for different levels of approximation. In all considered applications, very low-complexity approximations were derived while maintaining nearly equal classification performance.
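To illustrate the dyadic-rational idea sketched in the abstract, the following Python snippet rounds each filter coefficient to its nearest dyadic rational n / 2^b (which minimizes the elementwise Frobenius approximation error) and evaluates a product using only additions and bit-shifts. This is a minimal sketch under stated assumptions, not the authors' binary linear programming formulation; the function names, the bit-width parameter, and the example filter are all illustrative.

import numpy as np

def dyadic_approximation(filt: np.ndarray, bits: int = 3) -> np.ndarray:
    """Round each coefficient to the nearest dyadic rational n / 2^bits.

    Elementwise nearest-neighbor rounding minimizes the Frobenius norm
    of the approximation error over this dyadic set; the paper's binary
    linear programming scheme is more general and is not reproduced here.
    """
    scale = 2 ** bits
    numerators = np.rint(filt * scale).astype(int)  # integer numerators n
    return numerators / scale                        # dyadic rationals n / 2^bits

def shift_add_multiply(x: int, numerator: int, bits: int) -> int:
    """Compute round-toward-floor of x * numerator / 2^bits with shifts and adds only."""
    acc, n, k = 0, abs(numerator), 0
    while n:
        if n & 1:            # add the shifted operand for each set bit of n
            acc += x << k
        n >>= 1
        k += 1
    acc = -acc if numerator < 0 else acc
    return acc >> bits       # division by 2^bits is a right shift

# Usage: approximate a small high-pass filter (values chosen for illustration)
filt = np.array([[ 0.11, -0.52,  0.11],
                 [-0.52,  1.68, -0.52],
                 [ 0.11, -0.52,  0.11]])
approx = dyadic_approximation(filt, bits=3)
print(approx)
print("Frobenius error:", np.linalg.norm(filt - approx))
print(shift_add_multiply(10, 13, 3))  # 10 * 13/8, multiplication-free

Increasing the bits parameter enlarges the dyadic set and tightens the Frobenius error at the cost of more shift-add terms per coefficient, which mirrors the complexity/accuracy trade-off evaluated in the paper.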

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Humans
  • Linear Models
  • Neural Networks, Computer*
  • Pattern Recognition, Automated*
  • Signal Processing, Computer-Assisted*