A neural network of smooth hinge functions

IEEE Trans Neural Netw. 2010 Sep;21(9):1381-95. doi: 10.1109/TNN.2010.2053383. Epub 2010 Aug 3.

Abstract

The smooth hinging hyperplane (SHH) has been proposed as an improvement over the well-known hinging hyperplane (HH), in that it retains the useful features of HH while overcoming HH's drawback of nondifferentiability. This paper introduces a formal characterization of the smooth hinge function (SHF), which can be used to generate an SHH as a neural network. A method for the general construction of SHFs is also given. Furthermore, the work proves that SHH is superior to HH in function approximation: the optimal error of an SHH approximating a general function is always smaller than or equal to that of an HH. In particular, when the SHF is generated by integrating a class of sigmoidal functions, it is further proven that an SHH network of 2m such SHFs outperforms a neural network with m units of the sigmoidal function from which the SHF is derived. Any upper bound established on the approximation error of a neural network of m sigmoidal activation functions can therefore be translated to an SHH of m SHFs by replacing m with ⌊m/2⌋. The work also includes an algorithm for the identification of SHH models that exploits their differentiability. Simulation experiments are presented to validate the theoretical conclusions.
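For illustration, here is a minimal numerical sketch of the construction the abstract describes; it is not the paper's identification algorithm. Taking the logistic sigmoid as the sigmoidal function, its integral is the softplus function log(1 + e^t), a differentiable smooth hinge that approximates max(0, t). A sum of such smooth hinge units applied to affine forms gives an SHH-style model, and because every piece is differentiable, it can be fitted by plain gradient descent. All names and parameter settings below (smooth_hinge, fit_shh, the learning rate, the number of units) are illustrative assumptions, not taken from the paper.

```
import numpy as np

def sigmoid(t):
    """Logistic sigmoid, one member of the sigmoidal class."""
    return 1.0 / (1.0 + np.exp(-t))

def smooth_hinge(t):
    """Integral of the logistic sigmoid: softplus(t) = log(1 + e^t).
    Differentiable everywhere and approximates the hinge max(0, t)."""
    return np.logaddexp(0.0, t)  # numerically stable log(1 + e^t)

def shh(X, W, b, c):
    """SHH-style model: a weighted sum of smooth hinge units on affine forms,
    f(x) = sum_k c_k * softplus(w_k . x + b_k)."""
    return smooth_hinge(X @ W.T + b) @ c

def fit_shh(X, y, k=4, lr=0.05, steps=3000, seed=0):
    """Fit the sketch model by gradient descent on mean squared error.
    Smoothness of the hinge means ordinary gradients suffice; no
    subgradients are needed, which is the practical payoff of an SHH."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(k, d))
    b = rng.normal(size=k)
    c = rng.normal(size=k)
    for _ in range(steps):
        Z = X @ W.T + b              # pre-activations, shape (n, k)
        H = smooth_hinge(Z)          # unit outputs
        r = H @ c - y                # residuals
        # Chain rule: the sigmoid is the derivative of softplus.
        S = sigmoid(Z)
        grad_c = H.T @ r / n
        G = (r[:, None] * S) * c     # shape (n, k)
        grad_W = G.T @ X / n
        grad_b = G.sum(axis=0) / n
        c -= lr * grad_c
        W -= lr * grad_W
        b -= lr * grad_b
    return W, b, c

if __name__ == "__main__":
    # Target: a genuine (nonsmooth) hinge of two hyperplanes.
    rng = np.random.default_rng(1)
    X = rng.uniform(-2, 2, size=(400, 2))
    y = np.maximum(X[:, 0], -X[:, 0] + X[:, 1])
    W, b, c = fit_shh(X, y)
    mse = np.mean((shh(X, W, b, c) - y) ** 2)
    print(f"training MSE: {mse:.4f}")
```

The toy target is itself an HH (a max of two hyperplanes), so the run also hints at the abstract's approximation claim: smooth hinge units can fit a hard hinge closely while remaining differentiable throughout training.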

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Models, Neurological*
  • Neural Networks, Computer*
  • Nonlinear Dynamics*