Feature decomposition architectures for neural networks: algorithms, error bounds, and applications

Int J Neural Syst. 2002 Feb;12(1):69-81. doi: 10.1142/S0129065702001011.

Abstract

In recent years, systems consisting of multiple modular neural networks have attracted substantial interest in the neural networks community because of the various advantages they offer over a single large monolithic network. In this paper, we propose two basic feature decomposition models (namely, the parallel model and the tandem model) in which each neural network module processes a disjoint subset of the input features. A novel feature decomposition algorithm is introduced to partition the input space into disjoint subsets based solely on the available training data. Under certain assumptions, the approximation error due to decomposition can be proved to be bounded by any desired small value over a compact set. Finally, the performance of feature decomposition networks is compared with that of a monolithic network on real-world benchmark pattern recognition and modeling problems.
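To illustrate the parallel feature decomposition idea described above, the following is a minimal sketch in Python/NumPy: the input features are split into disjoint subsets, each subset is processed by its own small network module, and the module outputs are combined into a single prediction. The particular partition, module sizes, and combination by summation are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    """Parameters of a one-hidden-layer MLP (tanh hidden units, linear output)."""
    return {
        "W1": rng.standard_normal((n_in, n_hidden)) * 0.1,
        "b1": np.zeros(n_hidden),
        "W2": rng.standard_normal((n_hidden, n_out)) * 0.1,
        "b2": np.zeros(n_out),
    }

def mlp_forward(params, x):
    h = np.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

# Hypothetical disjoint partition of 8 input features into three subsets;
# each module sees only its own subset of the features.
feature_subsets = [np.array([0, 1, 2]), np.array([3, 4]), np.array([5, 6, 7])]
modules = [init_mlp(len(idx), n_hidden=8, n_out=1) for idx in feature_subsets]

def parallel_model(x):
    """Parallel decomposition: sum the module outputs to form the prediction
    (the combination rule here is an assumption for illustration)."""
    return sum(mlp_forward(m, x[:, idx]) for m, idx in zip(modules, feature_subsets))

# Example: a batch of 4 samples with 8 input features.
x = rng.standard_normal((4, 8))
print(parallel_model(x).shape)  # (4, 1)
```

A tandem arrangement would instead chain the modules, feeding each module its own feature subset together with the output of the preceding module; training of either arrangement would proceed with standard gradient-based methods.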

MeSH terms

  • Algorithms*
  • Neural Networks, Computer*
  • Speech Perception