Computer-aided optimal designs for improving neural network generalization

Neural Netw. 2008 Sep;21(7):945-50. doi: 10.1016/j.neunet.2008.05.012. Epub 2008 Jun 13.

Abstract

In this article we offer a new perspective on feed-forward neural network modeling. We work within the framework of nonlinear regression models to construct computer-aided D-optimal designs for this class of neural models. These designs can be seen as a particular case of active learning. Classical algorithms are used to construct local approximate and local exact D-optimal designs. We observed that the so-called generalization ability of a neural network (statisticians are more familiar with the equivalent term "predictive ability") improves as the D-efficiency of the chosen "learning set design" increases. We thus showed that the D-efficiency criterion can serve as the basis for a learning-phase strategy that outperforms the standard uniform random strategy encountered in this field. Our proposal rests on two possible strategies: a One-Step Strategy and a Full Sequential Strategy. Intensive Monte Carlo simulations on an academic example show that the proposed D-optimal "learning set design" strategies lead to a substantial improvement in the use of neural network models.
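
To illustrate the kind of criterion involved (a minimal sketch, not the authors' exact procedure), the Python/NumPy code below evaluates a local D-optimality criterion det(JᵀJ) for a small one-hidden-layer tanh network and greedily grows a learning-set design from a candidate grid. The network size, the nominal parameter values, and the greedy exchange-style heuristic are illustrative assumptions.

```python
import numpy as np

def model_grad(x, w1, b1, w2):
    """Gradient of f(x) = w2 . tanh(w1*x + b1) + b2 with respect to the
    parameter vector (w1, b1, w2, b2), for scalar input x."""
    z = np.tanh(w1 * x + b1)          # hidden activations
    dz = 1.0 - z ** 2                 # tanh derivative
    return np.concatenate([w2 * dz * x,   # d f / d w1
                           w2 * dz,       # d f / d b1
                           z,             # d f / d w2
                           [1.0]])        # d f / d b2

def info_matrix(design, theta):
    """Local Fisher information M = J^T J at the nominal parameters theta."""
    J = np.array([model_grad(x, *theta) for x in design])
    return J.T @ J

def greedy_d_optimal(candidates, theta, n_points, seed=0):
    """Grow a design by repeatedly adding the candidate point that maximizes
    det(M + g g^T), i.e. the local D-optimality criterion (greedy heuristic)."""
    rng = np.random.default_rng(seed)
    p = 3 * len(theta[0]) + 1                                     # number of parameters
    design = list(rng.choice(candidates, size=p, replace=False))  # nonsingular start
    while len(design) < n_points:
        M = info_matrix(design, theta)
        gains = [np.linalg.det(M + np.outer(g, g))
                 for g in (model_grad(x, *theta) for x in candidates)]
        design.append(candidates[int(np.argmax(gains))])
    return np.array(design)

# Hypothetical nominal parameters for a 3-hidden-unit network and a candidate grid.
theta0 = (np.array([1.0, -0.5, 2.0]),   # w1
          np.array([0.0, 0.3, -1.0]),   # b1
          np.array([0.8, -1.2, 0.5]))   # w2
candidates = np.linspace(-3.0, 3.0, 121)
learning_set = greedy_d_optimal(candidates, theta0, n_points=20)
print(learning_set)
```

Because the information matrix of a nonlinear regression model depends on the unknown parameters, such designs are only locally optimal at the nominal values; this is what motivates sequential schemes in which the design is updated as parameter estimates improve.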

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Computer Simulation
  • Computer-Aided Design*
  • Feedback*
  • Generalization, Psychological*
  • Humans
  • Models, Neurological
  • Neural Networks, Computer*
  • Neurons / physiology
  • Nonlinear Dynamics