Dynamical recurrent neuro-fuzzy identification schemes employing switching parameter hopping

Int J Neural Syst. 2012 Apr;22(2):1250004. doi: 10.1142/S0129065712500049.

Abstract

In this paper we analyze the identification problem, which consists of choosing an appropriate identification model and adjusting its parameters according to some adaptive law such that the response of the model to an input signal (or a class of input signals) approximates the response of the real system for the same input. As identification models we use fuzzy recurrent high-order neural networks. High-order networks are expansions of the first-order Hopfield and Cohen-Grossberg models that allow higher-order interactions between neurons. The underlying fuzzy model is of Mamdani type, assuming a standard defuzzification procedure such as the weighted average. Learning laws are proposed which ensure that the identification error converges to zero exponentially fast, or to a residual set when a modeling error is present. The proposed method rests on two core ideas: (1) several high-order neural networks are specialized to work around fuzzy centers, thereby separating the system into neuro-fuzzy subsystems, and (2) a novel method called switching parameter hopping is used, instead of the commonly employed projection, to keep the weights bounded and prevent them from drifting to infinity; a schematic sketch of these ideas follows.
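
The following is a minimal, illustrative sketch of the kind of scheme described above: a recurrent high-order neural network identifier driven by a gradient-type learning law, with a weight-bounding step that "hops" a weight back inside a prescribed ball rather than projecting it onto the boundary. The abstract does not give the paper's equations, so the regressor structure, the learning law, the hopping rule, and all parameter values (a, gamma, W_max, the toy plant) are assumptions made purely for illustration, not the authors' exact method.

```python
# Illustrative sketch only -- not the authors' exact identification scheme.
import numpy as np

def s(x):
    """Sigmoidal activation used in the high-order terms."""
    return 1.0 / (1.0 + np.exp(-x))

def high_order_terms(x):
    """First- and second-order sigmoidal regressors z(x) (assumed structure)."""
    sx = s(x)
    pairs = np.array([sx[i] * sx[j] for i in range(len(x)) for j in range(i, len(x))])
    return np.concatenate([sx, pairs])

def plant(x, u):
    """Toy unknown 2-state nonlinear plant to be identified (assumed)."""
    return np.array([-x[0] + np.tanh(x[1]) + u,
                     -2.0 * x[1] + 0.5 * np.sin(x[0])])

dt, T = 1e-3, 5.0
n = 2
x = np.zeros(n)                      # plant state
x_hat = np.zeros(n)                  # identifier state
a = 5.0                              # stable linear part of the identifier
z_dim = len(high_order_terms(x))
W = np.zeros((n, z_dim))             # adjustable high-order weights
gamma = 20.0                         # adaptation gain (assumed)
W_max = 2.0                          # prescribed bound on each weight (assumed)

for k in range(int(T / dt)):
    u = np.sin(2 * np.pi * 0.5 * k * dt)   # persistently exciting input
    z = high_order_terms(x)
    e = x_hat - x                           # identification error

    # Gradient-type learning law (illustrative stand-in for the paper's law).
    W_new = W - dt * gamma * np.outer(e, z)

    # "Parameter hopping" step (illustrative): when an update would leave the
    # admissible ball |w| <= W_max, hop the weight back inside by reflecting
    # the overshoot, instead of projecting it onto the boundary.
    over = np.abs(W_new) > W_max
    mag = np.clip(2 * W_max - np.abs(W_new[over]), 0.0, W_max)
    W_new[over] = np.sign(W_new[over]) * mag
    W = W_new

    # Identifier dynamics: x_hat_dot = -a*x_hat + W z(x) + B u (explicit Euler).
    x_hat = x_hat + dt * (-a * x_hat + W @ z + np.array([u, 0.0]))
    x = x + dt * plant(x, u)

print("final identification error:", np.linalg.norm(x_hat - x))
```

The hopping step above simply reflects an offending weight back into the admissible set, which keeps the weights bounded without freezing them on the boundary the way a projection would; the exact switching rule analyzed in the paper may differ.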

MeSH terms

  • Algorithms
  • Computer Simulation
  • Fuzzy Logic*
  • Humans
  • Learning / physiology*
  • Neural Networks, Computer*
  • Neurons / physiology*
  • Nonlinear Dynamics*