Discontinuous Neural Networks for Finite-Time Solution of Time-Dependent Linear Equations

IEEE Trans Cybern. 2016 Nov;46(11):2509-2520. doi: 10.1109/TCYB.2015.2479118. Epub 2015 Oct 2.

Abstract

This paper considers a class of nonsmooth neural networks with discontinuous hard-limiter (signum) neuron activations for solving time-dependent (TD) systems of algebraic linear equations (ALEs). The networks are defined by the subdifferential, with respect to the state variables, of an energy function given by the L1 norm of the error between the state and the TD-ALE solution. It is shown that when the penalty parameter exceeds a quantitatively estimated threshold, the networks reach the target solution of the TD-ALE in finite time and exactly track it thereafter. Furthermore, the paper discusses the tightness of the estimated threshold and points out key differences in the role this threshold plays compared with networks for solving time-invariant ALEs. It is also shown that these convergence results are robust with respect to small perturbations of the neuron interconnection matrices. The dynamics of the proposed networks are rigorously studied using tools from nonsmooth analysis, the concept of the subdifferential of convex functions, and the notion of Filippov solutions of dynamical systems with discontinuous nonlinearities.
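The sketch below is not the paper's network, only a minimal numerical illustration of the mechanism the abstract describes: a subgradient flow of the L1 tracking error, x_dot in -sigma * sign(x - x*(t)), where x*(t) = A(t)^{-1} b(t) is the TD-ALE solution. In this idealized form each error coordinate e_i = x_i - x_i*(t) obeys e_i_dot in -sigma * sgn(e_i) - d x_i*/dt, so if sigma exceeds sup_t |d x_i*/dt| the error decreases at a uniform rate and vanishes in finite time, consistent with the threshold condition stated above (the paper's exact network equations and threshold estimate differ and should be taken from the paper). The matrices A(t), b(t), the gain sigma, and the step size are illustrative assumptions.

```
# Minimal sketch (assumed form, not the paper's exact network): forward-Euler
# simulation of the discontinuous flow  x_dot in -sigma * sign(x - x*(t)),
# i.e., a subgradient flow of E(x,t) = ||x - x*(t)||_1 with x*(t) = A(t)^{-1} b(t).
# A(t), b(t), sigma, and dt are hypothetical choices for illustration only.
import numpy as np

def A(t):
    # Hypothetical time-dependent, nonsingular coefficient matrix.
    return np.array([[3.0 + np.sin(t), 0.5],
                     [0.2,             2.0 + 0.3 * np.cos(t)]])

def b(t):
    # Hypothetical time-dependent right-hand side.
    return np.array([np.cos(2.0 * t), 1.0 + 0.5 * np.sin(t)])

def x_star(t):
    # Target TD-ALE solution x*(t) = A(t)^{-1} b(t).
    return np.linalg.solve(A(t), b(t))

sigma = 5.0        # penalty gain; must exceed a threshold tied to |d x*/dt|
dt = 1e-4          # small Euler step, since the right-hand side is discontinuous
T = 3.0
x = np.array([2.0, -2.0])   # arbitrary initial state

for k in range(int(T / dt)):
    t = k * dt
    # Signum (hard-limiter) neuron activations acting on the tracking error.
    x = x - dt * sigma * np.sign(x - x_star(t))
    if k % int(0.5 / dt) == 0:
        err = np.linalg.norm(x - x_star(t), 1)
        print(f"t = {t:4.2f}   ||x - x*(t)||_1 = {err:.4f}")
```

Running this, the L1 error drops to (numerically) zero within a fraction of a second of simulated time and stays there up to discretization chatter of order sigma*dt, illustrating finite-time reaching followed by exact tracking of the moving solution.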