Nonclosedness of sets of neural networks in Sobolev spaces

Neural Netw. 2021 May:137:85-96. doi: 10.1016/j.neunet.2021.01.007. Epub 2021 Jan 27.

Abstract

We examine the closedness of sets of realized neural networks of a fixed architecture in Sobolev spaces. For an exactly m-times differentiable activation function ρ, we construct a sequence of neural networks [Formula: see text] whose realizations converge in order-(m-1) Sobolev norm to a function that cannot be realized exactly by a neural network. Thus, sets of realized neural networks are not closed in the order-(m-1) Sobolev spaces W^{m-1,p} for p ∈ [1, ∞). We further show that these sets are not closed in W^{m,p} under slightly stronger conditions on the m-th derivative of ρ. For a real analytic activation function, we show that sets of realized neural networks are not closed in W^{k,p} for any k ∈ N. The nonclosedness allows for approximation of non-network target functions, but only with unbounded parameter growth. We partially characterize the rate of this growth for most activation functions by showing that a specific sequence of realized neural networks can approximate the activation function's derivative with weights growing inversely proportionally to the L^p approximation error. Finally, we present experimental results showing that networks are capable of closely approximating non-network target functions via training, at the cost of increasing parameter magnitudes.
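The phenomenon behind the weight-growth claim can be illustrated with a standard construction (a sketch, not the paper's exact sequence): the difference quotient (ρ(x + h) − ρ(x))/h is itself realized by a one-hidden-layer network with two neurons and outer weights ±1/h, and for a C² activation it approximates ρ′ with sup-error O(h). So as the approximation error shrinks, the outer weights grow like its inverse. A minimal numerical check, using tanh as the activation:

```python
import numpy as np

def rho(x):
    """Activation function (tanh, which is real analytic)."""
    return np.tanh(x)

def rho_prime(x):
    """Exact derivative of tanh."""
    return 1.0 - np.tanh(x) ** 2

def diff_quotient_network(x, h):
    """Two-neuron network realizing the difference quotient:
    N_h(x) = (1/h) * rho(x + h) + (-1/h) * rho(x),
    i.e. inner weights 1, biases h and 0, outer weights +-1/h."""
    return (rho(x + h) - rho(x)) / h

xs = np.linspace(-3.0, 3.0, 1001)
errs = []
for h in [1e-1, 1e-2, 1e-3]:
    # Sup-norm error of the network against the target rho'
    err = np.max(np.abs(diff_quotient_network(xs, h) - rho_prime(xs)))
    errs.append(err)
    # Outer weight magnitude grows like 1/h, i.e. inversely with the error
    print(f"h={h:.0e}  sup-error={err:.2e}  outer-weight magnitude={1.0 / h:.0e}")
```

Here ρ′ is not itself realizable by a two-neuron network with activation tanh, yet the error can be driven to zero; the printout shows the error shrinking linearly in h while the outer weights blow up like 1/h.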

Keywords: Closedness; Fixed-architecture neural networks; Neural network expressivity; Sobolev space.

MeSH terms

  • Neural Networks, Computer*