Approximation capabilities of neural networks on unbounded domains

Neural Netw. 2022 Jan:145:56-67. doi: 10.1016/j.neunet.2021.10.001. Epub 2021 Oct 19.

Abstract

The representability of neural networks on unbounded domains has received limited study in the literature. For some application areas, results in this direction provide additional value in the design of learning systems. Motivated by an old option pricing problem, we are led to study this subject. For networks with a single hidden layer, we show that under suitable conditions they are capable of universal approximation in L^p(R×[0,1]^n) but not in L^p(R^2×[0,1]^n). For deeper networks, we prove that the ReLU network with two hidden layers is a universal approximator in L^p(R^n).
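As an illustrative sketch (not the paper's construction, and with hand-chosen weights that are purely hypothetical), a two-hidden-layer ReLU network on R^2 can output a compactly supported bump, which therefore lies in L^p(R^2) for every p:

```python
# Sketch: a two-hidden-layer ReLU network on R^2 whose output is
# compactly supported, hence integrable to any power p on the plane.
# All weights are hand-chosen for illustration only.
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def hat(x):
    # First hidden layer: a piecewise-linear "hat" supported on [-1, 1],
    # hat(x) = relu(x + 1) - 2*relu(x) + relu(x - 1).
    return relu(x + 1.0) - 2.0 * relu(x) + relu(x - 1.0)

def relu_min(a, b):
    # Second hidden layer: min(a, b) = (a + b)/2 - (relu(a - b) + relu(b - a))/2.
    return 0.5 * (a + b) - 0.5 * (relu(a - b) + relu(b - a))

def bump(x, y):
    # Network output: a pyramid-shaped bump supported on [-1, 1]^2.
    return relu_min(hat(x), hat(y))
```

Since `bump` vanishes outside the square [-1, 1]^2, it belongs to L^p(R^2); the second layer is what makes the compact support in two variables possible here, consistent with the role depth plays in the results above.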

Keywords: Benefit of depth; Unbounded domain; Universal approximation.

MeSH terms

  • Learning*
  • Neural Networks, Computer*