Random Polynomial Neural Networks: Analysis and Design

IEEE Trans Neural Netw Learn Syst. 2023 Jul 4:PP. doi: 10.1109/TNNLS.2023.3288577. Online ahead of print.

Abstract

In this article, we propose the concept of random polynomial neural networks (RPNNs), realized on the architecture of polynomial neural networks (PNNs) with random polynomial neurons (RPNs). RPNs are generalized polynomial neurons (PNs) built on the random forest (RF) architecture. In the design of RPNs, the target variables are no longer used directly, as in conventional decision trees; instead, a polynomial of these target variables is exploited to determine the average prediction. Unlike the conventional performance index used for the selection of PNs, the correlation coefficient is adopted here to select the RPNs of each layer. Compared with the conventional PNs used in PNNs, the proposed RPNs offer the following advantages: first, RPNs are insensitive to outliers; second, RPNs yield the importance of each input variable after training; third, RPNs alleviate the overfitting problem through the use of an RF structure. The overall nonlinearity of a complex system is captured by means of PNNs. Moreover, particle swarm optimization (PSO) is exploited to optimize the parameters when constructing RPNNs. RPNNs take advantage of both RF and PNNs: they achieve high accuracy through the ensemble learning used in RF, and they describe high-order nonlinear relations between input and output variables inherited from PNNs. Experimental results on a series of well-known modeling benchmarks illustrate that the proposed RPNNs outperform other state-of-the-art models reported in the literature.
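
To make the layered construction concrete, the following is a minimal sketch, not the authors' implementation: it assumes each RPN can be approximated by a scikit-learn RandomForestRegressor fitted on a pair of input variables, and it uses the correlation coefficient between a neuron's output and the target as the layer-wise selection index, as described in the abstract. The polynomial-of-target leaf model and the PSO-based parameter optimization are omitted; all function names here (corrcoef, build_layer) are illustrative.

```python
# Illustrative sketch of a layered PNN-style network whose neurons are
# random-forest regressors ("RPN" stand-ins), selected per layer by the
# correlation coefficient between neuron output and the target.
# Assumption-based simplification; omits the polynomial-of-target leaf
# model and PSO tuning described in the article.

import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestRegressor

def corrcoef(y_pred, y_true):
    # Pearson correlation coefficient used as the neuron-selection index.
    return abs(np.corrcoef(y_pred, y_true)[0, 1])

def build_layer(X, y, n_keep=4, n_trees=50, seed=0):
    """Fit one candidate RF 'neuron' per pair of input columns and keep the
    n_keep neurons whose outputs correlate most strongly with the target."""
    candidates = []
    for i, j in combinations(range(X.shape[1]), 2):
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
        rf.fit(X[:, [i, j]], y)
        out = rf.predict(X[:, [i, j]])
        candidates.append((corrcoef(out, y), (i, j), rf))
    candidates.sort(key=lambda c: c[0], reverse=True)
    kept = candidates[:n_keep]
    # Outputs of the selected neurons become the inputs of the next layer.
    Z = np.column_stack([rf.predict(X[:, list(idx)]) for _, idx, rf in kept])
    return kept, Z

# Toy usage: two stacked layers on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2])

layer1, Z1 = build_layer(X, y)
layer2, Z2 = build_layer(Z1, y)
print("best correlation, layer 2:", max(c for c, _, _ in layer2))
```

The sketch keeps only the selection-by-correlation and layer-stacking ideas; in the article, each neuron is a random polynomial neuron rather than a plain RF regressor, and PSO tunes the structural parameters.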