Convergence Analysis of Novel Fractional-Order Backpropagation Neural Networks With Regularization Terms

IEEE Trans Cybern. 2024 May;54(5):3039-3050. doi: 10.1109/TCYB.2023.3247453. Epub 2024 Apr 16.

Abstract

Fractional-order derivatives have the potential to improve the performance of backpropagation (BP) neural networks. However, several studies have found that fractional-order gradient learning methods may not converge to the real extreme points. Truncation and modification of the fractional-order derivative have been applied to guarantee convergence to the real extreme point, but this guarantee rests on the assumption that the algorithm itself is convergent, which limits its practicality. In this article, a novel truncated fractional-order BP neural network (TFO-BPNN) and a novel hybrid TFO-BPNN (HTFO-BPNN) are designed to solve this problem. First, to avoid overfitting, a squared regularization term is introduced into the fractional-order BP neural network. Second, a novel dual cross-entropy cost function is proposed and employed as the loss function of the two neural networks; its penalty parameter adjusts the influence of the penalty term and further alleviates the gradient-vanishing problem. Regarding convergence, the convergence of the two proposed neural networks is first proved, and their ability to converge to the real extreme point is then analyzed theoretically. Finally, simulation results illustrate the feasibility, high accuracy, and good generalization ability of the proposed neural networks. Comparative studies between the proposed neural networks and related methods further substantiate the superiority of the TFO-BPNN and the HTFO-BPNN.
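To make the idea of a truncated fractional-order gradient step with a squared regularization term concrete, the sketch below shows one plausible realization on a toy one-layer sigmoid classifier. It is not the authors' exact formulation: the Caputo-type factor |w - w_prev|^(1-alpha)/Gamma(2-alpha), the truncation threshold `eps`, the penalty weight `lam`, and the use of a plain cross-entropy loss (rather than the paper's dual cross-entropy) are all assumptions made for illustration.

```python
# Minimal sketch (assumed formulation, not the paper's): truncated
# Caputo-style fractional-order gradient descent with an L2 (squared)
# regularization term, on a one-layer sigmoid classifier.
import numpy as np
from scipy.special import gamma

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fractional_step(w, w_prev, grad, alpha=0.9, lr=0.1, lam=1e-3, eps=1e-2):
    """One truncated fractional-order update with an L2 penalty (assumed form)."""
    # Squared regularization term contributes lam * w to the gradient.
    grad_reg = grad + lam * w
    # Caputo-type factor; truncating |w - w_prev| away from zero keeps the
    # update bounded and non-degenerate near the previous iterate.
    diff = np.maximum(np.abs(w - w_prev), eps)
    factor = diff ** (1.0 - alpha) / gamma(2.0 - alpha)
    return w - lr * grad_reg * factor

# Toy data: 2-D points, linearly separable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = rng.normal(size=2)
b = 0.0
w_prev = w + 1.0      # nonzero initial difference for the fractional factor
for epoch in range(200):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)   # cross-entropy gradient w.r.t. weights
    grad_b = np.mean(p - y)
    w_new = fractional_step(w, w_prev, grad_w)
    b -= 0.1 * grad_b                 # ordinary gradient step for the bias
    w_prev, w = w, w_new

print("training accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))
```

The truncation in `fractional_step` mirrors the role described in the abstract: it keeps the fractional factor well defined so that the iterates can approach the real extreme point rather than a spurious fixed point of the raw fractional-order update.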