Adaptively Customizing Activation Functions for Various Layers

IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):6096-6107. doi: 10.1109/TNNLS.2021.3133263. Epub 2023 Sep 1.

Abstract

Activation functions play a crucial role in enhancing the nonlinearity of neural networks and in strengthening the mapping between inputs and response variables, allowing more complex relationships and patterns in the data to be modeled. In this work, a novel methodology is proposed to adaptively customize activation functions by adding only a few parameters to traditional activation functions such as Sigmoid, Tanh, and the rectified linear unit (ReLU). To verify the effectiveness of the proposed methodology, theoretical and experimental analyses of its ability to accelerate convergence and improve performance are presented, and a series of experiments are conducted on various network models (such as AlexNet, VggNet, GoogLeNet, ResNet, and DenseNet) and various datasets (such as CIFAR10, CIFAR100, miniImageNet, PASCAL VOC, and COCO). To further verify its validity and suitability across optimization strategies and usage scenarios, comparison experiments are also carried out with different optimization strategies (such as SGD, Momentum, AdaGrad, AdaDelta, and ADAM) and different recognition tasks such as classification and detection. The results show that the proposed methodology is very simple yet yields significant gains in convergence speed, precision, and generalization, and that it surpasses popular activation functions such as ReLU and adaptive functions such as Swish in terms of overall performance in almost all experiments.
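
The abstract does not specify the exact parameterization, but the general idea it describes, wrapping a standard activation with a small number of trainable, per-layer parameters, can be illustrated with the following minimal PyTorch sketch. The module name `AdaptiveReLU`, the parameter names `alpha` and `beta`, and the form `alpha * relu(beta * x)` are illustrative assumptions rather than the paper's actual formulation.

```python
# A minimal sketch of augmenting a standard activation (here ReLU) with a few
# trainable, per-layer parameters, as the abstract describes in general terms.
# The specific form alpha * relu(beta * x) is an assumption for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveReLU(nn.Module):
    """ReLU augmented with two trainable scalars learned per layer."""

    def __init__(self, init_alpha: float = 1.0, init_beta: float = 1.0):
        super().__init__()
        # Only two extra parameters per layer instance.
        self.alpha = nn.Parameter(torch.tensor(init_alpha))  # output scale
        self.beta = nn.Parameter(torch.tensor(init_beta))    # input slope

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.alpha * F.relu(self.beta * x)


# Usage: drop the adaptive activation into an ordinary network so each layer
# learns its own activation shape jointly with the layer weights.
model = nn.Sequential(
    nn.Linear(32, 64),
    AdaptiveReLU(),
    nn.Linear(64, 10),
)
out = model(torch.randn(8, 32))  # shape: (8, 10)
```

Because the extra parameters are ordinary module parameters, they are updated by whatever optimizer trains the rest of the network, which is consistent with the abstract's comparisons across SGD, Momentum, AdaGrad, AdaDelta, and ADAM.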