Robust Stochastic Gradient Descent With Student-t Distribution Based First-Order Momentum

IEEE Trans Neural Netw Learn Syst. 2022 Mar;33(3):1324-1337. doi: 10.1109/TNNLS.2020.3041755. Epub 2022 Feb 28.

Abstract

The remarkable achievements of deep neural networks rest on the development of excellent stochastic gradient descent methods. Deep-learning-based machine learning algorithms, however, must find patterns between observations and supervised signals, even though these signals may contain noise that obscures the true relationship between them, as is especially common in the robotics domain. To perform well even with such noise, we expect the optimizers to be able to detect outliers and discard them when needed. We therefore propose a new stochastic gradient optimization method whose robustness is built directly into the algorithm, using the robust Student-t distribution as its core idea. We integrate our method into some of the latest stochastic gradient algorithms; in particular, Adam, the popular optimizer, is modified through our method. The resulting algorithm, called t-Adam, along with the other stochastic gradient methods integrated with our core idea, is shown to outperform Adam and the original versions of those methods in terms of robustness against noise on diverse tasks, ranging from regression and classification to reinforcement learning problems.
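
The abstract does not spell out the update rule, so the following is a minimal NumPy sketch of the core idea only: an Adam-style step whose first moment is an incrementally weighted mean, with each new gradient weighted by a Student-t-style factor so that outlying gradients are down-weighted. The function name `t_adam_step`, the degrees-of-freedom parameter `nu`, and the particular weight and decay expressions are illustrative assumptions and are not guaranteed to match the published t-Adam pseudocode.

```python
import numpy as np

def t_adam_step(theta, m, v, W, grad, t,
                lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, nu=None):
    """One Adam-like update whose first moment is a Student-t weighted
    average of past gradients, so that outlying gradients get small weight.
    This is a conceptual sketch, not the paper's exact algorithm."""
    d = grad.size
    if nu is None:
        nu = float(d)  # degrees of freedom; smaller nu -> heavier tails, more robust

    # Squared deviation of the new gradient from the current first moment,
    # scaled by the second-moment estimate: a rough anomaly score for the gradient.
    dev = np.sum((grad - m) ** 2 / (v + eps))

    # Student-t weight: typical gradients get w close to 1,
    # outlying gradients (large dev) get a weight that decays toward 0.
    w = (nu + d) / (nu + dev)

    # Robust first moment: weighted incremental mean with a decaying total weight.
    m = (W * m + w * grad) / (W + w)
    W = (2.0 * beta1 - 1.0) / beta1 * W + w  # decay chosen (assumption) so the
                                             # effective memory roughly matches beta1

    # Standard Adam second moment and bias-corrected parameter update.
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v, W


# Toy usage on a regression with heavy-tailed label noise;
# W is initialised to beta1 / (1 - beta1).
rng = np.random.default_rng(0)
theta = np.zeros(2)
m, v, W = np.zeros(2), np.zeros(2), 0.9 / (1.0 - 0.9)
for t in range(1, 2001):
    x = rng.normal(size=2)
    y = x @ np.array([2.0, -1.0]) + 0.1 * rng.standard_t(df=1.5)
    grad = 2.0 * (theta @ x - y) * x
    theta, m, v, W = t_adam_step(theta, m, v, W, grad, t)
```

The design intent illustrated here is that a gradient far from the running first moment (relative to the second-moment scale) receives a small Student-t weight and therefore barely moves the momentum, which is what gives the method its robustness to noisy or outlying samples.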