Accelerating Sequential Minimal Optimization via Stochastic Subgradient Descent

IEEE Trans Cybern. 2021 Apr;51(4):2215-2223. doi: 10.1109/TCYB.2019.2893289. Epub 2021 Mar 17.

Abstract

Sequential minimal optimization (SMO) is one of the most popular methods for solving a variety of support vector machines (SVMs). Shrinking and caching techniques are commonly used to accelerate SMO. An interesting phenomenon of SMO is that most of its computational time is spent in the first half of the iterations on building a good solution close to the optimum. In contrast, the stochastic subgradient descent (SSGD) method is known to reach a good solution extremely quickly. In this paper, we propose a generalized framework that accelerates SMO through SSGD for a variety of SVMs, including those for binary classification, regression, and ordinal regression. We also provide a theoretical insight into why SSGD can accelerate SMO. Experimental results on a variety of datasets and learning applications confirm that our method can effectively speed up SMO.
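The sketch below illustrates the warm-start idea described in the abstract under stated assumptions: a kernelized Pegasos-style SSGD pass quickly produces rough dual coefficients, which are then handed to an SMO solver as its starting point instead of the usual all-zero initialization. It is a minimal illustration, not the paper's implementation; the function smo_solve is a hypothetical placeholder for any SMO routine that accepts an initial alpha, and the mapping from SSGD counts to box-constrained dual variables is an assumed heuristic.

```python
import numpy as np

def kernelized_ssgd_warm_start(K, y, C, n_iters=2000, rng=None):
    """Kernelized Pegasos-style SSGD on the hinge loss.

    K : (n, n) precomputed kernel matrix
    y : (n,) labels in {-1, +1}
    C : SVM box constraint; lambda = 1 / (n * C) in the Pegasos objective
    Returns an approximate dual vector alpha in [0, C]^n (assumed mapping).
    """
    rng = np.random.default_rng(rng)
    n = len(y)
    lam = 1.0 / (n * C)
    counts = np.zeros(n)              # how often each sample violated the margin
    for t in range(1, n_iters + 1):
        i = rng.integers(n)
        # decision value of sample i under the current implicit model
        f_i = (1.0 / (lam * t)) * np.dot(counts * y, K[:, i])
        if y[i] * f_i < 1.0:          # margin violation -> take a subgradient step
            counts[i] += 1
    # map the SSGD iterate to dual variables and project onto the box [0, C]
    return np.clip(counts / (lam * n_iters), 0.0, C)

# Usage sketch (smo_solve is a hypothetical SMO routine accepting a warm start):
#   alpha0 = kernelized_ssgd_warm_start(K, y, C=1.0)
#   alpha_opt, b = smo_solve(K, y, C=1.0, alpha_init=alpha0)
```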