DC Proximal Newton for Nonconvex Optimization Problems

IEEE Trans Neural Netw Learn Syst. 2016 Mar;27(3):636-47. doi: 10.1109/TNNLS.2015.2418224. Epub 2015 Apr 21.

Abstract

We introduce a novel algorithm for solving learning problems in which both the loss function and the regularizer are nonconvex but belong to the class of difference-of-convex (DC) functions. Our contribution is a new general-purpose proximal Newton algorithm able to handle such a situation. The algorithm obtains a descent direction from an approximation of the loss function and then performs a line search to ensure sufficient descent. A theoretical analysis shows that the limit points of the iterates produced by the proposed algorithm are stationary points of the DC objective function. Numerical experiments show that our approach is more efficient than the current state of the art on a problem with a convex loss function and a nonconvex regularizer. We also illustrate the benefit of our algorithm on a high-dimensional transductive learning problem in which both the loss function and the regularizer are nonconvex.
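
The abstract's description (a descent direction obtained from a convex approximation of the objective, followed by a line search guaranteeing sufficient descent) suggests the structure sketched below. This is an illustrative reconstruction, not the authors' exact method: it assumes an objective F(x) = f1(x) - f2(x) + lam*||x||_1 - g2(x), with f1 and f2 smooth convex and g2 convex, so that both the loss f1 - f2 and the regularizer lam*||x||_1 - g2 are DC functions; the diagonal Hessian surrogate, the Armijo constants, and all function names are hypothetical choices.

    import numpy as np

    # Hedged sketch of a DC proximal Newton step; see assumptions above.

    def soft_threshold(z, t):
        # Proximal operator of t*||.||_1 (the convex part of the regularizer).
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def dc_proximal_newton(grad_f1, grad_f2, sub_g2, F, x0, lam,
                           max_iter=100, tol=1e-6):
        # grad_f1, grad_f2: gradients of the smooth convex parts f1, f2.
        # sub_g2: a subgradient of the convex part g2.
        # F: the full objective, used only inside the line search.
        x = x0.astype(float).copy()
        H = np.ones_like(x)  # diagonal surrogate for the Hessian of f1 (assumption)
        for _ in range(max_iter):
            # Linearize the concave parts -f2 and -g2 at the current iterate.
            v = grad_f2(x) + sub_g2(x)
            grad = grad_f1(x) - v  # gradient of the smooth convex surrogate
            # Scaled proximal step on the surrogate yields a candidate point.
            z = soft_threshold(x - grad / H, lam / H)
            d = z - x  # descent direction
            if np.linalg.norm(d) < tol:
                break
            # Backtracking line search enforcing sufficient descent.
            delta = grad @ d + lam * (np.abs(z).sum() - np.abs(x).sum())
            t, c = 1.0, 1e-4
            while F(x + t * d) > F(x) + c * t * delta and t > 1e-10:
                t *= 0.5
            x = x + t * d
        return x

With H fixed to all ones, the update reduces to a DC proximal-gradient step; the Newton flavor comes from replacing H with curvature information about f1 (for instance, a diagonal or quasi-Newton Hessian estimate), which is what makes the proximal step a scaled, second-order one.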

Publication types

  • Research Support, Non-U.S. Gov't