SparseNet: Coordinate Descent With Nonconvex Penalties

J Am Stat Assoc. 2011;106(495):1125-1138. doi: 10.1198/jasa.2011.tm09738.

Abstract

We address the problem of sparse selection in linear models. A number of nonconvex penalties have been proposed in the literature for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. In this article we pursue a coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a degrees-of-freedom (df) standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty is ideally suited to this task, and we use it to demonstrate the performance of our algorithm. Certain technical derivations and experiments related to this article are included in the Supplementary Materials section.
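To make the approach concrete, the following is a minimal sketch of cyclical coordinate descent with the MC+ threshold operator. For a univariate problem with a unit-variance coordinate, the MC+ threshold leaves the coefficient at zero below λ, shrinks it on (λ, γλ], and applies no shrinkage beyond γλ (for γ > 1). The function names, the standardization convention (columns scaled so that xⱼ'xⱼ/n = 1), and the synthetic usage are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mcplus_threshold(b, lam, gamma):
    """MC+ (minimax concave) threshold operator, gamma > 1.

    Minimizes 0.5*(beta - b)**2 + MC+ penalty for a single
    unit-variance coordinate: zero below lam, linear shrinkage
    on (lam, gamma*lam], identity (no shrinkage) beyond gamma*lam.
    """
    ab = abs(b)
    if ab <= lam:
        return 0.0
    if ab <= gamma * lam:
        return np.sign(b) * (ab - lam) / (1.0 - 1.0 / gamma)
    return b  # penalty is flat here, so the estimate is unbiased

def coordinate_descent_mcplus(X, y, lam, gamma, n_iter=200):
    """Cyclical coordinate descent for the MC+-penalized least-squares
    problem; assumes columns of X are standardized so x_j'x_j/n = 1.
    (Function and parameter names are ours, for illustration.)"""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta  # running residual
    for _ in range(n_iter):
        for j in range(p):
            # Univariate least-squares coefficient on the partial residual
            b_tilde = beta[j] + X[:, j] @ r / n
            b_new = mcplus_threshold(b_tilde, lam, gamma)
            if b_new != beta[j]:
                r -= X[:, j] * (b_new - beta[j])
                beta[j] = b_new
    return beta
```

The γ parameter interpolates between familiar procedures: as γ → ∞ the MC+ threshold approaches soft thresholding (the lasso), while as γ → 1⁺ it approaches hard thresholding (best-subset-like behavior).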

Keywords: Degrees of freedom; LASSO; Nonconvex optimization; Regularization surface; Sparse regression; Variable selection.